Last week I introduced this illustration as a typical patent drawing and asked if you could decipher the riddle of its functionality.
Patent drawings are static, two-dimensional (2D) representations of proposed inventions meant to be manufactured in three dimensions (3D). As such, they present a lot of complex information on a flat page.
If you don’t have a clue as to what this machine is, I guarantee you’re not alone. The average person wouldn’t. There’s a bunch of lines, shapes and numbers, but what do they signify? How are they meant to all come together and operate?
As a matter of fact, the average person isn’t meant to understand patent drawings. That’s because they don’t qualify as what patent courts have defined as a person of ordinary skill in the art, a peculiar term which basically means the Average Joe or Josephine isn’t expected to be able to interpret them.
Rather, the interpretation of patent drawings is left to individuals with specialized skills and training, a particular educational background, and/or work experience. These individuals are typically able to view a static 2D image and visualize how the illustrated device moves and operates. Those said to fall within the court’s definition of having ordinary skill in the art are in fact often engineers and scientists.
Since the average person does not have a background in engineering and science, it can be challenging for patent attorneys to present their cases in the courtroom, particularly when relying on 2D representations alone. That’s where animations come in.
Next time we’ll use the magic of animation to transform our cryptic 2D patent illustration into a functional 3D animation of a machine whose operation is easily understood by the average person.
I remember the first time I saw a blueprint. It was during high school shop class, where we learned how to use power tools to make the wooden chairs, tables, and chests shown in blueprints. I was completely confused. The odd blue-tinted paper, coupled with the liberal use of unfamiliar symbols, dashes, and dots, and what appeared to be a mind-boggling amount of detail, was enough to break me out in a cold sweat.
For many people, patent drawings are a lot like that first blueprint I saw. As a static two-dimensional (2D) representation of an operational device which is often complex, they present an immense amount of information on a page. The average person would be hard pressed to interpret them, and in fact, as we’ll learn later, they aren’t supposed to be able to.
We’ve been talking about patent basics in this series of blogs, and we’ll continue that discussion in the following weeks with a concentration on patent drawings. In the meantime, here’s one to ponder. When you look at a patent drawing like the one below, what do you see? What do you think this thing is and what is it supposed to do? We’ll find out next week…
Rubber bands, plastic food wrap, bandages that conform to knuckles and knees, where would we be without them? These are all fairly recent inventions, but their elastic properties were imagined far before they actually came into existence.
Around the turn of the 19th Century a mathematics genius by the name of Siméon Denis Poisson dabbled in higher level mathematics. He enjoyed working with calculus and probability theories and their applications, and his work eventually led to the discovery of his own special ratio, the “Poisson ratio.” Denoted today by the Greek letter “µ,” his discovery has a great deal to do with elasticity. In fact, much of his work evolved into the foundations of the modern engineering study of materials.
If you’ll remember from last week’s blog, we talked about the elasticity of materials, including materials you generally wouldn’t think of as being elastic. In our steel rod example we saw that when you pull on the ends of a steel rod hard enough, you can actually stretch it and make it longer. But where does this extra length come from?
According to Poisson’s ratio, as the rod lengthens, its diameter decreases proportionately. The rod’s increased length comes at the expense of its diameter. You can see this effect at work by repeatedly stretching that fat rubber band whose task it is to contain your bulging Sunday paper. The more you pull on it, the skinnier the rubber band becomes. It will eventually get to the point where its elastic properties have been so compromised it won’t even be able to hold together Monday’s paper.
Over the decades that have passed since Poisson’s discovery a multitude of laboratory tests have been conducted to determine µ for a vast number of materials. These values have been duly tabulated in engineering reference books, sparing present-day design engineers the tedious task of conducting their own experiments. Steel, for example, has a Poisson’s ratio of around 0.28, a number readily available in most strength of materials reference books.
It’s pretty obvious why Poisson’s contribution is important to the world of engineering, but now let’s see how his ratio can be applied.
Last week we saw that a 15-foot long (180-inch), 2-inch diameter round steel rod stretches by 0.115 inches when it is pulled by a steady 60,000 pound force. Poisson’s ratio tells us that this stretch comes with an accompanying decrease in diameter, but by how much? A word of caution here: Poisson’s ratio relates strains, that is, fractional changes in dimension, not absolute lengths. The fractional decrease in diameter equals µ times the fractional increase in length. First we find the axial strain, the stretch divided by the original length:

0.115 inches ÷ 180 inches = 0.000639

Multiplying by Poisson’s ratio for steel (µ = 0.28) gives the lateral strain:

0.000639 × 0.28 = 0.000179

and multiplying the lateral strain by the original 2-inch diameter gives the decrease in diameter:

0.000179 × 2 inches = 0.00036 inches

This is only about a tenth the thickness of a single sheet of paper. So if the rod was 2 inches in diameter before the 60,000 pound force was applied, its new diameter after application of the stretching force would be:

2 inches – 0.00036 inches = 1.99964 inches

A change of 0.00036 inches in the rod’s diameter may not seem like much, but in the world of precision-machined parts even tiny dimensional changes can mean the difference between parts fitting properly or becoming loose.
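Because Poisson’s ratio relates strains (fractional changes) rather than absolute dimensions, the arithmetic is easy to get wrong by hand. Here’s a minimal Python sketch of the strain-based calculation, using the rod dimensions and 0.115-inch stretch from the example above:

```python
# Poisson contraction of a stretched steel rod.
# Strain form: lateral_strain = mu * axial_strain.
MU_STEEL = 0.28          # Poisson's ratio for steel (dimensionless)

length = 15 * 12         # original rod length in inches (15 feet)
diameter = 2.0           # original rod diameter in inches
stretch = 0.115          # elongation under the 60,000 lb pull, inches

axial_strain = stretch / length            # fractional change in length
lateral_strain = MU_STEEL * axial_strain   # fractional change in diameter
diameter_change = lateral_strain * diameter

print(f"axial strain:      {axial_strain:.6f}")
print(f"diameter decrease: {diameter_change:.5f} inches")
print(f"new diameter:      {diameter - diameter_change:.5f} inches")
```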
This wraps up our short series on the strength of elastic materials. Next time we’ll move on to discuss coal power plant fundamentals, an arena in which many of the things we’ve been discussing take on real world meaning.
In the past few weeks we’ve taken a look at both mechanical and dynamic brakes. Now it’s time to bring the two together for unparalleled stopping performance.
Have you ever wandered along a railroad track, hopping from tie to tie, daring a train to come roaring along and wondering if you could jump to safety in time? Many have, and many have lost the bet. That’s because a train, once set into motion, is one of the hardest things on Earth to bring to a stop. In this discussion, let’s focus on the locomotive. A large, six-axle variety is shown in Figure 1.
Figure 1 – A Six-Axle Diesel-Electric Locomotive
These massive iron horses are known in the industry as diesel-electric locomotives, and here’s why. As Figure 2 shows, diesel-electric locomotives are powered by huge diesel engines. The diesel engine spins an electrical generator which converts mechanical energy into electrical energy. That electrical energy is then sent from the generator through wires to electric traction motors which are in turn connected to the locomotive’s wheels by a series of gears. In the case of a six-axle locomotive, there are six traction motors all working together to make the locomotive move. So how do you get this beast to stop?
Figure 2 – The Propulsion System In A Six-Axle Diesel-Electric Locomotive
You probably noticed in Figure 2 that there are resistor grids and cooling fans. As long as you’re powering a locomotive’s traction motors to move a train, these grids and fans won’t come into play. It’s when you want to stop the train that they become important. That’s when the locomotive’s controls will act to disconnect the traction motor wires running from the electrical generator and reconnect them to the resistor grids as shown in Figure 3 below.
Figure 3 – The Dynamic Braking System In A Six-Axle Diesel-Electric Locomotive
The traction motors now become generators in a dynamic braking system, converting the moving train’s mechanical, or kinetic, energy into electrical energy. The electrical energy is then carried by wires to the resistor grids, where it is converted to heat energy. This heat energy is removed by powerful cooling fans and released into the atmosphere. In the process the train is robbed of its kinetic energy, causing it to slow down.
Now you may be thinking that dynamic brakes do all the work, and this is pretty much true, up to a point. Although dynamic brakes may be extremely effective in slowing a fast-moving train, they become increasingly ineffective as the train’s speed decreases. That’s because as speed decreases, the traction motors spin more slowly, and they convert less kinetic energy into electrical energy. In fact, below speeds of about 10 miles per hour dynamic brakes are essentially useless. It is at this point that the mechanical braking system comes into play to bring the train to a complete stop.
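The fade in dynamic braking at low speed can be illustrated with a toy model. Assuming (hypothetically, for illustration only) that the traction motors’ generated voltage is proportional to train speed and that it feeds a resistor grid of fixed resistance, the power dumped into the grid scales with speed squared, so the retarding force scales linearly with speed:

```python
# Toy model of dynamic brake fade. Assumptions (hypothetical, for
# illustration only): generator EMF is proportional to speed, and the
# resistor grid has a fixed resistance.
K_EMF = 50.0    # volts per mph (made-up constant)
R_GRID = 1.0    # resistor grid resistance in ohms (made-up)

def braking_force(speed_mph):
    """Retarding force (arbitrary units) = dissipated power / speed."""
    if speed_mph <= 0:
        return 0.0
    emf = K_EMF * speed_mph              # generated voltage
    power = emf ** 2 / R_GRID            # watts dumped into the grid
    return power / speed_mph             # force is proportional to speed

for v in (60, 30, 10, 2):
    print(f"{v:>2} mph -> relative braking force {braking_force(v):,.0f}")
```

In this model the braking force at 2 mph is a thirtieth of its value at 60 mph, which is why the mechanical brakes must finish the job.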
Let’s see how this switch from dynamic to mechanical dominance takes place. A basic mechanical braking system for locomotive wheels is shown in Figure 4. This system, also known as a pneumatic braking system, is powered by compressed air that is produced by the locomotive’s air pump. A similar system is used in the train’s railcars, employing hoses to move the compressed air from the locomotive to each car.
Figure 4 – Locomotive Pneumatic Braking System
In the locomotive pneumatic braking system, pressurized air enters an air cylinder. Once inside, the air bears against a spring-loaded piston, as shown in Figure 4(a). The piston moves, causing brake rods to pivot and clamp the brake shoes to the locomotive’s wheel with great force, slowing the locomotive. When you want to get the locomotive moving again, you vent the air out of the cylinder as shown in Figure 4(b). This takes the pressure off the piston, releasing the force from the brake shoes. The spring in the cylinder is now free to move the shoes away from the wheel so they can turn freely. We have now returned to the situation present in Figure 2, and the locomotive starts moving again.
Next week we’ll talk about regenerative braking, a variation on the dynamic braking concept used in railway vehicles like electric locomotives and subway trains.
Last week we looked at how a mechanical brake stopped a rotating wheel by converting its mechanical energy, namely kinetic energy, into heat energy. This week, we’ll see how a dynamic brake works.
Chances are you directly benefited from a dynamic braking system the last time you rode in an elevator. But to understand the basic principle behind an elevator’s dynamic brake system, let’s first take a look at the electric braking system in Figure 1 below.
Figure 1 – A Simple Electric Braking System
Here the brake consists of an electric generator wired via an open switch to an electrical component called a resistor. The weight is attached to a cable that is wound around a pulley on the generator’s shaft. As the weight freefalls, the cable unwinds on the pulley, causing the pulley to turn the generator’s shaft.
Unlike last week’s mechanical brake which required a good deal of effort to employ, a dynamic braking system requires very little. All that needs to be done is to close a switch as shown in Figure 2 below. When the switch is closed, an electrical circuit is created where the resistor gets connected to the generator. The resistor does as its name implies: it resists (but doesn’t stop) the electrical current flowing through it from the generator. As the electrical current fights its way through the resistor to get back to the generator, the resistor gets hot like an electric heater. This heat is dissipated to the cooler surrounding air. At the same time, the weight begins to slow down in its descent. But how is this happening?
The electric braking system can be thought of as an energy conversion process. We start out with the kinetic, or motion energy, of the freefalling weight. This kinetic energy is transmitted to the electrical generator by the cable, which spins the generator’s shaft as the cable unwinds. Electrical generators are machines that convert kinetic energy into electrical energy. This energy travels from the electric generator through wires and a closed switch to the resistor. In the process the resistor converts the electrical energy into heat energy. So, kinetic energy is drawn from the falling weight through the conversion process and leaves the process in the form of heat. As the falling weight is drained of kinetic energy, it slows down.
Figure 2 – Applying the Electric Brake
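The energy conversion chain described above (kinetic to electrical to heat) lends itself to simple bookkeeping. Here’s a rough Python sketch, with made-up numbers assumed for the weight, its speed, and the resistor’s power dissipation:

```python
# Energy bookkeeping for the simple electric brake: the falling weight's
# kinetic energy is converted to electricity and then to heat in the
# resistor. All numbers are illustrative assumptions.
mass = 100.0     # kg, the falling weight
speed = 4.0      # m/s, its speed when the switch is closed

kinetic_energy = 0.5 * mass * speed**2   # joules to be dissipated
print(f"kinetic energy to drain: {kinetic_energy:.0f} J")

# If the generator/resistor circuit dissipates a steady 400 watts, the
# time to drain that kinetic energy (ignoring any further energy the
# weight gains by continuing to fall) would be:
power = 400.0    # watts turned into heat by the resistor
print(f"time to dissipate: {kinetic_energy / power:.1f} s")
```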
Okay, now let’s get back to dynamic brakes on elevators. An elevator is attached by a cable to a hoist that is powered by an electric motor. When it’s time to stop at the desired floor, the automatic control system disconnects the elevator’s electric motor from its power source and turns the motor into a generator. The generator is then automatically connected to a resistor like the one shown in the electric brake above. The kinetic energy of the moving elevator is converted by the generator into electrical energy. The resistor converts the electrical energy into heat energy which is then dissipated into the surrounding environment. The elevator slows down in the process because it’s being robbed of kinetic energy. When the dynamic brake slows the elevator down enough, a mechanical brake is introduced, taking over to bring the elevator to a complete stop. This two-fold process serves to reduce wear and tear on the mechanical brake’s parts, lengthening the operational lifespan of the system as a whole.
Next time, we’ll tie everything together and show how mechanical and dynamic brakes work together in a diesel locomotive.
Pumps are all around us. They keep our drinking water flowing, the cooling water circulating in our car engines, and even the blood moving through our bodies. They’re essential in many aspects of our lives, but most of us don’t think much about them. For our discussion let’s put them into two categories: positive displacement pumps and centrifugal pumps. This week, we’ll focus on positive displacement pumps.
Positive displacement pumps, as their name implies, displace a quantity of liquid with each complete cycle of movement. This takes place when moving parts of the pump take “bites” out of the liquid at the inlet, then force each bite to exit through the outlet. A familiar example of a positive displacement pump is the type of hand operated water pump that’s commonly found in campgrounds. See Figure 1.
Figure 1 – A Positive Displacement Pump
This type of pump is known as a reciprocating positive displacement pump. By reciprocating, I mean that the moving parts travel back and forth in a straight line during operation. Let’s see how it works by referring to the cutaway view in Figure 2.
Figure 2 – Cutaway View of the Pump Shown in Figure 1
In the cutaway view, the pump’s piston and internal check valve are shown, and there’s another check valve in the bottom of the pump housing. When you pull up on the handle, the piston moves down into the water in the pump housing. The pressure caused by this movement forces the check valve in the bottom to slam closed, while the check valve in the piston is forced open, and water floods through it to fill the space above the piston. When you push down on the handle, the opposite happens: the piston moves upward. The pressure of the water above the rising piston causes the check valve in the piston to slam shut, trapping the water above it. At the same time, the rising piston creates suction below, which pops open the check valve in the bottom of the housing and draws more water up into the space below the piston. Eventually, when the piston gets high enough, the water trapped on top of it flows out of the spigot.
Another type of positive displacement pump is the rotary pump. These pumps use a circular motion to move a fixed volume of liquid with each revolution of the pump shaft. This is done by trapping liquid between moving parts, such as gears, lobes, vanes, or screws, and the stationary pump housing itself.
To show how this works, refer to the gear pump shown in Figure 3. Its gear teeth mesh together in the middle of the pump, blocking the flow from going straight through and trapping it within the spaces formed by rotating gear teeth and the pump housing. It’s like the water is being forced through a turnstile.
Figure 3 – A Cutaway View of a Gear Pump
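Because a rotary pump traps and moves a fixed volume with each revolution, its delivery is simply displacement times shaft speed. Here’s a quick sketch, with illustrative numbers that aren’t taken from any particular pump:

```python
# Rotary positive displacement pump delivery: flow = displacement per
# revolution times shaft speed. Values below are illustrative only.
def flow_rate_gpm(displacement_gal_per_rev, rpm):
    """Gallons per minute delivered by a rotary pump."""
    return displacement_gal_per_rev * rpm

# A small gear pump trapping 0.01 gallon per revolution at 1,750 rpm:
print(flow_rate_gpm(0.01, 1750), "gpm")
```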
Next week, we’ll talk about centrifugal pumps and how they move liquids along using centrifugal force.
The last few weeks we’ve been discussing some of the technical and environmental drawbacks of alternative sources of electrical energy and nuclear power generation. This week we’ll take a look at another drawback, that of energy sprawl.
So what exactly is “energy sprawl?” It’s an easily understood concept, but one that is often overlooked by proponents of the alternative energy movement. Energy sprawl is simply the amount of land taken over by alternative power sources in order to generate a given amount of electricity, and that amount is dauntingly large.
For example, let’s revisit the subject of wind turbines. According to the National Renewable Energy Laboratory (NREL) of the U.S. Department of Energy, turbines in a wind farm should be spaced five to ten rotor diameters apart, depending on local conditions. Now the blades of a 2 megawatt (2 million watt) wind turbine sweep a circle about 260 feet in diameter, and for our example we’ll space the turbines at the prescribed minimum distance of five diameters. The math for this one is easy: 260 times five gives a spacing of 1,300 feet, just under a quarter of a mile. That’s right, if you build a wind farm with a whole bunch of these 2 megawatt turbines, they’ll have to be spaced nearly a quarter mile apart. You’ll need a lot of acreage.

So based on the calculations above, we’d have to build a wind farm where each 2 megawatt turbine is surrounded by a circle of empty land 1,300 feet in radius. We know from geometry that the area of a circle is pi, that is 3.1416, times its radius squared, and this translates into a minimum area of about 5.3 million square feet per 2 megawatts of power generated, or about 2.65 million square feet per megawatt. Just to put this into perspective, a football field, end zones included, measures 360 feet by 160 feet, an area of 57,600 square feet. So what we’re actually talking about here is roughly 46 football fields worth of land per megawatt of electricity generated!
Let’s turn our attention now to solar power generation, which produces electricity with photovoltaic (PV) panels, panels made of special materials that convert the sun’s energy directly into electricity. Great concept, but here again we’re talking a lot of land. According to the NREL, an estimated 6.4 acres are required to generate 1 megawatt of electricity using PV panels. Since one acre equals 43,560 square feet, we’d need a total of 278,784 square feet of land area per megawatt. After we’ve done the math we discover that this equates to almost five football fields of area per megawatt of electricity generated.
We’ve now established that alternative energy sources require loads of land, and you’re probably wondering how this compares to land usage for fossil fuel (i.e. coal, oil, natural gas) and nuclear power generation. Well, a typical 1000 megawatt coal fired power plant occupies about 148 million square feet. This translates to around 148,000 square feet per megawatt, which is just over two and a half football fields per megawatt. As for a 1000 megawatt nuclear power plant, we’re talking about 28 million square feet typically occupied by an operating plant, and that translates to about 28,000 square feet per megawatt, or a little less than half a football field per megawatt.
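The land-use figures discussed above can be tallied in one place with a short Python sketch (the football field is taken as 360 feet by 160 feet, end zones included; the wind model is the circle of empty land with radius equal to the five-diameter spacing):

```python
# Rough land use per megawatt for the power sources discussed above.
import math

FOOTBALL_FIELD_SQFT = 360 * 160          # 57,600 sq ft, end zones included

# Wind: 2 MW turbine, 260 ft rotor, NREL minimum five-diameter spacing,
# modeled as a circle of empty land around each turbine.
spacing_ft = 5 * 260                     # 1,300 ft
wind_per_mw = math.pi * spacing_ft**2 / 2

# Solar PV: 6.4 acres per MW (NREL estimate); 43,560 sq ft per acre.
solar_per_mw = 6.4 * 43_560

# Typical plant footprints divided by 1,000 MW of capacity.
coal_per_mw = 148_000_000 / 1_000
nuclear_per_mw = 28_000_000 / 1_000

for name, sqft in [("wind", wind_per_mw), ("solar PV", solar_per_mw),
                   ("coal", coal_per_mw), ("nuclear", nuclear_per_mw)]:
    fields = sqft / FOOTBALL_FIELD_SQFT
    print(f"{name:>8}: {sqft:>12,.0f} sq ft/MW  ({fields:.1f} football fields)")
```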
Math established, it’s a hands down victory for fossil fuel and nuclear plants compared to wind turbine and solar energies when it comes to land usage. Last time I checked tillable land acreage was going down, not up, around cities where electricity demand is highest. Do we start pushing farther outward to build wind turbine and PV farms on vast expanses of land currently occupied by forests or used to grow our food? Which would you rather do, eat or have electricity?
This week we’re continuing our discussion on alternative energy by introducing another voice. A couple of articles ago you were urged to seek a second opinion on the realities of alternative energy. The following Washington Post article by Robert Bryce is another attempt to round out the picture. Consider it your third professional opinion on the matter…
Five myths about green energy
By Robert Bryce
Americans are being inundated with claims about renewable and alternative energy. Advocates for these technologies say that if we jettison fossil fuels, we’ll breathe easier, stop global warming and revolutionize our economy. Yes, “green” energy has great emotional and political appeal. But before we wrap all our hopes — and subsidies — in it, let’s take a hard look at some common misconceptions about what “green” means.
1. Solar and wind power are the greenest of them all.
Unfortunately, solar and wind technologies require huge amounts of land to deliver relatively small amounts of energy, disrupting natural habitats. Even an aging natural gas well producing 60,000 cubic feet per day generates more than 20 times the watts per square meter of a wind turbine. A nuclear power plant cranks out about 56 watts per square meter, eight times as much as is derived from solar photovoltaic installations. The real estate that wind and solar energy demand led the Nature Conservancy to issue a report last year critical of “energy sprawl,” including tens of thousands of miles of high-voltage transmission lines needed to carry electricity from wind and solar installations to distant cities.
Nor does wind energy substantially reduce CO2 emissions. Since the wind doesn’t always blow, utilities must use gas- or coal-fired generators to offset wind’s unreliability. The result is minimal — or no — carbon dioxide reduction.
Denmark, the poster child for wind energy boosters, more than doubled its production of wind energy between 1999 and 2007. Yet data from Energinet.dk, the operator of Denmark’s natural gas and electricity grids, show that carbon dioxide emissions from electricity generation in 2007 were at about the same level as they were back in 1990, before the country began its frenzied construction of turbines. Denmark has done a good job of keeping its overall carbon dioxide emissions flat, but that is in large part because of near-zero population growth and exorbitant energy taxes, not wind energy. And through 2017, the Danes foresee no decrease in carbon dioxide emissions from electricity generation.
2. Going green will reduce our dependence on imports from unsavory regimes.
In the new green economy, batteries are not included. Neither are many of the “rare earth” elements that are essential ingredients in most alternative energy technologies. Instead of relying on the diversity of the global oil market — about 20 countries each produce at least 1 million barrels of crude per day — the United States will be increasingly reliant on just one supplier, China, for elements known as lanthanides. Lanthanum, neodymium, dysprosium and other rare earth elements are used in products from high-capacity batteries and hybrid-electric vehicles to wind turbines and oil refinery catalysts.
China controls between 95 and 100 percent of the global market in these elements. And the Chinese government is reducing its exports of lanthanides to ensure an adequate supply for its domestic manufacturers. Politicians love to demonize oil-exporting countries such as Saudi Arabia and Iran, but adopting the technologies needed to drastically cut U.S. oil consumption will dramatically increase America’s dependence on China.
3. A green American economy will create green American jobs.
In a global market, American wind turbine manufacturers face the same problem as American shoe manufacturers: high domestic labor costs. If U.S. companies want to make turbines, they will have to compete with China, which not only controls the market for neodymium, a critical ingredient in turbine magnets, but has access to very cheap employees.
The Chinese have also signaled their willingness to lose money on solar panels in order to gain market share. China’s share of the world’s solar module business has grown from about 7 percent in 2005 to about 25 percent in 2009.
Meanwhile, the very concept of a green job is not well defined. Is a job still green if it’s created not by the market, but by subsidy or mandate? Consider the claims being made by the subsidy-dependent corn ethanol industry. Growth Energy, an industry lobby group, says increasing the percentage of ethanol blended into the U.S. gasoline supply would create 136,000 jobs. But an analysis by the Environmental Working Group found that no more than 27,000 jobs would be created, and each one could cost taxpayers as much as $446,000 per year. Sure, the government can create more green jobs. But at what cost?
4. Electric cars will substantially reduce demand for oil.
Nissan and Tesla are just two of the manufacturers that are increasing production of all-electric cars. But in the electric car’s century-long history, failure tailgates failure. In 1911, the New York Times declared that the electric car “has long been recognized as the ideal” because it “is cleaner and quieter” and “much more economical” than its gasoline-fueled cousins. But the same unreliability of electric car batteries that flummoxed Thomas Edison persists today.
Those who believe that Detroit unplugged the electric car are mistaken. Electric cars haven’t been sidelined by a cabal to sell internal combustion engines or a lack of political will, but by physics and math. Gasoline contains about 80 times as much energy, by weight, as the best lithium-ion battery. Sure, the electric motor is more efficient than the internal combustion engine, but can we depend on batteries that are notoriously finicky, short-lived and take hours to recharge? Speaking of recharging, last June, the Government Accountability Office reported that about 40 percent of consumers do not have access to an outlet near their vehicle at home. The electric car is the next big thing — and it always will be.
5. The United States lags behind other rich countries in going green.
Over the past three decades, the United States has improved its energy efficiency as much as or more than other developed countries. According to data from the Energy Information Administration, average per capita energy consumption in the United States fell by 2.5 percent from 1980 through 2006. That reduction was greater than in any other developed country except Switzerland and Denmark, and the United States achieved it without participating in the Kyoto Protocol or creating an emissions trading system like the one employed in Europe. EIA data also show that the United States has been among the best at reducing the amount of carbon dioxide emitted per $1 of GDP and the amount of energy consumed per $1 of GDP.
America’s move toward a more service-based economy that is less dependent on heavy industry and manufacturing is driving this improvement. In addition, the proliferation of computer chips in everything from automobiles to programmable thermostats is wringing more useful work out of each unit of energy consumed. The United States will continue going green by simply allowing engineers and entrepreneurs to do what they do best: make products that are faster, cheaper and more efficient than the ones they made the year before.
Robert Bryce is a senior fellow at the Manhattan Institute. His fourth book, “Power Hungry: The Myths of ‘Green’ Energy and the Real Fuels of the Future,” will be out Tuesday, April 27.
To visit the Washington Post article above go to: http://www.washingtonpost.com/wp-dyn/content/article/2010/04/23/AR2010042302220.html
Last week we talked about convective heat transfer and how hot pavement in a parking lot gives up its heat to the environment. But how does the pavement get hot to begin with? This week we’ll discuss radiant heat transfer to find out.
The sun is a huge nuclear furnace, separated from the earth by 93 million miles of space. The space between is a near-perfect vacuum, almost completely devoid of matter. Without intervening solid, liquid, or gaseous matter between the two, heat transfer by conduction or convection can’t occur. The heat that we feel on earth is actually generated when surfaces here absorb electromagnetic energy waves emitted by the sun. Although these waves have traveled through millions of miles of space, they have not lost their punch. Our eyes perceive some of them as sunshine, but many others are not visible. Even if we can’t see them, our bodies often perceive them as heat.
But radiant heat transfer isn’t a phenomenon exclusive to the sun. It can also occur when something is on fire. Intense fires can transmit tremendous amounts of radiant energy across significant distances, and can even cause combustible materials nearby to burst into flame without any direct contact. A line of sight between the source of heat and the receiving object is all that is required, because radiation travels in straight lines; it can’t bend around corners.
In order to calculate radiant heat transfer, we still must consider the temperature difference between the bodies, as well as the area of heat transfer, just as we did when considering the cases of conductive and convective heat transfer. But since there is no conduction or convection activity taking place, we need not concern ourselves with thermal conductivity or convection coefficients. Instead, we have to consider something called the Stefan-Boltzmann constant, a nifty little number that looks like this: 0.000000057 Watts/m²K⁴. It was discovered in 1879 by a scientist named Jozef Stefan and later derived by his student, Ludwig Boltzmann, from the principles of thermodynamics. Now remember from our discussion last week that the unit “K” means Kelvin (°C + 273.15).
Now, ideal radiant heat transfer problems involve calculations that need only consider the Stefan-Boltzmann constant. By “ideal,” I mean that there is perfect emission of radiation by one object and perfect absorption of that radiation by another. But reality is not typically so kind, and real-world radiant heat transfer calculations usually require more than just the Stefan-Boltzmann constant. They also include terms like emissivity factors and geometric factors. What are those? Read on.
Emissivity factors relate to how well objects actually emit and absorb radiation compared to the ideal case. For example, a shiny object doesn’t absorb radiant energy as well as a dull, black object. Geometric factors are included in radiant heat transfer calculations to account for the shapes and relative orientation of the objects emitting and receiving radiation. For example, have you ever noticed how the sun feels hotter at noon than it does at sunset? That’s because a surface squarely facing the emitting source receives more radiation than one oriented at a glancing angle.
Just to give you a basic idea of how radiant heat transfer calculations work, let’s consider an ideal situation. Suppose you own a store building with a flat roof. The store is right on the equator and it’s the vernal equinox. The roofing material is dull black, measures 20 meters by 10 meters, and it absorbs radiant energy like a sponge. But today is a dark, cloudy day, and the temperature of the roof is a cool 25°C. Now, at some point in your life I’m sure you’ve seen a documentary where scientists declared that the surface temperature of the sun is a blistering 5,400°C. Keeping this in mind, if the sun were to suddenly pop out of the clouds directly overhead at high noon, what would be the amount of radiant heat it would transfer to the roof?
Well, according to Jozef Stefan, the radiant heat transfer rate can be calculated to be:
Heat Flow = (The Stefan-Boltzmann Constant) ×
(The Area of the Roof) ×
((The Sun’s Temperature)⁴ – (The Initial Roof Temperature)⁴)
Now the terms we’ll need to plug into the heat flow calculation above are found as follows. The sun’s temperature is: 5,400°C + 273.15 = 5,673.15 K. The temperature of the roof is: 25°C + 273.15 = 298.15 K. The area of the roof is: 20 meters × 10 meters = 200 square meters (m²). So, the heat transfer rate is:
Heat Flow = (0.000000057 Watts/m²K⁴) × (200 m²) × ((5,673.15 K)⁴ – (298.15 K)⁴)
= 11,808,605,250 Watts
This would be the maximum rate of heat transfer that the roof could absorb at the instant the sun popped out of the clouds.
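The ideal calculation above maps directly into a few lines of Python (using the rounded Stefan-Boltzmann constant quoted in the text):

```python
# Ideal radiant heat transfer (perfect emitter and absorber), following
# the roof example above. Uses the text's rounded constant.
SIGMA = 0.000000057   # Stefan-Boltzmann constant, W/(m^2 K^4), rounded

def radiant_heat_flow(area_m2, t_hot_c, t_cold_c):
    """Ideal radiant heat flow in watts between two temperatures."""
    t_hot = t_hot_c + 273.15      # Celsius to Kelvin
    t_cold = t_cold_c + 273.15
    return SIGMA * area_m2 * (t_hot**4 - t_cold**4)

# 20 m x 10 m roof at 25 C, sun's surface at 5,400 C:
roof = radiant_heat_flow(area_m2=20 * 10, t_hot_c=5400, t_cold_c=25)
print(f"{roof:,.0f} watts")       # roughly 11.8 billion watts
```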
From our example we can conclude that even in less than ideal conditions, radiant energy from the sun has the potential to generate tremendous amounts of heat on the surface of the earth. Much of this heat drives our weather as a result of convective heat transfer that takes place between the earth’s surface and our atmosphere.
That wraps things up for our discussion of heat transfer in mechanical engineering. Next week we’ll talk about the importance of vibration analysis in design. Remember what happened to Jodie Foster’s character in the movie Contact when the space/time device she was in began to shake violently? That’s the kind of thing vibration analysis seeks to prevent!