Didaktikogenic Physics Misconceptions
Student misconceptions induced by teachers and textbooks.
by Donald E. Simanek
In medicine there's a term "iatrogenic disease" for "doctor-caused ailments". In institutionalized education there's a similar phenomenon: "teacher- and textbook-caused misconceptions", which we will call "didaktikogenic misconceptions."
Student misconceptions can arise at any level of formal instruction in any subject. They are more pervasive than some realize. Why do they persist? Why do they often persist strongly in spite of later instructional attempts to correct them? Here are a few reasons.
I am not concerned here with misconceptions that students have before they even take academic courses, only with those that they acquire when taking a physics course.
Empty space

I recall from my childhood schooling that textbooks often defined space as "the absence of matter". Later they defined matter as "that which fills space". How enlightening! You might as well say "space is nothing" and "matter is something". These slogans are no more than "empty" profundities. Yet I can only imagine the number of students who have given these responses on exams and were pleased to get credit for "right" answers.
Much confusion arises from the way some textbooks define "weight" inconsistently in different situations. Early on, many books define weight as "the size of the gravitational force acting on the body," or W = mg at the surface of the earth. Then in a later chapter an astronaut in a space shuttle orbiting a few hundred miles above the earth's surface is said to be experiencing "weightlessness." A simple calculation shows that the gravitational force at that height (200 miles) is not zero, but is only about 10% less than at the earth's surface. So by the above definition of weight, objects in the shuttle weigh only about 10% less than they did on the surface of the earth.
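The check needs only Newton's inverse-square law, since the masses and G cancel in the ratio. Here is a minimal sketch, with round values assumed for the earth's radius and the shuttle's altitude:

```python
# A quick check of the shuttle "weightlessness" claim, using only Newton's
# inverse-square law. Assumed round values: mean earth radius 6371 km,
# shuttle altitude 322 km (about 200 miles).
R = 6371e3   # m, mean radius of the earth (assumed round value)
h = 322e3    # m, about 200 miles up

ratio = (R / (R + h)) ** 2   # F ~ 1/r^2, so masses and G cancel in the ratio
print(f"gravitational force at altitude / at surface = {ratio:.3f}")
print(f"reduction: {(1 - ratio) * 100:.1f}%")
```

Clearly not zero, which is the whole point: orbital "weightlessness" has nothing to do with the absence of gravitational force.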
To make matters worse, in the chapters on fluids, when discussing Archimedes' principle, the textbook may speak of measuring the "weight of a body in air" and the "weight of the same body immersed in liquid." The difference between these two, called the "loss of weight", is then used to calculate the density of the body. But the gravitational force on the body in the lab is the same whether it's immersed in air or in any liquid. It doesn't "lose weight" in the W = mg sense in these experiments.
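The "loss of weight" is really the buoyant force, equal to the weight of displaced water, and that is what makes the density calculation work. A minimal sketch with hypothetical scale readings:

```python
# Archimedes density calculation as the textbooks describe it.
# The scale readings below are made-up illustrative numbers.
rho_water = 1000.0    # kg/m^3
W_air = 7.84          # N, reading with the body hanging in air (hypothetical)
W_immersed = 4.90     # N, reading with the body immersed in water (hypothetical)

buoyant = W_air - W_immersed          # the so-called "loss of weight" is the buoyant force
rho_body = (W_air / buoyant) * rho_water
print(f"density of the body: {rho_body:.0f} kg/m^3")
```

Nothing in this computation requires the gravitational force on the body to change; only the supporting force changes.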
Weight can be defined in several ways, but textbooks ought not to use incompatible definitions without warning or explanation. The last two examples make sense only if we define weight as "the size of the force required to support a body in equilibrium in its environment (frame of reference)." In my opinion, this should be the definition introduced first, and then the textbook should stick to it.
Perhaps the resolution of this dilemma lies in giving distinguishing names to different definitions of weight. John Denker proposes new names to distinguish two definitions of gravity: framative gravity, defined by reference to a particular reference frame, and barogenic gravity, due solely to Newton's law of gravitation. Framative gravity is measured by noting the acceleration of a falling body in a particular reference frame. Barogenic gravity is measured from the acceleration of a falling body in an inertial (non-accelerating) reference frame. See Weight, Gravitational Force, Gravity, g, Latitude, et cetera.
Has anyone ever seen a visible force? All forces are invisible, since they aren't "things"; they are concepts. We see only phenomena, such as motion, and use the force concept to account for them. Yet textbooks say such things as "Gravity is an invisible force..." Is the word "invisible" supposed to tell us anything useful here?
Canceling forces

We read in a high school text that a body is acted upon by two equal and oppositely directed forces, which "cancel each other". Now such a situation does result in a zero net force on the body, but what does "cancel" mean? It does not mean that the individual forces are removed or destroyed, which is the colloquial meaning of "cancel". Though the net force is zero, the individual forces are still acting, as can be readily demonstrated by removing one of them and observing what happens to the body under the action of the other one. Someone could probably defend this use of "cancel", but why bother? It's a phrase we can easily get along without in physics. Just say that the two forces sum to zero, and that it is the net (sum) of all forces acting upon a body that determines its acceleration.
Action and reaction forces

Newton's third law says "When body A exerts a force on body B, then B exerts an equal and oppositely directed force on A". However, some books "clarify" this perfectly good statement of the law by saying that these two forces are called an "action-reaction" pair. No real harm here, but what use is this name? Lately I've had correspondence with people who speak of "action forces" and "reaction forces" as if they were individually something separate and special. I ask them to define "action force" and they can't. Of course they can't. If you have two forces that qualify as an action/reaction pair, how can you say which is the action force and which is the reaction force? They have equal status. Neither is more "important" than the other; they are at all times equal and oppositely directed. Neither can be thought of as "causing" the other. Later in this document I address "cause and effect" language as a useless and unnecessary concept in physics.
Spooky light rays
Many introductory physics books show lens ray diagrams in which the light rays mysteriously change direction along the midplane of the lens. In reality the change in direction always occurs at the lens surfaces, where there's a discontinuity in the index of refraction.
This example is from a text often used in introductory college physics courses, and also comes in a nearly identical "high school edition". We don't identify the title, author, or publisher, to protect the guilty.
In your eye
Textbooks frequently misrepresent the optics of the eye. They often show rays bending at the eye lens, but not at the cornea. In fact, most of the bending occurs at the cornea. Body tissues are mostly water, including the transparent tissues and fluids of the eye, so their refractive index is nearly the same as that of water. Most of the refraction therefore occurs at the air-cornea interface: the change in index of refraction at the lens surfaces is only about one tenth as great as at the front surface of the cornea. The eye lens acts only as a corrective element; the strongest lens action is at the cornea.
This horrible example is typical, and comes from a college level physics text. None of the rays shown in this eye diagram exhibit refraction, yet clearly they ought to refract, since none of them crosses an interface normally. ["Normal" means perpendicular to the interface.] The diagram purports to show the inversion of the image, and the size reduction of the image. But if a complete ray diagram is constructed accurately, the sizes are seen to be greatly in error.
The diagram above compares the eye with the camera. Yet the diagram for the camera doesn't show the same sort of rays. The camera diagram shows no rays passing through the center of the lens; the eye diagram shows only rays through the center of the lens. The relative sizes of object and image are also vastly incorrect in the camera diagram.
The diagram to the right is curious. It seems to be showing the letter E in perspective, so one can't tell whether the image is inverted up/down on the retina. But it certainly appears that it is not inverted left-right, and it ought to be. The rays are shown bending more at the cornea than at the lens, and that's as it should be. The object is unreasonably close to the eye, but for that distance the ratio of image size to object size is reasonable. And what's that extra "E" behind the retina?
Here are some more bad examples:
Our final bad example, shown to the right, is one of the most screwed-up I have seen in a published book. It is from p. 167 of The Way Science Works, Macmillan, 1995 (The Science Museum, London).
The red ray passes from air to cornea and through the lens without deviation, which is correct, for it is an axial ray and crosses all the surfaces normally (at an angle of incidence of 0°). Then this ray mysteriously decides to refract upward! It should have continued without refraction. The other two rays undergo mysteriously unphysical deviations. The point of this picture was apparently to talk about the color-sensitive receptor cells in the retina. These are shown neatly organized in the inset, though they are not so neatly arrayed in the actual retina. But that's no excuse for getting the rest of the picture wrong, too.
In the lower part of the picture we see again the common blunder of drawing just two rays, both of which are shown with no deviation occurring at any of the surfaces! If one draws the diagram properly, one finds that the size of the image of the candle on the retina (actually shown a bit in front of the retina) is incorrect. It would be larger than shown. Also, the candle itself is about one inch high (where do you buy such candles?) and only about three inches in front of the eye. Most eyes can't focus that close. What is this diagram trying to show? Only one thing: that the image on the retina is inverted. But it totally fails to tell the reader why the image is inverted.
Mistaken and misleading drawings of the eye have been around for a very long time. Here's one, redrawn for clarity, from Leonardo da Vinci's notebooks. Poor Leonardo struggled his whole life to understand why the image on the retina is inverted, yet we "see" it right side up. He returns to this question many times in his notebooks. This drawing is wrong in nearly every possible way.
After all of these bad examples, we should tell you how eye diagrams ought to be presented. A more realistic diagram is shown below, from Hecht and Zajac’s Optics, Addison-Wesley, 1979. (I've added color for clarity.)
Even this shows in (b) an object unrealistically close to the eye. Now you may argue that such pictures are “schematic” only. If so, textbooks should label them as such and explain what’s unrealistic about them. Authors owe it to students who may enter the medical professions to communicate this information correctly.
This nice picture (shown to the right) from Coletta's College Physics is one of the best I've found in introductory textbooks. It should be an inspiration to other authors. It solves the problem of object distance by showing two pictures at different scales. The rays from point P at the top of the tree are nearly parallel as they enter the eye. It correctly shows only a very small amount of refraction at the lens surfaces, with most of the refraction at the cornea.
The index of refraction of the aqueous humor (AC) and vitreous humor (VC) is 1.333, while the index of refraction of the lens is 1.45. The refractive power of the cornea is 41.6 diopters, while the refractive power of the lens is only 30.5 diopters. Combined, the entire optical system of the eye has a refractive power of 66.6 diopters.
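Notice that 41.6 + 30.5 is 72.1, not 66.6; powers of separated elements do not simply add. The standard two-element formula P = P₁ + P₂ − (d/n)P₁P₂ recovers the combined power, if one assumes a typical cornea-to-lens separation. A sketch (the separation d and intervening index n below are assumed typical values, not from the text):

```python
# Combined power of two separated refracting elements:
#   P = P1 + P2 - (d/n) * P1 * P2
# The powers are those quoted above; d and n are assumed typical values.
P_cornea = 41.6   # diopters
P_lens = 30.5     # diopters
d = 0.0057        # m, effective cornea-to-lens separation (assumed)
n = 1.336         # index of the aqueous humor between them (assumed)

P_eye = P_cornea + P_lens - (d / n) * P_cornea * P_lens
print(f"combined refractive power: {P_eye:.1f} diopters")
```

With those assumed values the formula lands close to the quoted 66.6 diopters, which is why the textbook numbers are mutually consistent even though they don't add.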
Good pictures can be found in Optics and Vision by Pedrotti, Leno S. and Pedrotti, Frank L., McGraw-Hill, 1998, p. 201-2. See also Mathew Alpern, “The Eyes and Vision,” Section 12 in Handbook of Optics, McGraw Hill, 1978.
The lost rays
Diagrams illustrating passage of light through lenses and light reflected from curved mirrors often show thick lenses, but draw the rays as if the lens were thin. Some even go so far as to deceive by showing only two rays. As soon as you draw some of the "missing" rays, you discover that the diagram becomes inconsistent. The diagram is lying to you.
This example purports to show reflection of light rays from a spherical mirror. It looks good, until you try to draw a ray from the tip of the candle flame, then through the focal point, F, and then emerging parallel to the optic axis (CF). Go ahead, draw it on your computer monitor screen, or print this page and draw it on the paper. Whoops! This ray should then pass through the image of the tip of the candle flame, but it doesn't! This mirror has severe spherical aberration (of course). But the author doesn't mention spherical aberration until six pages later.
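The aberration is easy to quantify. For a spherical mirror of radius R, a ray parallel to the axis at height h reflects and crosses the axis at distance R − R/(2 cos θ) from the mirror, where sin θ = h/R. A minimal sketch:

```python
import math

# Where a ray parallel to the optic axis, at height h, crosses the axis after
# reflecting from a spherical mirror of radius R. Paraxial rays (h -> 0) cross
# at the focal distance R/2; marginal rays cross closer to the mirror. That
# mismatch is spherical aberration.
def axis_crossing(h, R):
    theta = math.asin(h / R)              # angle of incidence at the mirror
    return R - R / (2 * math.cos(theta))  # distance from the mirror vertex

R = 1.0
for h in (0.01, 0.2, 0.4):
    print(f"h = {h:.2f}: crosses axis at {axis_crossing(h, R):.4f}  (paraxial focus = {R/2})")
```

Rays far from the axis miss the paraxial focus noticeably, which is exactly why the "missing" rays in such textbook diagrams refuse to pass through the drawn image point.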
Errors of this sort may in some cases be the result of superimposition of a "thin lens" ray diagram onto a schematic "icon" representing a lens or mirror. If so, the accompanying text ought to say so, to avoid student misconceptions.
Seeing the light of the spectrum
Diagrams of the electromagnetic spectrum are usually drawn with a logarithmic scale of wavelength. Students often miss that fact, and think that the scale is linear. Even worse, they think the picture would look much the same if it were plotted against frequency. Some couldn't tell you, with the textbook closed, whether it is frequency or wavelength on that axis.
Textbooks and teachers that go in for "gee-whiz" trivia sometimes suggest that it's not a coincidence that our eyes are most sensitive to the wavelengths at the peak of the solar spectrum (generally given as the color yellow-green). This suggests one of two simplistic interpretations to the student mind: (1) the human eye evolved to take most efficient advantage of sunlight, or (2) it's an example of the wisdom of a creator who made the sun and the eye and everything else as "perfect" or "efficient" as possible. Both conclusions are unwarranted, and not in accordance with facts. Many animals have their vision peaked at other wavelengths than humans do. And which peak, anyway? The peak on a linear frequency scale, the peak on a log scale of frequency, or on some other kind of scale, such as a linear or log scale of wavelength? Should it be a scale of total radiant energy of a given color, or a scale representing individual photons' energies? All have different locations of the maximum. As for the efficiency-of-design argument, one only has to note that plants are generally green. This means they reject (reflect and scatter) green light, near the peak radiation band of the sun usually given in textbooks.
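Wien's displacement law makes the point concrete: the per-wavelength and per-frequency forms of the blackbody spectrum peak in quite different places. A sketch, assuming the usual round value for the solar surface temperature:

```python
# Wien's displacement law in two forms. The constants are the standard ones;
# the solar surface temperature is the usual round value.
T = 5778.0                # K, approximate solar surface temperature
c = 2.998e8               # m/s
b_wavelength = 2.898e-3   # m*K, Wien constant for the per-wavelength spectrum B_lambda
b_frequency = 5.879e10    # Hz/K, Wien constant for the per-frequency spectrum B_nu

lam_peak = b_wavelength / T             # wavelength at which B_lambda peaks
lam_of_nu_peak = c / (b_frequency * T)  # wavelength at which B_nu peaks
print(f"B_lambda peaks near {lam_peak * 1e9:.0f} nm (green)")
print(f"B_nu peaks at a wavelength near {lam_of_nu_peak * 1e9:.0f} nm (near infrared)")
```

Same sun, same spectrum, two "peaks" hundreds of nanometers apart, so the claim that our eyes sit "at the peak" depends entirely on the choice of axis.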
The reasons for seasons

The video A Private Universe shows interviews with students just emerging from Harvard's graduation ceremonies, having just received their degrees. They were asked some elementary questions about science. One graduate, who said his degree was in physics, was asked "Why is it generally warmer in Boston in the summer than in the winter?" That question may never have been posed in any of his classes, even college classes. But let's examine this question. Many persons answer "because the earth is closer to the sun in the summer". In fact, the earth is closest to the sun in early January. Let's examine some other answers:
Ok, so let's accept that the earth's axis direction is nearly constant, and tilted with respect to the ecliptic plane. Now what? We follow with a geometric analysis to show that this causes the duration of daylight to be greater in the summer. It also causes the average angle of the sun's rays relative to the earth's surface to be greater during summer days. Both of these contribute to greater warming of the earth's surface. Which is more important? More geometry, and it isn't easy to do. It turns out that both are significant, so you can't say one is the full explanation. You can say "the reason is the axial tilt" for both of them, but that's a cop-out to avoid understanding how the tilt is responsible. Who said this was a simple question, suitable for elementary school students?
The explanation of how angle of the rays affects warming of earth's surface must be based on concepts of energy and the importance of energy/surface area, not to mention energy conservation and the inverse square law of illumination. Perhaps it would be appropriate to do a little experiment with a heat lamp, blackened cardboard, with a thermometer glued on the back (unilluminated side) of the cardboard. Then read the initial rate of temperature rise for different angles of the cardboard with respect to the lamp. Now that's an appropriate exercise for pre-high school science courses.
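The model behind that experiment is just the cosine law of illumination: the power intercepted per unit area of the cardboard falls off as the cosine of the tilt from face-on. A minimal sketch:

```python
import math

# The cardboard intercepts a beam of fixed intensity; the power per unit area
# of cardboard is proportional to cos(tilt), where tilt = 0 means face-on.
def relative_heating(tilt_deg):
    return math.cos(math.radians(tilt_deg))

for tilt in (0, 23.5, 47, 70):
    print(f"tilt {tilt:5.1f} deg -> relative heating {relative_heating(tilt):.2f}")
```

The initial rate of temperature rise measured in the experiment should track this cosine factor, which is the connection between ray angle and energy per unit surface area.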
And then there are details such as the exchange of thermal energy by the atmosphere, oceans and land surfaces. There's the well-known "lag" effect due to these (especially the oceans) acting as heat sinks. This causes the seasonal (and daily) variations of temperature to lag behind the variations in insolation (radiative power received at earth's surface from the sun).
Fortunately these are relatively minor perturbations when considering the question of approximate average temperatures at different seasons and different hemispheres, and the question posed above certainly didn't expect anyone to go into that detail. Sometimes you have to know when to terminate an answer to a general question.
Now think back to your own education. Was this question ever presented in class with this much attention to critical and essential details? Certainly not in the elementary grades. Can you point to even one textbook (at any level) that examines the question this carefully? Just try to find out from a literature search the numeric value of relative contribution of (1) ray tilt, and (2) duration of daylight, to the surface warming and average temperature variations of the seasons. By the time you get to the university, Profs assume that you already know all about this so they don't bother to discuss it. (Too trivial!) No wonder misconceptions persist.
An interesting related question comes to mind: "Why is it generally cooler in the mountains than on a flat desert?" Most will say "Altitude is the reason." Ok, but exactly how is altitude causing this? What if the desert is at a high altitude? Few students will consider the fact that in a rugged mountain range the solar energy is distributed over a much larger area of rocks and soil than it is on a relatively flat desert surface. How large a factor might this be? Vegetation must be considered as well, and reflective properties of surface cover such as snow.
Relatively incorrect

One often reads in textbook discussions of relativity (and in those gee-whiz books for laypersons) that a person in a constantly accelerating spaceship far removed from gravity fields is in a frame of reference indistinguishable from being in a gravitational field. Some textbooks prefer the example of a person in a falling elevator who assumes that gravity has suddenly been shut off, and who has no way to tell otherwise. These examples are used to illustrate the equivalence of gravity to acceleration in a frame of reference, called "Einstein's equivalence principle".
Einstein's equivalence principle is important and useful, but these examples of it are flawed. The two situations may be indistinguishable to the passenger's direct senses, but not to sensitive acceleration sensors. The gravitational field has divergence—its field lines aren't parallel and its strength isn't quite constant over space. The accelerating spaceship has an acceleration field that can be uniformly constant within the vehicle. The difference between these two cases is detectable with appropriate instruments. So when textbooks say that a person in a closed box frame of reference has no way to tell the difference between the effects of constant acceleration and a gravitational field they are perpetrating yet another didaktikogenic misconception.
Likewise, the person in a freely falling elevator may feel weightless, because the human proprioceptive sensory system is not very sensitive to small forces and small force differences. But with sensitive instruments in the elevator the passenger could detect that there's still a very small relative difference in acceleration of objects inside the elevator as a function of position within the elevator, which would conclusively rule out a field-free situation. This is due to the divergence of the gravitational field lines.
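How big is that residual effect? The tidal acceleration between the top and bottom of an elevator of height h at distance r from the earth's center is approximately 2GMh/r³. A sketch, with an assumed 3 m cab:

```python
# Difference in gravitational acceleration between the floor and ceiling of a
# freely falling elevator near the earth's surface: delta_g ~ 2*G*M*h / r^3.
# The elevator height h is an assumed illustrative value.
G = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24     # kg, mass of the earth
r = 6.371e6      # m, distance from the earth's center (near the surface)
h = 3.0          # m, height of the elevator cab (assumed)

delta_g = 2 * G * M * h / r**3
print(f"tidal acceleration across the cab: {delta_g:.1e} m/s^2")
```

That comes out to roughly a microgee: far below what the passenger can feel, but well within reach of sensitive accelerometers, which is exactly the instrument-detectable signature of a real gravitational field.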
If you move far from the gravitating object, so that the field-line divergence is very small, then the field itself is also very small, so it's difficult to "save the explanation" by that ploy. In the real universe you can never get entirely away from gravitational fields: there is always some field from something, and it always has some divergence.
Perhaps the textbook should have said "a uniform gravitational field". But how would you even create a gedanken experiment with a perfectly uniform zero divergence gravitational field? You'd need an infinite amount of mass!
One of my e-mail correspondents suggested that maybe it's impossible to create a perfectly uniform non-divergent acceleration field in a gravity free environment with real material objects. I suspect that's correct, but I'm not prepared to do the math here.
Albert Einstein used this example in one of his popular books on relativity. I think (hope?) he was only using the example as an illustrative analogy. (Another of those darned analogies!) Some textbook writers have morphed the analogy into a dogmatic principle, to be memorized for exams.
Most such examples seen in elementary courses aren't entirely wrong, and some could be "saved" with sufficient caveats and discussion. But if the explanation of a simple example gets too long or complex, the whole purpose of the simplifying example is lost.
I've had a number of teachers defend these textbooks with some rather far-fetched rationalizations. For a refreshing change, a correspondent just informed me (June 2007) that in Richard Feynman's "Six Not-So-Easy Pieces", page 130, Feynman addresses this very issue:
Now let's compare that with the situation in a spaceship sitting at rest on the surface of the earth. Everything is the same! You would be pressed toward the floor, a ball would fall with an acceleration of 1g, and so on. In fact, how could you tell inside a space ship whether you are sitting on the earth or are accelerating in free space? According to Einstein's equivalence principle there is no way to tell if you only make measurements of what happens to things inside!

I should read that book. Feynman was one sharp fellow. He thought before he spoke or wrote.
The spotted balloon
Elementary astronomy books like to illustrate cosmic expansion with a demonstration in which spots are painted on a balloon, and the balloon inflated. The spots represent galaxy clusters. This is presented as a curved two-dimensional analogy to what happens in a curved three-dimensional universe. It is useful to show that in such a universe, whatever galaxy you happen to be in, all the others recede from you, and their speed of recession is greater the farther away they are. Thus there is no preferred "center" in this process, and no way to determine one.
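The balloon's key property is easy to verify numerically: under uniform expansion, every observer sees every other galaxy's distance grow by the same factor, so recession speed is proportional to distance no matter which galaxy you call home. A one-dimensional sketch with made-up positions:

```python
# 1-D toy universe: the positions are made up; 'factor' is one step of
# uniform expansion (every coordinate scaled by the same amount).
positions = [0.0, 1.0, 2.5, 4.0, 7.0]
factor = 1.1
new = {p: factor * p for p in positions}

for home in positions:
    # separations to every other galaxy, before and after the expansion step
    growth = {round((new[p] - new[home]) / (p - home), 6)
              for p in positions if p != home}
    print(f"seen from the galaxy at {home}: every separation grew by {growth}")
```

Every "home" galaxy sees the identical law, so no observation from inside can single out a center, which is precisely what the balloon demonstration is meant to convey.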
Then some books characterize this as an example of the expansion of the space metric itself. That raises obvious and serious questions, and can give rise to misconceptions. The student wonders, "Does everything in this universe expand? Do measuring sticks expand? Do we, the observers, expand? Does the wavelength of light expand? If meter sticks expand also, how could we ever observe or measure the space expansion?"
John Denker provides a resolution of this in Expansion of the Universe: "The bottom line is that we can use rulers to define what we mean by distance, since the size of the ruler is determined by the size of the atoms that make it up. The size of atoms is determined by things like Planck's constant and the mass of the electron, which are not affected by gravity."
The answer of course is that the correct model is to use pennies taped to the balloon. When the rubber expands, the pennies do not. The rubber may place an itsy bitsy stress on the pennies, but they are strong enough to withstand it. The size of objects such as pennies and measuring sticks is determined by the size of atoms, and the number of atoms in each object. It is not determined by the size of the universe. So measuring the size of the expanding universe is in fact a perfectly well-defined exercise.
Deviant misconceptions

When my copy of the October 2002 issue of The American Journal of Physics arrived I noticed the very first item, a letter to the editor by Jan Hilgevoord: "The standard deviation is not an adequate measure of quantum uncertainty." Hilgevoord points out that the famous Kennard inequality ΔxΔp ≥ h/4π is not always an adequate expression of the Heisenberg uncertainty principle.
The problem arises from the fact that the uncertainties in this equation are often non-gaussian in profile. But I think the problem goes much deeper, and is not confined to quantum mechanics. There's a widespread misconception among students and teachers that the standard deviation is the best, proper, and only "right" way to express experimental uncertainties. They do not get this misconception from everyday experience; they get it from textbooks and teachers. Too many people look at the "bottom line" equations that they can "use" without bothering to read the fine print of how these things were derived and on what assumptions they were based. If they did, they'd find that the equations using standard deviation in error analysis are based on the assumption of "normal" or "gaussian" distributions of measurements. They'd also find, from reading more carefully, that if a distribution is somewhat (but not greatly) different from gaussian, the error equations still work rather well, because "the hypothesis of the normal distribution is a 'robust' hypothesis." That simply means that if the hypothesis isn't quite true, that fact doesn't affect the calculated results much, and in error analysis we usually don't need high precision in error estimates.
But... there's a sordid fact not generally appreciated by students. Certain common functions of quantities whose distribution is gaussian have distributions that are very significantly non-gaussian. If x has a gaussian distribution, x² does not. And 1/x certainly does not. So in such cases, the use of error rules based on gaussian distributions is not appropriate, and not correct.
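A quick Monte Carlo makes the point: draw x from a gaussian and look at x². A minimal sketch using only the standard library:

```python
import random
import statistics

# If x is gaussian, x**2 is not: its distribution is strongly skewed, so the
# mean and the median (and the most probable value) no longer coincide.
random.seed(1)  # fixed seed so the run is reproducible
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
sq = [x * x for x in xs]

print(f"x    : mean {statistics.fmean(xs):+.3f}, median {statistics.median(xs):+.3f}")
print(f"x**2 : mean {statistics.fmean(sq):+.3f}, median {statistics.median(sq):+.3f}")
```

For the gaussian x, mean and median agree by symmetry; for x² the mean (near 1) sits far above the median (near 0.45). Error formulae derived under the gaussian assumption have no business being applied to the second distribution.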
Then there's the more obvious point that if you haven't taken many repeated measurements, you simply do not know whether the data has a gaussian distribution, and you are not justified in using the formulae (often built into computerized data analysis programs) that assume gaussian distributions. Most students, and even some teachers, seem totally unaware of this inconvenient fact.
Much of the "error analysis" actually done (using standard deviations) by students in biology, chemistry, social science, and physics courses is improperly done, and is so much garbage. Reason: No one's brain was engaged. It had become a "by the rules" unthinking automated process. And I go further. Some of the published research papers are similarly flawed, because somewhere along the line the researchers received an inadequate education in this subject. Books on statistical treatment of errors usually discuss this thoroughly, but those summaries you find in the front of undergraduate lab manuals don't warn you about the limitations of the formulae they present, placing great emphasis on the "bottom line" equations. So students have this silly notion that if you don't use standard deviations in every case, it isn't "professional".
A juggler who weighs 252 pounds carries with him three juggling pins that weigh 6 pounds each. He comes to a bridge that has a precisely determined load limit of 260 pounds. How can he get himself and the juggling pins across the river without causing the bridge to collapse?

This version's use of "juggler" and "juggling pins" practically shouts that the answer is:
He juggles the pins while crossing the bridge; at any given time he will be supporting only one pin (with the other two in the air), so his net weight should be only 252 + 6 = 258 pounds. Therefore he can safely cross the bridge.
Unfortunately this obvious answer is wrong. Each time the juggler tosses a pin, his hand exerts an upward force on the pin, which is actually greater than the weight of the pin, for this force must accelerate the pin. Whenever he catches a pin he must also exert a force greater than the weight of the pin. Therefore during each toss and catch, the bridge must exert an upward force on the juggler greater than the combined weight of the juggler and one pin. If this force is more than two pounds greater than the pin weight, the bridge will surely collapse.
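Rough numbers make the point. To toss a pin to height h using a hand stroke of length d, the hand must supply an extra acceleration a = gh/d on top of supporting the pin's weight. A sketch with assumed (but plausible) toss numbers:

```python
# Peak support force while tossing one pin. The toss height h and the length
# of the accelerating hand stroke d are assumed illustrative values.
g = 9.8            # m/s^2
W_juggler = 252.0  # lb
W_pin = 6.0        # lb
h = 1.0            # m, height of the toss above the release point (assumed)
d = 0.3            # m, length of the upward accelerating stroke (assumed)

a = g * h / d                 # from v^2 = 2*a*d at release and v^2 = 2*g*h at the top
F_hand = W_pin * (1 + a / g)  # hand force on the pin during the toss, in lb
load = W_juggler + F_hand     # what the bridge must support at that moment
print(f"hand force during the toss: {F_hand:.0f} lb")
print(f"peak load on the bridge:    {load:.0f} lb  (limit: 260 lb)")
```

The peak load blows past the 260-pound limit by a wide margin, not by a whisker, so each toss (and each catch) dooms the bridge.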
It is also true that the time-averaged force the bridge must exert on the juggler is 270 pounds (252 + 3 × 6), the same as if the juggler were simply carrying the pins across the bridge. That alone suggests the bridge will collapse.
The correct answer: he can't safely cross while juggling. He must make more than one trip, carrying no more than one pin at a time (252 + 6 = 258 pounds, just under the limit).
Fly away, birdie
A clever farmer is transporting a large number of small birds in a cage loaded on the back of his pickup truck. He comes to a bridge that has a posted weight limit. He knows the weight of his truck, and that's under the limit. Even if you add the weight of the cage, it's ok. But the additional weight of the birds would make it overweight. So, knowing physics, he gets out of the truck, scares the birds so they all fly around in the cage, and quickly drives across the bridge before they come down again to sit in the bottom of the cage.

It's an interesting problem, even though it's a contrived example of a "practical" physics application. Some of the books that use this problem give a very misleading answer. They say that it wouldn't work because, whether the birds are flying or sitting, their weight is still "in" the cage that is "in" the truck, so their weight must be added to the total weight. Some go on to "explain" that the force the birds' wings exert downward on the air gets "transmitted" somehow to the bottom of the cage, perhaps by downdrafts, so the effective weight of birds and cage is the same whether they are flying or standing.
There's often a picture, showing an "open" cage with chicken-wire sides, so you can see the birds inside. Is all that downdraft due to flapping wings effective in transferring momentum to the bed of the truck? Or does some of the momentum get transferred elsewhere? That problem could be eliminated with a closed and sealed box of birds.
The explanation leaves students with the impression that whatever is going on in a closed box, its weight remains the same. But that simply is not so. Consider a beaker of liquid sitting on a scale. There's a lighter object stuck to the bottom. It comes unstuck and rises to the top. During the initial acceleration and final deceleration, the force of the beaker on the scale changes, and if the scale were sensitive enough and responsive enough, a slight "weight" change would be registered during those accelerations.
One need look no further than those little novelty toys with a ball containing a battery-driven motor with an eccentric weight. The ball moves around erratically, showing us that what's going on inside a closed system does matter. Put one of these on a balance scale and watch the weight fluctuate. [You may have to fasten or confine the ball to keep it on the pan.] Any process inside a closed system that changes the system's center of mass will also cause fluctuations in its weight as registered on a balance scale.
Baron Munchausen's Principle
Somewhat related to the previous example is the question of what would happen if you had a battery-driven electric fan in the back of an iceboat, to blow air on the sails to make the boat "go" on windless days. Some textbooks say this is an example of action/reaction forces, explaining it this way:
It won't work because of Newton's third law, and the fact that forces "internal" to the system add to zero by the third law, and therefore cannot affect the net force on the system. Only the net external force on the system determines the motion of the system.

Well, you only have to go into the lab and do some experiments to find out that this explanation doesn't fit the experimental facts. Put the fan and sail on a low-friction cart and see what happens. While this method of propulsion is rather inefficient, it sometimes does move the cart backward, sometimes forward, and sometimes no motion is seen, depending on the geometry of fan and sail. So what's wrong with the simple action/reaction explanation given in the previous paragraph?
Again, this isn't a cleanly closed system. Some air from the fan flows around the edges of the sail. Surrounding air is also "sucked into" the back of the fan. Also, if the boat is to move, it must move through the huge mass of surrounding air, and air drag must be considered in analyzing the problem. But there's an even simpler way to realize that these textbook explanations are simplistic. Suppose there were no sail at all, and the fan were blowing forward. Which way would the boat move? Backward. This is the principle of the propulsion system of swamp hovercraft that use a fan for propulsion. [Also consider jet aircraft.] It illustrates the importance of the surrounding air in such problems.
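A crude momentum-flux model shows why all three outcomes are possible. Assume (hypothetically) the fan throws a stream of air forward at the sail at mass rate mdot and speed v; the reaction on the cart is backward. What the air does when it reaches the sail decides the net force:

```python
# Idealized one-dimensional momentum model; all numbers are hypothetical.
# The fan's reaction on the cart is -mdot*v (backward).
# The sail's share of the incoming momentum flux is parameterized:
#   0 = the air misses the sail entirely
#   1 = the air stops dead at the sail
#   2 = the air bounces straight back (perfect reflection)

def net_force(mdot, v, sail_factor):
    """Net horizontal force on the cart (N), forward positive."""
    fan_reaction = -mdot * v            # backward push on the cart from the fan
    sail_push = sail_factor * mdot * v  # forward push on the cart from the sail
    return fan_reaction + sail_push

mdot, v = 0.5, 10.0  # kg/s and m/s, chosen arbitrarily
print(net_force(mdot, v, 0))  # no sail (or air misses it): net backward force
print(net_force(mdot, v, 1))  # air stops at the sail: forces cancel, no motion
print(net_force(mdot, v, 2))  # air reflected back: net forward force
```

Real geometry puts the sail somewhere between these extremes, and air spilling around the sail or being drawn into the fan shifts the balance, which is why the lab results depend so sensitively on the setup.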
Friction is a Drag
Two assertions about friction are sometimes seen in textbooks:
Assertion (2) is "sort of" true in the informal sense of "Whenever there's friction acting at a sliding surface, some thermal energy will be produced."
A bit unrealistic
This example is from J. W. Warren's book. Here are his comments:
Figure 11 is one variation of one of the most universally taught diagrams in elementary physics. It purports to represent the paths of the radiation from a radioactive substance when a magnetic field is directed downward through the plane of the figure. No such result could possibly be obtained. For a material that emits both α- and β-rays the radius of curvature of the path of the most energetic β-particles is typically a hundredth of that of the α-particles, whilst nearly all the β-particles will follow even more sharply curved paths. If the shield is thick enough and the hole through it is narrow enough to collimate the γ-rays then the β-rays will not pass through it, as a result of their deflection in the fringing magnetic fields. The diagram is very schematic indeed, and in practice is very misleading.
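Warren's claim about the relative curvatures is easy to check. Using r = p/(qB) with hypothetical but typical energies (a 5 MeV alpha particle and a 1 MeV beta particle), the alpha's path is curved far less:

```python
# Rough estimate of the ratio of radii of curvature in the same field B.
# r = p/(qB), so at fixed B the radius is proportional to p/q.
# Relativistic momentum is used since a 1 MeV beta particle is relativistic.
import math

M_ALPHA = 3727.4  # alpha-particle rest energy, MeV
M_E = 0.511       # electron rest energy, MeV

def momentum(ke, m):
    """Relativistic momentum (MeV/c) for kinetic energy ke and rest energy m (MeV)."""
    return math.sqrt((ke + m) ** 2 - m ** 2)

r_alpha = momentum(5.0, M_ALPHA) / 2  # alpha carries charge 2e
r_beta = momentum(1.0, M_E) / 1       # beta carries charge e
print(r_alpha / r_beta)               # roughly 70
```

The ratio comes out near seventy for these energies, consistent with Warren's "typically a hundredth" for the most energetic betas; less energetic betas curve even more sharply, as he notes.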
The leaky reservoir
A few textbooks still misinterpret the old experiment designed to illustrate the relation of liquid pressure to depth. It consists of a tank of water with several holes in the sides. Water streams out from these holes in parabolic trajectories. Leonardo da Vinci illustrated this in his notebooks.
The accompanying text explains that since the pressure is greatest at the bottom of the reservoir, the velocity of water emerging from the lowest hole is greatest and therefore that stream "goes the farthest". This error is compounded if the illustration actually shows that stream going farthest.
Here's an example from a web site offering "Good explanations for free downloading."
Liquids exert pressure due to distribution of their own weight. To see how liquids exert pressure, try the following experiment. Take three tins of different sizes or diameters. On tin number I, make three holes at the same height. On tin numbers II and III, make three holes at different heights. Place three long tapes to close the three holes on each on the tins. Now fill the tin with water. Remove the tapes quickly and observe the streams coming out of each of the holes.
You will observe the following
The error here is to incorrectly simplify a situation in which several variables influence the result. The speed of the stream from an orifice is given by Bernoulli's equation, and is v = (2gh1)1/2, where h1 is the depth of the hole below the water surface. The horizontal distance of the stream from the reservoir depends not only on the horizontal component of velocity, but also on how far the stream falls, h2, under the force due to gravity. The result is correctly given and illustrated in Sutton, Richard Manliffe, Demonstration Experiments in Physics. American Association of Physics Teachers, McGraw-Hill, 1938.
However, the top stream is drawn a bit strangely. All streams should exit horizontally, so the parabola should have a horizontal tangent at the exit point. Yet the diagram does correctly show where the streams hit the table.
While this is a deficient lecture-demonstration, because it can so easily mislead students into conceptual misunderstanding, it would make an excellent laboratory experiment. There students could take the time to deal with each of the relevant variables, and perhaps even design a strategy that would actually reveal the relation of pressure to depth.
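The correct analysis is short enough to sketch. Assuming a hypothetical tank filled to depth H, with the table level with the tank's bottom: a hole at depth h1 gives exit speed v = (2gh1)1/2, the stream falls h2 = H - h1, the fall time is t = (2·h2/g)1/2, so the range is x = v·t = 2·(h1·h2)1/2:

```python
# Range of the stream from a hole at depth h1 in a tank filled to depth H,
# with the catch surface level with the tank bottom. x = 2*sqrt(h1*(H - h1)).
import math

def stream_range(h1, H):
    """Horizontal distance (same units as H) traveled by the stream."""
    h2 = H - h1  # height the stream falls after leaving the hole
    return 2 * math.sqrt(h1 * h2)

H = 1.0  # tank depth, arbitrary units
for h1 in (0.25, 0.5, 0.75):
    print(h1, round(stream_range(h1, H), 3))
```

The range is symmetric in h1 and h2, so the middle hole (h1 = H/2) sends its stream farthest, not the lowest one: the bottom stream exits fastest but has almost no time to travel before hitting the table.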
The photo from another web source shows this experiment, using "commercial" apparatus.
Where do such wrongheaded ideas come from? They have been around quite a while. Compare this drawing from Leonardo da Vinci's notebook with the actual photo. (The photo shows some distortion due to the age and curvature of the original page, especially at the bottom of the container.) This is just one of many incorrect or misleading drawings you will find in Leonardo's notebooks if you look carefully. One could write a book about them.
Odd Ends
Here are a few other common textbook errors that I may get around to discussing here sometime, with appropriate diagrams. Or I may just refer readers to good treatments elsewhere on the Web.
I'm always on the lookout for more examples, including examples of outright lies textbooks tell. Send your favorites to: me at the address shown to the right. Some of the examples above don't include answers yet. If you'd like your solution posted here, email it to me.
Thanks to John Denker for helpful discussions about misconceptions. Any remaining misconceptions are entirely my own. This page created 2008. Edited December 2016.