Tuesday, April 29, 2008
Bubba Gets It Right! (Not Really)
Too bad he wasn't serious.
Over at Knox Views, R. Neal links (here's the URL, in case Mr. Neal is still trying to hide his ravings from careful critique: http://www.knoxviews.com/node/7772) to a post showing atmospheric data that indicates the tropical troposphere isn't heating up the way the best AGW models predict. Following that post back to its source leads you to Steve McIntyre's blog Climate Audit, and the post Tropical Troposphere.
Now before I get too deep into this thing, let's make something very clear. The issue at hand is NOT whether we are in the midst of an increase in global temperature due to man; the issue in this post is simply, "Is the troposphere reacting as the models predict?" You have to remember that all of the fuss about global warming comes from the predictions of computer models, models whose accuracy has been spotty at best.
The simplest way to put it is this: the models, based on thermodynamics, predict that tropospheric temperatures will rise sooner and faster than surface temperatures. The models go on to say that the earliest indicators and the greatest movement will occur in the tropics.
So far, they aren't.
This doesn't mean that AGW isn't happening, although it does raise several important questions. What it does mean is that the predictive models we are using to forecast the extent and the severity of climate change incorrectly model the actual physical processes occurring.
In short, they're wrong.
Yet it is these very models that Al Gore preaches on every time he gets out of his Gulfstream.
Instead of reading the article, picking up the background knowledge required to understand it, and reading through the extensive discussion in the comments, Mr. Neal would rather dismiss the problems raised by real-world data out of hand, primarily because they don't fit within his orthodoxy.
Put simply, in deference to Mr. Neal, he'd rather stick his fingers in his ears and say "La la la la la," rather than hear the facts.
Here are the facts.
Thermodynamics is the study of how heat (thermo) moves (dynamics). Obviously, an understanding of thermodynamics is essential to understanding the greenhouse effect, and how CO2 emissions affect it. I learned basic thermodynamics while learning how to run a nuclear reactor. While the system is different, the laws governing the transfer of heat are the same.
OK, say you want to make a pot of tea. The first thing you have to do is boil water. You want to raise the temperature of the water from room temperature, around 70F, to the boiling point of water, 212F. In order to do that, you turn on the stove. Now, if the burner only heats up to 70F, will your water boil?
Obviously not. In order to transfer heat from one body to another, the body losing heat must be at a higher temperature than the body gaining the heat.
This is crucial to understanding how heat is transferred: it must always flow from a hotter body to a cooler one. In order to raise water to 212F, the stove burner is going to have to get hotter than 212F. It will also have to heat the pan holding the water to greater than 212F.
The second thing to understand is that the rate of heat transfer is directly proportional to the difference in temperature between the two bodies.
The greater the temperature difference, the faster the heat will transfer.
The final thing to understand is exactly how heat is transferred. There are three methods:
- Conduction. The two bodies are in physical contact.
- Convection. The two bodies are separated, and heat is carried between them by a moving gas or liquid.
- Radiation. The two bodies are separated, and heat is transferred directly by photons.
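The two rules above, heat flowing only downhill and the rate depending on the temperature difference, can be sketched in a few lines of code. This is only an illustration; the conductance value k is a made-up number, not a real material property.

```python
# A minimal sketch of the rules above, assuming a simple linear
# model (Newton's law of cooling). The conductance k is a made-up
# illustrative number, not a real material property.

def heat_flow_rate(t_hot, t_cold, k=1.0):
    """Rate of heat transfer from the hotter body to the cooler one."""
    assert t_hot >= t_cold, "heat only flows downhill"
    return k * (t_hot - t_cold)

# A 500F burner heats 70F water far faster than a 220F burner does,
# even though both are hot enough to eventually boil it.
print(heat_flow_rate(500, 70))  # 430.0
print(heat_flow_rate(220, 70))  # 150.0
```

Note that when the two temperatures are equal, the rate drops to zero: no temperature difference, no heat transfer.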
Now let's take a look at the greenhouse effect as a thermodynamic system. The Sun radiates heat to the Earth in a steady, constant stream. (For the purposes of this discussion, we'll ignore the various solar cycles.) This incoming sunlight passes through the atmosphere largely unhindered, reaches the surface, and warms it. The surface sends that heat back up through the troposphere, but things have changed. Where before the atmosphere was transparent to the incoming radiation, this is no longer the case: having given up its energy to the surface, the radiation is re-emitted at longer, infrared wavelengths, and those wavelengths are absorbed by greenhouse gases such as CO2 and water vapor. These gases act almost like a one-way blanket around our atmosphere. They allow energy in, but not back out again.
Remember what we talked about before? The rate of heat transfer is dependent on a difference in temperature. Since the temperature of the troposphere has gone up while the surface temperature has remained constant, the difference between the two has dropped, resulting in slower removal of heat from the surface. But the Sun is still pumping out the same amount of energy, so what happens to surface temperature? Well, if you're putting more energy in than you're taking out, the only thing that can happen is that surface temperature will go up. And as surface temperature goes up, the differential between the surface and the troposphere goes up, and the rate of heat transfer goes up until we reach a new equilibrium.
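The march to a new equilibrium described above can be sketched as a toy simulation. Every number here is invented for illustration: energy arrives at a constant rate, and leaves at a rate proportional to the surface-troposphere temperature difference; the surface warms until outflow matches inflow.

```python
# A toy model of the equilibrium described above. All numbers are
# invented for illustration: constant energy in, outflow
# proportional to the surface-troposphere temperature difference.

def find_equilibrium(solar_in=100.0, t_tropo=50.0, k=2.0,
                     t_surface=60.0, steps=1000, dt=0.01):
    for _ in range(steps):
        outflow = k * (t_surface - t_tropo)
        t_surface += (solar_in - outflow) * dt  # net energy warms the surface
    return t_surface

# Equilibrium is where k * (Ts - Tt) = solar_in.
print(round(find_equilibrium(), 3))              # 100.0
print(round(find_equilibrium(t_tropo=60.0), 3))  # 110.0
```

The second run shows the key point: a warmer troposphere forces a warmer surface before the books balance again.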
That's the greenhouse effect folks, and it makes life possible here on earth.
Now then, let's look at AGW, man-made global warming. According to the theory, humans have dumped massive quantities of CO2 into our atmosphere, which has caused a pronounced increase in the ability of our atmosphere to trap heat. This means that the troposphere is now trapping more heat, causing a rise in surface temperatures until we reach a new, higher equilibrium.
And now you see why tropospheric temperatures are so critical to AGW modeling. If the troposphere is not heating up, and by the data in this post it isn't, then the proposed mechanism for AGW is in big trouble. Not only that, but the models forecasting gloom and despair are also completely off.
What Steve McIntyre's data shows is that tropical tropospheric temperatures are not increasing significantly. In fact, they're down compared to reference temperatures.
Does this mean AGW is not happening? No, but it raises questions both about the mechanics of our climate, and about the predictive value of the models. Asking those questions is the first step to learning the truth about our effect on this planet.
Saturday, April 19, 2008
Expelled: No Intelligence Allowed
Rarely do you find a movie so courteous as to write a review into the title, but Ben Stein's crockumentary Expelled
does just that. Attacking the education establishment and modern science based on their refusal to accept Intelligent Design is like attacking the Metropolitan Opera Company for refusing to sing "I Think I Love You."
Intelligent Design is not science. Period. I say this as a committed Christian who firmly believes that God created the earth, the heavens and all the things in them.
I've dealt with Intelligent Design many times over the course of this blog, so there's no point in covering that ground again in depth. Follow the links if you're interested. The short version is that ID rests on a couple of logical flaws:
- Problems with the Neo Darwinian evolutionary model do not automatically confirm ID.
- Complexity does not require design, since natural forces produce complex systems all the time.
By basing his attack on science and education on their rejection of non-science, Stein has hamstrung himself before he ever started.
Wednesday, February 06, 2008
What is HD? 1080p or 1080i? Why do I care?
As I wrote the last post, it occurred to me that there are probably a lot of people out there who don't understand what all the fuss about HD is about, or what all the numbers mean. Since we're a bit over a year away from the end of television as we knew it, I figured a quick primer couldn't hurt.
First, the quick version. HD produces a sharper, clearer picture with more details, and it does so using a digital signal, so you'll need a new ATSC tuner if you want to keep your old TV. Or you can buy a new TV with an ATSC tuner, and you're all set.
OK, for you non-tech types, that's really all you need to know.
For the rest of us, let's take a look at the old TV so we can understand how the new TV works.
Inside the picture tube, an electron beam sweeps across a screen coated with phosphor, making it glow. The stronger the beam, the brighter the phosphor glows. The beam starts at the top of the screen and paints a picture all the way across, but just at the very top. When it reaches the side, it flashes back to the other side, only it drops down a bit and draws another line across the screen. This continues until the beam draws the last line across the bottom of the screen; then it flashes back up to the top, ready to begin the whole process again.
The sequence above describes the drawing of one single frame of video, and it takes place almost 30 times a second in an old tube type TV. Think of it like an Etch-a-Sketch with three controls instead of two. In addition to left/right and up/down, you also have a knob that controls how hard the scraper touches the screen. The scraper starts in the upper left corner of the screen, and as you wind it across the screen, you vary the pressure with the third knob to create one line of your picture. When you reach the end of the screen, you use the third knob to lift the scraper off of the screen, and then return it to the beginning position. Next, you use the up/down to move the scraper down one line length, and repeat the whole process until you get to the bottom of the screen.
This is how your old, tube-type TVs worked. Greatly simplifying things, an electron gun squirts electrons at the screen, which is covered with a phosphor, a chemical that glows when it is hit by electrons. Two magnets act like the knobs of the Etch-a-Sketch, pushing the beam from side to side and up and down. The TV signal is our third knob: it varies the strength of the electron beam, which affects how brightly the phosphor glows.
When TV first started, a bunch of people, the National Television System Committee, got together and decided we needed a broadcast standard, and they settled a few things. First, they decided that the number of lines across the screen should be 525. They also decided that the frame rate, the number of times a complete image is formed on the screen, should be 30 frames per second; that the aspect ratio, width vs. height, should be 4:3; and that the signal should be interlaced, meaning that every other line is skipped on one pass, then the skipped lines are picked up on the second pass. To understand all this, let's pull out our Etch-a-Sketch again.
The first part of the NTSC standard is that the aspect ratio is 4:3. All that means is that a screen 4 inches wide must be 3 inches tall. If it's 8 inches wide, it has to be 6 inches tall, and so on. In short, an NTSC screen should always be 3/4 as tall as it is wide. So if our Etch-a-Sketch screen is 12 inches wide, it must be 9 inches tall. Next, we know that our screen is divided into 525 horizontal lines from top to bottom. And finally, we know that our scanning sequence is not a straight top-to-bottom like we used in our first example. Instead, we'll scan the first, third, fifth, etc. lines until we get to the bottom, then we'll scan the second, fourth, sixth, etc. lines. What this means is we have to scan the screen top to bottom twice for one full frame, which means that our scan rate must be 60 passes per second, or 60 Hz, for a frame rate of 30 frames per second.
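The NTSC arithmetic above is simple enough to check in code. The numbers used here, a 4:3 ratio, 30 frames per second, and two interlaced passes per frame, are the ones described in this post.

```python
# The NTSC numbers above, checked in code: a 4:3 aspect ratio, and
# interlaced scanning that takes two passes (fields) per frame at
# 30 frames per second, for a 60 Hz field rate.

def ntsc_height(width_inches):
    """Height of a 4:3 screen, given its width."""
    return width_inches * 3 / 4

FRAME_RATE = 30        # complete images per second
FIELDS_PER_FRAME = 2   # odd lines on one pass, even lines on the next
field_rate = FRAME_RATE * FIELDS_PER_FRAME

print(ntsc_height(12))  # 9.0
print(field_rate)       # 60
```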
So that's how things started off. We've changed a few things since then. We added color, replaced the electron gun and screen with LCDs or plasmas, but the NTSC standard remained pretty much the same.
Until HD came along.
The new standard for HDTV is called the ATSC standard, for the Advanced Television Systems Committee. Unlike the NTSC, which set a single standard, the ATSC left the door open for multiple standards and resolutions, and that's where some of the confusion has crept in. So let's clear it up a bit.
Let's start with the aspect ratio. The ATSC specifies an aspect ratio of 16:9 instead of the old 4:3. This means that our Etch-a-Sketch has to be 9 inches tall for every 16 inches in width. Going back to our earlier example, a 12-inch-wide Etch-a-Sketch would now be just a shade under 7 inches tall, instead of 9. The number of horizontal lines changed, too: the ATSC standard accepts either 720 or 1080 lines of resolution. And finally, the ATSC accepts either interlaced scanning, like the NTSC, or progressive scanning, where every line is scanned in sequence rather than the odds then evens in two passes.
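The same arithmetic works for the new 16:9 ratio; the "shade under 7 inches" figure above is just 12 × 9/16.

```python
# ATSC 16:9 aspect ratio arithmetic, mirroring the NTSC example:
# height is 9/16 of the width.

def atsc_height(width_inches):
    """Height of a 16:9 screen, given its width."""
    return width_inches * 9 / 16

print(atsc_height(12))  # 6.75
```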
OK, so let's put all of this together and see where we stand.
The first thing you should look for is an ATSC tuner. That will give you the ability to pick up your local HD digital TV stations without paying a dime to the cable company. For Knoxville folks, I live about 20 miles from Sharp's Ridge in Kodak, and I can pick up all of the Knoxville stations with an amplified indoor antenna. Reception gets a little spotty at times, so I'll probably invest in an outdoor antenna eventually.
The next thing to look for is 720 vs 1080. Personally, I go for the 1080. There is a visible difference in the resolution, especially when you get up to the larger screen sizes. As for progressive vs interlaced, it's really getting harder to find an interlaced set anymore. Progressive scan isn't much more expensive and it results in a much nicer picture.
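The visible gap between 720 and 1080 is easy to put in raw numbers, using the usual ATSC pixel dimensions of 1280x720 and 1920x1080: the 1080 frame carries 2.25 times as many pixels.

```python
# The 720 vs 1080 gap in raw pixel counts, using the standard ATSC
# frame dimensions (1280x720 and 1920x1080).

p720 = 1280 * 720    # pixels in a 720-line frame
p1080 = 1920 * 1080  # pixels in a 1080-line frame

print(p720)          # 921600
print(p1080)         # 2073600
print(p1080 / p720)  # 2.25
```

More than double the pixel count is why the difference shows up most on the larger screen sizes.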
We haven't really talked about size much yet. Don't worry; we're about to.
Obviously it depends on where you're going to use the set, and what you're going to use it for, but in general, go big. There's talk of a new 1440-line standard coming down the pike in the next year or so, but cable, satellite and over-the-air stations are struggling just trying to pump out a 1080i signal now. They won't switch up to a higher resolution for quite some time, so the only thing the higher resolution will work for is DVD players and the like. Your 1080p will be a good value for years to come.
So go big. I've got a 60" 1080p in my room, and it's just like being at the movies, without the annoying kids, and the sticky floors.
The one other thing you should look for is plenty of connections in the back. The HD standard connection is called HDMI. An HDMI cable carries both the video and the audio, and it's the best way to connect an HD source, like a cable box, PS3, Xbox 360 or Blu-ray player, to your TV set. HDMI cables are also freakishly expensive. If you don't want to pony up for one, you can use component cables. These break the video signal into three parts and usually come with stereo audio cables as well. For your TV, look for 2 HDMI connections and 2 component connections.
And that's it. If you read this far, I hope you have a better understanding of what all the numbers and symbols mean. If you have any questions, feel free to leave them in the comments. If I don't know the answer, I'll make something up!
Saturday, May 05, 2007
Mercury and Compact Fluorescent Lightbulbs
links to a KnoxViews post
about Compact Fluorescent Lightbulbs.
In the comments to the post, Justin
links to treehugger
who gives the following scenario:
A CFL containing 5 mg of mercury breaks in your child's bedroom that has a volume of about 25 m3 (which corresponds to a medium sized bedroom). The entire 5 mg of mercury vaporizes immediately (an unlikely occurrence), resulting in an airborne mercury concentration in this room of 0.2 mg/m3. This concentration will decrease with time, as air in the room leaves and is replaced by air from outside or from a different room. As a result, concentrations of mercury in the room will likely approach zero after about an hour or so.
Under these relatively conservative assumptions, this level and duration of mercury exposure is not likely to be dangerous, as it is lower than the US Occupational Safety and Health Administration (OSHA) standard of 0.05 mg/m3 of metallic mercury vapor averaged over eight hours.
Professor Helen Suh MacIntosh has made a couple of key errors in her analysis, surprising for a professor of environmental health. By the way, my credentials include 2 years on a mercury cleanup team in the Navy, another 8 years as a trained Hazardous Waste Operations and Response worker, and 4 years as an OSHA 29 CFR 1910.120 40-hour HAZWOPER instructor.
I've done this stuff for a living.
The OSHA PEL for mercury vapor is 0.1 mg/m3, not 0.05 mg/m3. The latter is the NIOSH REL and it applies to skin contact only. You can look these values up at http://www.osha.gov, or check out the NIOSH pocket guide online; these are resources any professor of environmental health should be intimately familiar with.
Now then, take an average room at 10'x10'x8', or roughly 3 m x 3 m x 2.4 m, which comes out to 21.6 m3. Assume the 5 mg of mercury all vaporizes, giving an initial concentration of 0.23 mg/m3. This is roughly twice the OSHA PEL for Hg, and working in this area would require respiratory protection.
Her second error is to assume that the contaminant in the room would be removed by a single air change, a completely unwarranted assumption. Standard environmental calculations assume that mixing occurs during the air exchange, and the rule of thumb in the field is that one complete air exchange only reduces the contaminant by 50%. Again, I'm surprised that a professor of environmental health would make that mistake. In the real world, starting with 0.23 mg/m3 of Hg vapor, after one air exchange we would expect to see 0.12 mg/m3 of Hg vapor. After the next exchange, we would expect to see 0.06 mg/m3, and so on. Basically, a person in this room would be breathing air in excess of the OSHA PEL for about 1.5 hours.
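The halving-per-air-exchange arithmetic above is easy to reproduce. The room volume and source term are the ones used in this post, with the 50% rule of thumb applied at each exchange.

```python
# The instant-vaporization numbers above, reproduced with the same
# rule of thumb: one complete air exchange removes about 50% of the
# airborne contaminant. Room size and source term are from the text.

ROOM_M3 = 3.0 * 3.0 * 2.4   # ~10' x 10' x 8' room, in meters
HG_MG = 5.0                 # mercury released, in mg
OSHA_PEL = 0.1              # OSHA PEL for mercury vapor, mg/m3

conc = HG_MG / ROOM_M3      # initial concentration, ~0.23 mg/m3
history = [conc]
while conc > OSHA_PEL:
    conc /= 2               # one air exchange halves the contaminant
    history.append(conc)

print([round(c, 2) for c in history])  # [0.23, 0.12, 0.06]
```

The concentration stays above the 0.1 mg/m3 PEL through the first exchange and drops below it on the second, consistent with the roughly 1.5-hour figure above.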
The professor's scenario is flawed in another way: it assumes that all of the Hg will vaporize immediately. She may have made this assumption honestly thinking that it would lead to the worst case, but it doesn't. Let's take a more realistic case, one where only half of the remaining metallic Hg goes airborne each hour. The first hour, 2.5 mg goes airborne. The next hour, 1.25 mg goes airborne; the third hour, about 0.6 mg; and so on. In this more likely scenario, and using the 50% dilution rate per air change, Hg exposure will exceed the OSHA PEL for almost 3 hours, or twice as long as in her test case.
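Here's a coarse, discrete hourly sketch of that slower-release scenario. How long the PEL is exceeded depends on exactly how the vaporization and the air exchange interleave within each hour, so treat this as an order-of-magnitude check rather than an exact reproduction of the figure above.

```python
# A coarse hourly sketch of the gradual-release scenario: half of
# the remaining liquid mercury vaporizes each hour, and each hourly
# air exchange removes half of whatever is airborne. The exact
# hours-above-PEL figure depends on how the two halvings interleave
# within an hour, so this is an order-of-magnitude check only.

ROOM_M3 = 3.0 * 3.0 * 2.4   # same room as before, m3
OSHA_PEL = 0.1              # mg/m3

pool = 5.0       # liquid mercury not yet vaporized, mg
airborne = 0.0   # mercury vapor currently in the room air, mg
hourly_conc = []
for hour in range(6):
    released = pool / 2      # half the remaining pool vaporizes
    pool -= released
    airborne += released
    hourly_conc.append(airborne / ROOM_M3)
    airborne /= 2            # hourly air exchange removes half

print([round(c, 3) for c in hourly_conc])
```

Because fresh mercury keeps entering the air while the exchanges remove it, the concentration plateaus above the PEL for the first couple of hours instead of decaying right away, which is the point of the more realistic scenario.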
Finally, while the OSHA PEL is an 8 hour TWA (Time weighted average) that doesn't mean you can exceed the limit for a short period of time. As an analogy, you can't drive 100mph in a 55mph zone and use the excuse that you were only going to be out for 15 minutes. Doses above the PEL result in a higher concentration of Hg in the body, leading to more damage, regardless of the duration of the exposure. There hasn't been enough study on the effects of short term exposures to determine non occupational exposure limits, which is why environmental scientists routinely defer to OSHA PELs or NIOSH/ACGIH RELs in assessing exposure for members of the public.
So, what's the bottom line? CFLs result in energy savings, but do require special handling for disposal, and do represent a slightly increased health risk, particularly to young children and pregnant mothers. The magnitude of that increase is very small, much smaller than risks we take for granted every day, like driving to the supermarket for example, but it does exist.
Given what I know about the risk, and that I hate replacing lightbulbs, I'm converting over to CFLs in my house. But to suggest that there is no increased risk is dishonest.
Tuesday, March 20, 2007
This is the Truth
God created the world in 6 days and rested on the 7th.
Now some folks find that a bit hard to swallow, so they say instead that this is the truth
Nothing existed. Zip, Nil, Nada. No time, no space, no thing at all.
According to quantum theory, we can't be sure that nothing was there, because uncertainty is too big when you get down into the subatomic world. It's not that we can't measure precisely at that range; it's that there's no such thing as measuring precisely at that range. (There's a difference between accurate and precise. Accurate means you've measured correctly. Precise means you can duplicate the measurement. The distinction is crucial in the quantum world.) Without getting too technical, at the subatomic level, the universe itself is uncertain about where and how much stuff there is. We can measure it accurately 100 different times and come up with 100 different answers, and each one is accurate, but the precision of the measurement is limited by the quantum nature of the universe. You have to understand that we are not limited by our ability to measure; we are limited by the construction of the universe.
So, since we can't know what the actual state of nothing is, whether it's actually nothing, or maybe a little something, or a little less than nothing, (yes, less than nothing is an actual possibility in the quantum world) then all we can say is that statistically, there was nothing, or on average, there was nothing. Now this gets kinda funky, because remember, it's not our measurements that are in error, it's an uncertainty inherent in the structure of the universe. This means that it's not a measurement of nothing that is varying, it's actually nothingness that is varying.
A provocative concept.
I see a question from the back?
Variance is a measure of change over time. If there was nothing, no matter, no space, and no time, then how could nothingness vary?
Ah, very good, very perceptive question!
In order for us to have variance in a system, we have to have time. But in 0 dimensional space, you can't have time. Just ask Mr. Einstein. So we have to invent a different kind of time. Physicists call this new kind of time imaginary time and according to them, it runs at a right angle to our conventional time, so we don't notice it.
So, getting on with our story, nothing was fluctuating through imaginary time, when suddenly, due to a mathematical anomaly in the statistical variations of nothing, the value of nothing explodes into something, and we get a universe filled with matter, energy, space and, oh yeah, real time. Like most imaginary playmates, imaginary time disappeared, having been replaced by real time.
Let's sum up: According to our best and brightest scientists, in the beginning, there was nothing, but that nothing might have been something, or less than nothing, and it varied between those three states in imaginary time, until something happened, and nothing became something, at which point, we had our universe in real time.
And they say scientists lack faith...
Friday, February 02, 2007
Ok, No More Fighting It
The researchers with the most funding have won the war, science be damned. I give up.
Global Warming is real and it's all human induced.
The sun plays no role, nor does anything other than human activity.
In fact, our carbon emissions are so out of control that we're causing the surface of Mars to heat up as well, placing both planets in jeopardy.
However, only emissions from the United States cause global warming. The incredibly dirty industrial plants of the developing nations do not contribute to global warming, nor does the clear cutting of their forests reduce their carbon sink. Needless to say, the US and its increasingly large forest area does not act as a carbon sink.
In fact, global warming really is a US plot to reduce the rest of the world to starvation-level poverty so we can continue to drive our SUVs and eat at McDonald's.
Fine. I believe.
And as soon as all those global warming true believers scrap their gulfstream jets and stop chartering flights across the globe to discuss how excessive consumption of hydrocarbon fuels by guys like me driving pick-up trucks is ruining the planet for everybody, then I'll change my way of life. But as long as these hypocritical twits continue to preach conservation while driving Escalades and traveling by private charter jets, then I'll continue to drive my Dodge and enjoy the benefits of global warming.
Thursday, November 16, 2006
But Bush Outlawed Stem Cell Research, Didn’t He?
The News Sentinel has an AP story
in today's paper about a successful study using stem cell therapy to treat Muscular Dystrophy in dogs.
Stem-cell injections worked remarkably well at easing symptoms of muscular dystrophy in a group of golden retrievers, a result that experts call a significant step toward treating people.
Sharon Hesterlee, vice president of translational research at the Muscular Dystrophy Association, called the result one of the most exciting she's seen in her eight years with the organization. Her group helped pay for the work.
She stressed that it's not yet clear whether such a treatment would work in people but said she had "cautious optimism" about it.
Weren't we told in campaign commercials that President Bush and the heartless Republicans had outlawed stem cell research?
The study used stem cells taken from the affected dogs or other dogs, rather than from embryos. For human use, the idea of using such "adult" stem cells from humans would avoid the controversial method of destroying human embryos to obtain stem cells.
The scientists worked with golden retrievers that suffer a crippling form of dystrophy very much like the human one. Researchers studied the effect of repeated injections into the bloodstream of a kind of stem cell extracted from blood-vessel walls.
Weren't we told that adult stem cells weren't as flexible as embryonic stem cells, and couldn't be used to generate different tissues? Here, we've got blood vessel wall cells being used to cultivate nerve tissues.
Cossu said he hopes to start a small experiment in children in the next year or two.
Weren't we told that embryonic stem cells were the best hope for treatments for muscular dystrophy, Parkinsons, multiple sclerosis and other diseases? And that adult stem cells just weren't promising enough?
I guess we were told wrong.
Wednesday, October 25, 2006
Resolving a Paradox, Part 1: The Separation of Church and Science
The Earth was created in 6 days and is about 10,000 years old.
The Earth formed over millions of years and is about 4.5 billion years old.
One is an article of faith, the other a matter of science, right?
One is true, the other patently false, right?
Well, maybe not. What if I told you that there is a way to use science to show that both statements could be accurate simultaneously? Note carefully that I said could be true, not are true. I can't say for sure if the following series of conjectures is true or not. What I can say is that the second statement does not automatically render the first one false.
It all depends on using the proper framework.
For starters, you have to understand that by design, science does not answer any questions about religion and the existence of God. The idea of supernatural forces/beings/actions is automatically excluded from consideration in the scientific framework. This makes scientists very happy, as they can make and test their mechanistic theories about how the world works without worrying about divinity mucking things up.
But there's a consequence to operating in this framework that many scientists have a tendency to forget. Since they start out by choosing postulates that exclude the idea of God, none of their results can be construed to say anything about God, whether positive or negative.
Yes, you in the back, you have a question?
"But if the equations work without a God, then that means there's no need for a God, so doesn't that prove He doesn't exist?"
A good question, and one I'm glad you asked, because it perfectly illustrates the fallacy of trying to disprove the existence of God using science. The short answer is "No, it doesn't." To demonstrate more fully, let's construct a proof. Your argument goes like this:
I can describe this process without resorting to God.
Therefore God does not exist.
It is very clear that the conclusion is not supported by the proposition. In order for the conclusion to be true, you would have to include a first postulate, making the proof look like this:
If God exists, He must be crucial to every process description.
I can describe this process without God.
Therefore God does not exist.
While the argument is logically valid, there is no basis for the first postulate, which makes the proof unsound.
Now, getting back to the subject at hand, the first postulate of science can be stated like this:
All processes in nature can be described without recourse to supernatural actors/forces.
Note that this statement says nothing about whether or not those forces actually exist or not. And because it doesn't, no proof constructed from that first postulate can have anything to say about the existence of those forces.
So, since science by definition cannot say anything about God, then why would I call this essay "The Mathematics of Divinity?"
Because math is not science; math is a descriptive language. It uses abstract symbols manipulated in a rigid, logical fashion, in an attempt to describe the real world. The results are haphazard at best.
Ahh, you in the back again. Math major, I take it?
"Come on, professor! You expect us to believe that math is haphazard? Math is perfectly designed, always logical, always repeatable, and makes perfect sense! How can you call that haphazard?"
Ok, allow me to demonstrate. Come up to the front of the class. On my desk, you'll find 5 pencils. Take 7 of them and bring them here to the podium.
"I can't! If there are only 5, then how can I bring you 7?"
What, you can't bring me -2 pencils?
As I said, haphazard. Following the rigid logical rules of mathematics can easily result in answers that have no real world counterpart. The point I'm trying to make here is that math is only useful when it provides an accurate description of the real world and its processes. It has no intrinsic value of its own.
So, what does all of this have to do with reconciling the biblical age of the earth with the scientific age?
We'll talk about age as a dimension in Part 2.
Tuesday, January 03, 2006
One More Time for Intelligent Design
A few weeks ago, I used Intelligent Design theory to demonstrate the weakness of Global Warming theory. In the comments on that post, Steven presented an impassioned, albeit flawed, defense of ID. Since the post is a couple of weeks old, I decided to move my response up here.
First of all Rich, you don’t have the information to assume that ID proponents such as Dr Behe and myself are incorrect. To suggest that you KNOW we are in error is not at all truthful.
Actually, I do. One of the core principles of scientific research is that all aspects of our world result from natural, mechanical processes. As soon as you step away from that principle, you've left science behind and entered philosophy. The only way for ID to be considered science is to prove
that life could not have occurred without divine intervention. The burden is on IDers to prove that life couldn't have evolved and they haven't come close to meeting that burden. As long as natural, mechanistic explanations can be found for the development of life, then the leap to ID is simply not justified. The fact that all of the explanations haven't been found in no way means that they won't.
Additionally, if you make the assumption that life was designed, where does that get you as far as science is concerned? What new knowledge comes from it? What experiments can you perform that would have any scientific meaning? The answer, of course, is none. Once you invoke a Designer as your answer, it becomes the answer to every question.
For these two reasons, ID is not science. Period.
Scientists would like to believe that Humans are the pinnacle of the Universal Food chain (not just terrestrial.)
This is simply not true. No biologist I know believes that there even is a top of the food chain, much less that man is at the top of it. In fact, many of the ones I've talked to can reel off several better candidates for top-of-the-heap honors, nearly all of them bacteria. Single-celled organisms totally changed the atmosphere of the planet, a feat that dwarfs building the pyramids. And let's not forget that those single-celled organisms will dine on our remains at the end of the day. The food chain is a cycle, not a ladder.
This leaves PLENTY of time for another culture to have evolved so to speak, beyond our imagination. But I can easly imagine that terraforming is their means of gardening, and perhaps even replicating.
Ah. Somebody has been reading their Francis Crick. There's a problem with the idea that life on Earth was seeded by some incredibly advanced and ancient culture: it fails to answer the question of how life evolved; it merely pushes it back one more generation. Put simply, how did that incredibly advanced and ancient culture (IAAA) come to be? An even more IAAA culture? OK, where did they come from?
You know, it really helps when your answer actually answers the question you're asking. Panspermia doesn't.
As for survival of the fittest providing the mechanism for natural selection. How can anyone with half a mind believe that?” If survival were the only criteria, blue algae is doing quite nicely. Why change at all????
There are two critical errors in this one short statement and it's worth taking a bit of time to address them both.
First of all, to answer the question "Why change at all?": organisms do not change in order to survive; they survive because they change. (Thanks, Mrs. J!) You're putting the cart before the horse. Change occurs whether it brings a survival advantage or not; it occurs because of random mutations caused by multiple factors including transcription errors, cosmic radiation, etc. Change due to mutation is inevitable, regardless of whether it enhances or detracts from the organism's ability to survive.
Second, "survival of the fittest" has been junked for decades now, and for the reason Steven suggests. It doesn't explain the thousands and thousands of species that make up the biosphere. As a simple example, if survival of the fittest were the only organizing principle, then the giraffe and zebra never would have evolved from their common ancestor. And looked at this way, it's clear that "survival of the fittest" suffers the same flaw as Steven's question of "Why change at all?" As we know now, the answer is that change is constant.
Biologists today favor natural selection as the mechanism of Darwin's evolutionary model, and that is quite a different beast altogether. Natural selection states that as organisms change, those changes may enhance an organism's ability to survive, which increases the probability that those changes will be passed on to the organism's offspring, because the organism has a better chance of surviving long enough to reproduce. Simultaneously, changes that reduce an organism's ability to survive are less likely to be passed on to the organism's offspring, and will not spread through the population. Reproductive isolation limits the spread of a positive change, creating drift between different populations, eventually resulting in the formation of two separate species. Note that neither species changed in order to improve its survival; instead, certain members of the original population reaped an advantage because of the change. Also note that while evolution is examined in terms of species, natural selection occurs at the individual organism level.
Going back to my example of the zebra and the giraffe, in a population of their common ancestor, some had a mutation that gave them a longer neck. Others had a mutation that gave them stripes. Both of these changes produced a survival advantage, (greater access to food and better camouflage), therefore both mutations had a higher probability of being passed on through offspring. This simple example demonstrates how a random process, mutation, can produce a highly structured result, and how that result can be so mindbogglingly diverse. And all without the need for an external designer.
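For the code-minded, the zebra-and-giraffe story above can be sketched as a toy simulation. This is purely illustrative (my own sketch, not anybody's published model, and the numbers are made up): a mutation appears at random, carriers get a modest survival edge, and the trait spreads through the population with no designer anywhere in sight.

```python
import random

random.seed(1)

def generation(pop, advantage=0.1, mutation_rate=0.01):
    """One round of selection and reproduction. pop is a list of
    booleans: True means the individual carries the beneficial mutation."""
    # Selection: carriers survive a bit more often than non-carriers.
    survivors = [ind for ind in pop
                 if random.random() < (0.5 + (advantage if ind else 0.0))]
    # Reproduction: survivors refill the population; offspring
    # occasionally gain the trait by random mutation.
    offspring = []
    while len(offspring) < len(pop):
        parent = random.choice(survivors)
        offspring.append(parent or (random.random() < mutation_rate))
    return offspring

pop = [False] * 1000   # nobody has the longer neck (or the stripes) yet
for _ in range(100):
    pop = generation(pop)
print(sum(pop) / len(pop))   # fraction of the population carrying the trait
```

Run it and the trait dominates the population within a hundred generations, purely from random mutation plus differential survival.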
As for “selected for”. ………This tired phrase has yet to have any definition or meaning…selected by what?”???…
“Selected for” is shorthand for the process of natural selection. It does not imply that anyone or anything is doing the selecting, but that the selection occurs through the action of natural forces; hence, “natural selection.”
Unless you speak from the paradigm of ID, there is surely no mechanism for selecting the proper order of amino acids to make the first protein.
Actually, there is. Lab experiments reproducing the conditions thought to exist on Earth prior to the emergence of life (a CO2-rich atmosphere, high temperatures, lots of water, volcanic activity, electrical storms, etc.) have produced many of the amino acids used by living things, with no design action required. The laws of chemistry are all that is needed, which leads me to a very important point. In the extreme conditions that existed prior to the origin of life, it is possible that other organizing principles were in operation. Just as Einsteinian physics supplanted Newtonian physics without invalidating it, so too may chaos theory, self-organizing systems, and complexity theory expand the reach of Neo-Darwinism in explaining the origins of life.
Steven’s question also contains another error in perception, that the first protein had a “right” order. At that point in time, any order was the right order simply because it represented order out of chaos, demonstrating the principle of self organization arising naturally. Once that is established, then ID is no longer needed.
Your alternate explanation of irreducible complexity is quite puzzling. Please understand that life makes far too many unbelievable choices too fast to employ this strategy. When a molecular machine doesn’t work out its not stored in the (Cell’s) garage, the organism dies.
Which is why men have nipples, and humans have vestigial tails, and an appendix, right?
The existence of these anatomical irrelevancies illustrates that extra parts do not automatically mean a broken machine. That kind of absolutism isn't seen very often in nature, for the very reason that broken machines do in fact die. But the inclusion of junk biochemical chains doesn't automatically mean that the process won't function. Instead, we see mutations that weaken the organism's ability to survive, like sickle cell anemia. Redundant complexity means that biochemical machines start out functioning, but inefficiently, as they are nearly random collections of biochemical processes. However, as mutation occurs and unnecessary pieces are dropped, the biochemical machine functions more efficiently. Higher efficiency gives a survival advantage, and is more likely to be passed along. This process of refinement will eventually result in an irreducibly complex biochemical machine that operates at maximum efficiency. Interestingly enough, we have a very good example of this kind of junk redundant complexity that, according to Steven, should not happen: the hundreds of thousands of base pairs tied up in junk DNA in our own genome. And we're still here.
Apparently redundancy is not automatically fatal as Steven would have us believe.
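Here's a little toy model of the redundant-complexity argument (again, purely illustrative; every number in it is made up): start with a sloppy "machine" full of overlapping parts, then prune anything whose loss doesn't break it. What's left passes Behe's test for irreducible complexity, yet it got there by subtraction, not design.

```python
import random

random.seed(0)

# A "biochemical machine" works if its parts jointly cover five
# required sub-tasks (numbered 0-4). Parts overlap; many are redundant.
REQUIRED = set(range(5))

def works(machine):
    covered = set()
    for tasks in machine.values():
        covered |= tasks
    return REQUIRED <= covered

# A sloppy ancestral machine: 20 parts, lots of overlap. Part i is
# guaranteed to cover task i % 5, plus one random extra task.
machine = {i: {i % 5, random.randrange(5)} for i in range(20)}
assert works(machine)

# Evolution as subtraction: drop any part whose loss doesn't break it.
for part in list(machine):
    trial = {k: v for k, v in machine.items() if k != part}
    if works(trial):
        machine = trial

# What's left looks "irreducibly complex" -- remove any part and the
# machine fails -- yet we got here by gradual pruning, not design.
assert all(not works({k: v for k, v in machine.items() if k != p})
           for p in machine)
print(len(machine), "essential parts remain")
```

The final machine fails if you remove any single part, which is exactly Behe's definition, and no designer ever touched it.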
The theory of evolution has NOTHING to do with bio engineering (WHICH FYI, IS ID BY DEFINITION.) Turning genes on and off, spicing, and reverse engineering don’t require or employ an origin theory at all
There's another very common misconception buried in this paragraph, that evolution is an origin of life theory. It isn't. Never claimed to be. It was and is a description of how life evolved once it existed. There have been attempts to apply evolutionary theory to the origin of life, with greater and lesser degrees of success, but as I mentioned earlier, there are indications that other organizing principles may have been operative, along with natural selection. So Steven is right, biotech does not require an origin theory, but his point is irrelevant since evolution is not an origin theory. However, considering that genetics and evolution are inextricably tied together, to say that evolutionary theory has nothing to do with bio engineering is extremely naïve.
And there really is nothing more to say on the issue. Intelligent Design is simply not science, because as soon as you propose the existence of an agent outside the naturalistic universe, whether it be divine or an IAAA civilization, you’ve left science behind.
Saturday, December 10, 2005
Non Specific Theories
One of the knocks against Intelligent design is that it isn't specific to a question, that is, it can be used to answer multiple unrelated questions. For example:
- Why do our eyes have blind spots? Because the designer made them that way.
- Why do we have 5 fingers instead of 6? Because the designer made us that way.
- Why do women need 37 pairs of shoes? Because...but you get the point.
In science, a theory that can answer questions that don't go together is regarded as suspect; it's more of a black box than a real theory, particularly if there's no mechanism behind the explanations, and especially if that answer yields no new knowledge, as is the case with Intelligent Design. In essence, appealing to ID is saying, "Hey, I don't know how or why this happened, so somebody must have planned for it to happen."
For this reason among others, most scientists dismiss ID, and quite rightly so. Ironically, there's another popular theory making the rounds that suffers the same defect, yet many scientists are happy to climb onto the bandwagon on this one.
Of course, I'm talking about global warming theory, which is currently used to predict:
- Rising global temperatures leading to a tropical climate
- Falling global temperatures leading to an ice age
- More and stronger hurricanes and storms
- Fewer hurricanes and strong storms
- Massive short term changes to the environment
- Only long term changes to the environment
- Heating of the troposphere
- Cooling of the troposphere
- and chronic halitosis among garlic eaters
You get the point. It doesn't matter what happens with the weather; some scientist somewhere will make the case that it's all due to global warming caused by greenhouse gas emissions.
There's an old saying that if the only tool you have is a hammer, every problem looks like a nail. Maybe it's time we looked for another tool, one based on the facts.
Saturday, August 06, 2005
I'm moving this up from the comments for two reasons. One, I think it deserves more exposure and two, after writing this, I'm not going to feel like writing another post.
I get lazy sometimes.
A couple of days ago, I wrote a short article
explaining exactly why Michael Behe's defense of Intelligent Design using irreducible complexity is hogwash.
In the comments, Rob Huddleston disagreed, claiming that Behe had thoroughly discredited the idea of Neo-Darwinian evolution as applied to biochemical systems, and therefore had good reason to claim intelligent design was in action. I disagreed, and laid out the arguments in a response.
Rob then claimed that nobody has been able to refute Mr. Behe, and questioned the origin of the universe, a quite different subject.
OK, enough review.
Let's take this one step at a time.
1) Refutation of Michael Behe. First off, let me point out that my argument effectively refutes Behe. Rob chooses not to respond to my argument, instead linking to a quote from Chuck Colson
, claiming that nobody had successfully refuted Behe. Unfortunately for Rob, Mr. Colson did not provide any evidence for his assertions, just a bald statement. I can do better than that.
A quick google search found several pages worth of valid challenges to Behe's irreducible complexity model. One of the best and most accessible to the layman is this one
, which explores the concept of redundant complexity in far more detail than I did. This one
delves deeper into the clotting mechanism, explaining potential pathways for its evolution. It's a bit more technical but the determined reader should be able to follow the gist of the argument. For a very long, thorough, and devastating critique of Behe by a fellow biochemist, go here
. And finally for a list of links of challenges to Behe's hypothesis, go here.
So much for Mr. Colson's comment.
Now, regarding Darwinism vs ID, Rob wants to know where the earth came from.
It's irrelevant to the discussion. We're talking about how life evolved on earth, not how the earth came to be. If you want to talk about the origin of the universe, that's cool, but don't ask an evolutionary biologist; he won't have the faintest clue because the two processes are unrelated. It's important in a debate to stay on track, and cosmology is certainly beyond the scope of a discussion of evolution.
Now then, let's get to the basic structure of the debate. Behe's argument can be reduced to the following set of statements:
- There exist certain biochemical systems that exhibit irreducible complexity defined as follows:
By irreducible complexity I mean a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.
- An irreducibly complex system cannot have evolved by the gradual addition of new parts since by definition, lack of any one part would result in a non-functional system.
- Since biochemical systems are the basis for all life as we know it, they must be subject to the same evolutionary processes and restrictions in order for evolution to be valid.
- Since by definition irreducibly complex systems cannot have evolved through the gradual addition of components, Darwinian evolution cannot explain how biochemical systems arose.
- Since evolution cannot explain biochemical systems, then they must have been designed.
Now then, for Behe's argument to be true, each of the above statements and the relationships between them must be true, or like one of Behe's irreducibly complex systems, the argument fails. So let's deal with each step one at a time.
- By following the links I gave above, the reader will discover that many systems Behe claims are irreducibly complex are in fact reducible, albeit with an attendant loss in efficiency. The system may function without the component, or a homologue may replace the missing component. Take the blood clotting process as an example. Behe claims that loss of any one component will cause the entire process to fail completely. If this were a true statement, no hemophiliac would survive their first shot.
So what does this mean for his argument? If irreducibly complex systems are actually reducible, then they might have possibly evolved, removing Behe's impediment to evolution, removing the need for ID. However, since we don't know every biochemical system on the planet, let's say for the sake of argument that there are irreducibly complex biochemical systems that fit Behe's definition.
- At least three separate, observed mechanisms are proposed for the evolution of irreducibly complex systems: (A) redundant complexity, (B) improvements becoming necessities due to other changes, and (C) duplication and divergence.
- I've discussed redundant complexity before, where more chemicals are around than are absolutely necessary, but they support the process as more efficient systems gradually develop. Once the more efficient system is developed, the former system atrophies.
- H. Allen Orr describes this one, using the development of the lung as an example. In short, the first critters developed lungs not as necessities, but as an advantage. However, later evolutionary adaptations (legs, arms, hands, etc.) that made living on land easier made lungs a necessity. A similar process occurs in the cell.
- Gene duplication occurs regularly in the cell, resulting in an extra gene, one that can mutate without deleterious effect on the organism. Should that mutation prove advantageous, it will be selected and passed on, leading to divergence. That divergence, as noted in the previous example, could make the advantage necessary.
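Mechanism (C) is easy to sketch in code. This is a toy model of my own (a 12-base "gene" and made-up mutation rates, nothing biologically calibrated): once a gene is duplicated, the spare copy can accumulate mutations freely while the original keeps doing its job.

```python
import random

random.seed(2)

BASES = "ACGT"

def mutate(gene, rate=0.05):
    """Each base has a small chance of being replaced at random."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in gene)

# Duplication: the cell ends up with two copies of the same gene.
original = "ACGTACGTACGT"
copy = original

# The spare copy is free to drift; mutations in it don't hurt the
# organism, because the original still does the old job.
for _ in range(50):
    copy = mutate(copy)

print(original)   # unchanged and still functional
print(copy)       # diverged; free to stumble into a new, useful role
```

After fifty rounds of drift the two copies no longer match, which is the raw material divergence needs.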
In short, this statement is simply not true. It is very possible that systems defined as irreducibly complex could have developed gradually, through Darwinian evolution.
But let's press on.
- This statement is not necessarily true. In fact, it would be very surprising if it were true, because throughout nature, when you change scale, the balance of operating forces changes, meaning the rules also change. Newtonian physics works very well until your velocity reaches a significant fraction of c, at which point it breaks down completely. It is a reasonable approximation of the truth, but only within certain limits. Similarly, when you get into subatomic physics, and quantum mechanics in particular, the physical "laws" that govern our mundane existence appear to go out for a bite of lunch, and all hell breaks loose.
In short, the forces that control macro-evolution may not be the same forces that control micro- or cellular evolution. That the two are identical is not fact, but an as yet untested hypothesis.
- This statement is a conditional, and since the first part of the statement is untrue, then so is the second. Since Darwinian evolution can explain irreducible complexity, it can explain biochemical systems.
- This final statement is a complete logical fallacy. Not only is it a conditional whose premise is false, even if the premise were true, the conclusion is invalid. Disproof of one theory does not constitute proof of another.
So, for those of you keeping score at home, the final tally is one "maybe" and four "no"s.
Behe's argument does not stand up under scrutiny.
Wednesday, August 03, 2005
OK, look. I have as many problems with Darwinian evolution as the next guy, probably more. I have almost as many problems with the neo-Darwinists, since basically all they did was try to define the problems away. I mean, Darwin stated that in order for his theory to be correct, evolution would have to occur in small changes spread fairly uniformly over vast lengths of time. Now we know that evolution does not occur that way, that change comes rapidly, then slows, and for reasons we can't even begin to make a good guess at. Neo-Darwinists have had to tinker so much with the theory of evolution that I am comfortable in saying that Darwin himself would reject the bastard progeny of his rather elegant theory were he alive today.
However, it's the best working theory we've got, and the holes in it just show areas where we need more knowledge.
ID on the other hand, is snake oil pure and simple. There is no science behind it because it makes an invalid logical leap. Put simply, IDers claim that since evolution is flawed and there are aspects of life that evolution as currently constructed cannot explain, then Intelligent Design is the only alternative.
Wrongo, me buckos. Disproof of one theory is not proof of another, no matter how much you might wish it to be so in your pointy little heads.
Here is a quick and dirty rundown of ID, including the fatal flaw, as gleaned from Michael Behe's book, Darwin's Black Box.
Behe claims that certain biochemical systems in cells are so complex and so sensitive to change that they couldn't possibly have evolved since even one small change would render the system non-functional. He calls this the principle of irreducible complexity, and uses a simple mousetrap as an example. If you remove one piece of a mousetrap, he says, it no longer functions as a mousetrap. Therefore the different pieces could not have been assembled through the series of small changes described by evolutionary theory, therefore a mousetrap could not evolve. And since it could not have evolved, it must have been designed and manufactured.
Holy Creationism, Batman! He's got a good point.
This is why Robin is destined to maintain his sidekick status and never make it to full fledged crimefighter even if he does change his name to Nightwing. Mr. Behe glosses over several other possibilities evident even to a casual science observer like myself.
Example the first. Current theories allow for a primordial soup of complex chemicals globbing together in self-replicating bunches, then encapsulating, and so on and so forth. The details are kind of sketchy, since it's tough to recreate a million or so years of time under laboratory conditions unless you have a spare solar system and a few million years to play with. Since we're not hyperdimensional mice with that kind of time on our hands, we'll have to muddle through.
So, assuming the soup theory is correct, how hard is it to conceive of an encapsulation of not only all the necessary parts for a mousetrap, or in the case of evolution, a complex biochemical system, but also about a hundred thousand extra chemicals hanging out in the globule as well? Apparently not that hard since I can conceive of it. Now all evolution has to do is get rid of the redundant parts, refining a process that already exists until the system reaches maximum efficiency, defined as no extraneous parts. We now have a complex biochemical system where each component is absolutely vital to the functioning of the system and we got there using the small successive changes required by Darwinian theory.
Thank you; please hold all applause until the presentation is complete.
I call this redundant complexity and it blows irreducible complexity out of the water.
Example the second. Behe said that any change in the components of a mousetrap renders it unusable as a mousetrap. He never mentions that perhaps the intermediate forms had a use other than as a mousetrap. A mousetrap with a broken spring can be used as a paperweight, no? Many biochemical systems share components; the same chemicals are used over and over in different applications. Isn't it possible that incompletely formed biochemical systems may have served other functions, even if not very efficiently?
Yes it is and thank you for asking.
And that about wraps it up for irreducible complexity, which is a mortal wound for ID, since we no longer require (and never actually did) a designer to create those irreducibly complex systems; they very well could have arisen from Darwinian changes.
PS: This attack on ID is much more robust than the conventional "ID is not science because it's not a falsifiable hypothesis" argument that most make because once you get past the near religious fervor which many evolution scientists profess, Neo-Darwinism is no more falsifiable than is ID.
Thursday, January 08, 2004
The Numbers Didn’t Lie…
Week 52 data
for the flu epidemic is now available at the CDC website, and it is an eye-opener.
The number of states reporting widespread influenza activity decreased during week 52 (December 21 - 27, 2003). However, the percentage of outpatient visits for influenza-like illness (ILI) continued to increase (9.4%), and pneumonia and influenza (P & I) mortality (9.0%) exceeded the epidemic threshold for week 52 (7.9%).
What does this mean to you and me? Just that, when tracking an infectious disease, it's probably better to listen to actual epidemiologists than to a guy who's just been writing about them.
As you can see from the latest chart, the dip from last week did not indicate a peak, but a momentary lull. Possible factors in the lull are incomplete reporting due to the holidays, fewer people going to the doctor because of the holidays, or an actual dip in the rate of spread. In any case, it is clear that the flu continues to spread further and faster than the comparison year of 1999-00.
The good news is that several states are reporting that they have peaked, indicating that the national peak may come soon, perhaps in 2-3 weeks.
To continue the analysis, we also need to look at the current mortality chart. As I predicted last week, the mortality rate is climbing rapidly, and is on a pace to exceed 1999-00 by a significant margin. This was easily predictable based on the history of past flu epidemics, and it amazes me that Fumento could have missed it.
But maybe that's why he is a reporter instead of a doctor.
(And what does that say about you, Mr. Hailey? Hmmmm?)
Hey, the same applies to me as well. I'm not a doctor; I'm not a scientist; I'm not even a professional writer, since I do this stuff for free. But, unlike Mr. Fumento, I don't have a vested interest in being an ass. I can write whatever I like without worrying about whether it will sell or not, and that gives me an objectivity that he can't afford.
Of course, there is another, important difference between us.
I was right!
New data was released today, just after I wrote the above. As the charts above show, I was too pessimistic in my assessment, as it appears the peak is behind us. Week 53
data shows a steep drop in the rate of new cases, although mortality due to the flu is still climbing (from 9 to 9.4%) and well above epidemic levels.
One thing I want to point out is that the latest data confirms an observation Stormy Dragon made: that there is an inverse relationship between the severity and the duration of a flu outbreak. This simply makes sense if we consider that the population susceptible to the flu is limited, and the faster the flu spreads, the quicker that pool of potential victims is exhausted.
But it's nice to see theory play out in practice.
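That inverse relationship falls right out of a textbook SIR (susceptible-infected-recovered) epidemic model. Here's a minimal sketch, with made-up transmission and recovery rates chosen only to illustrate the effect, not to match any real flu season:

```python
# Toy SIR epidemic model, integrated with simple Euler steps.
def outbreak(beta, gamma=0.2, dt=0.1):
    """Return (peak infected fraction, outbreak length in weeks).
    beta = infections per contact-day; gamma = recoveries per day."""
    s, i = 0.999, 0.001          # susceptible and infected fractions
    t, peak = 0.0, i
    while i >= 0.001:            # run until infections fall back to the start
        new_cases = beta * s * i * dt
        recoveries = gamma * i * dt
        s, i = s - new_cases, i + new_cases - recoveries
        peak = max(peak, i)
        t += dt
    return peak, t / 7.0         # days -> weeks

fast = outbreak(beta=0.6)        # virulent strain, spreads quickly
slow = outbreak(beta=0.3)        # milder strain, spreads slowly
print(fast, slow)                # fast peaks higher but ends sooner
```

The fast-spreading strain burns through the susceptible pool quickly, so it peaks higher but the outbreak is shorter; the slow strain smolders along for roughly twice as many weeks.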
Wednesday, December 31, 2003
Some People Never Learn
As I was browsing through my e-mail last Saturday, a title leapt out at me. "You're famous again for being wrong!"
Odd way to start a letter, but then I saw who it was from, and all was clear. Yep, Mike Fumento sent me a Christmas card.
For those who weren't around last summer, Mike and I went a couple of rounds on SARS and the Atkins Diet, culminating in a rather spectacular meltdown by Mike, as documented here
I excerpted a couple of the choicer insults and posted them over to the left as an ironic trophy, then forgot the whole thing.
Here's what he wrote:
SARS and your know-it-all attitude have come back to haunt you. See the bottom of this column.
Of course, I realize that to a true blue blogger such as you any traffic is good traffic. But to those of us who occupy the real world and not the blogosphere, looking like an idiot is not a pleasant experience. So while you bathe in the glory of increased hits, other people are going to be laughing their heads off at your "East Tennessee perspective."
Have a nice day!
Oh, and don't bother trying to yank your comments from your archives. I made a snapshot. Your shots that fell on your own bow are part of history!
Now, everybody knows I enjoy a good dustup, so I went over to Mike's place to read his latest column, and found that once again, he gets it wrong. The main thrust of Mike's argument is that this current flu season is serious, but not that big of a deal, and is being overhyped by the media and the CDC, a standard Fumento target. But, as usual, his argument is filled with misleading information, selective citations, and a disregard for facts.
He starts by claiming that the CDC has "arbitrarily" declared the flu an epidemic, and then defines epidemic as "higher than usual rate." This definition is fine as far as it goes, but he leaves out an important part: time. As his provided link
shows, an epidemic is a higher than expected number of cases during a given time.
This is crucial to our understanding of what an epidemic is, and whether or not the current flu season qualifies.
For example, 10 deaths from flu in a large town in January might not be considered an epidemic. The same number of deaths in June would be. The CDC classifies an epidemic not just by the number of deaths, but by when they occur as well.
Next Fumento hauls out a graph from the CDC, and claims that it shows that this season is the same as the 1999-00 season, and therefore there is not an epidemic. This comparison fails for a number of reasons.
First, as you look at the chart, you see that the flu season started much earlier this year, and that flu rates are running as much as 500% higher than the same time last year, and 250% higher than the same time for the 1999-00 flu season.
Next, we see that the shape of the curve is different. Comparing 1999-00 to 2003-04, the curves are diverging. The rate of increase in flu cases is higher now than in 1999-00.
Next, even though we are about 2 weeks ahead of the 1999-00 peak, we've already exceeded that peak by about 0.3%. I know that seems small, but it becomes important in a minute.
Finally, we have to discuss the overall shape of the curves. Note the sharp drop in the 1999-00 curve. That drop is the result of two factors. First, the vulnerable population is limited, and unless the strain is particularly virulent, exhausted fairly quickly in a large outbreak. Second, vaccinations begin to take hold, reducing the vulnerability of the general population. This year, as Mike points out, the vaccine will be only partially effective in preventing this year's dominant strain. Not only did this contribute to the early start to the season, it also means that we can expect a more gradual tapering of the rate of increase of new cases, rather than the sharp drop we've seen in the past.
So, let's add up these factors. The dominant strain is a Type A strain, one of the more virulent; the vaccine is only partially effective at limiting the spread of the flu; supplies of the vaccine are, for all intents and purposes, exhausted; and we've already observed more cases and higher rates of infection when compared to the last Type A flu outbreak. Take these factors together and you have a strong case for predicting an epidemic.
In fact, the article Mike cites
spells it out for us.
The number of deaths caused by pneumonia and influenza is just under a statistically determined "epidemic threshold." But, she (Julie Gerberding, director of the Centers for Disease Control and Prevention) said, from a practical standpoint, "it's safe to call it an epidemic."
Now, let's go to the CDC and see what the most recent numbers show.
Looking at this chart, you can see several interesting things. First, the curve has broken, showing that while the rate at which new flu cases are reported is still increasing, the rate of that increase has slowed. Also, you can see that we haven't peaked yet, and that the peak will be significantly higher than in 1999-00. And finally, you can see that the break is nowhere near as sharp as in 1999-00, which indicates that the rate of flu may continue to increase for a few more weeks instead of dropping off sharply, leading to a significantly higher number of cases. My prediction has been borne out by the actual numbers.
But is it an epidemic?
Well, there's one more chart we need to look at; the mortality rate chart. This one tells us whether we are in an actual epidemic or not, not arbitrarily, but by statistical analysis of deaths related to flu/pneumonia.
A couple of quick observations about the chart. Note that the seasonal norm and the epidemic level vary over time and track with each other. That's why it's important to include time in the definition of epidemic, and why Mike's contention that the 1999-00 chart and this year's chart are identical is bogus.
Now, look closely at the last weeks of 2003. You can see that last week we were rapidly approaching epidemic levels, and this week, we crossed it. Based on the information we had, it was fairly easy to predict that this would happen, and it wasn't arbitrary
at all, as Mike suggests.
But we really need to look at one more thing, something that shows just how wrong Mike is. Mike is telling us that there is no epidemic because the current curve is tracking the 1999-00 curve, but the 1999-00 curve resulted in epidemic flu conditions for most of a year! Look at the other end of the curve, for weeks 0-20 of 2000. The mortality due to flu peaked at 11.3%, well above the epidemic levels, and remained above the seasonal norms for most of the year, spending a significant amount of time above epidemic levels, even during the summer months. Add in the factors we've discussed for this year, the early start to the season, the weakened ability of the vaccine to protect against infection, and the fact that we're already exceeding 1999-00 levels and the only reasonable prediction to make is that this season will be significantly worse than 1999-00.
In other words, an epidemic.
PS: My mail program marked the e-mail from Mike as spam.
Sunday, December 28, 2003
They Got Another One!
The Beagle 2, Europe's first mars lander, is still silent
Now y'all are gonna think I'm paranoid, but if we can send probes to Jupiter, Saturn, Uranus, Neptune and Pluto, not to mention clear out of the solar system, and still maintain contact with them for years and years, long after we thought they'd crap out on us, then why in the world do we keep screwing up the relatively short trip to our next-door neighbor? I mean, it's simply ridiculous, like saying we can drive a car blindfolded from Maine to California with no problems but can't cross the street to get the mail.
Should we believe that every space agency on the planet keeps making stupid mistakes, like inadvertently using the wrong measuring scale on one
calculation out of millions? That there's something about a Mars mission that makes highly qualified engineers mysteriously turn into incompetent hacks?
Nope, there's a simpler explanation.
The Martians are just better at shooting down UFO's than we are.