Theory of Intelligent Design, the best explanation of Origins

This is my personal virtual library, where I collect information that, in my view, points to Intelligent Design as the best explanation of the origin of the physical Universe, life, and biodiversity.



Intelligent design qualifies as a scientific theory




Hypothesis (Prediction): Natural structures will be found that contain many parts arranged in intricate patterns, metabolic pathways similar to electronic circuits, and sophisticated language and translation systems, indicating high levels of information and interdependence, like hardware/software. No instructional information, codes, or ciphers (translators) can exist or will be found without an initial mental source, since complex specified information is, by its nature, a mental and not a material quantity.

Observation: The genetic code system is a language, and the universal code is nearly optimal and maximally efficient (only 1 in every million random alternative codes generated is more efficient than the genetic code). Genomes use the genetic code to write two separate languages, along with a cipher that translates codon triplets into an amino acid "alphabet" of 20 different left-handed amino acids used to make proteins. One code describes how proteins are made; the other instructs the cell on how genes are controlled and when they are expressed. There are 13 characteristics of human language, and DNA shares 10 of them (all levels of information are used: statistics, syntax, semantics, pragmatics, and apobetics). This comparison between cell language and human language is not a loose analogy; it is literal. Human language and cell language both employ multilayered symbols to produce a blueprint, which is required to describe an artifact and, in biology, an organism.

DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These "parallel codes" include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. The universal genetic code can carry arbitrary parallel codes much more efficiently than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. The ability to support parallel codes is strongly tied to another useful property of the genetic code: minimization of the effects of frameshift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, these findings suggest that protein-coding regions can readily carry abundant additional information. The output is thousands of essential proteins and enzymes required for life. Sophisticated molecular machinery is needed to replicate the code (for inheritance/perpetuation), transcribe it, translate it into protein through many intermediate steps requiring highly specific operations, repair it in the foreseen event that it is damaged (to preserve/protect it), or destroy it in the event that it suffers irreparable damage (to forestall cancer). Besides the standard genetic code, 23 other genetic codes are known, as well as epigenetic codes: the Splicing Code, the Metabolic Code, the Signal Transduction Codes, the Signal Integration Codes, the Histone Code, the Tubulin Code, the Sugar Code, and the Glycomic Code, all essential to define an organism and its phenotype.
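The frameshift sensitivity mentioned above is easy to visualize: shifting the reading frame by a single base changes every downstream codon. The sketch below is a minimal illustration; the codon dictionary is only a hand-picked handful of entries from the standard genetic code, not the full 64-codon table:

```python
# A tiny subset of the standard codon table, for illustration only.
CODON_TABLE = {
    "ATG": "Met", "GCC": "Ala", "GAA": "Glu", "TAA": "STOP",
    "TGG": "Trp", "CCG": "Pro", "AAT": "Asn",
}

def translate(seq):
    """Read the sequence codon by codon starting at position 0."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    return [CODON_TABLE.get(c, "?") for c in codons]

seq = "ATGGCCGAATAA"
print(translate(seq))       # in-frame reading: Met-Ala-Glu-STOP
print(translate(seq[1:]))   # one-base frameshift: every codon differs
```

Dropping a single base re-partitions the entire downstream message, which is why a code that minimizes the damage of such shifts is a nontrivial property.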

Experiment :
One language is written on top of the other, which is why the second language remained hidden for so long. Biological systems and processes cannot be fully accounted for in terms of the principles and laws of physics and chemistry alone; they also require the principles of semiotics, the science of symbols and signs, including linguistics. Summarizing the state of the art in the study of code evolution, science cannot escape considerable skepticism. It seems that the two-pronged fundamental question, "why is the genetic code the way it is and how did it come to be?", asked over 50 years ago at the dawn of molecular biology, might remain pertinent even in another 50 years if methodological naturalism is adopted and only natural explanations for its origin are permitted. The consolation is that scientists cannot think of a more fundamental problem in biology. Despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made.

The British biologist John Maynard Smith has described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes: "The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it." To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of "it was a good idea at the time." A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life's employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion or a "good compromise"?
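The arithmetic behind these alternatives is simple: the number of distinct code words an alphabet can form is the alphabet size raised to the word length, which is why doublets of four bases cannot cover twenty amino acids while triplets can. A quick sketch (the alternative-code parameters are just the hypothetical schemes named in the passage):

```python
# Distinct code words available for a given alphabet size and word length.
def coding_capacity(alphabet_size, word_length):
    return alphabet_size ** word_length

print(coding_capacity(4, 3))  # standard triplet code: 64 codons for 20 amino acids plus stops
print(coding_capacity(4, 2))  # doublet code: only 16 words, too few for 20 amino acids
print(coding_capacity(2, 6))  # a binary code would need six-symbol words to reach 64
```

The triplet code's 64 codons thus leave room for redundancy (several codons per amino acid) and stop signals, whereas a doublet code has no slack at all.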

An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.

Intelligent agents frequently act with an end goal in mind, constructing complex multipart machines that require a blueprint to build the object. Furthermore, computers integrate software and hardware and store high levels of instructional complex coded information. In our experience, systems that either a) require or b) store large amounts of specified/instructed complex information, such as codes and languages, and which are constructed with an interdependence of hardware and software, invariably originate from an intelligent source. No exception. There is an identical feature present in both DNA and intelligently designed codes, languages, and artifacts. Because we know intelligent agents can (and do) produce complex and functionally specified sequences of symbols and arrangements of matter, intelligent agency qualifies as an adequate causal explanation for the origin of this effect. Since, in addition, materialistic theories have proven universally inadequate for explaining the origin of coded information and translation systems, intelligent causation stands as the only entity with the causal power known to produce this feature of living systems.

Falsification: Nobody has yet been able to demonstrate naturally emerging informational systems based on codes and ciphers, with instructional information stored within these codes to produce a defined complex specified outcome. Perry Marshall, the author of the book Evolution 2.0, has yet to pay the prize to anyone who meets the challenge.
Natural Code LLC is a private equity investment group formed to identify a naturally occurring code. Its mission is to discover, develop, and commercialize core principles of nature which give rise to information, consciousness, and intelligence. Natural Code LLC will pay the researcher $100,000 for the initial discovery of such a code. If the newly discovered process is defensibly patentable, the group will secure the patent(s). Once patents are granted, it will pay the full prize amount to the discoverer in exchange for the rights. The investment group will locate or develop commercial applications for the technology, and the discoverer will retain a small percentage of ongoing ownership of it. The prize amount as of July 2016 is $3 million; the prize caps at $10 million.

Objection: Intelligent design is not science.  "The Intelligent Design hypothesis is untestable by science, exactly because we can never empirically know or understand the actions of ... any ... Intelligent Designer."
Answer: In order to make design predictions, it must be established what can be recognized as design in nature: something having the properties that we might attribute to an intelligently designed system. We can use observations of how intelligent agents act when designing things. By observing human intelligent agents, there is actually quite a bit we can learn about the actions of intelligent designers. Intelligent design is also falsifiable: any positive demonstration that instructional complex coded information, as stored in DNA and epigenetic codes, can easily be generated by non-design mechanisms is a potential falsification of the ID theory.

Objection: The evidence in the Dover trial disproves ID as a scientific theory.
Answer: Ask any real scientist whether he thinks the best way to arrive at scientific truth is to have courts of law decide which scientific theories should be accepted.

Objection: Well, we don’t have another universe to compare ours to, and as Hume points out, that’s exactly the problem.
Answer: It's not a problem at all. We just need to define what distinguishes designed objects from things that emerge randomly.
1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
4. Therefore, intelligent design constitutes the best explanation for the origin of information and irreducible complexity in biological systems.

Further readings:

Confirmation of intelligent design predictions

The genetic code cannot arise through natural selection

Coded information always comes from a mind

DNA stores literally coded information

The genetic code, insurmountable problem for non-intelligent origin

Origin of  translation of the 4 nucleic acid bases and the 20 amino acids, and the universal assignment of codons to amino acids

The origin of the genetic cipher, the most perplexing problem in biology

A landmark book and cornerstone of ID theory from Prof. Dr. Werner Gitt:
In the Beginning was Information

Last edited by Admin on Sun May 21, 2017 10:33 pm; edited 6 times in total



Intelligent design is science

Hypothesis - a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.

Theory - A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world.

Law - A scientific law is a statement based on repeated experimental observations that describes some aspects of the universe.

Dr William Dembski, a leading intelligent design researcher, has aptly stated:
“Intelligent Design is . . . a scientific investigation into how patterns exhibited by finite arrangements of matter can signify intelligence.”
At its best, science is an unfettered (but ethically and intellectually responsible) and progressive search for the truth about our world based on reasoned analysis of empirical observations. The very antithesis of an unfettered search for truth occurs when scientists don intellectual blinkers and assert dogmatically that all conclusions must conform to “materialist” philosophy.  Such an approach prevents the facts from speaking for themselves.  The search for truth can only suffer when it is artificially constrained by those who would impose materialist orthodoxy by authoritarian fiat before the investigation has even begun. This approach obviously begs the question, but, sadly, it is all too common among those who would cloak their metaphysical prejudices with the authority of institutional science or the law.
This is especially unfortunate, because just a moment’s reflection is enough to conclude that it is untrue that science must necessarily be limited to the investigation of material causes only. Material causes consist of chance and mechanical necessity (the so-called “laws of nature”) or a combination of the two. Yet investigators of the world as far back as Plato have recognized that a third type of cause exists: acts by an intelligent agent (i.e., “design”). Experience confirms beyond the slightest doubt that acts by intelligent agents frequently result in empirically observable signs of intelligence. Indeed, if this were not so, we would have to jettison forensics, to cite just one of many examples, from the rubric of “science.”
Just look all around you.  The very fact that you are reading this sentence confirms that you are able to distinguish it from noise.
Moreover, ID satisfies all the conditions usually required for scientific inquiry (i.e., observation, hypothesis, experiment, conclusion):
1.  It is based on empirical data: the empirical observation of the process of human design, and specific properties common to human design and biological information (CSI).
2.  It is a quantitative and internally consistent model.
3.  It is falsifiable: any positive demonstration that CSI can easily be generated by non design mechanisms is a potential falsification of the ID theory.
4.  It makes empirically testable and fruitful predictions.

Basic Intelligent Design: 1

i. Observation:
The ways that intelligent agents act can be observed in the natural world and described. When intelligent agents act, it is observed that they produce high levels of "complex-specified information" (CSI). CSI is basically a scenario which is unlikely to happen (making it complex), and conforms to a pattern (making it specified). Language and machines are good examples of things with much CSI. From our understanding of the world, high levels of CSI are always the product of intelligent design.
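The "complex" (improbability) half of this definition can be given a toy numerical form: the probability of hitting one prespecified sequence by uniform random draws from an alphabet. The sketch below uses an illustrative 100-residue chain over 20 amino acids; note it models only the improbability side, not the "specified" (pattern-conformance) side:

```python
import math

# Probability of drawing one prespecified sequence of the given length,
# sampling uniformly and independently from an alphabet of the given size.
def random_match_probability(alphabet_size, length):
    return alphabet_size ** float(-length)

p = random_match_probability(20, 100)  # one specific 100-residue chain
print(math.log10(p))                   # about -130, i.e. roughly 1 in 10**130
```

The 20 and 100 here are illustrative parameters, not measured biological values; real arguments about CSI also have to quantify the size of the functional target, which this sketch does not attempt.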

ii. Hypothesis:
If an object in the natural world was designed, then we should be able to examine that object and find the same high levels of CSI in the natural world as we find in human-designed objects.

iii. Experiment:
We can examine biological structures to test whether high CSI exists. When we look at natural objects in biology, we find many machine-like structures which are specified, because they have a particular arrangement of parts which is necessary for them to function, and complex, because they have an unlikely arrangement of many interacting parts. These biological machines are "irreducibly complex," for any change in the nature or arrangement of these parts would destroy their function. Irreducibly complex structures cannot be built up through an alternative mechanism such as Darwinian evolution, because Darwinian evolution requires that a biological structure be functional along every small step of its evolution. "Reverse engineering" of these structures shows that they cease to function if changed even slightly.

iv. Conclusion:
Because they exhibit high levels of CSI, a quality known to be produced only by intelligent design, and because there is no other known mechanism to explain the origin of these "irreducibly complex" biological structures, we conclude that they were intelligently designed.

Lee Strobel, A Case for a Creator, page 139:

"One reason some scientists are reluctant," I said, "is because they claim intelligent design is not falsifiable." I was referring to the belief among many philosophers and scientists that atheory cannot truly be scientific unless there are potential ways to prove it false through experiments or other means.
"That's silly," Behe replied.
"But I hear it over and over," I insisted. "The National Academy of Sciences said: `Intelligent design ... [is] not science because [it's] not testable by the methods of science."'
"Yes, I know," he said, "but what's really ironic is that intelligent design is routinely called unfalsifiable by the very people who are busy trying to falsify it! As you just pointed out, Miller proposed a test that would falsify the claim that intelligence is needed to produce an irreducibly complex system. So I don't see the problem. Intelligent design's strong point is that it's falsifiable, just like a good scientific theory should be. Frankly, I'd say it's more falsifiable than Darwinism is."
"Come on," I said. "Do you really believe that?"
"Yes, I do, and I'll give you an example," he replied. "My claim is that there is no unintelligent process that could produce the bacterial flagellum. To falsify that claim, all you would have to do would be to find one unintelligent process that could produce that system. On the other hand, Darwinists claim that some unintelligent process could produce the flagellum. To falsify that, you'd have to show that the system could not possibly have been created by any of a potentially infinite number of possible unintelligent processes. That's impossible to do. So which claim is falsifiable? I'd say the claim for intelligent design."
That isn't the only objection that Behe has turned on its head. While Darwinists often accuse intelligent design proponents of letting their religious beliefs color their science, Behe once told a newspaper reporter: "It has been my experience ... that the ones who oppose the theory of design most vociferously do so for religious reasons."
"What did you mean by that?" I asked.
"It seems that the folks who get the most animated when talking about Darwinian evolution are the ones most concerned with the philosophical and theological ramifications of the theory, not the science itself," he explained.
"Scientists propose hypotheses all the time. No big deal. But if I say, `I don't think natural selection is the driving force for the development of life; I think it was intelligent design,' people don't just disagree; many of them jump up and down and get red in the face. When you talk to them about it, invariably they're not excited because they disagree with the science; it's because they see the extra-scientific implications of intelligent design and they don't like where it's leading."
Behe shrugged. "I guess that's okay," he added. "These are important issues and people can get emotional about them. But we should not use what we want to be true to dismiss arguments or try to avoid them."

Knockout experiments and tests provide empirical evidence that the flagellum is irreducibly complex, as Scott Minnich testified at the Dover trial:

Kitzmiller Transcript of Testimony of Scott Minnich pgs. 99-108, Nov. 3, 2005, emphasis added

We have a mutation in a drive shaft protein or the U joint, and they can't swim. Now, to confirm that that's the only part that we've affected, you know, is that we can identify this mutation, clone the gene from the wild type and reintroduce it by mechanism of genetic complementation. So this is, these cells up here are derived from this mutant where we have complemented with a good copy of the gene. One mutation, one part knock out, it can't swim. Put that single gene back in we restore motility. Same thing over here. We put, knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We've done that with all 35 components of the flagellum, and we get the same effect.

ID is a theory of design detection, and it proposes intelligent agency as a mechanism causing the origin of the universe, the origin of life, and biodiversity. ID allows us to explain how aspects of observed complexity in nature arose, and it uses the scientific method to make its claims.

ID proposes that intelligent agency is the best explanation for the origins of: 2

the origin of the fine-tuning of the cosmos for advanced life.
the origin of extremely high levels of complex and specified information in DNA.
the origin of integrated systems required for animal body plans.
the origin of many irreducibly complex systems found in living organisms.

ID "incorporates many facts, laws and tested hypotheses."

ID uses the physical laws and finely tuned constants of the universe as premises and explains through design why they are coordinated to produce the universe, the chemical elements, a life-supporting planet, life, and biodiversity.
ID incorporates many known facts about the existence of genetic and epigenetic codes, translation systems (ciphers), and instructional complex coded information encountered in living cells, as well as tested hypotheses showing they are finely tuned to perform biological functions.
ID incorporates a myriad of tested hypotheses about the geologically abrupt appearance of body plans in the fossil record, as well as numerous facts from biochemistry and animal biology regarding the kind and amount of integrated information necessary to coordinate new types of proteins, cell types, tissues, and organs into new functional body plans.
ID incorporates many tested hypotheses about the presence of irreducible complexity in biological systems, evidenced by genetic knockout experiments which have shown that irreducible complexity is a real phenomenon.
ID does all of this by proposing new laws such as the law of conservation of information, new principles about the causes of high CSI, new methods of measuring functional information and complexity, and new hypotheses about the ubiquity of fine-tuning throughout both cosmology and biology.

ID is "well-substantiated" and "supported by a vast body of evidence."

Studies of physics and cosmology continue to uncover deeper and deeper levels of fine-tuning. Many examples could be given, but this one is striking: the initial entropy of the universe must have been fine-tuned to within 1 part in 10^(10^123) to render the universe life-friendly. That blows other fine-tuning constants away. New cosmological theories like string theory or multiverse theories just push back questions about fine-tuning, and exacerbate the need for fine-tuning.
Mutational sensitivity tests increasingly show that DNA sequences are highly fine-tuned to generate functional proteins and perform other biological functions.
Studies of epigenetics and systems biology are revealing more and more how integrated organisms are, from biochemistry to macrobiology, and showing how incredibly finely tuned basic cellular functions are.
Genetic knockout experiments are showing irreducible complexity, such as in the flagellum, or multi-mutation features where many simultaneous mutations would be necessary to gain an advantage. This is more fine-tuning.

Confirmation of intelligent design predictions

Coded information which is complex and instructional/specified, found in epigenetic systems and genes, together with irreducible, interdependent molecular machines and biosynthetic and metabolic pathways in biological systems, points to an intelligent agent as the best explanation of their setup and origins.

Observation: Intelligent agents frequently act with an end goal in mind, constructing functional, irreducibly complex multipart machines, and making exquisitely integrated circuits that require a blueprint to build the object. Furthermore, computers integrate software and hardware and store high levels of instructional complex coded information. In our experience, systems that either a) require or b) store large amounts of specified/instructed complex information, such as codes and languages, and which are constructed with an interdependence of hardware and software, invariably originate from an intelligent source. No exception.
Hypothesis (Prediction): Natural structures will be found that contain many parts arranged in intricate patterns, metabolic pathways similar to electronic circuits, and irreducible structures that perform specific functions, indicating high levels of information, irreducible complexity, and interdependence, like hardware/software.
Experiment: Experimental investigations of DNA, epigenetic codes, and metabolic circuits indicate that biological molecular machines and factories (cells) are full of information-rich, language-based codes and code/blueprint-based structures. Biologists have performed mutational sensitivity tests in proteins and determined that their amino acid sequences, in order to provide function, require highly instructional complex coded information stored in the genome. Additionally, it has been found that cells require and use various epigenetic codes, namely Splicing Codes, Metabolic Codes, Signal Transduction Codes, Signal Integration Codes, Histone Codes, Tubulin Codes, Sugar Codes, and the Glycomic Code. Furthermore, all kinds of irreducibly complex molecular machines and biosynthetic and metabolic pathways have been found which could not keep their basic functions without a minimal number of parts and complex, intertwined, and interdependent structures. That indicates these biological machines and pathways had to emerge fully operational, all at once; a stepwise evolutionary path is not possible. Furthermore, knockout experiments on all components of the flagellum have shown that the flagellum is irreducibly complex.
Conclusion: Unless someone can falsify the prediction and point out a non-intelligent source of the information found in the cell, the high levels of instructional complex coded information, the irreducibly complex and interdependent molecular systems, and the complex metabolic circuits and biosynthesis pathways are best explained by the action of an intelligent agent.






Another Creationist Prediction Confirmed

[Image: a cluster of the bacteria discussed in the article]
Dr. Richard Lenski, an evolutionary biologist at Michigan State University, has been running a long-term experiment on evolution. Indeed, it has been named the LTEE (Long-Term Evolution Experiment). It started back in 1988 and is still running today. It has followed 12 populations of the bacterium Escherichia coli through more than 50,000 generations, examining how environmental stress changes the bacteria’s genetic and physiological characteristics. More than 6 years ago, I discussed how the project was confirming the creationist view of the genome, and it continues to do just that. In addition, it has inspired another experiment that specifically confirmed a creationist prediction while, at the same time, falsifying an evolutionary one.

To understand what has happened, we need to go back to 2008. In that year, the LTEE showed that even though Escherichia coli normally can’t make use of a chemical called citrate when oxygen is present, one of their populations developed that ability after 31,500 generations of existence.1 As a result, it was dubbed the “citrate plus” population. How did that happen? At the time, no one knew. However, evolutionists thought it was the result of some rare event or combination of events, exactly the kind upon which evolution depends. New Scientist put it this way:

By this time, Lenski calculated, enough bacterial cells had lived and died that all simple mutations must already have occurred several times over.

That meant the “citrate-plus” trait must have been something special – either it was a single mutation of an unusually improbable sort, a rare chromosome inversion, say, or else gaining the ability to use citrate required the accumulation of several mutations in sequence.

Lenski himself was bold enough to write:

So the bacteria in this simple flask-world have split into two lineages that coexist by exploiting their common environment in different ways. And one of the lineages makes its living by doing something brand-new, something that its ancestor could not do.

That sounds a lot like the origin of species to me. What do you think?

Not surprisingly, a recent experiment has shown that the evolutionary predictions of Lenski and New Scientist are wrong. At the same time, it demonstrated that the predictions of both intelligent design advocates and creationists were correct.

What was the experiment? It was an attempt to reproduce the LTEE’s results – repeatedly. If one of the evolution-inspired predictions was correct, it should be very, very hard to evolve a population of citrate-using Escherichia coli in an oxygen environment. After all, Lenski believed that this was the origin of a new species, and that should be very rare. Indeed, as New Scientist indicated, the ability to use citrate in the presence of oxygen was “something special” that required “rare” conditions. However, there were scientists with another view of the situation. Intelligent design advocates suggested it as well, but I will quote a creationist, Dr. Georgia Purdom, who proposed it:

Mutations which lead to adaptation, termed adaptive mutations, can readily fit within a creation model where adaptive mechanisms are a designed feature of bacteria allowing them to survive in a fallen world. Since E. coli already possess the ability to transport and utilize citrate under certain conditions, it is conceivable that they could adapt and gain the ability to utilize citrate under broader conditions. This does not require the addition of new genetic information or functional systems… (emphasis mine)

If Dr. Purdom is correct, it shouldn’t be at all difficult to produce citrate-using Escherichia coli in an oxygen environment. Adaptive mutations are part of a designed mechanism that produces predictable results in the presence of environmental stress. Thus, given the correct environmental stress, you should repeatedly get the same adaptive result. What did the experiment find? It repeatedly got the Escherichia coli to develop the ability to use citrate in the presence of oxygen, as long as the environmental stress was consistent. Here is how the authors put it:2

Using similar media, 46 independent citrate-utilizing mutants were isolated in as few as 12 to 100 generations. Genomic DNA sequencing revealed an amplification of the citT and dctA loci and DNA rearrangements to capture a promoter to express CitT, aerobically. These are members of the same class of mutations identified by the LTEE. We conclude that the rarity of the LTEE mutant was an artifact of the experimental conditions and not a unique evolutionary event. No new genetic information (novel gene function) evolved. (emphasis mine)

In other words, they were able to repeatedly get the same results as the LTEE very quickly. That means there was no “rare” set of events leading to the ability to use citrate in the presence of oxygen, and this was definitely not any kind of speciation event. Instead, the same genetic changes seen in the LTEE were achieved repeatedly after a short amount of time. This tells us that the ability to use citrate in the presence of oxygen is the result of adaptive mutation, as predicted by Dr. Purdom nearly 8 years ago.

I would also point out the conclusion that I highlighted with boldface type. The researchers conclude that no new genetic information was needed to produce this change. This had been determined back in 2012, confirming what both intelligent design advocates and creationists had predicted, but it is nice to see it clearly spelled out in the scientific literature.


1. Zachary D. Blount, Christina Z. Borland, and Richard E. Lenski, “Historical Contingency and the Evolution of a Key Innovation in an Experimental Population of Escherichia coli,” Proceedings of the National Academy of Sciences, 105:7899–7906, 2008

2. Dustin J. Van Hofwegen, Carolyn J. Hovde, and Scott A. Minnich, “Rapid Evolution of Citrate Utilization by Escherichia coli by Direct Selection Requires citT and dctA,” Journal of Bacteriology, 198:1022–1034, 2016

more: Successful Predictions by Creation Scientists



Opposing views:

The key to ID is the notion that many of the basic parts that all organisms share are too complex to have arisen from gradual change. ID proposes that some external agent or intelligence is responsible for making these critical bits.

Unlike a true scientific theory, the existence of an “intelligent” agent cannot be tested, nor is it falsifiable.
Scientists could engage with ID claims directly, of course, but the downside is that doing so suggests to the general public that those claims are a serious challenge to science.
ID has been called an "argument from ignorance," as it relies upon a lack of knowledge for its conclusion: Lacking a natural explanation, we assume intelligent cause.
ID proponents may argue that a neutral-sounding "intelligence" is responsible for design, but it is clear from the "cultural renewal" aspect of ID that a deity — in particular, God as He is conceived of by certain conservative Christians — is envisioned as the agent of design. While schools can take no position on this view as religion, it cannot be regarded as science.
Intelligent Design explains the existence of one type of bacterial flagellum with the action of an Intelligent Designer, but fails to offer any information on how the designer might have constructed the flagellum or on who that designer might be.
Because Intelligent Design doesn't specify what the Designer is or how the Designer operates, it cannot generate expectations specific enough to help us figure out whether the basic premises of Intelligent Design are correct or incorrect. Intelligent Design is untestable.
Intelligent Design proponents have rarely published on Intelligent Design in established scientific journals and resist modifying their ideas in response to the scrutiny of the scientific community.
So far, there are no documented cases of Intelligent Design research contributing to a new scientific discovery.
Perhaps most importantly, because Intelligent Design is untestable, proponents are unable to expose their ideas to testing in a meaningful way and cannot evaluate whether their ideas are supported by evidence.

For a theory to qualify as scientific, it is expected to be:
Parsimonious (sparing in its proposed entities or explanations, see Occam's Razor)
Useful (describes and explains observed phenomena, and can be used predictively)
Empirically testable and falsifiable (see Falsifiability)
Based on multiple observations, often in the form of controlled, repeated experiments
Correctable and dynamic (modified in the light of observations that do not support it)
Progressive (refines previous theories)
Provisional or tentative (is open to experimental checking, and does not assert certainty)

Critics also say that the intelligent design doctrine does not meet the Daubert Standard,[48] the criteria for scientific evidence mandated by the US Supreme Court. The Daubert Standard governs which evidence can be considered scientific in United States federal courts and most state courts. Its four criteria are:

The theoretical underpinnings of the methods must yield testable predictions by means of which the theory could be falsified.
The methods should preferably be published in a peer-reviewed journal.
There should be a known rate of error that can be used in evaluating the results.
The methods should be generally accepted within the relevant scientific community.

Scientists approach their work by asking testable questions (hypotheses), running the tests (experiments), and by always providing within the hypothesis some means by which the hypothesis can be unequivocally disproved. Most experiments test the predictive power of the hypothesis: "If I mix chemical A and chemical B, I should get chemical C and a flash of light", or "People who hate tomatoes also hate ketchup."

If a hypothesis is subjected to test after test over many years and by many different people and does not fail, it will most likely be elevated to the level of "Theory." The term "Theory" is science-ese for "we are pretty darn sure this is absolutely true, but since absolute proof is impossible by the nature of science, we'll just call it something besides 'absolute truth.'" This is basic scientific honesty; you can't run every experiment or make every observation.

But is ID Science? Should it be taught in a science classroom alongside the Theory of Evolution? Well, can it be tested? Are there falsifying observations? ID could potentially be disproved by observing a more primitive intermediate form of some part that has been touted as 'too complex' to be natural. But then, the individual running the ID experiment can alter his hypothesis to say that this new structure is that which was installed by the Intelligent Designer. Because of this, there is no part of ID that can be unequivocally falsified by material science.

One of the most important characteristics of scientific hypotheses and theories is the predictive power they provide. ID does not offer any new explanation or observation about these complex structures that the Theory of Evolution does not already provide. The observation that some structures in organisms are too complex to have originated from gradual change will not help scientists to develop a better antibiotic, for example. In fact, the idea that "some things are too complex" is anti-scientific, since it seems to suggest that we shouldn't try to understand the origins of complex structures. ID discourages us from looking and asking questions. True science, however, moves on. If it is later found that some structures in organisms do not have more primitive counterparts, science will observe and recognize this fact, and the new knowledge will be incorporated into evolutionary theory.

