Theory of Intelligent Design, the best explanation of Origins

This is my personal virtual library, where I collect information that, in my view, points to Intelligent Design as the best explanation of the origin of the physical Universe, of life, and of biodiversity.





1 DNA stores literally coded information on Mon Nov 11, 2013 7:53 pm


DNA stores literally coded information

The five levels of information in DNA

The language of the genetic code

The scientific laws of information

Wanna Build a Cell? A DVD Player Might Be Easier

What is DNA?
The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T).

[The genetic language: grammar, semantics, evolution]
The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by an alphabet, a grammar, a collection of punctuation marks and regulatory sites, and semantics.

The digital code of DNA
Two features of DNA structure account for much of its remarkable impact on science: its digital nature and its complementarity, whereby one strand of the helix binds perfectly with its partner. DNA has two types of digital information--the genes that encode proteins, which are the molecular machines of life, and the gene regulatory networks that specify the behaviour of the genes.

DNA information: from digital code to analogue structure
The digital linear coding carried by the base pairs in the DNA double helix is now known to have an important component that acts by altering, along its length, the natural shape and stiffness of the molecule.

Next-generation digital information storage in DNA
DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.
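The encode/read cycle described above can be sketched in a few lines. This is a toy illustration only, not the actual scheme used in that experiment: it simply assigns two bits to each base (real schemes add addressing blocks and error handling on top of a mapping like this).

```python
# Toy DNA data storage: map each pair of bits to one base and back.
# The particular bit-to-base assignment below is an arbitrary choice
# for illustration; any fixed, shared mapping would work equally well.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Encode arbitrary bytes as a string of A/C/G/T (2 bits per base)."""
    bitstring = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bitstring[i:i + 2]]
                   for i in range(0, len(bitstring), 2))

def decode(dna: str) -> bytes:
    """Recover the original bytes from the base sequence."""
    bitstring = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bitstring[i:i + 8], 2)
                 for i in range(0, len(bitstring), 8))

message = b"DNA"
dna = encode(message)           # 3 bytes -> 12 bases, e.g. "CACACATACAAC"
assert decode(dna) == message   # round-trips losslessly
```

At 2 bits per base, a byte costs four bases, which is why DNA's density per gram is so striking compared with conventional media.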

The code, the text and the language of God
In his book The Language of Life, George Beadle wrote: “... the deciphering of the DNA code has revealed a language... as old as life itself, a language that is the most living language of all” (Beadle & Beadle, 1966).

Biological organisms contain genetic material that is used to control their function and development. This is DNA which contains units named genes that can produce proteins through a code (genetic code) in which a series of triplets (codons) of four possible nucleotides are translated into one of twenty possible amino acids. A sequence of codons results in a corresponding sequence of amino acids that form a protein.
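The triplet reading just described can be sketched as a lookup table. Only a handful of the 64 codons are included here (a full table has 61 sense codons plus 3 stop codons), and the function is a toy reader, not a model of the ribosome; the listed assignments themselves are the standard ones.

```python
# A small excerpt of the standard genetic code (mRNA codons -> amino acids).
CODON_TABLE = {
    "GGG": "Gly", "CGG": "Arg", "AGC": "Ser",
    "AUG": "Met", "UUU": "Phe", "UAA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA string three letters at a time, halting at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGGUAA"))  # ['Met', 'Phe', 'Gly']
```

The sequence of codons in, the sequence of amino acids out: that is the mapping the text calls the genetic code.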

The Genetic Code
The sequence of bases in DNA operates as a true code in that it contains the information necessary to build a protein, expressed in a four-letter alphabet of bases that is transcribed to mRNA and then translated into the twenty-amino-acid alphabet needed to build the protein. Calling it a true code implies that the code is free and unconstrained: any of the four bases can be placed at any position in the sequence, and their order is not determined by chemical bonding. There are hydrogen bonds between the base pairs, and each base is bonded to the sugar-phosphate backbone, but there are no bonds along the longitudinal axis of DNA. The bases occur in the complementary pairs A-T and G-C, but along one strand the bases can occur in any order, like the letters of a language used to compose words and sentences.
To further illustrate what is meant by a true code, consider magnetic letters on a magnetic board. The letters are held to the board by magnetic forces, but those forces do not impose any specific ordering on the letters. The letters can be arranged to spell out a meaningful message in the English language (code) or to form a meaningless sequence.
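A short sketch of the point being made: complementarity fixes what the partner strand must be, while leaving the order of bases along a single strand completely free, just as the magnetism holds the letters without dictating their order.

```python
# Watson-Crick pairing rules: A-T and G-C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Return the base-paired partner strand (read in the same direction)."""
    return "".join(COMPLEMENT[base] for base in strand)

# The partner strand is fully determined by the pairing rules...
print(complement_strand("ATTGCC"))   # TAACGG
# ...but nothing in those rules constrains the sequence ALONG a strand:
# the chemistry no more forbids this string than magnets forbid a
# misspelled word on the board.
print(complement_strand("GATTACA"))  # CTAATGT
```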

DNA Is Multibillion-Year-Old Software
Nature invented (sic) software billions of years before we did. “The origin of life is really the origin of software,” says Gregory Chaitin. Life requires what software does (it’s foundationally algorithmic).
1. “DNA is multibillion-year-old software,” says Chaitin (inventor of mathematical metabiology). We’re surrounded by software, but couldn’t see it until we had suitable thinking tools.
2. Alan Turing described modern software in 1936, inspiring John von Neumann to connect software to biology. Before DNA was understood, von Neumann saw that self-reproducing automata needed software. We now know DNA stores information; it is a biochemical version of Turing's software tape, but more generally: all that lives must process information. Biology's basic building blocks are processes that make decisions.

Paul Davies reinforced the point that obtaining the building blocks would not explain their arrangement:
‘… just as bricks alone don’t make a house, so it takes more than a random collection of amino acids to make life. Like house bricks, the building blocks of life have to be assembled in a very specific and exceedingly elaborate way before they have the desired function.’63

An analogy is written language. Natural objects in forms resembling the English alphabet (circles, straight lines, etc.) abound in nature, but this fact does not help to understand the origin of information (such as that in Shakespeare’s plays). The reason is that this task requires intelligence both to create the information (the play) and then to design and build the machinery required to translate that information into symbols (the written text). What must be explained is the source of the information in the text (the words and ideas), not the existence of circles and straight lines. Likewise, it is not enough to explain the origin of the amino acids, which correspond to the letters. Rather, even if they were produced readily, the source of the information that directs the assembly of the amino acids contained in the genome must be explained.

“DNA is not a special life-giving molecule, but a genetic databank that transmits its information using a mathematical code. Most of the workings of the cell are best described, not in terms of material stuff — hardware — but as information, or software. Trying to make life by mixing chemicals in a test tube is like soldering switches and wires in an attempt to produce Windows 98. It won’t work because it addresses the problem at the wrong conceptual level.”

Inside each and every one of us lies a message. It is inscribed in an ancient code, its beginnings lost in the mists of time. Decrypted, the message contains instructions on how to make a human being.

Although DNA is a material structure, it is pregnant with meaning. The arrangement of the atoms along the helical strands of your DNA determines how you look and even, to a certain extent, how you feel and behave. DNA is nothing less than a blueprint—or, more accurately, an algorithm or instruction manual—for building a living, breathing, thinking human being.

So far, I have been somewhat cavalier in the use of the term “information.” Computer scientists draw a distinction between syntax and semantics. Syntactic information is simply raw data, perhaps arranged according to rules of grammar, whereas semantic information has some sort of context or meaning. Information per se doesn’t have to mean anything. Snowflakes contain syntactic information in the specific arrangement of their hexagonal shapes, but these patterns have no semantic content, no meaning for anything beyond the structure itself. By contrast, the distinctive feature of biological information is that it is replete with meaning. DNA stores the instructions needed to build a functioning organism; it is a blueprint or an algorithm for a specified, predetermined product. Snowflakes don’t code for, or symbolize, anything, whereas genes most definitely do.
To explain life fully, it is not enough simply to identify a source of free energy, or negative entropy, to provide biological information. We also have to understand how semantic information comes into being. It is the quality, not the mere existence, of information that is the real mystery here. All that stuff about conflict with the second law of thermodynamics was mostly a red herring.

In a living organism we see the power of software, or information processing, refined to an incredible degree. Cells are not hard-wired, like kites. Rather, the information flow couples the chalk of nucleic acids to the cheese of proteins using the genetic code. Stored energy is then released and forces are harnessed to carry out the programmed instructions, as with the radio-controlled plane. Viewed this way, the problem of the origin of life reduces to one of understanding how encoded software emerged spontaneously from hardware. How did it happen? How did nature “go digital”? We are dealing here not with a simple matter of refinement and adaptation, an amplification of complexity, or even the husbanding of information, but a fundamental change of concept. It is like trying to explain how a kite can evolve into a radio-controlled aircraft. Can the laws of nature as we presently comprehend them account for such a transition? I do not believe they can.
 Fact two: not all random sequences are potential genomes. Far from it. In fact, only a tiny, tiny fraction of all possible random sequences would be even remotely biologically functional. A functioning genome is a random sequence, but it is not just any random sequence. It belongs to a very, very special subset of random sequences—namely, those that encode biologically relevant information. All random sequences of the same length encode about the same amount of information, but the quality of that information is crucial: in the vast majority of cases it would be, biologically speaking, complete gobbledygook.


Nucleic acids store life’s software; the proteins are the real workers and constitute the hardware. The two chemical realms can support each other only because there is a highly specific and refined communication channel between them, mediated by a code: the so-called genetic code.

Genetic Entropy: Sanford 2005 page 52 and 53:
This “complex interwoven (poly-functional) network” throughout the entire DNA code makes the human genome severely poly-constrained with respect to random mutations (Sanford, Genetic Entropy, 2005, page 141). This means the DNA code is much more severely limited in its chance of ever having a hypothetical beneficial mutation, since almost the entire DNA code is now shown to be intimately connected to many other parts of the code. Thus even though a random mutation may change one part of an organism for the better, it is far more likely to harm the many other parts of the organism that depend on that part being as it originally was. Since the established findings of Mendelian genetics forced evolutionary theory to stop treating the whole organism as the unit natural selection works upon, and instead to treat it as a collection of independent genes that can be selected or discarded individually, this “complex interwoven network” finding is extremely bad news, if not absolutely crushing, for the “junk DNA” population-genetics scenario of evolution (the modern neo-Darwinian synthesis) developed by Haldane, Fisher and Wright.

We now know that in yeast alone there are more than 300 nanomachines at work performing various tasks in the cell, many of which run concurrently. Yet concurrency in information-processing systems cannot arise without pre-knowledge of the tasks requiring coordinated action!
Literature from those who argue in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. Multiplying these together, the maximum number of elementary-particle events in 30 billion years could only be 10^143. Yet the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins averaging 347 amino acids in length. The odds against just one specified protein of that length are 1 in 10^451.
The probability of useful DNA, RNA, or proteins occurring by chance is extremely small. Calculations vary somewhat, but all are extremely small (highly improbable). If one assumes a hypothetical prebiotic soup to start with, there are at least three combinatorial hurdles (requirements) to overcome, each of which decreases the chance of forming a workable protein. First, all amino acids must form a chemical bond (peptide bond) when joining with other amino acids in the protein chain. Assuming, for example, a short protein molecule of 150 amino acids, the probability of building a 150-amino-acid chain in which all linkages are peptide linkages is roughly 1 chance in 10^45. The second requirement is that functioning proteins tolerate only left-handed amino acids, yet in abiotic amino acid production the right-handed and left-handed isomers are produced in nearly equal frequency. The probability of building a 150-amino-acid chain at random in which all bonds are peptide bonds and all amino acids are L-form is roughly 1 chance in 10^90. The third requirement for functioning proteins is that the amino acids must link up like letters in a meaningful sentence, i.e. in a functionally specified sequential arrangement. The chance of this happening at random for a 150-amino-acid chain is approximately 1 in 10^195. It would appear impossible for chance to build even one functional protein, considering how small the likelihood is. For comparison, to get a feeling for just how low this probability is, consider that there are only about 10^65 atoms in our galaxy.
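For readers who want to check the arithmetic, the three quoted orders of magnitude can be reproduced under the toy assumptions stated in the text: a 1-in-2 chance per linkage of a peptide bond, a 1-in-2 chance per residue of the left-handed form, and one specified choice among 20 amino acids per position. This only verifies the quoted numbers against those assumptions, not the assumptions themselves.

```python
import math

n = 150  # chain length in amino acids

# ~149 linkages, each assumed 50/50 peptide vs. non-peptide
log10_peptide = (n - 1) * math.log10(2)            # ~44.8 -> "1 in 10^45"

# additionally, 150 residues each assumed 50/50 left- vs. right-handed
log10_chiral = log10_peptide + n * math.log10(2)   # ~90.0 -> "1 in 10^90"

# one fully specified sequence out of 20 choices per position
log10_sequence = n * math.log10(20)                # ~195  -> "1 in 10^195"

print(round(log10_peptide), round(log10_chiral), round(log10_sequence))
# 45 90 195
```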
We can quantify the information-carrying capacity of nucleic acids in the following way. Each position can be one of four bases, corresponding to two bits of information (2^2 = 4). Thus, a chain of 5,100 nucleotides corresponds to 2 × 5,100 = 10,200 bits, or 1,275 bytes (1 byte = 8 bits). The E. coli genome is a single DNA molecule consisting of two chains of 4.6 million nucleotides, corresponding to 9.2 million bits, or 1.15 megabytes, of information.
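The capacity arithmetic above can be checked directly: 2 bits per base position and 8 bits per byte, counting each position once (the complementary strand repeats the same information).

```python
import math

bits_per_base = math.log2(4)      # 2.0 -- four possible bases per position
assert bits_per_base == 2.0

chain = 5100                      # nucleotides in the example chain
print(chain * 2)                  # 10200 bits
print(chain * 2 // 8)             # 1275 bytes

ecoli_positions = 4_600_000       # base positions in the E. coli genome
bits = ecoli_positions * 2        # 9,200,000 bits
print(bits / 8 / 1e6)             # 1.15 (megabytes)
```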
Scientists have been looking to unlock the memory-storage potential of DNA strands for a decade now. At Harvard it looks like they've finally cracked it, with a breakthrough that allows over 700 terabytes of data to be stored in a single gram of DNA. Treating the genetic code much like the binary system traditional computer memory uses, they've successfully replicated the storage capacity of over 14,000 Blu-ray discs, or 151 kilograms of hard drives, in an area smaller than the tip of your little finger.

Theist: The DNA code was written by an intelligent mind.
Atheist: Emergent properties and physical reactions are perfectly capable of producing the code stored in DNA.
Theist: There is no known natural mechanism (i.e. one with no intelligence involved) that can encode the information stored in DNA.
Atheist: God-of-the-gaps argument. Argument from ignorance. Just because we don't know yet does not mean God did it.
Theist: "The sentence you are reading now was written by an intelligent mind."
Atheist: "Emergent properties and physical reactions are perfectly capable of putting these letters on the monitor."
Theist: "There is no known natural mechanism (i.e. one with no intelligence involved) that can type these letters and make them appear on the screen."
Atheist: "Gaps argument. Argument from ignorance. Just because we don't know yet does not mean an intelligence did it."

Norbert Wiener - MIT Mathematician - Father of Cybernetics
"Information is information, not matter or energy. No materialism which does not admit this can survive at the present day."

DNA and RNA: Providential Coding to 'Revere' God
When accurately describing what happens inside a eukaryotic cell’s nucleus or mitochondrion, evolutionary geneticists routinely describe what they see using terms like code (e.g., genetic code, protein coding, coding regions), encode, codon, anti-codon, decode, transcription, translation, blueprint, program, information, instruction, control, edit, decipher, messenger, reading, proofreading, signal, alphabet, letter, language, gene expression, surveillance (for detecting nonsense), etc. It is important to recognize that these genetic message-oriented terms were not imposed on the evolutionists by the creationists!
Genetic science reveals God’s purposeful encoding of genetic messages, with mind-bogglingly complex instructions on how to build living things from the biomolecular level upward, with those same encoded messages being efficiently decoded and recognized with sufficient accuracy to produce responsive compliance with those biomolecular instructions!
A coded message is no good at all if the intended recipient cannot understand its encoded meaning. Accordingly, every code-based message must be informationally devised (i.e., created), encoded, and sent to the intended readers. The readers must then decode the message, recognize the information it contains, and act on that information in a way that corresponds to the original purpose of the message’s creator. It is vital that the intended recipient understand the sender’s meaning, because the message itself is unrecognizable unless both sender and receiver share a common understanding of what the words (or other symbols) mean.
Consider the following message: “One if by land, two if by sea.” What does that sequence of words signify? Because that message used a language shared by the sender (Robert Newman, with the help of John Pulling) and receivers (those awaiting word on the movement of British troops), it provided a recognizable warning that “the Regulars [British soldiers] are coming” by water, not by land. Two lanterns lit in the Old North Church on the night of April 18, 1775, provided a signal—but it was recognizable as such only to those who knew the “language” shared by Paul Revere and his allies.
This principle of coded information transfer is illustrated at the sub-cellular level. If a protein-coding “message” borne by a portion of DNA cannot be transferred by RNA and translated on ribosomes providentially fitted for the task, the DNA’s instructions cannot be complied with, and that would mean no protein synthesis—which can be a fatal failure for whatever life form is involved, whether girl or gecko, boy or bacterium.
If amino acids were randomly assigned to triplet codons, then there would be 1.5 x 10^84 possible genetic codes to choose from. However, the genetic code used by all known forms of life is nearly universal with few minor variations. This suggests that a single evolutionary history underlies the origin of the genetic code. Many hypotheses on the evolutionary origins of the universal genetic code have been proposed.
In responding to the “code skeptics,” we need to keep in mind that they are bound by their own methodology to explain the origin of the genetic code in non-teleological, causal terms. They need to explain how things happened in the way that they suppose. Thus if a code-skeptic were to argue that living things have the code they do because it is one which accurately and efficiently translates information in a way that withstands the impact of noise, then he/she is illicitly substituting a teleological explanation for an efficient causal one. We need to ask the skeptic: how did Nature arrive at such an ideal code as the one we find in living things today?
By contrast, a “top-down” explanation of life goes beyond such reductionistic accounts. On a top-down account, it makes perfect sense to say that the genetic code has the properties it has because they help it to withstand the impact of noise while accurately and efficiently translating information. The “because” here is a teleological one. A teleological explanation like this ties in perfectly well with intelligent agency: normally the question we ask an agent when they do something is: “Why did you do it that way?” The question of how the agent did it is of secondary importance, and it may be the case that if the agent is a very intelligent one, we might not even understand his/her “How” explanation. But we would still want to know “Why?” And in the case of the genetic code, we have an answer to that question.
We currently lack even a plausible natural process which could have generated the genetic code. On the other hand, we know that intelligent agents can generate codes. The default hypothesis should therefore be that the code we find in living things is the product of an Intelligent Agent.

In River Out of Eden: A Darwinian View of Life, Dawkins writes:
What is truly revolutionary about molecular biology in the post-Watson-Crick era is that it has become digital.   After Watson and Crick, we know that genes themselves, within their minute internal structure, are long strings of pure digital information. What is more, they are truly digital, in the full and strong sense of computers and compact disks, not in the weak sense of the nervous system. The genetic code is not a binary code as in computers, nor an eight-level code as in some telephone systems, but a quaternary code, with four symbols. The machine code of the genes is uncannily computerlike. Apart from differences in jargon, the pages of a molecular-biology journal might be interchanged with those of a computer-engineering journal. . . .
Our genetic system, which is the universal system of all life on the planet, is digital to the core. With word-for-word accuracy, you could encode the whole of the New Testament in those parts of the human genome that are at present filled with “junk” DNA – that is, DNA not used, at least in the ordinary way, by the body. Every cell in your body contains the equivalent of forty-six immense data tapes, reeling off digital characters via numerous reading heads working simultaneously. In every cell, these tapes – the chromosomes – contain the same information, but the reading heads in different kinds of cells seek out different parts of the database for their own specialist purposes. . . .
Genes are pure information – information that can be encoded, recoded and decoded, without any degradation or change of meaning. Pure information can be copied and, since it is digital information, the fidelity of the copying can be immense. DNA characters are copied with an accuracy that rivals anything modern engineers can do.

What lies at the heart of every living thing is not a fire, warm breath, not a ‘spark of life’. It is information, words, instructions…Think of a billion discrete digital characters…If you want to understand life think about technology – Richard Dawkins (Dawkins 1996, 112)
After the seventh minute of his speech, Dawkins admits: "Can you think of any other class of molecule that has that property of folding itself up into a uniquely characteristic enzyme, of which there is an enormous repertoire, capable of catalyzing an enormous repertoire of chemical reactions, and this in itself to be absolutely determined by a digital code?"
DNA is a communication system because the triplets are encoded into messenger RNA and decoded into amino acids and proteins. For example, the triplet GGG (Guanine-Guanine-Guanine) is an instruction to make the amino acid Glycine, which is then assembled into proteins by the ribosomes.

The organism follows the rules of the genetic code: GGG = Glycine, CGG = Arginine, AGC = Serine, etc. Note that GGG is not literally Glycine; it is a symbolic instruction to make Glycine.

Just like computer codes, the genetic code is arbitrary. There is no law of physics that says “1” has to mean “on” and “0” has to mean “off.” There’s no law of physics that says 10000001 has to code for the letter “A.” Similarly, there is no law of physics that says three Guanine molecules in a row have to code for Glycine. In both cases, the communication system operates from a freely chosen, fixed set of rules.
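The arbitrariness claim can be made concrete with a toy decoder: either mapping below works equally well as a "code", because all that matters is that encoder and decoder share the same convention. Table A uses the three real assignments quoted earlier; table B is a deliberately scrambled convention invented for this sketch.

```python
# Two equally workable conventions for the same symbols. No law of physics
# prefers one table over the other; only the shared agreement matters.
table_a = {"GGG": "Gly", "CGG": "Arg", "AGC": "Ser"}   # the real assignments
table_b = {"GGG": "Ser", "CGG": "Gly", "AGC": "Arg"}   # a scrambled convention

def decode(msg: str, table: dict) -> list:
    """Decode a message three symbols at a time using the agreed table."""
    return [table[msg[i:i + 3]] for i in range(0, len(msg), 3)]

msg = "GGGCGGAGC"
print(decode(msg, table_a))  # ['Gly', 'Arg', 'Ser']
print(decode(msg, table_b))  # ['Ser', 'Gly', 'Arg'] -- same physics, different code
```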
In all communication systems it is possible to label the encoder, the message and the decoder and determine the rules of the code.
The rules of communication systems are defined in advance by conscious minds. There are no known exceptions to this. Therefore we have 100% inference that the Genetic Code was designed by a conscious mind.
1. Code is defined as communication between an encoder (a “writer” or “speaker”) and a decoder (a “reader” or “listener”) using agreed upon symbols.
2. DNA's definition as a literal code (and not a figurative one) has been nearly universal in the body of biological literature since the 1960s.
3. DNA code has much in common with human language and computer languages
4. DNA transcription is an encoding / decoding mechanism isomorphic with Claude Shannon's 1948 model: The sequence of base pairs is encoded into messenger RNA which is decoded into proteins.
5. Information theory terms and ideas applied to DNA are not metaphorical, but in fact quite literal in every way. In other words, the information theory argument for design is not based on analogy at all. It is direct application of mathematics to DNA, which by definition is a code.

Code, by definition, implies intelligence, and the genetic code is real code, mathematically identical to that of language, computer codes, etc., all of which can only arise by intelligent convention of symbol systems. The genome contains meta-information, and there is now evidence of meta-programming as well. Meta-information is information about information, and we now know the genome contains such structures. But meta-information cannot arise without knowledge of the original information. Meta-programming is even more solid evidence of intelligence at work.

Rutgers University professor Sungchul Ji’s excellent paper “The Linguistics of DNA: Words, Sentences, Grammar, Phonetics, and Semantics” starts off:

“Biologic systems and processes cannot be fully accounted for in terms of the principles and laws of physics and chemistry alone, but they require in addition the principles of semiotics—the science of symbols and signs, including linguistics.” Ji identifies 13 characteristics of human language; DNA shares 10 of them. Cells edit DNA. They also communicate with each other and literally speak a language he calls “cellese,” described as “a self-organizing system of molecules, some of which encode, act as signs for, or trigger, gene-directed cell processes.” This comparison between cell language and human language is not a loosey-goosey analogy; it is formal and literal. Human language and cell language both employ multilayered symbols. Dr. Ji explains this similarity in his paper: “Bacterial chemical conversations also include assignment of contextual meaning to words and sentences (semantic) and conduction of dialogue (pragmatic)—the fundamental aspects of linguistic communication.” This is true of genetic material. Signals between cells do this as well.

Nucleotides in DNA contain four different nitrogenous bases: thymine, cytosine, adenine, or guanine. The order of nucleotides along DNA polymers encodes the genetic information carried by DNA. DNA polymers can be tens of millions of nucleotides long, and at these lengths the four-letter nucleotide alphabet can encode nearly unlimited information.

DNA is an organized language of coded information. The symbols, when read, give a message or instructions. DNA is not just a pattern that develops like a snowflake; it is information coded as a symbolic representation of an actual 3D implementation. It is a language that requires decision-making and thought between a transmitter and a receiver. The symbols comprise an alphabet, grammar, meaning, intent, and even error correction. You can even store it like computer data. The information is expressed through matter and energy. DNA is designed language. Natural patterns cannot achieve cell-to-mammal morphology; intelligence is required. All information is carried by a code of symbols that are material in nature.

1. The pattern in DNA is a code.
2. All codes whose origin we know come from an intelligent mind.
3. Therefore we have 100% inference that DNA comes from an intelligent mind, and 0% inference that it does not.

DNA stores coded information. 
All codes come from intelligence.
Therefore, DNA comes from a mind.

1. Symbols are defined as: something which represents something else.
2. Symbols carry thoughts (or messages) from a personal, intelligent, mind. No exceptions.
3. Scientific inquiry has discovered that DNA carries encoded symbolic instructions.

Just show ONE example of instructional information that cannot be traced back to intelligence, and you win. Just one.

DNA ultimately came from a mind that had to make decisions and be extraordinarily intelligent.
This claim can be falsified: show one, just ONE example of coded, specified, complex information that did not come from a mind, and you topple the claim.

Last edited by Admin on Mon Feb 06, 2017 8:19 am; edited 57 times in total


2 Re: DNA stores literally coded information on Sat Nov 16, 2013 7:50 am


Common language talks about DNA as 'information' or 'a code'. For a very long time, scientists suspected that something—some kind of plan, specificity or driving force—resided within the sperm and/or egg, such that a snake developed from a snake egg and humans created human offspring. But it was only in the late 1940s and 1950s, when cyberneticists, physicists and mathematicians entered the field of molecular biology, that scientists came to interpret this 'something' as information. The physicist Erwin Schrödinger probably coined the term 'code' when he described living organisms in terms of their molecular and atomic structure, in his influential book What is Life? (Schrödinger, 1944). The complete pattern of the future development of an organism and its function when mature, Schrödinger wrote, is contained in the chromosomes in the form of a 'code'. His writings had a strong influence on both Francis Crick and James Watson and their later discovery of the structure of DNA. “Schrödinger probably wasn't the first, but he was the first one I'd read to say that there must be a code of some kind that allowed molecules in cells to carry information,” Watson said in an interview with Scientific American (Watson, 2003). Indeed, Watson and Crick, in a paper on the implications of their DNA structure, picked up Schrödinger's metaphor when they wrote that “it therefore seems likely that the precise sequence of the bases is the code which carries the genetical information.”
On 26 June 2000, when Francis Collins, Director of the National Human Genome Research Institute, announced the completion of the first draft in a major media event at the White House, he said “Today, we celebrate the revelation of the first draft of the human book of life” and declared that this breakthrough lets humans for the first time read “our own instruction book.”

OK, here's another thing that amazed me when I found it out. I probably learnt it years ago, because I just checked and found it called complementary base pairing, which rings a bell. Going back over it, I see that it's effectively a binary code, just like the code that runs in computers. What's the point, then? Just this: how fascinating that the code that runs computers is basically defined by 1s and 0s, and the code that runs every species we know of works in a comparably digital way. That just seems amazing.

Taking it further, the code that a programmer writes has a similar impact on the program it's written for as DNA does on its host cell. Object-oriented code, for example, works in a way comparable to how proteins are made in a cell. As a programmer, I like writing object-oriented code: code that describes objects. If I wrote a program for a human, I would write a class for a cell. I'd describe how the cell functions and what it can do, and then I'd describe lots of different kinds of cells. They'd share the fundamental attributes of the basic cell class, but each would differ a little. A mature red blood cell wouldn't have a nucleus (no DNA); a young cell, such as an osteoblast, would be able to grow and change easily, while an older cell type, such as an osteoclast, wouldn't.

So how does DNA work? It codes for proteins. Proteins are building blocks, like the keratin in your nails and hair, and they build cell structures; enzymes, which are themselves proteins, are the workers. The lactase in your gut breaks down the milk sugar lactose. Some people's DNA doesn't describe how to make this enzyme, so they're lactose intolerant and can't have dairy products. How fascinating that the code for our bodies can be talked about in terms similar to the code I use to write computer programs. Who'd have thought it!
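The object-oriented analogy in the post above can be sketched directly. This is a purely illustrative toy, not a biological model; all class and method names here are invented for the example:

```python
# A toy sketch of the analogy: a base Cell class with shared attributes,
# and subclasses that specialize it, as the post describes.

class Cell:
    """Base class: attributes and behaviour common to every cell."""
    def __init__(self, has_nucleus=True):
        self.has_nucleus = has_nucleus  # i.e. whether the cell carries DNA

    def make_protein(self, gene):
        """A cell can only express a gene if it still has its DNA."""
        if not self.has_nucleus:
            raise RuntimeError("no DNA available to transcribe")
        return f"protein encoded by {gene}"

class RedBloodCell(Cell):
    """Mature red blood cells lack a nucleus (no DNA)."""
    def __init__(self):
        super().__init__(has_nucleus=False)

class Osteoblast(Cell):
    """A bone-building cell; inherits everything from the base class."""
    pass

print(Osteoblast().make_protein("COL1A1"))  # protein encoded by COL1A1
```

Calling `make_protein` on a `RedBloodCell` raises an error, mirroring the post's point that a cell without a nucleus cannot express genes.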

Intention in the genetic code

There is intention in the genetic code, and the genetic code uses arbitrary symbolism. The specific DNA sequences, together with the biochemistry that decodes them, have specific purposes. Thus the DNA sequences and the biochemistry that end up producing my eyes exist in order to produce eyes, which themselves exist so that I can see.
In other words, this code was produced so that I can see.
Contrast this with tree rings. When you cut down a tree, you can find out how old it is by counting the rings, so the rings contain information about the age of the tree. But there was no intention for the tree rings to contain that information; it is a chance by-product of the way a tree grows. The information in the DNA sequences that are the instructions for building an eye is, by contrast, there with the intention of building an eye. Furthermore, the instructions are arbitrary, in the sense that any other set of instructions that built an eye would do just as well.

Last edited by Admin on Mon Feb 06, 2017 8:19 am; edited 4 times in total


3 Re: DNA stores literally coded information on Sun Nov 17, 2013 6:41 am



The genetic code by which DNA stores the genetic information consists of "codons" of three nucleotides. The functional segments of DNA which code for the transfer of genetic information are called genes. With four possible bases, a triplet of nucleotides can give 4 x 4 x 4 = 64 different possibilities, and these combinations are used to specify the 20 different amino acids used by living organisms.
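The counting argument above can be checked in a couple of lines (a sketch; the enumeration simply lists every triplet over the four-letter alphabet):

```python
from itertools import product

# Enumerate every possible triplet over the four DNA bases.
bases = "ACGT"
codons = ["".join(p) for p in product(bases, repeat=3)]

print(len(codons))  # 4**3 = 64 possible triplets

# 64 codons are more than enough to specify 20 amino acids, which is
# why the code is degenerate: several codons map to the same amino acid.
```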

How is genetic information stored?
Genetic information is stored within the chemical structure of the molecule DNA.

Genetic Destinies   Peter Little

A Devil's Chaplain: Selected Writings,  Richard Dawkins

Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics

Last edited by Admin on Wed Jan 11, 2017 1:17 pm; edited 3 times in total


4 Re: DNA stores literally coded information on Thu Dec 05, 2013 6:17 am


The DNA in living cells contains coded information. It is not surprising that so many of the terms used in describing DNA and its functions are language terms. We speak of the genetic code. DNA is transcribed into RNA. RNA is translated into protein. ... Such designations are not simply convenient or just anthropomorphisms. They accurately describe the situation (1984, pp. 85-86, emp. in orig.).

Last edited by Admin on Tue May 19, 2015 12:19 pm; edited 1 time in total


5 Re: DNA stores literally coded information on Thu Dec 05, 2013 7:54 am


In their textbook on the origin of life, Thaxton et al. addressed the implications of the genetic code:

We know that in numerous cases certain effects always have intelligent causes, such as dictionaries, sculptures, machines and paintings. We reason by analogy that similar effects have intelligent causes. For example, after looking up to see "BUY FORD" spelled out in smoke across the sky, we infer the presence of a skywriter even if we heard or saw no airplane. We would similarly conclude the presence of intelligent activity were we to come upon an elephant-shaped topiary in a cedar forest.

In like manner an intelligible communication via radio signal from some distant galaxy would be widely hailed as evidence of an intelligent source. Why then doesn't the message sequence on the DNA molecule also constitute prima facie evidence for an intelligent source? After all, DNA information is not just analogous to a message sequence such as Morse code, it is such a message sequence....

We believe that if this question is considered, it will be seen that most often it is answered in the negative simply because it is thought to be inappropriate to bring a Creator into science (1984, pp. 211-212, emp. in orig.).


6 Re: DNA stores literally coded information on Fri Apr 18, 2014 6:38 pm



Last edited by Admin on Thu Jun 25, 2015 6:45 pm; edited 1 time in total


7 An explanation of the Genetic Code on Tue May 12, 2015 1:25 pm


An explanation of the Genetic Code

Three DNA base pairs code for one amino acid.

Practically, codons are "decoded" by transfer RNAs (tRNAs), which interact with a ribosome-bound messenger RNA (mRNA) containing the coding sequence. Each tRNA has an anticodon loop used to recognise codons in the mRNA. Of the 64 different codons, 61 specify an amino acid: the appropriate "charged" aminoacyl-tRNA binds to the next codon in the mRNA, and the ribosome catalyses the transfer of the amino acid from the tRNA to the growing (nascent) protein/polypeptide chain. The remaining 3 codons are used for "punctuation"; that is, they signal the termination (the end) of the growing polypeptide chain (stop codons). The genetic code is visualised in this scheme.
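The decoding step described above can be sketched as a lookup from codons to amino acids. This is a minimal illustration, not a model of the ribosome: only a handful of the 64 table entries are included, and `*` marks a stop codon.

```python
# A tiny excerpt of the standard genetic code; '*' marks the
# three stop codons used for "punctuation".
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UCU": "Ser", "GGC": "Gly",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna):
    """Read codons in triplets from a fixed start point until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":          # stop codon: terminate the chain
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate("AUGUUUGGCUAA"))  # Met-Phe-Gly
```

The fixed starting point matters: as the later posts on frameshift mutations show, beginning the grouping one base off yields a completely different (and usually meaningless) message.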

The genetic code is universal

Lastly, the Genetic Code in the table above has also been called "The Universal Genetic Code". It is known as "universal" because it is used by all known organisms as a code for DNA, mRNA, and tRNA: it applies in animals (including humans), plants, fungi, archaea, bacteria, and viruses. However, all rules have their exceptions, and such is the case with the Genetic Code; small variations in the code exist in mitochondria and certain microbes. Nonetheless, it should be emphasised that these variances represent only a small fraction of known cases, and that the Genetic Code applies quite broadly, certainly to all known nuclear genes.

DNA, in addition to the digital information of the linear genetic code (the semantics), encodes equally important continuous, or analog, information that specifies the structural dynamics and configuration (the syntax) of the polymer.


8 feature The digital code of DNA on Thu Oct 22, 2015 11:35 am


Feature The digital code of DNA 1

The discovery of the structure of DNA transformed biology profoundly, catalysing the sequencing of the human genome and engendering a new view of biology as an information science. Two features of DNA structure account for much of its remarkable impact on science: its digital nature and its complementarity, whereby one strand of the helix binds perfectly with its partner. DNA has two types of digital information — the genes that encode proteins, which are the molecular machines of life, and the gene regulatory networks that specify the behaviour of the genes.

The discovery of the double helix in 1953 immediately raised questions about how biological information is encoded in DNA. A remarkable feature of the structure is that DNA can accommodate almost any sequence of base pairs — any combination of the bases adenine (A), cytosine (C), guanine (G) and thymine (T) — and hence any digital message or information. During the following decade it was discovered that each gene encodes a complementary RNA transcript, called messenger RNA (mRNA), made up of A, C, G and uracil (U), instead of T. The four bases of the DNA and RNA alphabets are related to the 20 amino acids of the protein alphabet by a triplet code — each three letters (or 'codons') in a gene encodes one amino acid. For example, AGT encodes the amino acid serine. The dictionary of DNA letters that make up the amino acids is called the genetic code. There are 64 different triplets or codons, 61 of which encode an amino acid (different triplets can encode the same amino acid), and three of which are used for 'punctuation' in that they signal the termination of the growing protein chain. The molecular complementarity of the double helix — whereby each base on one strand of DNA pairs with its complementary base on the partner strand (A with T, and C with G) — has profound implications for biology. As implied by James Watson and Francis Crick in their landmark paper, base pairing suggests a template copying mechanism that accounts for the fidelity in copying of genetic material during DNA replication. It also underpins the synthesis of mRNA from the DNA template, as well as processes of repairing damaged DNA.

The digital nature of biological information

The value of having an entire genome sequence is that one can initiate the study of a biological system with a precisely definable digital core of information for that organism — a fully delineated genetic source code. The challenge, then, is in deciphering what information is encoded within the digital code. The genome encodes two main types of digital information — the genes that encode the protein and RNA molecular machines of life, and the regulatory networks that specify how these genes are expressed in time, space and amplitude. It is the regulatory networks, and not the genes themselves, that play the critical role in making organisms different from one another. Development is the elaboration of an organism from a single cell (the fertilized egg) to an adult (for humans this is 10^14 cells of thousands of different types). Physiology is the triggering of specific functional programmes (for example, the immune response) by environmental cues. Regulatory networks are crucial in each of these aspects of biology. They are composed of two main types of components: transcription factors and the DNA sites to which they bind in the control regions of genes, such as promoters, enhancers and silencers. The control regions of individual genes serve as information processors to integrate the information inherent in the concentrations of different transcription factors into signals that mediate gene expression. The collection of the transcription factors and their cognate DNA-binding sites in the control regions of genes that carry out a particular developmental or physiological function constitute these regulatory networks (Fig. 2).

Because most 'higher' organisms or eukaryotes (organisms that contain their DNA in a cellular compartment called the nucleus), such as yeast, flies and humans, have predominantly the same families of genes, it is the reorganization of DNA-binding sites in the control regions of genes that mediates the changes in the developmental programmes that distinguish one species from another. Thus, the regulatory networks are uniquely specified by their DNA-binding sites and, accordingly, are basically digital in nature. One thing that is striking about digital regulatory networks is that they can change significantly in short periods of evolutionary time. This is reflected, for example, in the huge diversity of the body plans, controlled by gene regulatory networks, that emerged over perhaps 10-30 million years during the Cambrian explosion of metazoan organisms (about 550 million years ago). Likewise, remarkable changes occurred to the regulatory networks driving the development of the human brain during its divergence from its common ancestor with chimpanzees about 6 million years ago. Biology has evolved (why not: God created?) several different types of informational hierarchies.
First, a regulatory hierarchy is a gene network that defines the relationships of a set of transcription factors, their DNA-binding sites and the downstream peripheral genes that collectively control a particular aspect of development. A model of development in the sea urchin represents a striking example (Fig. 2). Second, a hierarchy defines an ordered set of relationships. For example, a single gene may be duplicated to generate a multi-gene family, and a multi-gene family may be duplicated to create a supergene family. Third, molecular machines may be assembled into structural hierarchies by an ordered assembly process. How was this ordered assembly process done? One example of this is the basic transcription apparatus that involves the step-by-step recruitment of factors and enzymes that will ultimately drive the specific expression of a given gene. Did this recruitment not have to be programmed? A second example is provided by the ribosome, the complex that translates RNA into protein, which is assembled from more than 50 different proteins and a few RNA molecules. Finally, an informational hierarchy depicts the flow of information from gene to environment: gene > RNA > protein > protein interactions > protein complexes > networks of protein complexes in a cell > tissues or organs > individual organisms > populations > ecosystems. At each successively higher level in the informational hierarchy, information can be added or altered for any given element (for example, by alternative RNA splicing or protein modification).


Last edited by Admin on Tue Jan 03, 2017 3:09 pm; edited 2 times in total


9 Re: DNA stores literally coded information on Tue Jan 05, 2016 10:29 am


A Quick Way to Understand the Plan of the Code

    We can get a clear idea of how the DNA code is arranged by designing one of our own.  Suppose we form a code in which only four symbols are to be used, the numerals 1, 2, 3, and 4.  It is to be translated later into the 26 letters of our alphabet.
    If we decide to put the numerals in groups of three, then we will have more than enough triplets of digits to match the 26 letters.  In fact, we will have 64 different trios (111, 112, 113, 114, 121, 122, etc.).  Because of the excess of these trios as compared to 26 letters, we can assign several different groups of three to the same letter, in most cases.
    Let’s let the letter “A” be coded by any of the following groups of digits, 111, 112, 113, 114.  “B” can be represented by 121, 122, 123, or 124. For “C,” we will assign only two triplets, 131 and 132.  This will give us enough for the moment.
    Now, using our simple code, let's write the word "Cab."  It could possibly be 132114122.  To translate it, all we need do is divide it into groups of three, beginning at the correct starting point.  Then, by referring to our code key or dictionary, we can easily decipher it.
    It would work just the same if other symbols were used instead of the numerals 1, 2, 3, 4.  For example, we could use a circle, a square, a triangle, and an oval.  We could, as another alternative, use four different types of tree leaves, or even four chemicals.  In the latter case, our code would be much like the DNA code, as we will see.  DNA, however, does not translate to our alphabet but to the 20 amino acids, indicating the proper order for their joining, to make a specific protein that is needed.  Biological life consists, to a great extent, of making the correct proteins with the proper timing and amounts.  Once formed, these various proteins can do many wonderful things.
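The toy code just described can be implemented in a few lines, using exactly the key given in the passage. Note the degeneracy: several triplets map to the same letter, mirroring how 64 codons cover 20 amino acids.

```python
# The passage's toy code: triplets of the digits 1-4 mapped to letters,
# with several triplets per letter ("A" has four, "C" has two).
KEY = {
    "111": "A", "112": "A", "113": "A", "114": "A",
    "121": "B", "122": "B", "123": "B", "124": "B",
    "131": "C", "132": "C",
}

def decode(message):
    """Split the message into triplets from the start and look each one up."""
    return "".join(KEY[message[i:i + 3]] for i in range(0, len(message), 3))

print(decode("132114122"))  # CAB
```

As the passage says, dividing the string into groups of three from the correct starting point is all the "translation" requires; start one digit off and the lookup fails, which is the frameshift idea in miniature.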

    Now that we have the main idea of the code plan, let’s examine the way it actually exists in living things.


10 Re: DNA stores literally coded information on Thu Dec 22, 2016 11:02 am


DNA contains codified information

DNA is an information carrying molecule. It carries the genetic code "engraved," you might say, in its structure. Its "alphabet" consists of the four bases that pair together, forming rungs on a spiral ladder, the shape to which the molecule is often likened. The precise sequence of the bases as one ascends the ladder is what determines the information contained. The DNA in a human genome (separated into 23 chromosomes) contains about 3 billion rungs, or base pairs, and thus 3 billion coded instructions. That's enough information to fill 1000 encyclopedic volumes. Two genomes, one from each parent, make up the normal 46-chromosome complement of human somatic (body tissue) cells. So each somatic cell contains in its DNA two similar but not identical sets of coded information, totaling about six billion instructions. 1

Information, and the Nature of reality, page 21:
Today, the cell is treated as a supercomputer –an information-processing and -replicating system of extraordinary fidelity.

The algorithmic origins of life

Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. The unique informational narrative of living systems suggests that life may be characterized by context-dependent causal influences, and, in particular, that top-down (or downward) causation—where higher levels influence and constrain the dynamics of lower levels in organizational hierarchies—may be a major contributor to the hierarchal structure of living systems. Here, we propose that the emergence of life may correspond to a physical transition associated with a shift in the causal structure, where information gains direct and context-dependent causal efficacy over the matter in which it is instantiated. Such a transition may be akin to more traditional physical transitions (e.g. thermodynamic phase transitions), with the crucial distinction that determining which phase (non-life or life) a given system is in requires dynamical information and therefore can only be inferred by identifying causal architecture. We discuss some novel research directions based on this hypothesis, including potential measures of such a transition that may be amenable to laboratory study, and how the proposed mechanism corresponds to the onset of the unique mode of (algorithmic) information processing characteristic of living systems.

Paul Davies 1

"We propose that the transition from non-life to life is unique and definable," added Davies. "We suggest that life may be characterized by its distinctive and active use of information, thus providing a roadmap to identify rigorous criteria for the emergence of life. This is in sharp contrast to a century of thought in which the transition to life has been cast as a problem of chemistry, with the goal of identifying a plausible reaction pathway from chemical mixtures to a living entity."

In a nutshell, the authors shift attention from the "hardware" – the chemical basis of life – to the "software" – its information content. To use a computer analogy, chemistry explains the material substance of the machine, but it won't function without a program and data. Davies and Walker suggest that the crucial distinction between non-life and life is the way that living organisms manage the information flowing through the system.
"When we describe biological processes we typically use informational narratives – cells send out signals, developmental programs are run, coded instructions are read, genomic data are transmitted between generations and so forth," Walker said.

The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics.

It is hard to fathom, but the amount of information in human DNA is roughly equivalent to 12 sets of The Encyclopaedia Britannica: an incredible 384 volumes' worth of detailed information that would fill 48 feet of library shelves!

Michael Denton, Evolution: A Theory in Crisis, 1996, p. 334:

A teaspoon of DNA, according to the molecular biologist, could contain all the information needed to build the proteins for all the species of organisms that have ever lived on the earth, and "there would still be enough room left for all the information in every book ever written".

Bill Gates, founder of Microsoft, commented that

"DNA is like a software program, only much more complex than anything we've ever devised."

What lies at the heart of every living thing is not a fire, warm breath, not a ‘spark of life’. It is information, words, instructions…Think of a billion discrete digital characters…If you want to understand life think about technology – Richard Dawkins (Dawkins 1996, 112)

Human DNA contains more organized information than the Encyclopedia Britannica. If the full text of the encyclopedia were to arrive in computer code from outer space, most people would regard this as proof of the existence of extraterrestrial intelligence. But when seen in nature, it is explained as the workings of random forces.

– George Sim Johnson (Sims Johnson 1999)

For instance, the precision of this genetic language is such that the average uncaught mistake amounts to one error per 10 billion letters. If a mistake occurs in one of the most significant parts of the code, the genes, it can cause a disease such as sickle-cell anemia. Yet even the best and most intelligent typist in the world couldn't come close to making only one mistake per 10 billion letters—far from it.

Today’s code, written in DNA, is composed of triplet nucleotide “words” called codons that match the amino acid “words” in the language of proteins.

This chapter will explore how DNA stores information, and how this information is used to build proteins. It will also explore how mutations change this information. The language that life uses to store and transmit information is similar to human languages, but the rules of grammar and the vocabulary are much simpler. Only 20 words are used by life, so the vocabulary is very limited. Punctuation is limited to capitalization and periods. Every sentence must start with the same word.



11 Re: DNA stores literally coded information on Thu Dec 22, 2016 11:03 am



Part of the work covered by the Nobel citation, that on the structure and replication of DNA, has been described by Wilkins in his Nobel Lecture this year. The ideas put forward by Watson and myself on the replication of DNA have also been mentioned by Kornberg in his Nobel Lecture in 1959, covering his brilliant researches on the enzymatic synthesis of DNA in the test tube. I shall discuss here the present state of a related problem in information transfer in living material - that of the genetic code - which has long interested me, and on which my colleagues and I, among many others, have recently been doing some experimental work.

It now seems certain that the amino acid sequence of any protein is determined by the sequence of bases in some region of a particular nucleic acid molecule. Twenty different kinds of amino acid are commonly found in protein, and four main kinds of base occur in nucleic acid. The genetic code describes the way in which a sequence of twenty or more things is determined by a sequence of four things of a different type.

It is hardly necessary to stress the biological importance of the problem. It seems likely that most if not all the genetic information in any organism is carried by nucleic acid - usually by DNA, although certain small viruses use RNA as their genetic material. It is probable that much of this information is used to determine the amino acid sequence of the proteins of that organism. (Whether the genetic information has any other major function we do not yet know.) This idea is expressed by the classic slogan of Beadle: "one gene - one enzyme", or in the more sophisticated but cumbersome terminology of today: "one cistron - one polypeptide chain".

It is one of the more striking generalizations of biochemistry - which surprisingly is hardly ever mentioned in the biochemical textbooks - that the twenty amino acids and the four bases, are, with minor reservations, the same throughout Nature. As far as I am aware the presently accepted set of twenty amino acids was first drawn up by Watson and myself in the summer of 1953 in response to a letter of Gamow's.

In this lecture I shall not deal with the intimate technical details of the problem, if only for the reason that I have recently written such a review1 which will appear shortly. Nor shall I deal with the biochemical details of messenger RNA and protein synthesis, as Watson has already spoken about these. Rather I shall ask certain general questions about the genetic code and ask how far we can now answer them.

Let us assume that the genetic code is a simple one and ask how many bases code for one amino acid? This can hardly be done by a pair of bases, as from four different things we can only form 4 x 4 = 16 different pairs, whereas we need at least twenty and probably one or two more to act as spaces or for other purposes. However, triplets of bases would give us 64 possibilities. It is convenient to have a word for a set of bases which codes one amino acid and I shall use the word "codon" for this.

This brings us to our first question. Do codons overlap? In other words, as we read along the genetic message do we find a base which is a member of two or more codons? It now seems fairly certain that codons do not overlap. If they did, the change of a single base, due to mutation, should alter two or more (adjacent) amino acids, whereas the typical change is to a single amino acid, both in the case of the "spontaneous" mutations, such as occur in the abnormal human haemoglobin or in chemically induced mutations, such as those produced by the action of nitrous acid and other chemicals on tobacco mosaic virus2. In all probability, therefore, codons do not overlap.

This leads us to the next problem. How is the base sequence, divided into codons? There is nothing in the backbone of the nucleic acid, which is perfectly regular, to show us how to group the bases into codons. If, for example, all the codons are triplets, then in addition to the correct reading of the message, there are two incorrect readings which we shall obtain if we do not start the grouping into sets of three at the right place. My colleagues and I3 have recently obtained experimental evidence that each section of the genetic message is indeed read from a fixed point, probably from one end. This fits in very well with the experimental evidence, most clearly shown in the work of Dintzis4 that the amino acids are assembled into the polypeptide chain in a linear order, starting at the amino end of the chain.

This leads us to the next general question: the size of the codon. How many bases are there in any one codon? The same experiments to which I have just referred3 strongly suggest that all (or almost all) codons consist of a triplet of bases, though a small multiple of three, such as six or nine, is not completely ruled out by our data. We were led to this conclusion by the study of mutations in the A and B cistrons of the rII locus of bacteriophage T4. These mutations are believed to be due to the addition or subtraction of one or more bases from the genetic message. They are typically produced by acridines, and cannot be reversed by mutagens which merely change one base into another. Moreover these mutations almost always render the gene completely inactive, rather than partly so.

By testing such mutants in pairs we can assign them all without exception to one of two classes which we call + and –. For simplicity one can think of the + class as having one extra base at some point or other in the genetic message and the – class as having one too few. The crucial experiment is to put together, by genetic recombination, three mutants of the same type into one gene. That is, either (+ with + with +) or ( – with – with –). Whereas a single + or a pair of them (+ with +) makes the gene completely inactive, a set of three, suitably chosen, has some activity. Detailed examination of these results show that they are exactly what we should expect if the message were read in triplets starting from one end.
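The frameshift logic behind these +/- experiments can be sketched in a few lines. This is a toy illustration, not a model of the rII experiments themselves; the repeating sequence and the inserted "X" bases are invented for the example:

```python
# Reading a message in triplets from a fixed start: a single inserted
# base (+) scrambles every codon downstream, while three insertions
# restore the original reading frame beyond them.
def codons(message):
    """Group a message into triplets from a fixed starting point."""
    return [message[i:i + 3] for i in range(0, len(message) - 2, 3)]

original   = "CATCATCATCATCAT"       # a repetitive "message"
one_plus   = "XCATCATCATCATCAT"      # one inserted base (+)
three_plus = "XCAXTCXATCATCATCAT"    # three inserted bases (+ + +)

print(codons(original))    # ['CAT', 'CAT', 'CAT', 'CAT', 'CAT']
print(codons(one_plus))    # frame shifted: no aligned 'CAT' codons remain
print(codons(three_plus))  # frame restored after the third insertion
```

This mirrors Crick's observation: one or two insertions inactivate the gene by garbling everything downstream, while three suitably chosen insertions leave only a short scrambled stretch and restore the reading frame beyond it.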

We are sometimes asked what the result would be if we put four +'s in one gene. To answer this my colleagues have recently put together not merely four but six +'s. Such a combination is active as expected on our theory, although sets of four or five of them are not. We have also gone a long way to explaining the production of "minutes" as they are called. That is, combinations in which the gene is working at very low efficiency. Our detailed results fit the hypothesis that in some cases when the mechanism comes to a triplet which does not stand for an amino acid (called a "non sense" triplet) it very occasionally makes a slip and reads, say, only two bases instead of the usual three. These results also enable us to tie down the direction of reading of the genetic message, which in this case is from left to right, as the rII region is conventionally drawn. We plan to write up a detailed technical account of all this work shortly. A final proof of our ideas can only be obtained by detailed studies on the alterations produced in the amino acid sequence of a protein by mutations of the type discussed here.

One further conclusion of a general nature is suggested by our results. It would appear that the number of nonsense triplets is rather low, since we only occasionally come across them. However this conclusion is less secure than our other deductions about the general nature of the genetic code.

It has not yet been shown directly that the genetic message is co-linear with its product. That is, that one end of the gene codes for the amino end of the polypeptide chain and the other for the carboxyl end, and that as one proceeds along the gene one comes in turn to the codons in between in the linear order in which the amino acids are found in the polypeptide chain. This seems highly likely, especially as it has been shown that in several systems mutations affecting the same amino acid are extremely near together on the genetic map. The experimental proof of the co-linearity of a gene and the polypeptide chain it produces may be confidently expected within the next year or so.

There is one further general question about the genetic code which we can ask at this point. Is the code universal, that is, the same in all organisms? Preliminary evidence suggests that it may well be. For example something very like rabbit haemoglobin can be synthesized using a cell-free system, part of which comes from rabbit reticulocytes and part from Escherichia coli5. This would not be very probable if the code were very different in these two organisms. However as we shall see it is now possible to test the universality of the code by more direct experiments.

In a cell in which DNA is the genetic material it is not believed that DNA itself controls protein synthesis directly. As Watson has described, it is believed that the base sequence of the DNA - probably of only one of its chains - is copied onto RNA, and that this special RNA then acts as the genetic messenger and directs the actual process of joining up the amino acids into polypeptide chains. The breakthrough in the coding problem has come from the discovery, made by Nirenberg and Matthaei6, that one can use synthetic RNA for this purpose. In particular they found that polyuridylic acid - an RNA in which every base is uracil - will promote the synthesis of polyphenylalanine when added to a cell-free system which was already known to synthesize polypeptide chains. Thus one codon for phenylalanine appears to be the sequence UUU (where U stands for uracil: in the same way we shall use A, G, and C for adenine, guanine, and cytosine respectively). This discovery has opened the way to a rapid although somewhat confused attack on the genetic code.
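The messenger-RNA experiment described above can be pictured as a simple lookup: read the synthetic RNA in triplets and translate each one. The tiny codon table below uses a handful of modern standard-code assignments for illustration; only UUU = phenylalanine and (further down the page) AAA = lysine are given in the text itself.

```python
# Minimal sketch of messenger-RNA translation with a tiny codon table.
# The assignments are modern standard-code values, added for illustration.

CODON_TABLE = {
    "UUU": "Phe",  # the assignment discovered by Nirenberg and Matthaei
    "AAA": "Lys",  # from Ochoa's poly-A experiment, mentioned later in the text
    "CCC": "Pro",
    "GGG": "Gly",
}

def translate(rna):
    """Read an RNA message in non-overlapping triplets and look each one up."""
    return [CODON_TABLE.get(rna[i:i + 3], "?") for i in range(0, len(rna) - 2, 3)]

poly_u = "U" * 12
print(translate(poly_u))   # ['Phe', 'Phe', 'Phe', 'Phe'] - polyphenylalanine
```

Unknown triplets come back as "?", which loosely mirrors the situation in 1962: most of the table was still blank.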

It would not be appropriate to review this work in detail here. I have discussed critically the earlier work in the review mentioned previously1 but such is the pace of work in this field that more recent experiments have already made it out of date to some extent. However, some general conclusions can safely be drawn.

The technique mainly used so far, both by Nirenberg and his colleagues6 and by Ochoa and his group7, has been to synthesize enzymatically "random" polymers of two or three of the four bases. For example, a polynucleotide, which I shall call poly (U,C), having about equal amounts of uracil and cytosine in (presumably) random order will increase the incorporation of the amino acids phenylalanine, serine, leucine, and proline, and possibly threonine. By using polymers of different composition and assuming a triplet code one can deduce limited information about the composition of certain triplets.
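The composition argument can be made concrete (my own sketch, not from the text): in a random copolymer with equal amounts of each base, every triplet composition occurs with a predictable frequency, so comparing amino acid incorporation rates against these frequencies constrains the composition, though not the order, of the codons.

```python
from itertools import product
from collections import Counter

# Expected frequency of each unordered triplet composition in a "random"
# polymer, assuming the bases occur independently and equally often.

def composition_frequencies(bases):
    """Map each sorted triplet composition to its expected frequency."""
    freqs = Counter()
    for triplet in product(bases, repeat=3):
        freqs["".join(sorted(triplet))] += 1
    total = len(bases) ** 3
    return {comp: count / total for comp, count in freqs.items()}

for comp, f in sorted(composition_frequencies("UC").items()):
    print(comp, f)
# A 2U+1C composition is expected 3/8 of the time, a pure 3U composition 1/8,
# which is the kind of arithmetic used to assign compositions to amino acids.
```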

From such work it appears that, with minor reservations, each polynucleotide incorporates a characteristic set of amino acids. Moreover the four bases appear quite distinct in their effects. A comparison between the triplets tentatively deduced by these methods with the changes in amino acid sequence produced by mutation shows a fair measure of agreement. Moreover the incorporation requires the same components needed for protein synthesis, and is inhibited by the same inhibitors. Thus the system is most unlikely to be a complete artefact and is very probably closely related to genuine protein synthesis.

As to the actual triplets so far proposed it was first thought that possibly every triplet had to include uracil, but this was neither plausible on theoretical grounds nor supported by the actual experimental evidence. The first direct evidence that this was not so was obtained by my colleagues Bretscher and Grunberg-Manago8, who showed that a poly (C,A) would stimulate the incorporation of several amino acids. Recently other workers9, 10 have reported further evidence of this sort for other polynucleotides not containing uracil. It now seems very likely that many of the 64 triplets, possibly most of them, may code one amino acid or another, and that in general several distinct triplets may code one amino acid. In particular a very elegant experiment11 suggests that both (UUC) and (UUG) code leucine (the brackets imply that the order within the triplets is not yet known). This general idea is supported by several indirect lines of evidence which cannot be detailed here. Unfortunately it makes the unambiguous determination of triplets by these methods much more difficult than would be the case if there were only one triplet for each amino acid. Moreover, it is not possible by using polynucleotides of "random" sequence to determine the order of bases in a triplet. A start has been made to construct polynucleotides whose exact sequence is known at one end, but the results obtained so far are suggestive rather than conclusive12. It seems likely however from this and other unpublished evidence that the amino end of the polypeptide chain corresponds to the "right-hand" end of the polynucleotide chain - that is, the one with the 2', 3' hydroxyls on the sugar.

It seems virtually certain that a single chain of RNA can act as messenger RNA, since poly U is a single chain without secondary structure. If poly A is added to poly U, to form a double or triple helix, the combination is inactive. Moreover there is preliminary evidence9 which suggests that secondary structure within a polynucleotide inhibits the power to stimulate protein synthesis.

It has yet to be shown by direct biochemical methods, as opposed to the indirect genetic evidence mentioned earlier, that the code is indeed a triplet code.

Attempts have been made from a study of the changes produced by mutation to obtain the relative order of the bases within various triplets, but my own view is that these are premature until there is more extensive and more reliable data on the composition of the triplets.

Evidence presented by several groups8, 9, 11 suggests that poly U stimulates both the incorporation of phenylalanine and also a lesser amount of leucine. The meaning of this observation is unclear, but it raises the unfortunate possibility of ambiguous triplets; that is, triplets which may code more than one amino acid. However one would certainly expect such triplets to be in a minority.

It would seem likely, then, that most of the sixty-four possible triplets will be grouped into twenty groups. The balance of evidence both from the cell-free system and from the study of mutation, suggests that this does not occur at random, and that triplets coding the same amino acid may well be rather similar. This raises the main theoretical problem now outstanding. Can this grouping be deduced from theoretical postulates? Unfortunately, it is not difficult to see how it might have arisen at an extremely early stage in evolution by random mutations, so that the particular code we have may perhaps be the result of a series of historical accidents. This point is of more than abstract interest. If the code does indeed have some logical foundation then it is legitimate to consider all the evidence, both good and bad, in any attempt to deduce it. The same is not true if the codons have no simple logical connection. In that case, it makes little sense to guess a codon. The important thing is to provide enough evidence to prove each codon independently. It is not yet clear what evidence can safely be accepted as establishing a codon. What is clear is that most of the experimental evidence so far presented falls short of proof in almost all cases.
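The grouping Crick anticipates here is now known exactly. As a check on his expectations (added here from the modern "standard" genetic code, not from the 1962 text): 64 triplets, of which 61 code the 20 amino acids and 3 are "nonsense" (stop), giving an average of about three codons per amino acid.

```python
from itertools import product
from collections import Counter

# The modern standard genetic code, built from the classic 64-letter
# amino acid string (one letter per codon, * = stop / "nonsense").
BASES = "UCAG"
AMINO_ACIDS = ("FFLLSSSSYY**CC*W"   # first base U
               "LLLLPPPPHHQQRRRR"   # first base C
               "IIIMTTTTNNKKSSRR"   # first base A
               "VVVVAAAADDEEGGGG")  # first base G

CODE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

degeneracy = Counter(CODE.values())
print(len(CODE))                          # 64 triplets in all
print(degeneracy["*"])                    # 3 nonsense (stop) triplets
print(len(degeneracy) - 1)                # 20 amino acids
print(degeneracy["L"], degeneracy["M"])   # leucine has 6 codons, methionine only 1
```

Crick's guesses in this passage hold up remarkably well: nonsense triplets are indeed few, and codons for the same amino acid are indeed "rather similar" (they mostly differ only in the third base).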

In spite of the uncertainty of much of the experimental data there are certain codes which have been suggested in the past which we can now reject with some degree of confidence.

Comma-less triplet codes
All such codes are unlikely, not only because of the genetic evidence but also because of the detailed results from the cell-free system.

Two-letter or three-letter codes
For example a code in which A is equivalent to C, and G to U. As already stated, the results from the cell-free system rule out all such codes.

The combination triplet code
In this code all permutations of a given combination code the same amino acid. The experimental results can only be made to fit such a code by very special pleading.

Complementary codes
There are several classes of these. Consider a certain triplet in relation to the triplet which is complementary to it on the other chain of the double helix. The second triplet may be considered either as being read in the same direction as the first, or in the opposite direction. Thus if the first triplet is UCC, we consider it in relation to either AGG or (reading in the opposite direction) GGA.

It has been suggested that if a triplet stands for an amino acid its complement necessarily stands for the same amino acid, or, alternatively in another class of codes, that its complement will stand for no amino acid, i.e. be nonsense.

It has recently been shown by Ochoa's group that poly A stimulates the incorporation of lysine10. Thus presumably AAA codes lysine. However since UUU codes phenylalanine these facts rule out all the above codes. It is also found that poly (U,G) incorporates quite different amino acids from poly (A,C). Similarly poly (U,C) differs from poly (A,G)9, 10. Thus there is little chance that any of this class of theories will prove correct. Moreover they are all, in my opinion, unlikely for general theoretical reasons.
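The refutation above is easy to reproduce mechanically (my own sketch): compute a triplet's complement in either reading direction, and note that UUU and AAA are each other's complements yet code different amino acids, which kills any code requiring a triplet and its complement to agree.

```python
# Complements of RNA triplets, read in the same or the opposite
# direction, as in the complementary-code discussion above.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def complement_same_direction(triplet):
    """Complement each base, keeping the reading direction."""
    return "".join(COMPLEMENT[b] for b in triplet)

def complement_opposite_direction(triplet):
    """Complement each base, then read in the opposite direction."""
    return complement_same_direction(triplet)[::-1]

print(complement_same_direction("UCC"))      # AGG (the example in the text)
print(complement_opposite_direction("UCC"))  # GGA
print(complement_same_direction("UUU"))      # AAA - yet UUU codes Phe and AAA codes Lys
```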

A start has already been made, using the same polynucleotides in cell-free systems from different species, to see if the code is the same in all organisms. Eventually it should be relatively easy to discover in this way if the code is universal, and, if not, how it differs from organism to organism. The preliminary results presented so far disclose no clear difference between E. coli and mammals, which is encouraging10, 13.

At the present time, therefore, the genetic code appears to have the following general properties:

(1) Most if not all codons consist of three (adjacent) bases.
(2) Adjacent codons do not overlap.
(3) The message is read in the correct groups of three by starting at some fixed point.
(4) The code sequence in the gene is co-linear with the amino acid sequence, the polypeptide chain being synthesized sequentially from the amino end.
(5) In general more than one triplet codes each amino acid.
(6) It is not certain that some triplets may not code more than one amino acid, i.e. they may be ambiguous.
(7) Triplets which code for the same amino acid are probably rather similar.
(8) It is not known whether there is any general rule which groups such codons together, or whether the grouping is mainly the result of historical accident.
(9) The number of triplets which do not code an amino acid is probably small.
(10) Certain codes proposed earlier, such as comma-less codes, two- or three-letter codes, the combination code, and various transposable codes are all unlikely to be correct.
(11) The code in different organisms is probably similar. It may be the same in all organisms but this is not yet known.

Finally one should add that in spite of the great complexity of protein synthesis and in spite of the considerable technical difficulties in synthesizing polynucleotides with defined sequences it is not unreasonable to hope that all these points will be clarified in the near future, and that the genetic code will be completely established on a sound experimental basis within a few years.

The references have been kept to a minimum. A more complete set will be found in the first reference.


12 Re: DNA stores literally coded information on Thu Dec 22, 2016 11:04 am


In the paper The algorithmic origins of life, Paul Davies and Sara Imari Walker1 write: Although it has been notoriously difficult to pin down precisely what it is that makes life so distinctive and remarkable, there is general agreement that its informational aspect is one key property, perhaps the key property. Of the many open questions surrounding how life emerges from non-life, perhaps the most challenging is the vast gulf between complex chemistry and the simplest biology: even the smallest mycoplasma is immeasurably more complex than any chemical reaction network we might engineer in the laboratory with current technology. The chemist George Whitesides, for example, has stated, ‘How remarkable is life? The answer is: very. Those of us who deal in networks of chemical reactions know of nothing like it’. Often the issue of defining life is sidestepped by assuming that if one can build a simple chemical system capable of Darwinian evolution, then the rest will follow suit and the problem of life's origin will de facto be solved [3]. Although few are willing to accept a simple self-replicating molecule as living, the assumption is that after a sufficiently long period of Darwinian evolution this humble replicator will eventually be transformed into an entity complex enough that it is indisputably living [4].

Darwinian evolution applies to everything from simple software programs, molecular replicators and memes, to systems as complex as multicellular life and even potentially the human brain [5]—therefore spanning a gamut of phenomena ranging from artificial systems, to simple chemistry, to highly complex biology. The power of the Darwinian paradigm is precisely its capacity to unify such diverse phenomena, particularly across the tree of life—all that is required are the well-defined processes of replication with variation and selection. However, this very generality is also the greatest weakness of the paradigm as applied to the origin of life: it provides no means for distinguishing complex from simple, let alone life from non-life.

This may explain Darwin's own reluctance to speculate on the subject, ‘One might as well speculate about the origin of matter’, he quipped. Although it is notoriously hard to identify precisely what makes life so distinctive and remarkable [6–8], there is general agreement that its informational aspect is one key property, and perhaps the key property. The manner in which information flows through and between cells and sub-cellular structures is quite unlike anything else observed in nature. If life is more than just complex chemistry, its unique informational management properties may be the crucial indicator of this distinction. While standard information-theoretic measures, such as Shannon information [15], have proved useful, biological information has an additional quality which may roughly be called ‘functionality’—or ‘contextuality’—that sets it apart from a collection of mere bits as characterized by its Shannon information content. The information content of DNA, for example, is usually defined by the Shannon (sequential) measure. However, the genome is only a small part of the story. DNA is not a blueprint for an organism: no information is actively processed by DNA alone [17]. Rather, DNA is a (mostly) passive repository for transcription of stored data into RNA, some (but by no means all) of which goes on to be translated into proteins.
The biologically relevant information stored in DNA therefore has very little to do with its specific chemical nature (beyond the fact that it is a digital linear polymer). Biological information is distinctive because it possesses a type of causal efficacy [23,24]—it is the information that determines the current state and hence the dynamics (and therefore also the future state(s)). We therefore identify the transition from non-life to life with a fundamental shift in the causal structure of the system, specifically a transition to a state in which algorithmic information gains direct, context-dependent, causal efficacy over matter. A longstanding debate—often dubbed the chicken or the egg problem—is which came first, genetic heredity or metabolism?

A conundrum arises because neither can operate without the other in contemporary life, where the duality is manifested via the genome–proteome systems. The origin of life community has therefore tended to split into two camps, loosely labelled as ‘genetics-first’ and ‘metabolism-first’. In informational language, genetics and metabolism may be unified under a common conceptual framework by regarding metabolism as a form of analogue information processing (to be explained below), to be contrasted with the digital information of genetics. In approaching this debate, a common source of confusion stems from the fact that molecules play three distinct roles: structural, informational and chemical. In terms of computer language, in living systems chemistry corresponds to hardware and information (e.g. genetic and epigenetic) to software [27]. The chicken-or-egg problem, as traditionally posed, thus amounts to a debate of whether analogue or digital hardware came first.
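The Shannon (sequential) measure that Davies and Walker contrast with biological "functionality" can be sketched directly (my own illustration): it quantifies only statistical uncertainty per symbol and is completely blind to what, if anything, a sequence does.

```python
from collections import Counter
from math import log2

# Shannon entropy per symbol of a sequence's base frequencies:
# the "mere bits" measure the authors say falls short of functionality.

def shannon_entropy_per_base(sequence):
    """Entropy in bits per symbol, from the empirical base frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy_per_base("ACGTACGTACGT"))  # 2.0 bits: four equally frequent bases
print(shannon_entropy_per_base("AAAAAAAAAAAA"))  # zero bits: a uniform repeat is fully predictable
```

Note that a random scramble of a gene has exactly the same entropy as the gene itself, which is precisely the point of the passage: Shannon's measure cannot distinguish functional sequences from junk of the same composition.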



13 Re: DNA stores literally coded information on Thu Dec 22, 2016 11:06 am



Physical laws cannot determine the arrangement of the chemical constituents of life any more than the laws of physics determine how ink is arranged on paper to convey information. As Polanyi explained in a later paper:
“Biological systems, like machines, have…functions and forms inexplicable by chemical and physical laws. The argument that the DNA molecule determines genetic processes in living systems does not indicate reducibility. A DNA molecule essentially transmits information to a developing cell. Similarly, a book transmits information. But the transmission of the information cannot be represented in terms of chemical and physical principles. In other words, the operation of the book is not reducible to chemical terms. Since DNA operates by transmission of (genetic) information, its function cannot be described by chemical laws either. The life process is essentially the development of a fertilized cell, as the result of information imparted by DNA. Transmission of this information is nonchemical and nonphysical, and is the controlling factor in the life process. The description of a living system therefore transcends the chemical and physical laws which govern its constituents.”[2]


14 Information, and the nature of reality on Fri Jan 13, 2017 3:04 pm


Information, and the nature of reality

Paul Davies, page 171

Information means primarily “instruction,” in the sense of a command or step in a computer program. This in turn has the function of imposing a selective condition on the possible processes that can go on in a system. In precisely this sense, living processes are “instructed” by the information that is contained in encoded form in genes. Expressed in the terms of physics: the genome represents a specific physical boundary condition, a constraint that restricts the set of physically possible processes to those that do actually take place within the organism and are directed towards preservation of the system (Küppers, 1992). Thus, the idea of “instruction by information” has a precise physical meaning, and in this context information can indeed be regarded as an objective property of living matter.

It is a harder task to demonstrate the universality of communication. One tends to assume intuitively that this concept is applicable only to the exchange of information between human beings. This assumption arises from the fact that the idea of a “common” understanding seems to make no sense outside the realm of human consciousness. However, this could be a false premise based upon a narrow use of the concept of understanding. Reaching a common understanding usually means reaching agreement. This in turn implies that one must understand one another in the sense that each party can comprehend what the other party intends to communicate. However, attaining a common understanding does not necessarily presuppose any reflections upon the nature or the subject of the communication process, nor does it imply the question of whether the contents of the communication are true or false. Rather, it requires only a mere exchange of information; that is, a number of messages to be sent in both directions – without, however, either party necessarily being aware of the meaning of what is being communicated.

There is thus a subtle difference in scope between “a reflected understanding” and “reaching a coordinated reaction.” If we are for a moment willing to put aside the highly sophisticated forms of human understanding, and to operate with a concept of understanding that encompasses only the objectives of achieving a coordinated reaction, then it becomes easy to see how this concept is applicable to all levels of living matter. We thus have to concede that molecules, cells, bacteria, plants, and animals have the ability to communicate. In this case, “communication” means neither more nor less than the reciprocal harmonization and coordination of processes by means of chemical, acoustic, and optical signals.

About “understanding”

The foregoing arguments have taken me along a path that some philosophers of science have branded “naïve” naturalism. Their criticism is directed especially at the idea that information can exist as a natural object, independently of human beings: that is to say, outside the myriad ways in which humans communicate. This charge of naturalism is heard from quite diverse philosophical camps. However, all such critics share the conviction that only human language can be a carrier of information, and that the use of linguistic categories in relation to natural phenomena is nothing more than a naturalistic fallacy. For representatives of this philosophical position, any talk of information and communication in the natural sciences – as practiced especially in modern biology – is no more than a metaphor that reveals, ultimately, a sadly uncritical usage of terms such as “language” and “understanding.”

Let us examine this controversy more closely and ask once more the question of what we actually understand by the word “understanding.” The tautological way in which I express this question indicates that one can easily get into a vicious circle when trying to approach the notion of understanding. This is because it seems generally to be the case that one can only understand something if one has understood some other things. This plausible statement is central to philosophical hermeneutics, the best-known and most influential doctrine of human understanding (Gadamer, 1965). The hermeneutic thesis, according to which any understanding is bound to some other understanding, obviously refers to the total “network” of human understanding in which any kind of understanding is embedded. In other words: any form of communication presupposes some prior understanding, which provides the necessary basis for a meaningful exchange of information.
In fact, there seems to be no information in an absolute sense – not even as a plain syntactic structure – as the mere identification of a sequence of signs as being “information” presupposes a foregoing knowledge of signs and sequences of signs. In short: information exists only in a relative sense – that is, in relation to some other information. Thus, even if we adopt an information-theoretical point of view, there seems to be no obstacle to the hermeneutic circle, according to which a person can only understand something if he has already understood something else. Nevertheless, this perspective contradicts the intentions of philosophical hermeneutics, which puts a completely different construction upon the hermeneutic circle. Within the framework of this philosophy, the pre-understanding of any kind of human understanding is thought to be rooted in the totality of human existence. And this ontological interpretation is intended to lead not to a relative but to an absolute and true understanding of the world.

Moreover, because we use language to comprehend the world, the hermeneutic school regards language as the gate that opens for us the access to our existence. The philosopher Hans-Georg Gadamer (1965, p. 450) has expressed this in the often-quoted sentence: “Being that can be understood is language.” Even though some prominent philosophers of the hermeneutic school assign a special role to dialogue, their concept of understanding still fails to possess the objectiveness and relativity that characterize a critical comprehension of human understanding. On the contrary: a world view that rests its claims to validity and truth exclusively upon the rootedness of understanding in human existence has moved to the forefront and become the absolute norm for any understanding at all. So, in contrast to the relativistic world picture offered to us by modern science, philosophical hermeneutics seeks to propagate a fundamentalism of understanding that is centered firmly on the philosophical tradition of absolute understanding. Moreover, if human language is considered to be a prerequisite for all understanding, human language becomes the ultimate reference in our relation to the world. The thesis of this chapter, which intends to give language a naturalistic interpretation that allows us to speak of the “language” of genes, seems to be diametrically opposed to this position. According to the naturalistic interpretation, which is shared by other biologists, language is a natural principle for the organization of complex systems, which – in the words of Manfred Eigen (1979, p. 181) – “can be analyzed in an abstract sense, that is, without reference to human existence.” From the standpoint of philosophical hermeneutics, such use of the word “language” is completely unacceptable. From this perspective, biologists who think and speak in this way about the existence of a “molecular language” look like drivers thundering down the motorway in the wrong direction – ignoring all the
signposts naturally provided by human language for comprehending the world.

The “language” of genes

Impressive evidence for the naturalistic view of language seems to be found in the language-like arrangement of genetic information. Thus, as is well known, the genetic alphabet is grouped in higher-order informational units, which in genetic handwriting take over the functions of words, sentences, and so forth. And, like human language, genetic information has a hierarchical structure, which is unfolded in a complex feedback mechanism – a process that shows all the properties of a communication process between the genome and its physical context. Of course, the parallels break down if we try to use the full riches of human language as a measure of the “language-like” structure of the genome. But from an evolutionary standpoint, there are good grounds to assert that “language” is indeed a natural phenomenon, which originates in the molecular language of the genome and has found its hitherto highest expression in human language (Küppers, 1995). For evolutionary biologists, there is no question as to whether languages below the level of human language exist; the issue is rather about identifying the general conditions under which linguistic structures originate and evolve. The significance of the natural phenomenon “language” for the explanation of living matter was recognized and first expressed with admirable clarity at the end of the nineteenth century by Friedrich Miescher, the discoverer of nucleic acids. Asking how a substance such as a nucleic acid can generate the vast diversity of genetic structures, he drew an analogy to the principles of stereochemistry. In the same way – Miescher argued – that a narrow variety of small molecular units is able to build up large molecules of almost unlimited complexity that are chemically very similar, but which have very different structures in three dimensions, the nucleic acids are capable of instructing the vast diversity of genetic structures.
This line of thinking led Miescher to the conclusion that the nucleic acids must be able to “express all the riches and all the diversity of inheritance, just as words and ideas in all languages can be expressed in the 24–30 letters of the alphabet” (Miescher, 1897, p. 116). Obviously Miescher’s view of living matter was that of a “linguistic movement” rather than that of a “clockwork machine.” However, the “linguistic movement” of living matter is not a dead system of rules, but a dynamic one.

So, is this all just metaphoric speech?

An outside observer, watching the disputants from a distance, might wonder what the controversy is all about, and might even suspect that it was a typical philosophers’ war over the meaning of words. Our observer would be certain to draw attention to the fact that we repeatedly take words out of their original context and transpose them into another, so that any discourse about the world of nature is bound to employ metaphors, at least to a certain extent. Why, then, should we not simply regard terms such as “information,” “communication,” and “language” in biology as what they really are: namely, adequate and highly resilient media for the description of the phenomena of life? Do the recent spectacular successes at the interface between biotechnology and information technology not justify the use of these concepts in biology? The construction of bio-computers, the development of genetic algorithms, the simulation of cognitive processes in neural networks, the coupling of nerve cells to computer chips, the generation of genetic information in evolution machines – all these would scarcely be conceivable without the information-theoretical foundations of living matter provided by biology.

However, the foregoing questions cannot be disposed of with simple arguments. This is above all because “information,” “communication,” and “language” are charged with other notions such as “meaning,” “value,” “truth,” and the like. And this is where we run into the real nub of the discussion. Phenomena associated with meaning, as expressed in the semantic dimension of information, appear to evade completely all attempts to explain them on a naturalistic basis, and thus also to escape scientific description. The right to interpret phenomena of meaning has traditionally been claimed by the humanities: especially by its hermeneutic disciplines. They have placed meaning, and thus also the understanding of meaning, at the center of their methodology; a clear demarcation against the natural sciences may indeed have been one of the motives for this. Whatever the reasons, the humanities have long gone their own way, have not considered it necessary to subject themselves to the scientific method of causal analysis – and have thus retained their independence for a considerable length of time.

The question of how broadly the concept of information may be applied is thus by no means a dispute about the content and the range of the applicability of a word. It would be truer to regard this question as the focal point at which philosophical controversies about the unity of knowledge converge – debates that have determined the relationship of the humanities and the natural sciences for more than a hundred years. The biological sciences, which stand at the junction between these two currents of thought, are always the first to get caught in the crossfire. This is because an information-theoretical account of living matter involving a law-like explanation necessarily introduces questions of meaning and, thus, the semantic aspect of information (Küppers, 1996). Furthermore, the introduction of the semantic aspect of information in turn leads to the most fascinating plan-like and purpose-like aspects of living matter, which have every appearance of overstretching the capacity of traditional scientific explanation. Are, then, physical explanations – and with them the entire reductionistic research program in biology – doomed to founder on the semantic aspect of information?

The semantic dimension of information

Our discussion up to now has suggested that semantic information is “valued” information. The value of information is, however, not an absolute quantity; rather, it can only be judged by a receiver. Thus, the semantics of information depend fundamentally upon the state of the receiver. This state is determined by their prior knowledge, prejudices, expectations, and so forth. In short: the receiver’s evaluation scale is the result of a particular, historically unique, pathway of experiences. Can – we may persist in asking – the particular and individual aspects of reality ever become the object of inquiry in a science based upon general laws and universal concepts? Even Aristotle addressed this important question. His answer was a clear “No.” For him – the logician – there were no general discoveries to be made about things that were necessarily of an individual nature, because the logic of these two attributes – general and particular – made them mutually exclusive. This view has persisted through to our age, and has left a deep mark upon our present-day understanding of what science is and does. 

 Under these circumstances, the achievement of the philosopher Ernst Cassirer appears all the more admirable. Opposing the Aristotelian tradition, Cassirer attempted to bridge the presumed gap between the general and the particular (Cassirer, 1910). Particular phenomena, he argued, do not become particular because they evade the general rules, but because they stand in a particular – that is, singular – relationship to them. Cassirer’s reflections may have been triggered by an aperçu of von Goethe (1981, p. 433): “The general and the particular coincide – the particular is the general as it appears under various conditions.” According to Cassirer, it is the unique constellation of general aspects of a phenomenon that makes up its uniqueness. This is an interesting idea. It makes clear that an all-embracing theory of semantic information is impossible, whereas general aspects of
semantics can very well be discerned. Taken for themselves, these aspects may never completely embrace the phenomenon in question.

At the beginning of the 1950s, the philosophers and logicians Yehoshua Bar-Hillel and Rudolf Carnap (1953) tried to quantify the meaning of a linguistic expression in terms of its novelty value. This idea was a direct continuation of the concept developed within the framework of Shannon’s information theory, where the information content of a message is coupled to its expectation value: the lower the expectation value of a message, the higher its novelty and thus its information content. This approach takes care of the fact that an important task of information is to eliminate or counteract uncertainty. However, the examples adduced by Bar-Hillel and Carnap are restricted to an artificial language.
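Shannon's coupling of information content to expectation value can be sketched numerically. The snippet below is my own minimal illustration (not from the source) of the surprisal measure I(m) = -log2 p(m): the lower the probability (expectation) of a message, the higher its novelty and thus its information content.

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon information content (surprisal) of an event with probability p."""
    return -math.log2(p)

# A highly expected message carries little information ...
print(surprisal_bits(0.5))      # 1.0 bit
# ... while an improbable (novel) message carries much more.
print(surprisal_bits(1 / 1024)) # 10.0 bits
```

This is only the syntactic, Shannon-style measure the passage describes; it says nothing about the meaning of the message, which is exactly the limitation Bar-Hillel and Carnap were trying to move beyond.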

Pragmatic relevance 

A more powerful approach to measuring the semantics of information is that based upon its pragmatic relevance. This approach has been described in a paradigmatic way by Donald MacKay (1969) in his book Information, Mechanism and Meaning. The pragmatic aspect of information refers to the action(s) of the receiver to which the information leads, or in which it results. For some time now, my own efforts have been focused on a new approach, intended to investigate the complexity of semantic information (Küppers, 1996). Unlike the approaches described above, this one does not seek to make the meaning of information directly amenable to measurement. Rather, it aims to demarcate the most general conditions that make up the essence of semantic information. Investigations of this kind are important because they afford a more general insight into the question of the origin of information, and therefore have consequences for major fundamental problems of biology such as the origin and evolution of life (Küppers, 2000a).

How does information originate?

Let us consider the relationship between semantic information and complexity in more detail. Information, as we have said, is always related to an entity that receives and evaluates the information. This in turn means that evaluation presupposes some other information that underlies the process of registration and processing of the incoming information. But how much information is needed in order to understand, in the foregoing sense, an item of incoming information? This question expresses the quantitative version of the hermeneutic thesis, according to which a person can only understand some piece of information when it has already understood some other information. At first sight, it would seem impossible to provide any kind of answer to this question since it involves the concept of understanding, which, as we have seen, is already difficult to understand by itself, let alone to quantify. Surprisingly, however, an answer can be given, at least if we restrict ourselves to the minimal conditions for understanding. To this belongs first of all the sheer registration by the receiver of the information to be understood. If the information concerned conveys meaning – that is, information of maximum complexity – then the receiver must obviously record its entire symbol sequence before the process of understanding can begin. Thus, even the act of recording involves information of the same degree of (algorithmic) complexity as that of the symbol sequence that is to be understood.

This surprising result is related to the fact that information conveying meaning cannot be compressed without change in, or even loss of, its meaning. It is true that the contents of a message can be shortened into a telegram style or a tabloid headline; however, this always entails some loss of information. This is the case for any meaningful information: be it a great epic poem or simply the day’s weather report. Viewed technically, this means that no algorithms – that is, computer programs – exist that can extrapolate arbitrarily chosen parts of the message and thus generate the rest of the message. But if there are no meaning-generating algorithms, then no information can arise de novo. Therefore, to understand a piece of information of a certain complexity, one always requires background information that is at least of the same complexity. This is the sought-after answer to the question of how much information is needed to understand some other information. Ultimately, it implies that there are no “informational perpetual-motion machines” that can generate meaningful information out of nothing (Küppers, 1996).
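The notion of (in)compressibility invoked here can be loosely illustrated with a general-purpose compressor. True algorithmic (Kolmogorov) complexity is uncomputable, so the sketch below – mine, not the author's – uses zlib as a rough stand-in: a highly regular sequence has a short description and compresses well, while a complex (here: random) sequence does not.

```python
import os
import zlib

# A highly regular string has low algorithmic complexity
# and compresses to far fewer bytes ...
regular = b"AB" * 500          # 1000 bytes of pure repetition

# ... while random data is essentially incompressible.
random_like = os.urandom(1000) # 1000 bytes of random data

print(len(zlib.compress(regular)))      # a few dozen bytes
print(len(zlib.compress(random_like)))  # roughly 1000 bytes or more
```

The analogy only goes so far: compressors measure statistical redundancy, not meaning, which is precisely the gap the passage is pointing at.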

In other words, there is no possibility of mechanistic, non-intelligent causes producing information.

This is largely due to the fact that the results up to now have been derived with respect to the semantic dimension of human language, and it is not yet clear to what extent they are applicable to the “language of genes.” For this reason, questions such as whether evolution is a sort of perpetual-motion machine must for the present remain open.

That's the typical sidestepping of Davies. Rather than draw the logical inference that leads to an intelligent designer as the most logical and obvious source of the information, he says, "we don't know how evolution could have done the job." (Let us not forget that evolution was not even a driving force when the first information-containing genes needed to produce life had to emerge.)

At least it is certain that we must take leave of the idea of being able, one day, to construct intelligent machines that spontaneously generate meaningful information de novo and continually raise its complexity.

(In other words, an open admission that it is not possible. It belongs to the realm of science fiction, or pseudoscience.)

If information always refers to other information, can then information in a genuine sense ever be generated? 

(Not if intelligence as an information generator is excluded a priori, or on principle.)

Or are the processes by which it arises in nature or in society nothing more than processes of transformation: that is, translation and re-evaluation of information, admittedly in an information space of gigantic dimensions, so that the result always seems to be new and unique? Questions such as these take us to the frontline of fundamental research, where question after question arises, and where we have a wealth of opportunities for speculation but no real answers.

No real answers, when an intelligent agency is excluded a priori. That shows how science based on methodological naturalism ALWAYS reaches its explanatory limits, where nihilism and agnosticism are the last possible answers.

The world of abstract structures

Finally, I should like to return briefly to the question with which we began: Are the ideas of “information,” “communication,” and “language” applicable to the world of material structures? We saw how difficult it is to decide this on a philosophical basis. But it may also be the case that the question is wrongly put. There does indeed seem to be a surprising solution on the way: one prompted by current scientific developments. In the last few decades, at the border between the natural sciences and the humanities, a new scientific domain has been emerging that has been termed “structural sciences” (Küppers, 2000b).

Alongside information theory, it encompasses important disciplines such as cybernetics, game theory, system theory, complexity theory, network theory, synergetics, and semiotics, to mention but a few. The object of the structural sciences is the way in which reality is structured – expressed, investigated, and described in an abstract form. This is done irrespective of whether these structures occur in a natural or an artificial, a living or a non-living, system. Among these, “information,” “communication,” and “language” can be treated within the structural sciences as abstract structures, without the question of their actual nature being raised. By considering reality only in terms of its abstract structures, without making any distinction between objects of “nature” and “culture,” the structural sciences build a bridge between the natural sciences and the humanities and thus have major significance for the unity of science (Küppers, 2000b).

In philosophy, the structural view of the world is not new. Within the frame of French structuralism, Gilles Deleuze took the linguistic metaphor to its limit when he said that “There are no structures that are not linguistic ones … and objects themselves only have structure in that they conduct a silent discourse, which is the language of signs” (Deleuze, 2002, p. 239). Seen from this perspective, Gadamer’s dictum “Being that can be understood is
language” (Gadamer, 1965, p. 450) takes on a radically new meaning: “Being” can only be understood when it already has a linguistic structure. Pursuing this corollary, the philosopher Hans Blumenberg (2000), in a broad review of modern cultural history, has shown that – and how – the linguistic metaphor has made possible the “readability” (that is, the understanding) of the world. However, the relativity of all understanding has of necessity meant that the material “read” was reinterpreted over and over again, and that the course of time has led to an ever more accurate appreciation of which “readings” are wrong. In this way, we have approached, step by step, an increasingly discriminating understanding of the reality surrounding us.


15 Re: DNA stores literally coded information on Sat Jan 14, 2017 2:32 pm


Signature in the Cell, Stephen C. Meyer, page 26

When biologists referred to the sequences of chemicals in the DNA molecule as “information,” were they using the term as a metaphor? Or did these sequences of chemicals really function in the same way as “code” or “text” that humans use?

If biologists were using the term merely as a metaphor, then I wondered whether the genetic information designated anything real and, if not, whether the “information” in DNA could be said to point to anything, much less an “intelligent cause.”

Information as Metaphor: Nothing to Explain?

Though most molecular biologists see nothing controversial in characterizing DNA and proteins as “information-bearing” molecules, some historians and philosophers of biology have recently challenged that description. The late historian of science Lily Kay characterized the application of information theory to biology as a failure, in particular because classical information theory could not capture the idea of meaning.15 She suggests that the term “information” as used in biology constitutes nothing more than a metaphor. Since, in Kay’s view, the term does not designate anything real, it follows that the origin of “biological information” does not require explanation.16 Instead, only the origin of the use of the term “information” within biology requires explanation. As a social constructivist, Kay explains this usage as the result of various social forces operating within the “Cold War Technoculture.”17 In a different but related vein, philosopher Sahotra Sarkar has argued that the concept of information has little theoretical significance in biology because it lacks predictive or explanatory power.18 He, like Kay, seems to regard the concept of information as a superfluous metaphor.

Of course, insofar as the term “information” connotes semantic meaning, it does function as a metaphor within biology. That does not mean, however, that the term functions only metaphorically or that origin-of-life biologists have nothing to explain. Though information theory has a limited application in describing biological systems, it has succeeded in rendering quantitative assessments of the complexity of biomacromolecules. Further, experimental work has established the functional specificity of the base sequences in DNA and amino acids in proteins. Thus, the term “information” as used in biology refers to two real and contingent properties: complexity and functional specificity.

Only where information connotes subjective meaning does it function as a metaphor in biology. Where it refers to complex functional specificity, it defines a feature of living systems that calls for explanation every bit as much as, say, a mysterious set of inscriptions on the inside of a cave.
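The "quantitative assessments" Meyer alludes to rest, in the simplest case, on Shannon's measure applied to DNA's four-letter alphabet: each base can carry at most log2(4) = 2 bits. The sketch below is my own illustration of that upper bound, not a calculation from Meyer's book.

```python
import math

DNA_ALPHABET = "ACGT"  # the four bases: adenine, cytosine, guanine, thymine

def max_information_bits(sequence: str) -> float:
    """Upper bound on Shannon information carried by a DNA sequence:
    log2(4) = 2 bits per base, assuming all bases are equally likely."""
    return len(sequence) * math.log2(len(DNA_ALPHABET))

# A 300-base sequence can carry at most 600 bits.
print(max_information_bits("ACGT" * 75))  # 600.0
```

Note this bound measures complexity (carrying capacity) only; the functional specificity Meyer pairs it with – whether a given sequence actually does something – is not captured by any such count.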

