Intelligent Design, the best explanation of Origins





How the origin of the human eye is best explained through intelligent design


http://reasonandscience.heavenforum.org/t1638-eye-brain-is-a-interdependent-and-irreducible-complex-system#5103



The human eye consists of over two million working parts, making it second only to the brain in complexity. Evolutionists believe that the human eye is the product of millions of years of mutations and natural selection. As you read about the amazing complexity of the eye, please ask yourself: could this really be a product of evolution?

Automatic focus


The lens of the eye is suspended in position by hundreds of string-like fibres called zonules. The ciliary muscle changes the shape of the lens: it relaxes to flatten the lens for distance vision, and for close work it contracts, rounding out the lens. This happens automatically and instantaneously, without you having to think about it.
How could evolution produce a system that even knows when it is in focus, let alone the mechanism to focus it?
How would evolution produce a system that can control a muscle sitting in exactly the right place to change the shape of the lens?


A visual system


The retina is composed of photoreceptor cells. When light falls on one of these cells, it triggers a complex chemical reaction that sends an electrical signal through the optic nerve to the brain. This signal transduction pathway consists of nine irreducible steps, and the signal must pass all the way through every one of them.

Now, suppose this pathway did happen to evolve suddenly, so that such a signal could be sent and go all the way through. So what? How is the receptor cell going to know what to do with this signal? It will have to learn what the signal means. Learning and interpretation are very complicated processes involving a great many other proteins in other unique systems. The cell must then, in one lifetime, evolve the ability to pass this power of interpretation on to its offspring; if it does not, the offspring must learn it afresh, or vision offers them no advantage.

All of these wonderful processes also need regulation. No function is beneficial unless it can be turned off as well as on: if the light-sensitive cells cannot be turned off once they are turned on, vision does not occur. This regulatory ability is itself very complicated, involving a great many proteins and other molecules, all of which must be in place initially for vision to be beneficial. How does evolution explain our retinas having exactly the right cells, which create electrical impulses when light activates them?
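The "every step must work" point can be put in computing terms. Below is a minimal sketch (the step names are placeholders, not the real biochemistry) showing that a signal reaches the end of a chain only if every stage is present:

```python
# Toy model of a signal chain: the signal reaches the brain only if
# every step is present and working. Step names are placeholders.

STEPS = ["photon-capture", "pigment-change", "enzyme-cascade",
         "channel-closing", "voltage-change", "neurotransmitter-release",
         "bipolar-cell", "ganglion-cell", "optic-nerve"]

def transmit(working_steps):
    """Return True only if the chain is unbroken from start to end."""
    for step in STEPS:
        if step not in working_steps:
            return False   # chain broken: nothing reaches the brain
    return True

full_chain = transmit(set(STEPS))                    # unbroken: signal arrives
broken = transmit(set(STEPS) - {"enzyme-cascade"})   # one step missing: no signal
```

Removing any single stage, not just the first or last, makes `transmit` return `False`, which is the sense in which the chain is all-or-nothing.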

Making sense of it all




Each eye takes a slightly different picture of the world. At the optic chiasm each picture is divided in half: the outer left and right halves continue back toward the visual cortex, while the inner left and right halves cross over to the other side of the brain and then continue back toward the visual cortex. Also, the image projected onto the retina is upside down; the brain flips it back up the right way during processing. Somehow, the human brain makes sense of the electrical impulses received via the optic nerve. It combines the images from both eyes into one image and flips it the right way up, and all of this is done in real time. How could natural selection recognize the problem and evolve the mechanism by which the left side of the brain receives the information from the left halves of both eyes, and the right side of the brain the information from the right halves? How would evolution produce a system that can interpret electrical impulses and process them into images? Why would evolution produce a system that knows the image on the retina is upside down?
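As a purely illustrative toy model (the labels are invented, and the real anatomy is far richer), the routing described above amounts to a fixed wiring pattern:

```python
# Toy model of optic-chiasm routing: each eye's picture is split into
# an outer (temporal) and inner (nasal) half; the inner halves cross
# to the opposite hemisphere. Labels are invented for illustration.

def route_at_chiasm(left_eye, right_eye):
    """Each eye is an (outer_half, inner_half) pair; returns the halves
    delivered to the (left_cortex, right_cortex)."""
    l_outer, l_inner = left_eye
    r_outer, r_inner = right_eye
    left_cortex = [l_outer, r_inner]    # right eye's inner half crosses over
    right_cortex = [r_outer, l_inner]   # left eye's inner half crosses over
    return left_cortex, right_cortex

left_cortex, right_cortex = route_at_chiasm(("L-outer", "L-inner"),
                                            ("R-outer", "R-inner"))
```

Each "cortex" ends up holding one half from each eye, which is the pairing the crossover achieves.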

Constant level of light


The retina needs a fairly constant level of light intensity to form useful images. The iris muscles control the size of the pupil, contracting and expanding to close and open it in response to the brightness of the surrounding light. Just as the aperture in a camera protects the film from overexposure, the iris of the eye helps protect the sensitive retina. How would evolution produce a light sensor? And even if it could, how can a purely naturalistic process like evolution produce a system that knows how to measure light intensity? How would evolution produce a system to control a muscle which regulates the size of the pupil?
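The iris behaves like the feedback loop in a camera's auto-exposure. As a hedged sketch, with invented units, gain, and target value, a simple proportional controller captures the idea; note that even this toy version presupposes a sensor, a set point, and an actuator:

```python
# Toy auto-exposure loop, by analogy with the iris: shrink the pupil
# when the "retina" gets too much light, dilate it when too little.
# Units, gain, and target are invented for illustration.

TARGET = 100.0   # desired light level reaching the retina

def adjust_pupil(pupil_area, ambient_light, gain=0.1):
    """One proportional-control step on the pupil area."""
    retinal_light = ambient_light * pupil_area
    error = TARGET - retinal_light
    return max(0.1, pupil_area + gain * error / ambient_light)

area = 1.0
for _ in range(50):            # step into bright light: pupil constricts
    area = adjust_pupil(area, ambient_light=400.0)
# area settles near 0.25, so roughly 400 * 0.25 = 100 reaches the retina
```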

Detailed vision


Cone cells give us our detailed color daytime vision. There are 6 million of them in each human eye. Most of them are located in the central retina. There are three types of cone cells: one sensitive to red light, another to green light, and the third sensitive to blue light.
Isn’t it fortunate that the cone cells are situated in the centre of the retina? Wouldn’t it be a bit awkward if your most detailed vision was on the periphery of your eyesight?


Night vision


Rod cells give us our dim-light or night vision. They are 500 times more sensitive to light, and also more sensitive to motion, than cone cells. There are 120 million rod cells in the human eye, most of them located in our peripheral or side vision. The eye can also modify its own light sensitivity: after about 15 seconds in lower light, our bodies increase the level of rhodopsin in the retina, and over the next half hour in low light our eyes become more and more sensitive. In fact, studies have shown that our eyes are around 600 times more sensitive at night than during the day. Why would the eye have different types of photoreceptor cells, with one type specifically helping us see in low light?

Lubrication


The lacrimal gland continually secretes tears which moisten, lubricate, and protect the surface of the eye. Excess tears drain into the lacrimal duct, which empties into the nasal cavity.
If there were no lubrication system, our eyes would dry up and cease to function within a few hours; without it we would all be blind. Didn’t this system have to be fully set up from the beginning?
Fortunate that we have a lacrimal duct, aren’t we? Otherwise we would have a steady stream of tears running down our faces!


Protection


Eyelashes protect the eyes from particles that may injure them. They form a screen to keep dust and insects out, and anything touching them triggers the eyelids to blink.
How is it that the eyelids blink when something touches the eyelashes?


Operational structure


Six muscles are in charge of eye movement. Four of these move the eye up, down, left, and right; the other two control the twisting motion of the eye when we tilt our head.
The orbit, or eye socket, is a cone-shaped bony cavity that protects the eye. The socket is padded with fatty tissue that allows the eye to move easily. When you tilt your head to the side, your eye stays level with the horizon; how would evolution produce this? Isn’t it amazing that you can look where you want without having to move your head all the time? If our eye sockets were not padded with fatty tissue, it would be a struggle to move our eyes; why would evolution produce this?


Poor Design?


Some have claimed that the eye is wired back to front and therefore must be the product of evolution: a designer, they say, would not design the eye this way. Well, it turns out this argument stems from a lack of knowledge.


Dr Marshall explains that the nerves could not go behind the eye, because that space is reserved for the choroid, which provides the rich blood supply needed for the very metabolically active retinal pigment epithelium (RPE). This is necessary to regenerate the photoreceptors, and to absorb excess heat. So it is necessary for the nerves to go in front instead.


Evolution of the eye?


Proponents of evolutionary mechanisms have come up with accounts of how they think the eye might have gradually evolved over time, but these are nothing more than speculation.
For instance, observe how Dawkins explains the origin of the eye:

https://www.youtube.com/watch?v=sUjd8x-1xM0


Notice the words ‘suppose’, ‘probably’, ‘suspect’, ‘perhaps’ and ‘imagine’? This is not science but pseudo-scientific speculation and storytelling. Sure, there are a lot of different types of eyes out there, but that does not mean they evolved. Besides, based on the questions above you can see what an oversimplification Dawkins’ presentation is.

Conclusion


The human eye is among the best automatic cameras in existence. Every time we change where we are looking, the eye (and retina) changes everything else to compensate: focus and light intensity are constantly adjusted to ensure that our eyesight is as good as it can be. Man has made his own cameras, and it took intelligent people to design and build them; yet the human eye is better than the best human-made camera. How, then, is the emergence of eyes best explained: evolution, or design?


Did eyes evolve by Darwinian mechanisms? 2

The simplest eye type known is the ocellus, a multicellular eye comprising photoreceptor cells, pigment cells, and nerve cells to process the information; this already corresponds to step 4 in Darwin’s list.27 The most primitive organ that meets the definition of an eye belongs to the tiny (about the size of a pinhead) marine crustacean copepod Copilia. Only the females possess what Wolken and Florida call ‘remarkable eyes which make up more than half of its transparent body’.28 Claimed to be a link between an eyespot and a more complex eye, it has two exterior lenses that raster like a scanning electron microscope to gather light, which is processed and then sent to its brain.29 It has retinal cells and an eye ‘analogous to a superposition-type ommatidium of compound eyes’.30 This, the most primitive true eye known, is at stage 6 of Darwin’s evolutionary hierarchy!

Dennett wrote that the eye lens is ‘exquisitely well-designed to do its job, and the engineering rationale for the details is unmistakable, but no designer ever articulated it.’44 He concludes that its design is not real but an illusion, because evolution explains the eye without the need for a designer. This review has shown that evolution does not explain the existence of the vision system, but an intelligent designer does. The leading eye evolution researchers admit they only ‘have some understanding of how eyes might have evolved’.45 These explanations do not even scratch the surface of how a vision system could have arisen by evolution, let alone when.
Much disagreement exists about the hypothetical evolution of eyes, and experts recognize that many critical problems exist. Among these problems are an explanation of the evolution of each part of the vision system, including the lens, the eyeball, the retina, the entire optical system, the occipital lobes of the brain, and the many accessory structures. Turner stressed that ‘the real miracle [of vision] lies not so much in the optical eye, but in the computational process that produces vision.’46 All of these different systems must function together as an integrated unit for vision to be achieved. As Arendt concludes, the evolution of the eye has been debated ever since Darwin and is still being debated among Darwinists.47 For non-evolutionists there is no debate.

The Irreducible Complexity of Sight

http://reasonandscience.heavenforum.org/t1638-eye-brain-is-a-interdependent-and-irreducible-complex-system#3058

and

The Inference to Design

Thesis by Wayne Talbot

Our sense of sight is best explained by intelligent design: the system is irreducibly complex and the prerequisites for processing visual information are beyond development by undirected biological processes.

Propositions:

1. The origin of knowledge and information in the brain cannot be explained by undirected organic processes.
2. Sight is the result of intelligent data processing.
3. Data input requires a common understanding between sender and receiver of the data coding and transmission protocols.
4. Data storage and retrieval requires an understanding of the data from an informational perspective.
5. Data processing requires meta-data: conceptual and contextual data to be pre-loaded prior to the input of transaction data.
6. Light can be considered an encoded data stream which is decoded and re-encoded by the eye for transmission via the optic nerve to serve as input data.
7. All data transmissions are meaningless without the superimposition of meta-data.
8. None of the components of our visual system individually offer any advantage for natural selection.

The Concepts of Light and Sight

Imagine that some thousands of years ago, a mountain tribe suffered a disease or mutation such that all members became blind. Generation after generation were born blind and eventually even the legends of the elders of being able to see were lost. Everyone knew that they had these soft spots in their heads which hurt when poked, but nobody knew if they had some intended function. Over time, the very concepts of light and sight no longer existed within their tribal knowledge. As a doctor specialising in diseases and abnormalities of the eye, you realise that with particular treatment you can restore the sight of these people. Assuming that you are fluent in the local language, how would you describe what you can do for them? How would you convey the concept of sight to people to whom such an idea was beyond their understanding?
My point is that this is the very problem that primitive organisms would have faced if sight did in fact evolve organically in an undirected fashion. When the first light sensitive cell hypothetically evolved, the organism had no way of understanding that the sensation it experienced represented light signals conveying information about its environment: light and sight were concepts unknown to it.

The Training of Sight

Those familiar with the settlement of Australia by Europeans in the 19th century, and the even earlier settlements in the Americas and Africa would have heard of the uncanny ability of the indigenous population to track people and animals. It was not so much that their visual acuity was better, but that they had learned to understand what they were seeing. It was found that this tracking ability could be taught and learned. In military field craft, soldiers are taught to actively look for particular visual clues and features. In my school cadet days, we undertook night “lantern stalks” (creeping up on enemy headquarters) and later in life, the lessons learned regarding discrimination of objects in low light were put to good use in orienteering at night. All of this experience demonstrates that while many people simply “see” passively, it is possible to engage the intellect and actively “look”, thus seeing much more.

With the advent of the aeroplane came aerial photography and its wartime application as a method of intelligence gathering. Photographic analysis was a difficult skill to acquire: many people could look at the same picture but offer differing opinions as to what they were seeing, or rather thought they were seeing.
The underlying lesson is that sight is as much a function of intellect as it is of receiving light signals through the eyes. Put another way, it is intelligent data processing.

Understanding Data vs Information

The digital computer was invented circa 1940, and during the technical revolution that followed we have come to learn a great deal about information processing. I was fortunate to join the profession in the very early days of business computing and, through training and experience, acquired an understanding of data coding methodologies, their application and interpretation. More importantly, however, I came to understand a great deal about how data becomes information, and when it does not. In the early days of sequential and index-sequential files, the most common programming error occurred in attempting to match master, application (reference), and transaction files. With the advent of direct-access files and disk-resident databases, new skills were required in the fields of data analysis, data structuring, and data mining.

The computing experience teaches this: data only becomes cognitive information when intelligently processed against a pre-loaded referential framework of conceptual and contextual data. Using this computer analogy, master files represented conceptual information, application files provided context, and input data was provided by transaction files.
With apologies to Claude Shannon, Werner Gitt and other notables who have contributed so much to our understanding on this subject, I would contend that in the context of this discussion, none of these files contain information in the true sense: each contains data which only becomes usable information when intelligently correlated. I would further contend that no single transmission in any form can stand alone as information: absent of a preloaded conceptual and contextual framework in the recipient, it can only ever be a collection of meaningless symbols. This is easily demonstrated by simply setting down everything you have to know before you can read and understand these words written here, or what you would have to know before reading a medical journal in a foreign language in an unfamiliar script such as Hebrew or Chinese.
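The master/transaction distinction above can be sketched directly. The product codes and file contents below are invented for illustration; the point is that a transaction record is an opaque symbol until correlated against a pre-loaded master file:

```python
# Sketch of the master/transaction distinction: transaction records
# are opaque codes until correlated against a pre-loaded master file
# that supplies their meaning. All codes here are invented.

master_file = {          # conceptual data, loaded in advance
    "P-101": "widget",
    "P-102": "sprocket",
}

transactions = [("P-101", 3), ("P-102", 5)]   # raw input data

def interpret(transactions, master):
    """Turn raw transaction data into information by correlation."""
    report = []
    for code, qty in transactions:
        if code not in master:
            raise KeyError(f"code {code!r} is meaningless without the master file")
        report.append(f"{qty} x {master[code]}")
    return report

report = interpret(transactions, master_file)   # correlation succeeds
# interpret([("P-999", 1)], master_file) raises KeyError:
# without the referential framework the code is just symbols
```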

Can Coding Systems Evolve?

A computer processor operates via switches set to either “on” or “off”. A system with 8 switches (2^8) provides 256 options; 16 switches (2^16), 65,536; 32 switches (2^32), 4,294,967,296; and 64 switches (2^64), roughly 18 quintillion. This feature gives us terminology such as 32-bit and 64-bit computers: it refers to the number of separate addresses that can be accessed in memory. In the early days of expensive iron-core memory, 8- or 16-bit addressing was adequate, but with the development of the silicon chip and techniques for dissipating heat, larger memory became viable, requiring greater addressing capability and the development of 32- then 64-bit computers. All very interesting, you may say, but why is that relevant? The relevance is found in most users’ experience: software versions that are neither forward nor backward compatible. The issue is that as coding systems change or evolve, the processing and storage systems cannot simply evolve in an undirected fashion: they must be intelligently converted. Let us look at some practical examples.
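The arithmetic is easy to verify: n on/off switches give 2^n distinct combinations, and 2^64 is in fact about 1.8 × 10^19, eighteen quintillion:

```python
# Verifying the address-space figures: n on/off switches give 2**n
# distinct combinations.

address_space = {bits: 2 ** bits for bits in (8, 16, 32, 64)}

assert address_space[8] == 256
assert address_space[16] == 65_536
assert address_space[32] == 4_294_967_296
# 2**64 is 18,446,744,073,709,551,616 -- about 1.8e19 (quintillions).
assert address_space[64] == 18_446_744_073_709_551_616
```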

Computer coding systems are multi-layered. The binary code (1s and 0s) of computers translates to a coded representation such as octal through to ASCII, or hexadecimal through to EBCDIC, and then to a common human language such as English or French. Computer scientists long ago established these separate coding structures for reasons not relevant here. The point to note is that, in general, you cannot go from octal to EBCDIC or from hexadecimal to ASCII: special intermediate conversion routines are needed. The problem is that once coding systems are established, particularly multi-layered systems, any sequence of symbols which does not conform to the pattern cannot be processed without an intelligent conversion process.
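A concrete illustration: ASCII and EBCDIC assign different byte values to the same letters, so text cannot drift from one encoding to the other; an explicitly constructed translation table is required. Python happens to ship one such table as the `cp037` codec, a common EBCDIC variant:

```python
# ASCII and EBCDIC give the same letters entirely different byte
# values, so conversion requires an intelligently built translation
# table -- here, Python's cp037 (EBCDIC) codec.

text = "HELLO"
ascii_bytes = text.encode("ascii")    # 0x48 0x45 0x4C 0x4C 0x4F
ebcdic_bytes = text.encode("cp037")   # entirely different byte values

# The raw byte streams are mutually unintelligible...
assert ascii_bytes != ebcdic_bytes
# ...until the conversion routine (the codec table) is applied.
assert ebcdic_bytes.decode("cp037") == ascii_bytes.decode("ascii")
```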

Slightly off-topic, but consider the four chemicals in DNA, referred to as A, C, G, and T. Quite recently, scientists expressed surprise at finding that DNA sequences code not just for proteins but for the processing of those proteins: in other words, there is not just one but two or more “languages” written into our genome. What surprises me is that they were surprised; I would have thought the multi-language function to be obvious, but I will leave that for another time. If an evolving genome started with just two chemicals, say A and C, downstream processes could only recognise combinations of those two. If a third chemical, G, arose, there would be no system that could utilise it, and more probably its occurrence would interfere in a deleterious way. Quite simply, you cannot go from a two-letter code to a three-letter code without re-issuing the code book, a task quite beyond undirected biological evolution.
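The two-letter-code point can be sketched as follows. The code book and its "meanings" are invented for illustration; the point is that a decoder built for one alphabet simply cannot process a symbol that was never in its book:

```python
# A decoder built for a two-letter alphabet: a third symbol is not a
# "new feature" to such a system, it is unreadable until the decoder
# itself is rewritten. Code book and meanings are invented.

CODE_BOOK = {"AA": "start", "AC": "grow", "CA": "divide", "CC": "stop"}

def decode(sequence):
    """Read a string two symbols at a time against the code book."""
    out = []
    for i in range(0, len(sequence), 2):
        pair = sequence[i:i + 2]
        if pair not in CODE_BOOK:
            raise ValueError(f"unrecognised pair {pair!r}")
        out.append(CODE_BOOK[pair])
    return out

program = decode("AAACCACC")   # every pair is in the book
# decode("AAGC") raises ValueError: 'G' was never in the code book
```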

The Code Book Enigma

I will use an example similar to DNA because it is much easier to illustrate the problem with a system comprising just four symbols, in this case W, X, Y, and Z. I am avoiding ACGT simply so that you are not distracted by your knowledge of that particular science. Our coding system uses these four letters in groups of three. If I sent you the message “XYW ZWZ YYX WXY” you would have no idea what it means: it could be a structured sequence or a random arrangement. The letters are just symbols, individually meaningless until intelligently arranged in particular groups or sequences. To be useful, we would need to formalise the coding sequences in a code book: that way the sender can encode the message, anyone with the same version of the code book can decode it, and communication is achieved. Note that if the sender and receiver are using different versions of the code book, communication is compromised.
This brings us to a vital concept: meta-data (or data about data).
There is a foundational axiom that underpins all science and intellectual disciplines: nothing can explain itself. The explanation of any phenomenon always lies outside itself, and this applies equally to any coding system: it cannot explain itself. You may recall the breakthrough archaeologists achieved in deciphering Egyptian hieroglyphs when they found the Rosetta Stone.

In our example, the code book provides the meta-data about the data in the encoded message. Any language, particularly one limited to just four letters, requires a code book to both compose and decipher the meaning. Every time there is a new rearrangement of the letters, or new letters are added to (or deleted from) an existing string, the code book has to be updated. From a chronological sequence perspective, for a change to be useful, the code explanation or definition must precede, not follow, any new arrangement of letters in a message. Rearrangements that occur independently of the code book cannot be understood by downstream processes. Logically, the code book is the controlling mechanism, not the random rearrangements of coding sequences. First the pre-arranged code sequence, then its implementation. In other words, it is a top-down process, not the bottom-up process that evolutionists such as Richard Dawkins assert.
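Using the same W, X, Y, Z symbols, here is a minimal sketch of the code-book scenario: communication succeeds only when sender and receiver hold the same version of the book, and a new triplet must be defined in the book before it can be used. The triplet meanings are invented:

```python
# Code-book sketch: shared book -> communication; mismatched book
# versions -> the message degenerates into meaningless symbols.

BOOK_V1 = {"XYW": "food", "ZWZ": "near", "YYX": "move", "WXY": "toward"}
BOOK_V2 = dict(BOOK_V1, WZW="danger")   # updated book: new triplet defined first

def encode(words, book):
    reverse = {meaning: triplet for triplet, meaning in book.items()}
    return " ".join(reverse[w] for w in words)

def decode(message, book):
    return [book.get(triplet, "???") for triplet in message.split()]

msg = encode(["food", "near"], BOOK_V1)
shared = decode(msg, BOOK_V1)           # same book version: understood

msg2 = encode(["danger"], BOOK_V2)      # triplet defined only in v2
mismatched = decode(msg2, BOOK_V1)      # receiver's book is out of date
```

Note the chronology the sketch enforces: `WZW` had to be entered in the book before `encode` could emit it, which is the top-down ordering argued for above.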
Now it matters not whether you apply this to the evolution of the genome or to the development of our senses, the same principle holds: ALL data must be preceded by meta-data to be comprehensible as information. In general, messages or other forms of communication can be considered as transactions which require a conceptual and contextual framework to provide meaning.

Understanding Input Devices


We have five physical senses: sight, hearing, smell, taste, and touch, and each requires unique processes, whether physical or chemical. What may not be obvious from an evolutionary standpoint is that for the brain to process these inputs, it must first know about them (concept) and how to differentiate amongst them (context). Early computers used card readers as input devices, printers for output, and magnetic tape for storage (input and output). When new devices such as disk drives, bar-code readers, and plotters were invented, new programs were needed to “teach” the central processor about these new senses. Even today, if you attach a new type of reader or printer to your computer without preloading the appropriate software, you will get the message “device not recognised” or similar. It is axiomatic that an unknown input device cannot autonomously teach the central processor about its presence, its function, the type of data it wishes to transmit, or the protocol to be used.
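The driver situation can be sketched in a few lines. The device names and handlers below are invented; the point is that the processor only accepts input from devices it was told about in advance:

```python
# Sketch of the input-device point: a processor accepts data only from
# devices whose handlers were preloaded; an unknown device cannot
# introduce itself. Device names and data are invented.

drivers = {}   # the processor's pre-loaded knowledge of its "senses"

def install_driver(device, handler):
    drivers[device] = handler

def receive(device, raw_data):
    if device not in drivers:
        return "device not recognised"
    return drivers[device](raw_data)

install_driver("card-reader", lambda raw: f"read card: {raw}")

ok = receive("card-reader", "PAYROLL-42")      # handled: driver preloaded
unknown = receive("barcode-scanner", "0451")   # no driver, no communication
```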
The same lessons apply to our five senses. If we were to hypothesise that touch was the first sense, how would a primitive brain come to understand that a new sensation from say light was not just a variation of touch?

Signal Processing

All communications can be studied from the perspective of signal processing, and without delving too deeply, we should consider just a few aspects of the protocols. All transmissions, be they electronic, visual, or audible, are encoded using a method appropriate to the medium: paper transmissions are encoded in language using the symbol set appropriate to that language; sound makes use of wavelength, frequency, and amplitude; and light makes use of waves and particles in a way that I cannot even begin to understand. No matter, it still seems to work. The issue is that for communication to occur, both the sender and receiver must have a common understanding of the communication protocol (the symbols and their arrangement), and both must have equipment capable of encoding and decoding the signals.
Now think of the eye. It receives light signals containing data about size, shape, colour, texture, brightness, contrast, distance, movement, etc. The eye must decode these signals and re-encode them using a protocol suitable for transmission to the brain via the optic nerve. Upon receipt, the brain must store and correlate the individual pieces of data to form a mental picture, but even then, how does it know what it is seeing? How did the brain learn of the concepts conveyed in the signal such as colour, shape, intensity, and texture? Evolutionists like to claim that some sight is better than no sight, but I would contend that this can only be true provided that the perceived image matches reality: what if objects approaching were perceived as receding? Ouch!

When the telephone was invented, the physical encode/decode mechanisms were simply the reverse of one another, allowing sound to be converted to electrical signals then reconverted back to sound. Sight has an entirely different problem because the encode mechanism in the eye has no counterpart in the brain: the conversion is to yet another format for storage and interpretation. These two encoding mechanisms must have developed independently, yet had to be coherent and comprehensible with no opportunity for prolonged systems testing. I am not a mathematician, but the odds against two coding systems developing independently yet coherently in any timespan must argue against it ever happening.

Data Storage and Retrieval

Continuing our computer analogy, the brain is our central processing unit and, just as importantly, our data storage unit, reportedly with an equivalent capacity of 256 billion gigabytes (or thereabouts). In data structuring analysis there is always a compromise to be made between storage and retrieval efficiency. The primary question from an analysis perspective is whether to establish the correlations in the storage structure, thus optimising the retrieval process, or to mine the data later looking for the correlations; in other words, should the data be indexed for retrieval, or just distributed across the storage device as it arrives? From our experience in data analysis, data structuring, and data mining, we know that it requires intelligence to structure data and indices for retrieval, and even greater intelligence to make sense of unstructured data.
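In miniature, the trade-off looks like this (the records are invented): an index is extra structure imposed at storage time, and without it every retrieval is a scan of unstructured records:

```python
# Storage vs retrieval trade-off in miniature: an index is structure
# decided at storage time; without it, every lookup scans the records.

records = [("r%03d" % i, i * i) for i in range(1000)]   # unstructured log

def scan(key):
    """Retrieval without structure: inspect every record in turn."""
    for k, v in records:
        if k == key:
            return v
    return None

index = dict(records)   # structure imposed at storage time

def lookup(key):
    """Retrieval against a pre-built index: a single step."""
    return index.get(key)

assert scan("r500") == lookup("r500") == 250000
```

Both routes return the same value; the index merely trades storage-time work (and the intelligence to design the key) for retrieval-time speed.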
Either way, considerable understanding of the data is required.
Now let’s apply that to the storage, retrieval, and processing of visual data. Does the brain store the data then analyse, or analyse and then store, all in real time? Going back to the supposed beginnings of sight, on what basis did the primitive brain decide where to store and how to correlate data which was at that time, just a meaningless stream of symbols? What was the source of knowledge and intelligence that provided the logic of data processing?

Correlation and Pattern Recognition

In the papers that I have read on the subject, scientists discuss the correlation of data stored at various locations in the brain. As best I understand it, no one has any idea of how or why that occurs. Imagine a hard drive with terabytes of data and those little bits autonomously arranging themselves into comprehensible patterns. Impossible, you would assert, but that is what evolution claims. It is possible for chemicals to self-organise based on their physical properties, but what physical properties are expressed in the brain’s neural networks such that self-organisation would be possible? I admit to very limited knowledge here, but as I understand it the brain consists of neurons, synapses, and axons, and within each class there is no differentiation: every neuron is like every other neuron, and so forth. Even if there are differences, such as in electric potential, the differences must occur based on physical properties in a regulated manner for there to be consistency. Even then, the matter itself can have no “understanding” of what those material differences mean in terms of its external reality.

In the case of chemical self-organisation, the conditions are preloaded in the chemical properties and thus the manner of organisation is pre-specified. When it comes to data patterns and correlation, however, there are no pre-specified properties of the storage material that are relevant to the data which is held, whether the medium be paper, silicon, or an organic equivalent. It can be demonstrated that data and information are independent of the medium in which they are stored or transmitted, and are thus not material in nature. Being immaterial, they cannot be autonomously manipulated in a regulated manner by the storage material itself, although changes to the material can corrupt the data.
Pattern recognition and data correlation must be learned, and that requires an intelligent agent that itself is preloaded with conceptual and contextual data.
Facial Recognition

Facial recognition has become an important tool for security, and it is easy for us to think, “Wow! Aren’t computers smart!” The “intelligence” of facial recognition is actually an illusion: it is an algorithmic comparison of data points, and what the technology cannot do is identify what type of face is being scanned. In 2012, Google fed 10 million images of cat faces into a very powerful computer system designed for one purpose: to see whether an algorithm could learn, from a sufficient number of examples, to identify what was being seen. The experiment was partially successful but struggled with variations in size, positioning, setting, and complexity. Once expanded to encompass 20,000 potential categories of object, the identification process managed just 15.8% accuracy: a huge improvement on previous efforts, but nowhere near the accuracy of the human mind.
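A stripped-down sketch of what such an algorithm actually does (the landmark numbers and threshold are invented): it compares vectors of measurements, with no notion of what a face is:

```python
# The "illusion of intelligence": matching faces reduces to comparing
# vectors of measured landmarks. Numbers and threshold are invented.

import math

known_faces = {                      # pre-measured landmark vectors
    "alice": (31.0, 54.2, 18.9),     # e.g. eye spacing, nose length, ...
    "bob":   (28.4, 49.7, 22.3),
}

def identify(landmarks, threshold=2.0):
    """Return the closest known face within the threshold, or None.
    Pure arithmetic: no understanding of what is being scanned."""
    best_name, best_dist = None, float("inf")
    for name, ref in known_faces.items():
        dist = math.dist(landmarks, ref)   # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

who = identify((31.2, 54.0, 19.1))        # measurements near Alice's
stranger = identify((40.0, 60.0, 30.0))   # nothing close enough
```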

The question raised here concerns the likelihood of evolution being able to explain how facial recognition by humans is so superior to efforts to date using the best intelligence and technology.

Irreducible Complexity

Our sense of sight has many more components than described here. The eye is a complex organ which would have taken considerable time to evolve, a hypothesis made even more problematic by the claim that it happened numerous times in separate species (convergent evolution). Considering the eye as the input device, the system requires a reliable communications channel (the optic nerve) to convey the data to the central processing unit (the brain) via the visual cortex, which itself provides a level of distributed processing. This is not the place to discuss communications protocols in detail, but very demanding criteria must be met to ensure reliability and minimise data corruption. Let me offer just one fascinating insight for those not familiar with the technology. In electronic messaging, there are two basic ways of identifying a particular signal: (1) by the purpose of the input device, or (2) by tagging the signal with an identifier.

A certain amount of signal processing occurs in the eye itself; particular receptor cells have been identified in terms of function: M cells are sensitive to depth and indifferent to color; P cells are sensitive to color and shape; K cells are sensitive to color and indifferent to shape or depth. The question we must ask is how an undirected process could inform the brain about these different signal types and how they are identified. The data is transmitted to different parts of the brain for parallel processing, a very efficient process but one that brings with it a whole lot of complexity. The point to note is that not only does the brain have the problem of decoding different types of messages (from the M, P, and K cells), but it has to recombine this data into a single image, a complex task of co-ordinated parallel processing.
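To make the tagging idea concrete, here is a minimal sketch. The (tag, payload) message format and the way the streams are recombined are my own illustrative assumptions; only the M/P/K channel roles are taken from the paragraph above.

```python
# Minimal sketch of signal identification by tagging, assuming a simple
# (tag, payload) message format. The M/P/K channel names follow the
# retinal cell types above; everything else is an illustrative assumption.

def route(messages):
    """Demultiplex tagged messages into per-channel streams.
    The receiver only works if it already knows every tag in advance."""
    streams = {"M": [], "P": [], "K": []}
    for tag, payload in messages:
        streams[tag].append(payload)
    return streams

def recombine(streams):
    """Recombine the parallel streams into one composite frame,
    mirroring how the brain must reassemble a single image."""
    return {
        "depth": streams["M"],  # M cells: depth-sensitive, color-blind
        "shape": streams["P"],  # P cells: color and shape
        "color": streams["K"],  # K cells: color, indifferent to shape/depth
    }

msgs = [("M", 0.8), ("P", 0.5), ("K", 0.2), ("M", 0.7)]
print(recombine(route(msgs))["depth"])  # [0.8, 0.7]
```

The sketch makes the problem concrete: if the receiver does not already hold the tag table, the payloads are meaningless numbers, which is exactly the difficulty raised for an undirected process.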
Finally we have the processor itself, which, if the evolution narrative is true, progressively evolved from practically nothing to something hugely complex. If we examine each of the components of the sight system, it is difficult to identify a useful function for any one of them operating independently, except perhaps the brain. However, absent any preloaded data to interpret input signals from wherever they originate, it is no more useful than a computer without an operating system. It can be argued that the brain could have evolved independently for other functions, but the same argument cannot be made for those functions pertaining to the sense of sight.
As best as I can understand, our system of sight is irreducibly complex.

Inheriting Knowledge

Let us suppose, contrary to all reason and everything that we know about how knowledge is acquired, that a primitive organism somehow began developing a sense of sight. Maybe it wandered from sunlight into shadow and after doing that several times, came to “understand” these variations in sensation as representative of its external environment, although just what it understood is anyone’s guess, but let us assume that it happened. How is this knowledge then inherited by its offspring for further development? If the genome is the vehicle of inheritance, then sensory experience must somehow be stored therein.
I have no answer to that, but I do wonder.
Putting it all together
I could continue to introduce even greater complexities that are known to exist, but I believe that we have enough to draw some logical conclusions. Over the past sixty years, we have come to understand a great deal about the nature of information and how it is processed. Scientists have been working on artificial intelligence with limited success, but it would seem probable that intelligence and information can only be the offspring of a higher intelligence. Even where nature evidences patterns, those patterns are the result of inherent physical properties, but the patterns themselves cannot be externally recognised without intelligence. A pattern is a form of information, but without an understanding of what is regular and irregular, it is nothing more than a series of data points.

We often hear the term "emergent properties of the brain" invoked to account for intelligence and knowledge, but what is really meant is emergent properties of the mind. You may believe that the mind is nothing more than a description of brain processes, but even so, emergence requires something from which to emerge, and that something must have properties which can be foundational to the properties of that which emerges. Emergence cannot explain its own origins, as we have noted before.
Our system of sight is a process by which external light signals are converted to an electro-chemical data stream which is fed to the brain for storage and processing. The data must be encoded in a regulated manner using a protocol that is comprehensible by the recipient. The brain then stores that data in a way that allows correlation and future processing. Evolutionists would have us believe that this highly complex system arose through undirected processes with continual improvement through generations of mutation and selection. However, there is nothing in these processes which can begin to explain how raw data received through a light sensitive organ could be processed without the pre-loading of the meta-data that allows the processor to make sense of the raw data. In short, the only source of data was the very channel that the organism neither recognised nor understood.

Without the back-end storage, retrieval, and processing of the data, the input device has no useful function. Without an input device, the storage and retrieval mechanisms have no function. Just like a computer system, our sensory sight system is irreducibly complex.

Footnote:

Earlier I noted that I was surprised that scientists were surprised to find a second language in DNA, but on reflection I considered that I should justify that comment. The majority of my IT career was in the manufacturing sector. I have a comprehensive understanding of the systems and information requirements of manufacturing management, having designed, developed, and implemented integrated systems across a number of vertical markets and industries.
The cell is often described as a mini factory and using that analogy, it seems logical to me that if the genome holds all of the production data in DNA, then it must include not just the Bill of Material for proteins, but also the complete Bill of Resources for everything that occurs in human biology and physiology. Whether that is termed another language I will leave to others, but what is obvious to anyone with experience in manufacturing management is that an autonomous factory needs more information than just a recipe.
Fred Hoyle’s “tornado through a junk yard assembling a Boeing 747” analogy understates the complexity by several orders of magnitude. A more accurate analogy would be a tornado assembling a full automated factory capable of replicating itself and manufacturing every model of airplane Boeing ever produced.

It takes a lot of faith to believe that the human eye could be a product of evolution.

http://reasonandscience.heavenforum.org/t1653-the-human-eye-intelligent-designed#2563

https://www.youtube.com/watch?v=NZMY5v79zyI

http://goddidit.org/the-human-eye/

The eye lens is suspended in position by hundreds of string-like fibres called zonules.
The ciliary muscle changes the shape of the lens. It relaxes to flatten the lens for distance vision. For close work, it contracts, rounding out the lens. This is all automatic.
Question: How could evolution produce a system that can control a muscle that is in the perfect place to change the shape of the lens?
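To see why a variable-power lens is needed at all, a simple thin-lens calculation helps. The 17 mm eye length is a standard textbook approximation, not a figure from this article.

```python
# Sketch of why the lens must change shape: the thin-lens equation
# P = 1/d_o + 1/d_i, with the image distance fixed by the length of
# the eyeball (~17 mm is a textbook approximation, an assumption here).

def required_power_dioptres(object_distance_m, eye_length_m=0.017):
    """Lens power needed to focus an object at the given distance."""
    return 1.0 / object_distance_m + 1.0 / eye_length_m

far = required_power_dioptres(1e9)    # distant object, effectively at infinity
near = required_power_dioptres(0.25)  # reading distance
print(round(near - far, 1))           # ~4 dioptres of accommodation needed
```

In other words, because the retina cannot move, the optical power itself must change by about four dioptres between distance vision and close work, and that is precisely the ciliary muscle's job.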
The retina is composed of photoreceptor cells. When light falls on one of these cells, it causes a complex chemical reaction that sends an electrical signal through the optic nerve to the brain.
Question: How does evolution explain our retinas having the correct cells which create electrical impulses when light activates them?
The image from the left side of the left eye is combined with the image from the left side of the right eye, and vice versa.
The image that is projected on to the retina is upside down. The brain flips the image during processing. Somehow, the brain makes sense of the electrical impulses received via the optic nerve.
Questions: How would evolution produce a system that can interpret electrical impulses and process them into images?
Why would evolution produce a system that knows that the image on the retina is upside down?
How does evolution explain the left side of the brain receiving the information from the left side of both eyes and the right side of the brain taking information from the right side of both eyes?
The retina needs a fairly constant level of light to best form useful images. The iris muscle controls the size of the pupil. It contracts and expands, opening and closing the pupil, in response to the brightness of the surrounding light. Just as the aperture in a camera protects the film from overexposure, the iris of the eye helps protect the sensitive retina.
Questions: How would evolution produce a light sensor? Even if evolution could produce a light sensor, how could a purely naturalistic process like evolution produce a system that can measure light intensity? How could evolution produce a system that would control a muscle that regulates the size of the pupil?
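To appreciate what "a system that can measure light intensity and control a muscle" involves, here is a minimal negative-feedback sketch. The proportional-control model, target value, and gain are my own illustrative assumptions, not physiology; the real pupillary reflex is far more complex.

```python
# Minimal sketch of the iris as a negative-feedback controller.
# The proportional model, target, and gain are illustrative assumptions.

def adjust_pupil(pupil_mm, ambient_lux, target=100.0, gain=0.001):
    """One control step: widen or narrow the pupil so the light reaching
    the retina (ambient level times pupil area) approaches the target."""
    area = 3.14159 * (pupil_mm / 2.0) ** 2
    error = target - ambient_lux * area
    new_pupil = pupil_mm + gain * error
    return max(2.0, min(8.0, new_pupil))  # anatomical limits, roughly 2-8 mm

pupil = 4.0
for _ in range(200):                      # bright light: pupil constricts
    pupil = adjust_pupil(pupil, ambient_lux=50.0)
print(pupil)  # 2.0 (fully constricted)
```

Even this crude loop needs a sensor, a comparison against a target, and an actuator, and every one of those three must exist before the loop does anything useful.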
Cone cells give us detailed color daytime vision. There are six million of them in each human eye, most of them located in the central retina. There are three types of cone cells: one sensitive to red light, another to green light, and the third sensitive to blue light.
Question: Isn't it fortunate that the cone cells are located at the center of the retina? Wouldn't it be a bit awkward if your most detailed vision were on the periphery of your eyesight?
Rod cells give us our dim-light or night vision. They are 500 times more sensitive to light and also more sensitive to motion than cone cells. There are 120 million rod cells in the human eye. Most rod cells are located in our peripheral or side vision.
The eye can modify its own light sensitivity. After about 15 seconds in lower light, our body increases the level of rhodopsin in our retina. Over the next half hour in low light, our eyes get more and more sensitive. In fact, studies have shown that our eyes are around 600 times more sensitive at night than during the day.
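The figures above (about 15 seconds to begin, half an hour to complete, roughly 600 times more sensitive) can be put into a simple model. The exponential form and the 8-minute time constant are assumed for illustration, not measured values.

```python
# Hypothetical model of dark adaptation as an exponential approach to
# the ~600x sensitivity gain quoted above over roughly half an hour.
# The 8-minute time constant is an assumed curve-fit, not a measurement.

import math

def sensitivity_gain(minutes_in_dark, max_gain=600.0, tau=8.0):
    """Sensitivity relative to the daylight-adapted eye (baseline 1.0)."""
    return 1.0 + (max_gain - 1.0) * (1.0 - math.exp(-minutes_in_dark / tau))

print(round(sensitivity_gain(0.25)))  # after ~15 seconds: modest gain
print(round(sensitivity_gain(30.0)))  # after half an hour: near the 600x limit
```

Under these assumptions, sensitivity has already risen noticeably within the first 15 seconds and is within a few percent of its 600-fold maximum after half an hour, matching the pattern described above.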
Question: Why would the eye have different kinds of photoreceptor cells, with one kind helping us specifically to see in low light?
The lacrimal gland continually secretes tears which moisten, lubricate, and protect the surface of the eye. Excess tears drain into the lacrimal duct, which empties into the nasal cavity.
If there was no lubrication system, our eyes would dry up and cease to function within a few hours.
Question: If the lubrication weren't there, we would all be blind. This system had to be there right from the beginning, no? Fortunate that we have a lacrimal duct, aren't we?
Otherwise, we would have a steady stream of tears running down our faces.
Eyelashes protect the eyes from particles that may injure them. They form a screen to keep dust and insects out. Anything touching them triggers the eyelids to blink.
Question : How could evolution produce this ?
Six muscles are in charge of eye movement. Four of these move the eye up, down, left, and right. The other two control the twisting motion of the eye when we tilt our head. The orbit, or eye socket, is a cone-shaped bony cavity that protects the eye. The socket is padded with fatty tissue that allows the eye to move easily.
Question: When you tilt your head to the side, your eye stays level with the horizon. How would evolution produce this?
Isn't it amazing that you can look where you want without having to move your head all the time? If our eye sockets were not padded with fatty tissue, it would be a struggle to move our eyes. Why would evolution produce this?
Conclusion: The eye is the best automatic camera in existence. Every time we change where we are looking, our eye and retina adjust everything else to compensate. Focus and light intensity are constantly adjusting to ensure that our eyesight is as good as it can be.
Man has made his own cameras. It took intelligent people to design and build them. The human eye is better than any of them. Was it, therefore, designed or not?

Common objection :

Is Our ‘Inverted’ Retina Really ‘Bad Design’?

http://reasonandscience.heavenforum.org/t1689-is-our-inverted-retina-really-bad-design

As it turns out, the supposed problems Dawkins finds with the inverted retina become actual advantages in light of research published by Kristian Franze et al. in the May 2007 issue of PNAS, which found that "Muller cells are living optical fibers in the vertebrate retina." Consider the observations and conclusions of the authors in the following abstract of their paper:

http://www.pnas.org/content/104/20/8287.short

Although biological cells are mostly transparent, they are phase objects that differ in shape and refractive index. Any image that is projected through layers of randomly oriented cells will normally be distorted by refraction, reflection, and scattering. Counterintuitively, the retina of the vertebrate eye is inverted with respect to its optical function and light must pass through several tissue layers before reaching the light-detecting photoreceptor cells. Here we report on the specific optical properties of glial cells present in the retina, which might contribute to optimize this apparently unfavorable situation. We investigated intact retinal tissue and individual Muller cells, which are radial glial cells spanning the entire retinal thickness. Muller cells have an extended funnel shape, a higher refractive index than their surrounding tissue, and are oriented along the direction of light propagation. Transmission and reflection confocal microscopy of retinal tissue in vitro and in vivo showed that these cells provide a low-scattering passage for light from the retinal surface to the photoreceptor cells. Using a modified dual-beam laser trap we could also demonstrate that individual Muller cells act as optical fibers. Furthermore, their parallel array in the retina is reminiscent of fiberoptic plates used for low-distortion image transfer. Thus, Muller cells seem to mediate the image transfer through the vertebrate retina with minimal distortion and low loss. This finding elucidates a fundamental feature of the inverted retina as an optical system and ascribes a new function to glial cells






Charles Darwin, On the Origin of Species

To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree.

http://webvision.med.utah.edu/book/part-ii-anatomy-and-physiology-of-the-retina/photoreceptors/

http://www.ic.ucsc.edu/~bruceb/psyc123/Vision123.html.pdf

http://www.detectingdesign.com/humaneye.html

http://www.creationstudies.org/Education/Darwin_and_the_eye.html

http://www.harunyahya.com/en/Books/592/darwinism-refuted/chapter/51

Human sight is a very complex system of irreducibly complex interacting parts. These include all the physical components of the eye as well as the activity of the optic nerve attached to certain receptors in the brain.
The optic nerve is attached to the sclera, or white of the eye. The optic nerve, also known as cranial nerve II, is a continuation of the axons of the ganglion cells in the retina. There are approximately 1.1 million nerve fibres in each optic nerve. The optic nerve, which acts like a cable connecting the eye with the brain, is actually more like brain tissue than nerve tissue. In addition, there are complex equations that the brain uses to transform what we see in real life onto the curved screen of the retina in the human eyeball.

This complex system is a combination of an intelligently designed camera, lens, and brain programming, all working together to enable us to see our world in incredible clarity.

Each of these components has no function on its own, even in other systems. That is an interdependent system. It cannot arise in a stepwise evolutionary fashion, because the individual parts on their own would have no function.


Imagine having to use spherically shaped film instead of the conventional flat film in your camera. The images would be distorted, just like those funny mirrors in the fun houses at state fairs and carnivals. That is exactly the way the world is projected onto the curved retina at the back of the human eye.

We manage to correct these images and see them accurately without distortion because the Creator has installed fast-running programs in the brain that instantly correct the distortions in the image, so that the world around us appears to be flawless, like a photograph.

Not only that, the human brain turns our eyes, which are already more complex and refined than the most advanced high-definition cameras available today, into biological computers that can estimate the size and distance of objects seen. The objects are not measured as they appear on the retina; our brain acts as an advanced evaluation program processing the physical data received by our sensory organs. It enlarges, reduces, and adjusts them precisely, so that the information is presented in a way that makes our sense of sight into an apparatus far superior to any pure instrument of physics.

Comparing the eye to a camera is an interesting analogy because our sight really is superior to an instrument of pure physics. Our eyes are able to see the darkest shadows as well as the brightest sunlight by automatically adjusting their optical range of operation. They can see colors. They can perceive white paper as white even when it is illuminated by bright light of varying colors. Color and shape are perceived as the same whether the object is close or far away, even if the lighting varies radically.



http://iaincarstairs.wordpress.com/2013/03/25/as-smart-as-molecules/

The invention of television was certainly ingenious and changed the face of the Earth, and it relied on a material called selenium, which converted photon stimulation to electrical signals. The German inventor Nipkow experimented with it in the 1800s but found it unworkable due to the weak signal and rapid decay. It was Baird who in the 1920s, with the advent of electrical amplifiers, realised that the signals all decayed at the same rate and that all that was required was consistent amplification. Refining the process was to take up the remainder of his life.

The eye uses a similar system, in which retinal, a small molecule that fits into the binding site of a large protein called opsin (together making up rhodopsin), is triggered into activity by the opsin molecule's sensitivity to photon stimulation. The ensuing chain reaction of chemical and eventually electrical signals, including feedback loops, timing mechanisms, amplifiers, and interpretive mechanisms in the brain, would fill a book.

Even modern television doesn't improve on the devices contained within the retina, which are dealt with in greater detail elsewhere on this site. The chain of events that gives rise to sight is so important that the eyes use about a fifth of the body's energy; the eyes are constantly in vibratory motion, without which the signals would cease to be forwarded to the brain.

The eyes include their own immune system, variable blood-flow heat sinks behind the RPE controlled by iris contraction, built-in sunglasses, a recycling depot, and separate circuits for motion, line detection, and binocular perspective. And lastly, remember that all these components are smaller than the wavelength of visible light. They transmit signals of light for us to use, but in their molecular world they all work completely in the dark.

Priceless!


The eye, due to design, or evolution ?

http://reasonandscience.heavenforum.org/t1653-the-human-eye-intelligent-designed#2563

http://www.arn.org/docs/glicksman/eyw_041201.htm

Evolutionary Simplicity?

Review of this and the last two columns clearly demonstrates:

   the extreme complexity and physiological interdependence of many parts of the eyeball

   the absolute necessity of many specific biomolecules reacting in exactly the right order to allow for photoreceptor cells and other neurons to transmit nervous impulses to the brain

   the presence of, not only an eyeball whose size is in the proper order to allow for focusing by the cornea and lens, but also a region in the retina (fovea) that is outfitted with the proper concentration of photoreceptor cells that are connected to the brain in a 1:1:1 fashion to allow for clear vision

   that vision is dependent on a complex array of turned around, upside down, split-up, and overlapping messages, from over two million optic nerve fibers that course their way to the visual cortex causing a neuroexcitatory spatial pattern that is interpreted as sight

   that scientists are blind to how the brain accomplishes the task of vision

The foregoing is likely to give most people pause before they subscribe to the theory of macroevolution and how it may apply to the development of the human eye and vision. How can one be so certain of an origins theory when one still doesn’t fully understand how something actually works? Most of what I’ve read by its supporters about this topic contains a lot of rhetoric and assumption without much detail and logical progression. It all seems just a bit premature and somewhat presumptuous.

Quite frankly, science just doesn’t have the tools to be able to definitively make this determination, yet. Will it ever have them? Maybe yes, maybe no. Until such time, I reserve the right to look upon evolutionary biologists’ explanations for the development of the human eye and the sensation of vision, with a large amount of skepticism, and as seeming overly simplistic and in need of a heavy dose of blind faith.


Human Eye Has Nanoscale Resolution

http://crev.info/2015/07/human-eye-has-nanoscale-resolution/

1. http://web.archive.org/web/20160322111142/http://goddidit.org/the-human-eye
2. http://creation.com/did-eyes-evolve-by-darwinian-mechanisms



Last edited by Admin on Tue Oct 17, 2017 8:50 am; edited 18 times in total


Admin


Admin
Eyes most likely evolved from simple to complex through a gradual series of tiny steps. Piecing together the sequence of eye evolution is challenging, and we don't know the sequence of steps that led to every modern eye. (Despite this, it evolved. A nice evolution-of-the-gaps argument!) 8

Light Detection, Pigment, and Movement Make an Eye



Photoreceptor cells

http://en.wikipedia.org/wiki/Photoreceptor_cell



A photoreceptor cell is a specialized type of neuron found in the retina that is capable of phototransduction. The great biological importance of photoreceptors is that they convert light (visible electromagnetic radiation) into signals that can stimulate biological processes. To be more specific, photoreceptor proteins in the cell absorb photons, triggering a change in the cell's membrane potential.



Neuron :

Signal transduction pathway

http://www.youtube.com/watch?v=qOVkedxDqQo

The absorption of light leads to an isomeric change in the retinal molecule.

The signal transduction pathway is the mechanism by which the energy of a photon signals a mechanism in the cell that leads to its electrical polarization. This polarization ultimately leads to either the transmittance or inhibition of a neural signal that will be fed to the brain via the optic nerve. 
The pathway must go through nine highly specific steps, no single one of which has any function unless the whole pathway is completed.

Pitman writes:
The question now of course is, how could such a system evolve gradually?  All the pieces must be in place simultaneously.  For example, what good would it be for euglena that has no eyes to suddenly evolve the protein 11-cis-retinal in a small group or "spot" of cells on its head?  These cells now have the ability to detect photons, but so what?  What benefit is that for euglena?  Now, let's say that somehow these cells develop all the needed proteins to activate an electrical charge across their membranes in response to a photon of light striking them.  So what?!  What good is it for them to be able to establish an electrical gradient across their membranes if there is no nervous pathway to activate its flagella to swim closer to the light source, or farther away?  Now, what if this pathway did happen to suddenly evolve and such a signal could be sent to the euglena's eyespot.  So what?!  How is euglena going to know what to do with this signal?  It will have to learn what this signal means.  Learning and interpretation are very complicated processes involving a great many other proteins in other unique systems.  Now euglena, in one lifetime, must evolve the ability to pass on this ability to interpret vision to its offspring.  If it does not pass on this ability, the offspring must learn as well or vision offers no advantage to them.  All of these wonderful processes need regulation.  No function is beneficial unless it can be regulated (turned off and on).  If the light sensitive cells cannot be turned off once they are turned on, vision does not occur.  This regulatory ability is also very complicated involving a great many proteins and other molecules - all of which must be in place initially for vision to be beneficial.


The steps, or signal transduction pathway, in the vertebrate eye's rod and cone photoreceptors are then:

1.The rhodopsin or iodopsin in the disc membrane of the outer segment absorbs a photon, changing the configuration of a retinal Schiff base cofactor inside the protein from the cis-form to the trans-form, causing the retinal to change shape.

2.This results in a series of unstable intermediates, the last of which binds stronger to the G protein in the membrane and activates transducin, a protein inside the cell. This is the first amplification step – each photoactivated rhodopsin triggers activation of about 100 transducins. (The shape change in the opsin activates a G protein called transducin.)

3.Each transducin then activates the enzyme cGMP-specific phosphodiesterase (PDE).

4.PDE then catalyzes the hydrolysis of cGMP to 5' GMP. This is the second amplification step, where a single PDE hydrolyses about 1000 cGMP molecules.

5.The net concentration of intracellular cGMP is reduced (due to its conversion to 5' GMP via PDE), resulting in the closure of cyclic nucleotide-gated Na+ ion channels located in the photoreceptor outer segment membrane.

6.As a result, sodium ions can no longer enter the cell, and the photoreceptor outer segment membrane becomes hyperpolarized, due to the charge inside the membrane becoming more negative.

7.This change in the cell's membrane potential causes voltage-gated calcium channels to close. This leads to a decrease in the influx of calcium ions into the cell and thus the intracellular calcium ion concentration falls.

8.A decrease in the intracellular calcium concentration means that less glutamate is released via calcium-induced exocytosis to the bipolar cell (see below). (The decreased calcium level slows the release of the neurotransmitter glutamate, which can either excite or inhibit the postsynaptic bipolar cells.)

9.Reduction in the release of glutamate means one population of bipolar cells will be depolarized and a separate population of bipolar cells will be hyperpolarized, depending on the nature of receptors (ionotropic or metabotropic) in the postsynaptic terminal (see receptive field).

Thus, a rod or cone photoreceptor actually releases less neurotransmitter when stimulated by light. Less neurotransmitter could either stimulate (depolarize) or inhibit (hyperpolarize) the bipolar cell it synapses with, depending on the nature of the receptor on the bipolar cell. This ability is integral to the center on/off mapping of visual units.
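The two amplification stages in steps 2 and 4 above can be sketched numerically. The gain figures (about 100 transducins per rhodopsin, about 1000 cGMPs per PDE) come from the list above; the code itself is only an illustrative model, not biochemistry.

```python
# Sketch of the amplification stages in steps 2 and 4 of the cascade
# above, using the rough figures quoted there. The exact gains vary by
# species and are assumptions here.

def cascade(photons, transducins_per_rhodopsin=100, cgmp_per_pde=1000):
    activated_rhodopsin = photons                                 # step 1
    transducin = activated_rhodopsin * transducins_per_rhodopsin  # step 2
    pde = transducin                                              # step 3 (1:1)
    return pde * cgmp_per_pde                                     # step 4

# A single photon removes on the order of 100,000 cGMP molecules, enough
# to close Na+ channels (step 5) and hyperpolarize the cell (step 6).
print(cascade(1))  # 100000
```

This multiplication across two stages is why, as noted below, even a single absorbed photon can measurably change the membrane potential of a rod cell.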

ATP provided by the inner segment powers the sodium-potassium pump. This pump is necessary to reset the initial state of the outer segment by taking the sodium ions that are entering the cell and pumping them back out.

Although photoreceptors are neurons, they do not conduct action potentials with the exception of the photosensitive ganglion cell – which are involved mainly in the regulation of circadian rhythms, melatonin, and pupil dilation.


Advantages


Phototransduction in rods and cones is unique in that the stimulus (in this case, light) actually reduces the cell's response or firing rate, which is unusual for a sensory system where the stimulus usually increases the cell's response or firing rate. However, this system offers several key advantages.

First, the classic (rod or cone) photoreceptor is depolarized in the dark, which means many sodium ions are flowing into the cell. Thus, the random opening or closing of sodium channels will not affect the membrane potential of the cell; only the closing of a large number of channels, through absorption of a photon, will affect it and signal that light is in the visual field. Hence, the system is noiseless.

Second, there is a lot of amplification in two stages of classic phototransduction: one pigment will activate many molecules of transducin, and one PDE will cleave many cGMPs. This amplification means that even the absorption of one photon will affect membrane potential and signal to the brain that light is in the visual field. This is the main feature that differentiates rod photoreceptors from cone photoreceptors. Rods are extremely sensitive and have the capacity of registering a single photon of light, unlike cones. On the other hand, cones are known to have very fast kinetics in terms of rate of amplification of phototransduction, unlike rods.

http://www.detectingdesign.com/humaneye.html

the first step in vision is the detection of photons.  In order to detect a photon, specialized cells use a molecule called 11-cis-retinal.  When a photon of light interacts with this molecule, it changes its shape almost instantly.  It is now called trans-retinal.  This change in shape causes a change in shape of another molecule called rhodopsin.  The new shape of rhodopsin is called metarhodopsin II.  Metarhodopsin II now sticks to another protein called transducin forcing it to drop an attached molecule called GDP and pick up another molecule called GTP.  The GTP-transducin-metarhodopsin II molecule now attaches to another protein called phosphodiesterase.  When this happens, phosphodiesterase cleaves molecules called cGMPs.  This cleavage of cGMPs reduces their relative numbers in the cell.  This reduction in cGMP is sensed by an ion channel.  This ion channel shuts off the ability of the sodium ion to enter the cell.  This blockage of sodium entrance into the cell causes an imbalance of charge across the cell's membrane.  This imbalance of charge sends an electrical current to the brain.  The brain then interprets this signal and the result is called vision.

In schematic form:

11-cis-retinal absorbs a photon -> trans-retinal
rhodopsin -> metarhodopsin II
metarhodopsin II binds transducin -> transducin drops GDP, picks up GTP
GTP-transducin-metarhodopsin II binds phosphodiesterase
phosphodiesterase cleaves cGMPs -> sodium channels close, blocking sodium entry into the cell
charge imbalance across the cell's membrane -> electrical current to the brain -> the brain interprets the signal, and the result is called vision

Many other proteins are now needed to convert the proteins and other molecules just mentioned back to their original forms so that they can detect another photon of light and signal the brain. If any one of these proteins or molecules is missing, even in the simplest eye system, vision will not occur.

The question now of course is, how could such a system evolve gradually?  All the pieces must be in place simultaneously.  For example, what good would it be for an earthworm that has no eyes to suddenly evolve the protein 11-cis-retinal in a small group or "spot" of cells on its head?  These cells now have the ability to detect photons, but so what?  What benefit is that to the earthworm?  Now, lets say that somehow these cells develop all the needed proteins to activate an electrical charge across their membranes in response to a photon of light striking them.  So what?!  What good is it for them to be able to establish an electrical gradient across their membranes if there is no nervous pathway to the worm's minute brain?   Now, what if this pathway did happen to suddenly evolve and such a signal could be sent to the worm's brain.  So what?!  How is the worm going to know what to do with this signal?  It will have to learn what this signal means.  Learning and interpretation are very complicated processes involving a great many other proteins in other unique systems.  Now the earthworm, in one lifetime, must evolve the ability to pass on this ability to interpret vision to its offspring.  If it does not pass on this ability, the offspring must learn as well or vision offers no advantage to them.  All of these wonderful processes need regulation.  No function is beneficial unless it can be regulated (turned off and on).  If the light sensitive cells cannot be turned off once they are turned on, vision does not occur.  This regulatory ability is also very complicated involving a great many proteins and other molecules - all of which must be in place initially for vision to be beneficial.


Arguments against IC of the signal transduction pathway 5

Second, concerning vision. The argument has been made that the vision system is also an irreducibly complex system. Mike Behe has found no fault in Darwin's lack of concern with the origin of light reception at the detailed cellular and molecular level, but now that we have opened the black box of vision, we have no excuse for not concerning ourselves with those sorts of details. Mike claims that upon examination of the open box we must conclude that evolution of this complex system is impossible. So again we must ask, does the pre-adaptation argument get us anywhere in the discussion of the origin of vision? Again, the answer is an obvious yes. First, if we restrict ourselves to light reception, then I think it's fair to say that a nerve cell is a pre-adaptation to vision. Given a nerve cell, I don't have to explain where all those components come from (at least when explaining vision). Second, transducin, one of the key proteins involved in the light signal transduction from rhodopsin to nerve cell, is a member of the G protein family, a large family involved in all sorts of signal transduction events, including hormone signaling. The main novel feature of transducin is its specificity for rhodopsin. The generic G protein is a pre-adaptation for transducin. Finally, rhodopsin, the main light reception protein, is a membrane protein similar in structure to other sensory receptors and to hormone receptors. These other receptors, whose physiological effects are also mediated by G proteins, may have been pre-adaptations for rhodopsin. My answer here may be a form of question-begging, because you can always ask where these other systems came from, but I think that the functional diversification of similar signal transduction systems is reminiscent of the hemoglobin tale told above. What's needed is detailed sequence and structure information about all these proteins in a variety of organisms that are representative of the tree of life.
Then maybe we can say that we have opened the black box. Until then, I think that given the present data that the evolutionary explanation for complexity is not only plausible but likely.
This answer does not address the key issue of IC, namely that unless the process proceeds through all of its steps, no function is achieved.


The Nilsson and Pelger Theory of Eye Evolution 3

Since Darwin's time, our understanding of genetics has opened the door to comprehending what previously seemed impossible. In a 1994 paper, for example, Dan Nilsson and Susanne Pelger produced a dramatic example of computer modeling that illustrated how an eye could evolve in fewer than 400,000 "generations." Random mutations of the refractive index of the surface membrane were incorporated into the algorithm, along with selective pressure admitting only a 1% improvement at each step. A steadily improving lens and eyecup resulted, answering the question Darwin asked so agonizingly: how could a structure as complex as the eye evolve? The answer from this model is: in a geological eyeblink, with each stage providing a better eye than the last.
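The arithmetic behind an estimate of this kind is straightforward to sketch. The figures below (an 80,129,540-fold total morphological change, heritability 0.5, selection intensity 0.01, coefficient of variation 0.01) are the values commonly quoted from the 1994 paper; this script is an illustrative reconstruction of the compounding, not the authors' actual procedure:

```python
import math

# Back-of-envelope reconstruction of the Nilsson & Pelger (1994) estimate.
# All parameter values are the figures commonly quoted from the paper;
# treat this as an illustrative sketch, not the authors' code.

total_change = 80_129_540          # overall fold-change from flat patch to camera-type eye

# How many discrete 1% improvement steps span that total change?
steps = math.log(total_change) / math.log(1.01)

# Per-generation response to selection: the trait improves each
# generation by a factor of (1 + h2 * i * c) = 1.00005.
h2, i, c = 0.5, 0.01, 0.01         # heritability, selection intensity, coeff. of variation
per_generation = 1 + h2 * i * c

generations = math.log(total_change) / math.log(per_generation)

print(f"1% steps needed:      {steps:,.0f}")        # ~1,829
print(f"generations required: {generations:,.0f}")  # ~364,000
```

On these assumptions the model yields roughly 1,800 one-percent steps and on the order of 364,000 generations, which is where the "fewer than 400,000 generations" figure comes from.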



http://www.rpgroup.caltech.edu/courses/aph161/Handouts/Nilsson1994.pdf

The evolution of complex structures, however, involves modifications of a large number of separate quantitative characters, and in addition there may be discrete innovations and an unknown number of hidden but necessary phenotypic changes. These complications seem effectively to prevent evolution rate estimates for entire organs and other complex structures. An eye is unique in this respect because the structures necessary for image formation, although there may be several, are all typically quantitative in their nature, and can be treated as local modifications of pre-existing tissues. Taking a patch of pigmented light-sensitive epithelium as the starting point, we avoid the more inaccessible problem of photoreceptor cell evolution. Thus, if the objective is limited to finding the number of generations required for the evolution of an eye’s optical geometry, then the problem becomes solvable.

Evolution Can Even Explain How the Human Eye Evolved 4

Benjamin Radford writes for the Discovery News and is interested in why people believe things for which there is little or no evidence. He applies critical thinking and scientific methodologies to unusual claims. One of those things that interests Radford is skepticism of evolution. After all, as Radford notes, there is “overwhelming scientific evidence for evolution,” and it is “confirmed by nearly every scientific discipline.” Evolution is all around us, all the time. Evolution is why we need to get a new flu shot every year and, notes Radford, evolution can even explain how the human eye evolved. It is strange that such claims come from a critical thinker such as Radford because, in fact, they are all false.

Consider the evolution of the human eye. Charles Darwin considered the eye to be an “organ of extreme perfection.” Even after writing the Origin he confessed that the eye gave him a cold shudder. He needed to focus on his theory’s fine gradations to give himself comfort. But one hundred and thirty-four years later, in 1994, evolutionists claimed they had solved the problem. The evolution of the eye was finally understood. It turned out such evolution was no big deal after all. In fact the eye could rather easily evolve.

The only catch to the conclusion was that it was circular. The evolutionists, who believe evolution is a fact, first assumed the evolution of the eye in order to solve the problem of the evolution of the eye.

With evolution taken as a given, whether or not vision systems evolved was no longer in question—they did. The only question was how they have evolved. The 1994 paper explained that although Darwin “anticipated that the eye would become a favorite target for criticism,” the problem “has now almost become a historical curiosity” and “the question is now one of process rate rather than one of principle.” The evolutionists estimated this rate by first assuming that the eye indeed evolved. They wrote:

   The evolution of complex structures, however, involves modifications of a large number of separate quantitative characters, and in addition there may be discrete innovations and an unknown number of hidden but necessary phenotypic changes. These complications seem effectively to prevent evolution rate estimates for entire organs and other complex structures. An eye is unique in this respect because the structures necessary for image formation, although there may be several, are all typically quantitative in their nature, and can be treated as local modifications of pre-existing tissues. Taking a patch of pigmented light-sensitive epithelium as the starting point, we avoid the more inaccessible problem of photoreceptor cell evolution. Thus, if the objective is limited to finding the number of generations required for the evolution of an eye’s optical geometry, then the problem becomes solvable.


The problem becomes solvable? The evolutionists skipped the entire evolution of cellular signal transduction and the vision cascade. That would be like claiming you have shown how motorcycles evolved while taking the engine, drive train and wheels as your starting point.

The evolutionists then skipped all of the major problems that arise after you have a signal transduction system in place, such as the incredible post processing system and the creation of the machinery to construct the vision system. The problem they ended up solving is sometimes affectionately referred to as a “cartoon” version of the real world problem.

The research, if you can call it that, did not demonstrate that the eye evolved or could have evolved. Yet the paper became a favorite reference for evolutionists wanting to promote evolution. Eye evolution, they insisted, was now known to be straightforward. Here, for instance, is how our tax dollars are used by PBS to promote this abuse of science:

   Zoologist Dan-Erik Nilsson demonstrates how the complex human eye could have evolved through natural selection acting on small variations. Starting with a simple patch of light sensitive cells, Nilsson’s model “evolves” until a clear image is produced.



This spreading of false information is not limited to popular presentations. A paper reporting on “highly advanced compound eyes” which are “as advanced as those of many living forms” in early arthropods begins by informing the reader that “theory (i.e., the Nilsson paper) suggests that complex eyes can evolve very rapidly.” This helps them to conclude that those incredible arthropod eyes are “further evidence that the Cambrian explosion involved rapid innovation.”

With the mythological framework in place, the findings could then safely be presented as confirmations of evolution. As the journal’s editor added:

   Charles Darwin thought that the eye, which he called an “organ of extreme perfection,” was a serious challenge to evolutionary theory — but he was mistaken. Theory predicts that eyes can evolve with great speed, and now there is support for this prediction from the fossil record.


Support for this prediction? You’ve got to be kidding. A cartoon version of reality, taking the myth of evolution as true, is considered a “prediction” and amazing early complexity in the fossils then becomes a “support for this prediction”?

What the arthropod fossils revealed is an early Cambrian, highly advanced vision system more elaborate than any so far discovered. Its compound eyes have more than 3,000 lenses optimally arranged in the densest and most efficient packing pattern. As the paper explains:

   The extremely regular arrangement of lenses seen here exceeds even that in certain modern taxa, such as the horseshoe crab Limulus, in which up to one-third of lenses deviate from hexagonal packing.


All of this is presented to the reader as merely another demonstration of how fantastic designs just happen spontaneously to arise:

   The new fossils reveal that some of the earliest arthropods had already acquired visual systems similar to those of living forms, underscoring the speed and magnitude of the evolutionary innovation that occurred during the Cambrian explosion.


Ho-hum, yet more evolutionary innovation. For evolutionists it was just another day in the office. As PZ Myers explained, we already knew that complex animals appear rapidly. After all, that is why they call it the “Cambrian explosion.” Evolutionists have written “whole books on the subject.”

Myers follows this circular reasoning with yet more question begging:

The sudden appearance of complexity is no surprise, either. We know that the fundamental mechanisms of eye function evolved long before the Cambrian, from the molecular evidence;



Of course there is no “molecular evidence” that gives us such knowledge (see here for example). But if you assume evolution is true to begin with, as do evolutionists who analyze the molecular patterns, then Myers’ fictional, question begging, world makes sense.

Myers follows these circular arguments with a more subtle type of fallacy. He explains that these particular findings are no big deal because both this finding and the similar trilobite vision systems require cellular signal transduction, development machines and so forth:

   It is also the case that the measure of complexity here is determined by a simple meristic trait, the number of ommatidia. This is not radical. The hard part in the evolution of the compound eye was the development of the signal transduction mechanism, followed by the developmental rules that governed the formation of a regular, repeating structure of the eye. The number of ommatidia is a reflection of the degree of commitment of tissues in the head to eye formation, and is a quantitative difference, not a qualitative one.


Setting aside the usual evolutionary speculation about how easily designs evolve, the problem here is that the cellular signal transduction, development machines and so forth are themselves problems for evolution. Indeed, even the simplest of light detection systems sport such incredible designs for which evolution has no explanation beyond vague speculation.

Next Myers is back to question-begging. In typical fashion he attempts to shore up the evolution position with the usual reference to, yes, the mythical 1994 Nilsson paper:

   And finally, there’s nothing in the data from this paper that implies sudden origins; there can’t be. If it takes a few hundred thousand years for a complex eye to evolve from a simple light sensing organ, there is no way to determine that one sample of a set of fossils was the product of millions of years of evolution, or one day of magical creation.

Next is the fallacy of credulity. If you present an evolutionist with the scientific failures of his theory, he will accuse you of basing your skepticism on your own failure to imagine a solution. As Myers puts it:

It’s a logical error and a failure of the imagination to assume that these descriptions are of a population that spontaneously emerged nearly-instantaneously.

Failure of the imagination? Indeed, we just need to do more imagining, that’s the problem.

Finally Myers reiterates the flawed Darwinian argument that whatever abruptness you see in the fossil record is, after all, merely a consequence of all those gaps in the fossils:

Darwin himself explained in great detail how one should not expect fine-grained fossil series, due to the imperfection of the geological record.


When in doubt, doubt the data. Paleontologists agree that the fossil record reveals abrupt appearances, but when convenient evolutionists can always protect their theory with those gaps in the fossil record.

Evolutionary thinking is remarkable. I am reminded of John Earman’s remarks about Hume’s arguments. For it is astonishing how well evolution is treated, given how completely the confection collapses under a little probing. So if Benjamin Radford really is interested in why people believe things for which there is little or no evidence, we have just the topic for him.

http://musingsofscience.wordpress.com/2011/01/22/evolution-of-the-eye-nilsson-pelger-and-lens-evolution/



http://www.reviewevolution.com/press/pressRelease_EyeEvolution.php

Since Nilsson and Pelger's article was published, it has been widely--but erroneously--reported that their conclusions were based on a computer model. Berlinski calls this claim "an urban myth." At a minimum, PBS should make clear to viewers that Nilsson's conclusions are not based on computer models at all, and it should acknowledge that his work is highly speculative.

http://creationwiki.org/The_eye_is_too_complex_to_have_evolved_%28Talk.Origins%29

http://www.grisda.org/origins/21039.htm

"What is one to make of all this? First, comparing the evolution of the eye to shape changes on a computer screen seems rather far-fetched. The entire project seems closer to an exercise in geometry than in biology. Second, the exercise assumes a functional starting point. Thus it has nothing to do with the origin of the biochemical systems of vision or the requisite neural network. Third, Nilsson and Pelger's computer exercise operates as if each 1% change in morphology can be accounted for by a single gene mutation. They do not consider the effects of pleiotropy, genetic background, or developmental processes. Fourth, an important part of the model relies on the special circumstance of a layer of clear cells covering the "retina." This layer somehow assumes the proper shape of a lens. Fifth, as noted by the authors, several features of the eye remain unaccounted for, such as the iris. Basically, the only result achieved was to show that two light-sensitive surfaces that differ in shape by 1% will have different efficiencies in photoreception, and that an uninterrupted series of 1% improvements is possible. The failure of scientists to produce new structures in selection experiments illustrates the implausibility of Nilsson and Pelger's "just so" story."

http://www.brainfacts.org/sensing-thinking-behaving/senses-and-perception/articles/2012/vision-it-all-starts-with-light/

For the brain to process visual information, light waves must first pass through the eyes. The sensitive photoreceptors of the retina then transmit electrochemical signals from the eye to the brain. The visual centers of the brain then process and decode the transmitted signals, which are perceived as visual information.

Both your eyes and your brain have a role in vision; it isn't a task that a single organ does.
Your eyes collect light and turn it into electrical signals. Your brain then processes those signals and converts them into an image that you can understand.
The eyes do not see anything. Light impinges on photoreceptors in the retina (which, technically, is part of the brain, but I won't be considering it as such) and nerve impulses are generated by RGCs (retinal ganglion cells) and sent to the brain via the optic nerve. You do not perceive images until signal processing occurs in the brain (the visual cortex and related visual areas). So there is an interdependence: without the eye, you don't see; without the brain, you don't see either. Both are required.


http://www.compellingtruth.org/irreducible-complexity.html

The eye: Although evolutionists have attempted to show how the eye could have evolved, the sheer complexity of the mechanism defies explanation. The retina actually interprets much of the input before it reaches the brain. The processors in the brain would have had to evolve parallel with yet independently of the development of the eye itself. Even the computer simulation of the evolution of the eye shows how only an intentional design could have resulted in such functionality.


http://www.wikihow.com/Argue-Against-Evolution-of-Eyesight

The eye/brain is an interdependent and irreducibly complex system

If the sight center of the brain that “sees” did not “begin” to correctly process sight at exactly the same moment as the optic nerve began transmitting, then the nerve would have no reason to transmit or to be attached to the brain, and the eye wouldn’t see.

What would you think of a person who saw a computer chip with millions of transistors that were self-connected into a computer and it all worked--who said,

"You know that spontaneous events formed that computer chip by electrical and mineral activity -- multiprocessing program included? It is scientific fact that the chip obviously had no designer, no plan or maker! Oh, that is explained by billions of years of processes by pressures of nature..."

You would think that person was not just confused. Yet the evolutionist essentially says that's how the optic center in the brain came to exist, and that is a more interdependent processor than the most advanced computer chip ever made. So they say that the optic nerves, the eye and the retina all happened by a series of natural events that we can call an ad hoc mistake--"formed in one particular moment without ability to consider any application."

Is the existence of a gnat or mosquito, with the exquisite processing that enables flight, likewise formed without logic, design or awareness? Evolutionists may even propose that it was not by random chaos. How would evolution not be "random chaos?" That's a fair question to ask.

Evolutionists argue that [non-directed] features that improve a creature’s chance for survival get passed on in succeeding generations. Is it just as logical that all mutations would be passed on? Whether or not they are beneficial, there is no awareness of the process by the genetic system. So how would the unrelated parts of an eye and a brain come to correlate through independent mutations--all "failures" of the previously existing genetic process? Unobservant processes that did not realize their own existence continued, added, and subtracted by failures (mutations) of the genetic system, perhaps surviving to breed and thereby passing on defects, or dying before getting a chance to breed.


How the Body Works: Overview of the Nervous and Endocrine Systems

http://drbenkim.com/nervous-endocrine-system.htm

The interdependent relationship between your nervous and endocrine systems begins in a tiny area of tissue in your brain called your hypothalamus.

Your hypothalamus is only about as large as a grape, and can be viewed as the micro-processing chip that controls almost all of your body's external and internal activities. Your hypothalamus receives information from all of the major areas of your brain, your major organs, and your eyes, and it registers sensations like pain, temperature, hunger, thirst, lust, stress, fear, and anger.

Once your hypothalamus registers incoming information and decides what your body needs to best survive and be healthy, it uses your autonomic nervous system to affect the behavior of all of your major organs. Examples of such effects are increased heart and lung rates, increased blood flow to your skeletal muscles or digestive organs, changes in how much light enters your eyes and how well your eyes can focus on distant objects, production of sweat or shivers, and arousal of sexual organs.


Dissecting Darwinism  7
biochemists have shown that even a simple light-sensitive spot requires a complex array of enzyme systems. When light strikes the retina, a photon interacts with a molecule called 11-cis-retinal, which rearranges within picoseconds to trans-retinal. The change in the shape of the retinal molecule forces a change in the shape of the protein rhodopsin. The protein then changes to metarhodopsin II and sticks to another protein, called transducin. This process requires energy in the form of GTP, which binds to transducin. GTP-transducin-metarhodopsin II then binds to a protein called phosphodiesterase, located on the cell wall. This affects the cGMP levels within the cell, leading to a signal that then goes to the brain. The recognition of this signal in the brain and subsequent interpretation involve numerous other proteins and enzymes and biochemical reactions within the brain cells. Thus, each of these enzymes and proteins must exist for the system to work properly. Many other mathematical and logistical weaknesses to the Nilsson example of eye evolution have been uncovered. In summary, the eye is incredibly complex. Since it is unreasonable to expect self-formation of the enzymes in perfect proportion simultaneously, eye function represents a system that could not have arisen by gradual mutations.
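The all-or-nothing dependency this passage asserts can be expressed as a toy model. The step names follow the quoted cascade; the model itself is a deliberate caricature built for illustration, not a biochemical simulation:

```python
# Toy model of the phototransduction cascade quoted above: the signal
# reaches the brain only if every component in the chain is present.
# This illustrates the all-or-nothing claim; it is not biochemistry.

CASCADE = [
    "11-cis-retinal",     # absorbs the photon, isomerizes to trans-retinal
    "rhodopsin",          # shape change yields metarhodopsin II
    "GTP",                # energy source that binds transducin
    "transducin",         # G protein coupling rhodopsin to the effector
    "phosphodiesterase",  # alters cGMP levels, producing the signal
]

def signal_reaches_brain(available):
    """Return True only if every step of the cascade can run."""
    return all(step in available for step in CASCADE)

complete = set(CASCADE)
print(signal_reaches_brain(complete))                   # True
print(signal_reaches_brain(complete - {"transducin"}))  # False
```

In this caricature, removing any single component anywhere in the chain abolishes the output, which is exactly the property the passage attributes to the real cascade.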

http://genesisfile.com/?page_id=146

Complexity of the Eye and Brain

Without the foresight of a plan, we would expect that the random evolutionary changes would attempt all kinds of useless combinations of parts while trying to provide for a successful evolutionary advancement. Yet as we look at living organisms over the world, we do not seem to see any of these random combinations. In nature, it appears that we are dealing largely, if not exclusively, with purposeful parts. Furthermore, if evolution is a real ongoing process, why don’t we find new developing complex organs in organisms that lack them? We would expect to find developing legs, eyes, livers, and new unknown kinds of organs, providing for evolutionary advancement in organisms that lacked desirable advantages. This absence is a serious indictment against any proposed undirected evolutionary process, and favors the concept that what we see represents the work of an intelligent Creator.

The simple example of a muscle, mentioned above, pales into insignificance when we consider more complicated organs such as the eye or the brain. These contain many interdependent systems composed of parts that would be useless without the presence of all the other necessary parts. In these systems, nothing works until all the necessary components are present and working. The eye has an automatic focusing system that adjusts the lens so as to permit us to clearly see close and distant objects. We do not fully understand how it works, but a part of the brain analyzes data from the eye and controls the muscles in the eye that change the shape of the lens. The system that controls the size of the pupil so as to adjust to light intensity and to reduce spherical lens aberration also illustrates interdependent parts. Then there are the 100,000,000 light-sensitive cells in the human eye that send information to the brain through some 1,000,000 nerve fibers of the optic nerve. In the brain this information is sorted into various components such as color, movement, form and depth. It is then analyzed and combined into an intelligible picture. This involves an extremely complex array of interdependent parts.

But the visual process is only part of our complex brains, which contain some 100,000,000,000 nerve cells connected by some 400,000 kilometers of nerve fibres. It is estimated that there are around 100,000,000,000,000 connections between nerve cells in the human brain. That we can think straight (we hope most of us do!) is a witness to a marvelous ordered complex of interdependent parts that challenges suggestions of an origin by random evolutionary changes. How could such complicated organs develop by an unplanned process?
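Taking the round numbers in the two paragraphs above at face value, the implied averages are simple to work out. The counts are the text's own figures; only the ratios are computed:

```python
# The counts below are the text's own round figures, not measurements.
photoreceptors = 100_000_000        # light-sensitive cells in one human eye
optic_nerve_fibers = 1_000_000      # fibers carrying their output to the brain

neurons = 100_000_000_000           # nerve cells in the brain
connections = 100_000_000_000_000   # connections between those nerve cells

# Average convergence in the retina: photoreceptors per optic-nerve fiber.
convergence = photoreceptors / optic_nerve_fibers       # 100.0

# Average number of connections per neuron in the brain.
connections_per_neuron = connections / neurons          # 1000.0

print(f"~{convergence:.0f} photoreceptors per optic-nerve fiber")
print(f"~{connections_per_neuron:.0f} connections per neuron")
```

The roughly 100:1 convergence is consistent with the point made above that the retina sorts and compresses visual information before it ever reaches the brain.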





http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3246854/


Complexity of the Eye and Brain  6


Without the foresight of a plan, we would expect that the random evolutionary changes would attempt all kinds of useless combinations of parts while trying to provide for a successful evolutionary advancement. Yet as we look at living organisms over the world, we do not seem to see any of these random combinations. In nature, it appears that we are dealing largely, if not exclusively, with purposeful parts. Furthermore, if evolution is a real ongoing process, why don’t we find new developing complex organs in organisms that lack them? We would expect to find developing legs, eyes, livers, and new unknown kinds of organs, providing for evolutionary advancement in organisms that lacked desirable advantages. This absence is a serious indictment against any proposed undirected evolutionary process, and favors the concept that what we see represents the work of an intelligent Creator.

The simple example of a muscle, mentioned above, pales into insignificance when we consider more complicated organs such as the eye or the brain. These contain many interdependent systems composed of parts that would be useless without the presence of all the other necessary parts. In these systems, nothing works until all the necessary components are present and working. The eye has an automatic focusing system that adjusts the lens so as to permit us to clearly see close and distant objects. We do not fully understand how it works, but a part of the brain analyzes data from the eye and controls the muscles in the eye that change the shape of the lens. The system that controls the size of the pupil so as to adjust to light intensity and to reduce spherical lens aberration also illustrates interdependent parts. Then there are the 100,000,000 light-sensitive cells in the human eye that send information to the brain through some 1,000,000 nerve fibers of the optic nerve. In the brain this information is sorted into various components such as color, movement, form and depth. It is then analyzed and combined into an intelligible picture. This involves an extremely complex array of interdependent parts.

But the visual process is only part of our complex brains, which contain some 100,000,000,000 nerve cells connected by some 400,000 kilometers of nerve fibres. It is estimated that there are around 100,000,000,000,000 connections between nerve cells in the human brain. That we can think straight (we hope most of us do!) is a witness to a marvelous ordered complex of interdependent parts that challenges suggestions of an origin by random evolutionary changes. How could such complicated organs develop by an unplanned process?
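The scale of the figures quoted above can be checked with simple arithmetic. The sketch below is illustrative only, using the numbers cited in the text (100 billion nerve cells, 100 trillion connections, 400,000 km of fibre), not independent measurements:

```python
# Back-of-envelope check on the brain figures quoted above (illustrative only).
neurons = 100_000_000_000           # ~10^11 nerve cells
connections = 100_000_000_000_000   # ~10^14 connections between nerve cells
fibre_km = 400_000                  # ~400,000 km of nerve fibres

connections_per_neuron = connections / neurons
fibre_mm_per_neuron = fibre_km * 1_000_000 / neurons  # convert km to mm

print(connections_per_neuron)  # 1000.0 connections per cell on average
print(fibre_mm_per_neuron)     # 4.0 mm of fibre per cell on average
```

So the quoted figures imply, on average, about a thousand connections and four millimetres of fibre per nerve cell.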

Our senses do not work in isolation. An eye is useless without a brain that can interpret its input correctly. Furthermore, the brain cannot make sense of the eye without other sensory input. Research shows our eyes must perform saccades to continually refresh the image, and the brain must also track where the head is and how it is moving. Other muscles also control the shape and length of the lens to allow proper focusing. All this requires feedback from our proprioception—our knowledge of where our body is and what it is doing in space—and from the vestibular system, which controls our sense of balance. Disrupting the vestibular system by squirting fluid in the ear or by spinning the person will cause nystagmus—rapid involuntary back-and-forth movement of the eyes—which ruins visual efficiency; the person cannot focus to read, judge distance, and so on until the nystagmus stops. This inner chaos affects people's ability to focus on what others are saying as well. This intense interdependence between balance, self-movement, and vision is why artificial eyes are a long way from becoming a viable option. A fully functional artificial eye would need to be correctly attached to and controlled by the eye muscles and the optic nerve itself—a very complex, virtually impossible feat of surgery and healing in a very sensitive part of the head. This is also why whole-eye transplants cannot be done, only corneal transplants and other forms of eye surgery. 1


http://harunyahya.com/en/Articles/13989/the-invalidity-of-the-claims

In order for “sight” to be able to arise, even in its simplest form, it is essential that some of a living thing’s cells become light-sensitive, that these possess the ability to transfer that sensitivity to electrical signals, that a special nerve network from these cells to the brain form and that a “visual cortex” capable of analyzing this information appear in the brain.

-          A light-sensitive cell is not the first or a primitive eye. The idea that a complex eye gradually evolved from this cell is a deception.  The eye of the trilobite, which lived 530 million years ago in the Cambrian Period when all the characteristics of living things and complex life forms appeared, is IDENTICAL to the perfect faceted eye of the present-day fly and dragonfly.  ALL THAT LIVED BEFORE THAT TIME WERE BACTERIA. There is no question of any light-sensitive cell or any transition from it.

The human eye is far too complex for all of its components to have evolved separately. The 40 separate parts that make up the eye must all exist together in order for the eye to see.

-          The retina is described as the most complex tissue in the body. Millions of cells bind together on the retina to constitute a miniature brain. It is impossible for even the retinal layer in the eye alone to have come into being spontaneously and by chance.

-          The cornea and the retina constantly move in tiny circles just about a thousandth of a millimeter in diameter. If those movements alone were to stop, the light-sensitive cells in the retina would immediately freeze and stop sending information to the brain. That would lead to the image being perceived disappearing within seconds.

-          Just the absence of ocular fluid is enough for the eye to stop working.

-          The reason why images are of such a quality is that the movements and colors in the images are constantly refreshed, right down to the finest detail, and “a slice of motion” takes place at an unbelievable speed, without our ever being aware of it.

-          The efficiency and flawlessness of our eyes and brain are incomparably greater than that of any device or equipment invented to date.

-          The 40 different parts that make up the eye act together to collect 1.5 million electrical signals in one millisecond, deliver them to their destination, and interpret them. Dozens of supercomputers would have to be flawlessly programmed and work together, never making a mistake, in order to perform the same function.


   Even with only his 19th-century level of knowledge and technology, Darwin was troubled, and made the following admissions:
   
   To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, ABSURD IN THE HIGHEST DEGREE. ii

   The eye to this day gives me a cold shudder, but when I think of the fine known gradations, my reason tells me I ought to conquer the cold shudder. iii

   To recur to the eye. I really think it would have been dishonest, not to have faced the difficulty. iv  

   If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. v


Richard Dawkins, similarly alarmed in the face of such an extraordinary and irreducibly complex organ as the eye, confesses:

   But it must be gradual when it is being used to explain the coming into existence of complicated, APPARENTLY DESIGNED OBJECTS, like eyes. For if it is not gradual in these cases, IT CEASES TO HAVE ANY EXPLANATORY POWER AT ALL. Without gradualness in these cases, we are back to miracle, which is simply a synonym for the total absence of explanation.iv

http://www.allaboutthejourney.org/human-eye.htm

The human eye is enormously complicated - a perfect and interrelated system of about 40 individual subsystems, including the retina, pupil, iris, cornea, lens and optic nerve. For instance, the retina has approximately 137 million special cells that respond to light and send messages to the brain. About 130 million of these cells look like rods and handle black-and-white vision. The other seven million are cone-shaped and allow us to see in color. The retina cells receive light impressions, which are translated into electric pulses and sent to the brain via the optic nerve. A special section of the brain called the visual cortex interprets the pulses as color, contrast, depth, etc., which allows us to see "pictures" of our world. Incredibly, the eye, optic nerve and visual cortex are totally separate and distinct subsystems. Yet, together, they capture, deliver and interpret up to 1.5 million pulse messages a millisecond! It would take dozens of Cray supercomputers programmed perfectly and operating together flawlessly to even get close to performing this task.1
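Taken at face value, the figures in that passage imply an enormous data rate. A small sketch, assuming the quoted numbers (which are the source's figures, not measured values):

```python
# Implied signalling rate from the figures quoted above (illustrative only).
rods = 130_000_000
cones = 7_000_000
photoreceptors = rods + cones   # ~137 million light-sensitive cells in total

pulses_per_ms = 1_500_000       # "1.5 million pulse messages a millisecond"
pulses_per_second = pulses_per_ms * 1000

print(photoreceptors)     # 137000000
print(pulses_per_second)  # 1500000000, i.e. 1.5 billion pulses per second
```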

That's so powerful to me! Obviously, if all the separate subsystems aren't present and performing perfectly at the same instant, the eye won't work and has no purpose. Logically, it would be impossible for random processes, operating through gradual mechanisms of natural selection and genetic mutation, to create 40 separate subsystems when they provide no advantage to the whole until the very last stage of development and interrelation.

http://www.allaboutthejourney.org/human-eye.htm

   How did the lens, retina, optic nerve, and all the other parts in vertebrates that play a role in seeing suddenly come about? Natural selection cannot choose separately between the visual nerve and the retina. The emergence of the lens has no meaning in the absence of a retina. The simultaneous development of all the structures for sight is unavoidable. Since parts that develop separately cannot be used, they will be meaningless, and perhaps disappear with time. At the same time, their development all together requires the coming together of unimaginably small probabilities. 2

The foregoing represents the core of "irreducible complexity." Complex organs made up of separate but necessary subsystems cannot be the result of random chance. Or, using the above language, such development could only result from "unimaginably small probabilities." For me, this means "statistical impossibility."

Come to think of it, I remember Darwin specifically discussing the incredible complexity of the eye in Origin of Species:

   To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree. 3

So, how did Darwin deal with the staggering realities of the eye in the 1850's? As "absurdly" improbable as it was, he followed through with his theory and pointed to the simpler eye structures found in simpler creatures. He reasoned that more complex eyes gradually evolved from the simpler ones.

However, this hypothesis no longer passes muster. Setting aside the microbiological and genetic-information issues, paleontology now shows that "simple creatures" emerged in the world with complex structures already intact. Even the simple trilobite has an eye (complete with its double lens system) that's considered an optical miracle by today's standards.

Wait. The trilobite reminds me of something… Before I continue with the marvel of irreducible complexity and design, I have one more thought about Darwin and his original claims…


1. http://www.csa.com/discoveryguides/sensory/review2.php
2. http://www.wikihow.com/Argue-Against-Evolution-of-Eyesight
3. http://darwins-god.blogspot.com.br/search?q=eye
4. http://adarwinstudygroup.org/biology-culture-psychology/design-or-natural-selection/#img-01-1803
5. http://www.asa3.org/evolution/irred_compl.html
6. http://genesisfile.com/?page_id=146
7. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3246854/
8. http://learn.genetics.utah.edu/content/selection/eye/



Last edited by Admin on Tue Oct 17, 2017 8:54 am; edited 2 times in total

Admin
Darwinists Solve Eye Evolution (Again)

http://darwins-god.blogspot.com.br/2012/11/evolutionists-solve-eye-evolution-again.html

Recently we discussed a paper from 2008 in which evolutionists claimed to have solved the long-standing question of how the eye evolved. It is a problem that famously once made Darwin shudder, but the evolutionists claimed that now, with our advanced scientific knowledge, "the gap in understanding of the molecular evolution of eye components is all but closed." That was quite a claim and, not surprisingly, there was no such breakthrough. In fact, the "explanation" that the evolutionists provided was simply that the key cellular signal transduction pathway in our eyes came from a very similar pathway in yeast that senses certain types of signaling chemicals known as pheromones. The evolutionists had no explanation for how the yeast pathway arose in the first place or exactly how it could morph into the animal vision system. It was yet another example of evolution's trivial, non-scientific solutions that do nothing but generate vacuous headlines. Well, evolutionists have done it again. This month they "solved" the problem of eye evolution yet again. Apparently that 2008 solution didn't take, but this new solution is no better.

You may have seen the Wall Street Journal article from earlier this month proclaiming the new discovery of how vision evolved. The headline read, “A Relief to Darwin: The Eyes Have It.” Or perhaps you saw the press release trumpeting the new “Breakthrough study” that “Pinpoints Evolutionary Origins of Sight.”

The paper itself was no less triumphant. It claimed to reveal “a simple route to animal vision.” But in fact the evolutionists discovered no such thing. It was all yet another abuse of science.

As with the earlier 2008 paper, the new study appeals to yet another signal transduction pathway as the progenitor of later vision systems. This time instead of yeast, the paper appeals to a pathway in placozoa. As with the yeast system, the placozoa system probably detects signaling chemicals.

As usual, the evolutionists “solve” the problem simply by pushing it back in time. The evolution narrative continues to push complexity to earlier stages where it somehow and fortuitously appears. As the evolutionists conclude: “Our results are compatible with the view that the last common neuralian ancestor might have been more complex than generally assumed.”

So the new study makes evolution even more heroic and implausible. What it does not show is precisely what the paper and the articles claim that it shows: how vision evolved.

Evolution is more ludicrous with each passing week. It is a religiously-motivated movement that force-fits scientific findings to its truth. Its unending trail of vacuous discoveries is nothing more than a reflection of the underlying religion. As John Ioannidis has put it, “claimed research findings may often be simply accurate measures of the prevailing bias.” That is a good description of evolutionary science.

Religion drives science and it matters.

Metazoan opsin evolution reveals a simple route to animal vision 1

Abstract
All known visual pigments in Neuralia (Cnidaria, Ctenophora, and Bilateria) are composed of an opsin (a seven-transmembrane G protein-coupled receptor), and a light-sensitive chromophore, generally retinal. Accordingly, opsins play a key role in vision. There is no agreement on the relationships of the neuralian opsin subfamilies, and clarifying their phylogeny is key to elucidating the origin of this protein family and of vision. We used improved methods and data to resolve the opsin phylogeny and explain the evolution of animal vision. We found that the Placozoa have opsins, and that the opsins share a common ancestor with the melatonin receptors. Further to this, we found that all known neuralian opsins can be classified into the same three subfamilies into which the bilaterian opsins are classified: the ciliary (C), rhabdomeric (R), and go-coupled plus retinochrome, retinal G protein-coupled receptor (Go/RGR) opsins. Our results entail a simple scenario of opsin evolution. The first opsin originated from the duplication of the common ancestor of the melatonin and opsin genes in a eumetazoan (Placozoa plus Neuralia) ancestor, and an inference of its amino acid sequence suggests that this protein might not have been light-sensitive. Two more gene duplications in the ancestral neuralian lineage resulted in the origin of the R, C, and Go/RGR opsins. Accordingly, the first animal with at least a C, an R, and a Go/RGR opsin was a neuralian progenitor.

http://redwood.berkeley.edu/bruno/animal-eyes/Kaas_revised_2013.pdf

Charles Darwin : On the Origin of Species:

To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree. When it was first said that the sun stood still and the world turned round, the common sense of mankind declared the doctrine false; but the old saying of Vox populi, vox Dei, as every philosopher knows, cannot be trusted in science. Reason tells me, that if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory. How a nerve comes to be sensitive to light, hardly concerns us more than how life itself originated; but I may remark that, as some of the lowest organisms, in which nerves cannot be detected, are capable of perceiving light, it does not seem impossible that certain sensitive elements in their sarcode should become aggregated and developed into nerves, endowed with this special sensibility.


How could such a complicated piece of optical machinery arise through a process that has no foresight or intentionality?

When asked, atheists generally point to Nilsson and Pelger's paper from 1994, which makes the following assertion:

http://www.rpgroup.caltech.edu/courses/aph161/Handouts/Nilsson1994.pdf

The evolution of complex structures, however, involves modifications of a large number of separate quantitative characters, and in addition there may be discrete innovations and an unknown number of hidden but necessary phenotypic changes. These complications seem effectively to prevent evolution rate estimates for entire organs and other complex structures. An eye is unique in this respect because the structures necessary for image formation, although there may be several, are all typically quantitative in their nature, and can be treated as local modifications of pre-existing tissues. Taking a patch of pigmented light-sensitive epithelium as the starting point, we avoid the more inaccessible problem of photoreceptor cell evolution. Thus, if the objective is limited to finding the number of generations required for the evolution of an eye’s optical geometry, then the problem becomes solvable.

Usually, proponents of the evolution of the eye come up with this explanation:


Steps in the evolution of the eye as reflected in the range of eye complexity in living mollusk species (left to right): a pigment spot, as in the limpet Patella; a pigment cup, as in the slit shell mollusk Pleurotomaria; the "pinhole-lens" eye of Nautilus; a primitive lensed eye, as in the marine snail Murex; and the complex eye—with iris, crystalline lens, and retina—of octopuses and squids.

http://skeptoid.com/blog/2013/12/24/is-the-human-eye-irreducibly-complex/

Here’s an abbreviated version of the leading model:

A mutation resulted in a single photoreceptor cell, which allowed the organism to respond to light, and helped to calibrate circadian rhythms by detecting daylight.

Over successive generations, possessing multiple photoreceptors became the norm in the gene pool, because individuals with mutations encoding for an increased number of photoreceptors were better able to react to their surroundings. An arms race began, fueling the evolution of the new sensory organ.

Eventually, what was once just a single photoreceptor cell became a light-sensitive patch. At this point, the creature was still only able to distinguish light from dark.

A slight depression in the patch created a pit, for the first time allowing a limited ability to sense the direction from which light or shadow was coming.

The pit’s opening gradually narrowed to create an aperture — like that of a pinhole camera — making vision sharper.

The vitreous humour formed. A colourless, gelatinous mass filling the chamber of the eye, it helped to maintain the shape of the eye and keep the light-sensitive retina in place.

A transparent tissue formed at the front, with a convex curvature for refracting light. The addition of this simple lens drastically improved image fidelity.

A transparent layer evolved in front of the lens. This transparent layer, the cornea, further focused light, and also allowed for more blood vessels, better circulation, and larger eyes.

Behind the cornea, a circular ring formed, the iris, with a hole in its centre, the pupil. By constricting, the iris was able to control the amount of light that reached the retina through the pupil.

Separation of these two layers allowed another gelatinous mass to form, the aqueous humor, which further increased refractive power.
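The claim in the list above that narrowing the pit's opening made vision sharper has a well-known optical limit: shrinking a pinhole reduces geometric blur, but past a point diffraction blurs the image again. A standard optics sketch of the trade-off; the 10 mm pinhole-to-retina distance is a hypothetical round number, not a measurement of any real pinhole eye:

```python
import math

# Optimal pinhole diameter: geometric blur grows with diameter d, while
# diffraction blur grows as ~2.44 * wavelength * focal_length / d.
# Setting the two equal gives d = sqrt(2.44 * wavelength * focal_length).
wavelength = 550e-9   # green light, in metres
focal_length = 0.01   # 10 mm pinhole-to-retina distance (hypothetical)

d_optimal = math.sqrt(2.44 * wavelength * focal_length)
print(round(d_optimal * 1000, 3))  # optimal diameter in mm, roughly 0.12 mm
```

Below this diameter, further narrowing makes the image worse, not better, which is one reason a pinhole eye is a plausible way-station rather than an endpoint in the quoted model.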

http://www.reviewevolution.com/press/pressRelease_EyeEvolution.php

Since Nilsson and Pelger's article was published, it has been widely--but erroneously--reported that their conclusions were based on a computer model. Berlinski calls this claim "an urban myth." At a minimum, PBS should make clear to viewers that Nilsson's conclusions are not based on computer models at all, and it should acknowledge that his work is highly speculative.

http://creationwiki.org/The_eye_is_too_complex_to_have_evolved_%28Talk.Origins%29

http://www.grisda.org/origins/21039.htm

"What is one to make of all this? First, comparing the evolution of the eye to shape changes on a computer screen seems rather far-fetched. The entire project seems closer to an exercise in geometry than in biology. Second, the exercise assumes a functional starting point. Thus it has nothing to do with the origin of the biochemical systems of vision or the requisite neural network. Third, Nilsson and Pelger's computer exercise operates as if each 1% change in morphology can be accounted for by a single gene mutation. They do not consider the effects of pleiotropy, genetic background, or developmental processes. Fourth, an important part of the model relies on the special circumstance of a layer of clear cells covering the "retina." This layer somehow assumes the proper shape of a lens. Fifth, as noted by the authors, several features of the eye remain unaccounted for, such as the iris. Basically, the only result achieved was to show that two light-sensitive surfaces that differ in shape by 1% will have different efficiencies in photoreception, and that an uninterrupted series of 1% improvements is possible. The failure of scientists to produce new structures in selection experiments illustrates the implausibility of Nilsson and Pelger's "just so" story."
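For reference, the arithmetic behind the "1% steps" criticized above is easy to reproduce. Nilsson and Pelger's widely quoted figure is that the whole sequence from flat patch to camera-type eye amounts to an overall change of 80,129,540 units of 1% modification; the step count then follows from logarithms alone (the fold-change figure is the paper's, the rest is arithmetic):

```python
import math

# Number of 1% modification steps implied by Nilsson and Pelger's
# overall change figure: 1.01**n = total_change, so n = ln(total)/ln(1.01).
total_change = 80_129_540   # overall change reported for the model
steps = math.log(total_change) / math.log(1.01)

print(round(steps))  # 1829 one-percent steps
```

Note that this calculation concerns only the optical geometry, which is exactly the limitation the quoted critique presses: it assumes a functional, light-sensitive starting point and says nothing about the biochemistry of vision.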


http://www.detectingdesign.com/humaneye.html

If a change in selective pressures favored a dimpled eyespot with a slight increase in visual acuity, pretty soon the majority of the population would have dimpled eyespots.  The problem with this notion is that no population of creatures with flat eyespots shows any sort of intra-population range like this, where even a small portion of the population has dimpled eyespots to any selectable degree.  This is a common assertion, but it just isn't true.


The eye/brain is, however, not only an interdependent system:

http://reasonandscience.heavenforum.org/t1638-eye-brain-is-a-interdependent-and-irreducible-complex-system

but the eye is also irreducibly complex in many ways, which is best illustrated by the signal transduction pathway in photoreceptor cells:

http://reasonandscience.heavenforum.org/t1696-photoreceptor-cells



The absorption of light leads to an isomeric change in the retinal molecule.

The signal transduction pathway is the mechanism by which the energy of a photon signals a mechanism in the cell that leads to its electrical polarization. This polarization ultimately leads to either the transmittance or inhibition of a neural signal that will be fed to the brain via the optic nerve. The steps, or signal transduction pathway, in the vertebrate eye's rod and cone photoreceptors are then:

  1.The rhodopsin or iodopsin in the disc membrane of the outer segment absorbs a photon, changing the configuration of a retinal Schiff base cofactor inside the protein from the cis-form to the trans-form, causing the retinal to change shape.

  2.This results in a series of unstable intermediates, the last of which binds more strongly to the G protein in the membrane and activates transducin, a protein inside the cell. This is the first amplification step – each photoactivated rhodopsin triggers activation of about 100 transducins. (The shape change in the opsin activates a G protein called transducin.)

  3.Each transducin then activates the enzyme cGMP-specific phosphodiesterase (PDE).

  4.PDE then catalyzes the hydrolysis of cGMP to 5' GMP. This is the second amplification step, where a single PDE hydrolyses about 1000 cGMP molecules.

  5.The net concentration of intracellular cGMP is reduced (due to its conversion to 5' GMP via PDE), resulting in the closure of cyclic nucleotide-gated Na+ ion channels located in the photoreceptor outer segment membrane.

  6.As a result, sodium ions can no longer enter the cell, and the photoreceptor outer segment membrane becomes hyperpolarized, due to the charge inside the membrane becoming more negative.

  7.This change in the cell's membrane potential causes voltage-gated calcium channels to close. This leads to a decrease in the influx of calcium ions into the cell and thus the intracellular calcium ion concentration falls.

  8.A decrease in the intracellular calcium concentration means that less glutamate is released via calcium-induced exocytosis to the bipolar cell (see below). (The decreased calcium level slows the release of the neurotransmitter glutamate, which can either excite or inhibit the postsynaptic bipolar cells.)

  9.Reduction in the release of glutamate means one population of bipolar cells will be depolarized and a separate population of bipolar cells will be hyperpolarized, depending on the nature of receptors (ionotropic or metabotropic) in the postsynaptic terminal (see receptive field).
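The two amplification stages in the cascade above multiply: with the per-stage gains quoted (about 100 transducins per photoactivated rhodopsin, each activating a PDE, and about 1000 cGMP hydrolysed per PDE), a single photon ends up affecting on the order of 100,000 cGMP molecules. A minimal sketch of that arithmetic, using the approximate figures from the list rather than exact measurements:

```python
# Overall gain of the phototransduction cascade, from the approximate
# per-stage gains quoted in the steps above.
transducins_per_rhodopsin = 100   # first amplification step
cgmp_per_pde = 1000               # second amplification step (one PDE per transducin)

cgmp_per_photon = transducins_per_rhodopsin * cgmp_per_pde
print(cgmp_per_photon)  # 100000 cGMP molecules hydrolysed per photon
```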

http://www.detectingdesign.com/humaneye.html

The question now of course is, how could such a system evolve gradually?  All the pieces must be in place simultaneously.  For example, what good would it be for an earthworm that has no eyes to suddenly evolve the protein 11-cis-retinal in a small group or "spot" of cells on its head?  These cells now have the ability to detect photons, but so what?  What benefit is that to the earthworm?  Now, let's say that somehow these cells develop all the needed proteins to activate an electrical charge across their membranes in response to a photon of light striking them.  So what?!  What good is it for them to be able to establish an electrical gradient across their membranes if there is no nervous pathway to the worm's minute brain?   Now, what if this pathway did happen to suddenly evolve and such a signal could be sent to the worm's brain.  So what?!  How is the worm going to know what to do with this signal?  It will have to learn what this signal means.  Learning and interpretation are very complicated processes involving a great many other proteins in other unique systems.  Now the earthworm, in one lifetime, must evolve the ability to pass on this ability to interpret vision to its offspring.  If it does not pass on this ability, the offspring must learn as well or vision offers no advantage to them.  All of these wonderful processes need regulation.  No function is beneficial unless it can be regulated (turned off and on).  If the light sensitive cells cannot be turned off once they are turned on, vision does not occur.  This regulatory ability is also very complicated involving a great many proteins and other molecules - all of which must be in place initially for vision to be beneficial.

http://darwins-god.blogspot.com.br/search?q=eye

When in doubt, doubt the data. Paleontologists agree that the fossil record reveals abrupt appearances, but when convenient evolutionists can always protect their theory with those gaps in the fossil record.

Evolutionary thinking is remarkable. I am reminded of John Earman’s remarks about Hume’s arguments. For it is astonishing how well evolution is treated, given how completely the confection collapses under a little probing. So if Benjamin Radford really is interested in why people believe things for which there is little or no evidence, we have just the topic for him.

Evolution of complexity in signaling pathways 2

Conclusion
The intuitive view for the emergence of complexity is that it is due to increasing fitness, a view that has found support from studies on digital organisms. However, this study suggests that an imbalance in the effects of size-decreasing and -increasing mutations on function could lead to an increase in complexity, supporting a mechanistic or neutral explanation. Hence, as long as there is selection acting on a system, even neutral processes that do not cause any immediate fitness benefit would force the system toward higher complexity.

We find that the amount of the aforementioned imbalance is related to complexity itself. It is highest for the simplest possible pathways that could achieve the task for which there is selection operating and decreases with increasing complexity. Hence, as evolution progresses, pathways achieve a certain size or level of complexity that is always well above the minimum required for their function. Further, we find that although simulations with different selection criteria start with pathways of the same size, for pathways that are under selection for functions that are attainable by many different topologies (i.e., that are simpler to achieve), the final level of complexity achieved tends to be higher.

These findings have implications for our understanding of the evolution of new features. For example, in simulating the evolution of a random population of three-protein pathways under selection for responding to a given signal, we reach a structurally diverse population with an average pathway size of ≈14 proteins. Given the exponential relation between pathway size and available pathway topologies, it is plausible to assume that such growth would facilitate the emergence of pathways with various response dynamics that would not have been possible with three proteins only. If such pathways achieve new functions or high-fitness solutions under the current selection criterion, they would be strongly selected. Hence, even though complexity can emerge neutrally, once it results in evolutionary favorable pathways, it may be maintained subsequently by selection.

Finally, we note that the presented results have an interesting connection to robustness. In general, robustness in biological pathways is associated with the ability of such pathways to withstand knockout mutations or disturbances in interaction parameters. Based on this description, many natural pathways have been found to have high robustness (19–24). This observation becomes obvious in light of the presented results. Our simulations suggest that in general, simplest-solution pathways are nonrobust toward deletions. Thus, populations predominantly consisting of such pathways drift toward larger, more complex solutions, and, in this sense, selection for function leads to emergence of robustness. A similar scenario is given for the evolution of robustness in general (25); through evolutionary mechanisms populations move toward wide plateaus on an imaginary fitness landscape, where they would be protected against small perturbations (i.e., they have high robustness). The idea of an imbalance between the functional effects of various mutations having different effects could be seen as the driving force behind such move, leading both to high robustness and high complexity. This statement overlaps with the theories that see complexity and robustness tying closely together (26).

The presented discussion approximates the behavior of biological pathways with a simple mathematical model and makes several assumptions regarding the nature and rate of mutations affecting their structure. Although the true nature of these pathways clearly is more complicated, we believe that the qualitative conclusions of this theoretical study would remain under biologically relevant parameter values (see Methods). These conclusions could also be extendable to any system that is under selection and is subject to mutations affecting its structure. Under such conditions, the idea of complexity arising neutrally could be more general.
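The mechanism the authors describe—deletions break function more often than additions do, so selection filters deletions out and pathway size drifts upward with no fitness gain—can be caricatured in a few lines. This toy random walk is my own illustration of that imbalance, not the authors' model, and the mutation and rejection rates are arbitrary:

```python
import random

# Toy model of neutral complexity growth: size-increasing mutations are
# assumed neutral and always accepted, while most size-decreasing mutations
# destroy pathway function and are rejected by selection. The asymmetry
# alone pushes pathway size upward, with no fitness benefit from size.
random.seed(0)
size = 3                         # start from a minimal three-protein pathway
for _ in range(10_000):
    if random.random() < 0.5:
        size += 1                # addition: tolerated
    elif random.random() < 0.2:
        size = max(1, size - 1)  # only ~20% of deletions leave function intact

print(size)  # final size ends up far above the minimal three proteins
```

With these rates the expected drift is about +0.4 proteins per mutation attempt, so the pathway grows well past its functional minimum, mirroring the paper's point that selection for function alone can yield sizes "well above the minimum required".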

Chance or design?

http://creation.com/excellent-eye-better-than-any-camera-the-eyes-response-to-light

Far from being a poor design, the eye’s dynamic range exceeds that of the best man-made photodetectors. And this latest research shows the intricate microscopic machinery behind it—a motor, glue, ‘calmer’ and internal ‘train tracks’.

All these features would need to be present and coordinated; otherwise, the eye would be blinded by bright light.4 Thus natural selection could not build this system up step-by-step, since each step by itself has no advantage over the previous step, until all steps are complete.

The Irreducible Complexity of Sight

http://reasonandscience.heavenforum.org/t1638-eye-brain-is-a-interdependent-and-irreducible-complex-system#3058

and

The Inference to Design

Thesis by Wayne Talbot

Our sense of sight is best explained by intelligent design: the system is irreducibly complex and the prerequisites for processing visual information are beyond development by undirected biological processes.

Propositions:

1. The origin of knowledge and information in the brain cannot be explained by undirected organic processes.
2. Sight is the result of intelligent data processing.
3. Data input requires a common understanding between sender and receiver of the data coding and transmission protocols.
4. Data storage and retrieval requires an understanding of the data from an informational perspective.
5. Data processing requires meta-data: conceptual and contextual data to be pre-loaded prior to the input of transaction data. 
6. Light can be considered an encoded data stream which is decoded and re-encoded by the eye for transmission via the optic nerve to serve as input data.
7. All data transmissions are meaningless without the superimposition of meta-data.
8. None of the components of our visual system individually offer any advantage for natural selection.

The Concepts of Light and Sight

Imagine that some thousands of years ago, a mountain tribe suffered a disease or mutation such that all members became blind. Generation after generation were born blind and eventually even the legends of the elders of being able to see were lost. Everyone knew that they had these soft spots in their heads which hurt when poked, but nobody knew if they had some intended function. Over time, the very concepts of light and sight no longer existed within their tribal knowledge. As a doctor specialising in diseases and abnormalities of the eye, you realise that with particular treatment you can restore the sight of these people. Assuming that you are fluent in the local language, how would you describe what you can do for them? How would you convey the concept of sight to people to whom such an idea was beyond their understanding?
My point is that this is the very problem that primitive organisms would have faced if sight did in fact evolve organically in an undirected fashion. When the first light sensitive cell hypothetically evolved, the organism had no way of understanding that the sensation it experienced represented light signals conveying information about its environment: light and sight were concepts unknown to it.

The Training of Sight

Those familiar with the settlement of Australia by Europeans in the 19th century, and the even earlier settlements in the Americas and Africa would have heard of the uncanny ability of the indigenous population to track people and animals. It was not so much that their visual acuity was better, but that they had learned to understand what they were seeing. It was found that this tracking ability could be taught and learned. In military field craft, soldiers are taught to actively look for particular visual clues and features. In my school cadet days, we undertook night “lantern stalks” (creeping up on enemy headquarters) and later in life, the lessons learned regarding discrimination of objects in low light were put to good use in orienteering at night. All of this experience demonstrates that while many people simply “see” passively, it is possible to engage the intellect and actively “look”, thus seeing much more.

With the advent of the aeroplane came aerial photography and its application during wartime as a method of intelligence gathering. Photographic analysis was a difficult skill to acquire - many people could look at the same picture but offer differing opinions as to what they were seeing, or rather thought they were seeing. 
The underlying lesson is that sight is as much a function of intellect as it is receiving light signals through the eyes. Put another way, it is intelligent data processing.

Understanding Data vs Information

The digital computer was invented circa 1940 and during the technical revolution that followed, we have come to learn a great deal about information processing. I was fortunate to join the profession in the very early days of business computing and through training and experience, acquired an understanding of data coding methodologies, their application and interpretation. More importantly however, I came to understand a great deal about how data becomes information, and when it isn’t. In the early days of sequential and index-sequential files, the most common programming error occurred in attempting to match master, application (reference), and transaction files. With the advent of direct access files and disk resident databases, new skills were required in the fields of data analysis, data structuring, and data mining.

The computing experience teaches this: data only becomes cognitive information when intelligently processed against a pre-loaded referential framework of conceptual and contextual data. Using this computer analogy, master files represented conceptual information, application files provided context, and input data was provided by transaction files.
With apologies to Claude Shannon, Werner Gitt and other notables who have contributed so much to our understanding on this subject, I would contend that in the context of this discussion, none of these files contain information in the true sense: each contains data which only becomes usable information when intelligently correlated. I would further contend that no single transmission in any form can stand alone as information: absent of a preloaded conceptual and contextual framework in the recipient, it can only ever be a collection of meaningless symbols. This is easily demonstrated by simply setting down everything you have to know before you can read and understand these words written here, or what you would have to know before reading a medical journal in a foreign language in an unfamiliar script such as Hebrew or Chinese.
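The file-matching idea can be shown in miniature (every record below is invented purely for illustration): a transaction record is just symbols until it is correlated against preloaded master (conceptual) and reference (contextual) data.

```python
# Master file: conceptual data (which customers exist).
master = {"C001": "Acme Pty Ltd"}

# Application/reference file: contextual data (what parts are, and their price).
reference = {"P10": ("Widget", 4.50)}

# Transaction file: raw input data - customer id, part id, quantity.
transaction = ("C001", "P10", 3)

def interpret(txn):
    """Turn a raw transaction into information by correlating it
    against the preloaded master and reference data."""
    cust_id, part_id, qty = txn
    name = master[cust_id]
    item, price = reference[part_id]
    return f"{name} ordered {qty} x {item} at ${price:.2f}"

print(interpret(transaction))  # Acme Pty Ltd ordered 3 x Widget at $4.50
```

Without the two preloaded files, the tuple `("C001", "P10", 3)` carries no usable meaning at all, which is the point being made above.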

Can Coding Systems Evolve?

A computer processor operates via switches set to either “on” or “off”. A system with 8 switches (2^8) provides 256 options; 16 switches (2^16), 65,536; 32 switches (2^32), 4,294,967,296; and 64 switches (2^64), roughly 18 quintillion. This feature provides the terminology such as 32-bit and 64-bit computers: it refers to the number of separate addresses that can be accessed in memory. In the early days of expensive iron-core memory, 8- or 16-bit addressing was adequate, but with the development of the silicon chip and techniques for dissipating heat, larger memory became viable, requiring greater addressing capability and the development of 32- and then 64-bit computers. All very interesting, you may say, but why is that relevant? The relevance is found in most users’ experience: software versions that are neither forward nor backward compatible. The issue is that as coding systems change or evolve, the processing and storage systems cannot simply evolve in an undirected fashion: they must be intelligently converted. Let us look at some practical examples.
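The address-space arithmetic above can be checked in a few lines of Python:

```python
# Each additional address bit doubles the number of reachable addresses:
# n switches give 2**n distinct combinations.
for bits in (8, 16, 32, 64):
    print(f"{bits}-bit addressing: {2**bits:,} distinct addresses")

# 8-bit addressing: 256 distinct addresses
# 16-bit addressing: 65,536 distinct addresses
# 32-bit addressing: 4,294,967,296 distinct addresses
# 64-bit addressing: 18,446,744,073,709,551,616 distinct addresses
```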

Computer coding systems are multi-layered. The binary code (1’s and 0’s) of computers translates to a coded language such as Octal through to ASCII, or Hexadecimal through to EBCDIC, and then to a common human language such as English or French. Computer scientists long ago established these separate coding structures for reasons not relevant here. The point to note is that in general, you cannot go from Octal to EBCDIC or from Hexadecimal to ASCII: special intermediate conversion routines are needed. The problem is that once coding systems are established, particularly multi-layered systems, any sequence of symbols which does not conform to the pattern cannot be processed without an intelligent conversion process.
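The conversion problem can be made concrete: the same text has entirely different byte values under ASCII and EBCDIC, and moving between the two requires an explicit translation table; the bytes cannot simply reinterpret themselves. A short Python sketch, using the standard library's built-in codec for EBCDIC code page 037:

```python
text = "HELLO"

# The same five characters, encoded under two different coding systems.
ascii_bytes = text.encode("ascii")    # ASCII bytes
ebcdic_bytes = text.encode("cp037")   # EBCDIC (code page 037) bytes

print(ascii_bytes.hex().upper())      # 48454C4C4F
print(ebcdic_bytes.hex().upper())     # C8C5D3D3D6

# Handing EBCDIC bytes to an ASCII decoder fails outright: every byte
# is above 0x7F, so bytes.decode("ascii") raises UnicodeDecodeError.
# An intelligent conversion routine (a translation table) is required.
```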

Slightly off-topic, but consider the four chemicals in DNA which are referred to as A, C, G, and T. Very recently, scientists expressed surprise in finding that the DNA sequences code not just for proteins, but for the processing of these proteins. In other words, there is not just one but two or more “languages” written into our genome. What surprises me is that they were surprised: I would have thought the multi-language function to be obvious, but I will leave that for another time. If an evolving genome started with just two chemicals, say A and C, downstream processes could only recognise combinations of these two. If a third chemical G arose, there would be no system that could utilise it and more probably, its occurrence would interfere in a deleterious way. Quite simply, you cannot go from a 2 letter code to a 3 letter code without re-issuing the code book, a task quite beyond undirected biological evolution.
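A toy decoder illustrates the point (the two-letter alphabet, code words, and meanings below are invented purely for illustration): a system built around the letters A and C has no code-book entry for a new letter G, so any message containing one simply cannot be processed.

```python
# Code book for a hypothetical two-letter (A, C) alphabet, read in pairs.
CODE_BOOK = {"AA": "stop", "AC": "left", "CA": "right", "CC": "go"}

def decode(message):
    """Decode a message two symbols at a time against the code book."""
    out = []
    for i in range(0, len(message), 2):
        pair = message[i:i + 2]
        if pair not in CODE_BOOK:
            # Downstream processing halts: the code book was never re-issued.
            raise ValueError(f"unrecognised code {pair!r}: not in code book")
        out.append(CODE_BOOK[pair])
    return out

print(decode("ACCCAA"))   # ['left', 'go', 'stop']
# decode("AGCC") raises ValueError - 'AG' has no entry in the code book.
```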

The Code Book Enigma

I will use an example similar to DNA because it is much easier to illustrate the problem using a system comprising just four symbols, in this case W, X, Y, and Z. I am avoiding ACGT simply so that you are not distracted by your knowledge of that particular science. Our coding system uses these 4 letters in groups of 3. If I sent you the message “XYW ZWZ YYX WXY” you would have no idea of what it means: it could be a structured sequence or a random arrangement, the letters are just symbols which are individually meaningless until intelligently arranged in particular groups or sequences. To be useful, we would need to formalise the coding sequences in a code book: that way the sender can encode the message and someone with the same version of the code book can decode the message and communication is achieved. Note that if the sender and receiver are using different versions of the code book, communication is compromised.
This brings us to a vital concept: meta-data (or data about data). 
There is a foundational axiom that underpins all science and intellectual disciplines: nothing can explain itself. The explanation of any phenomenon will always lie outside itself, and this applies equally to any coding system: it cannot explain itself. You may recall the breakthrough achieved by archaeologists in deciphering Egyptian hieroglyphs when they found the Rosetta Stone.

In our example, the code book provides the meta-data about the data in the encoded message. Any language, particularly one limited to just four letters, requires a code book to both compose and decipher the meaning. Every time there is a new rearrangement of the letters, or new letters are added to (or deleted from) an existing string, the code book has to be updated. From a chronological sequence perspective, for a change to be useful, the code explanation or definition must precede, not follow, any new arrangement of letters in a message. Rearrangements that occur independently of the code book cannot be understood by downstream processes. Logically, the code book is the controlling mechanism, not the random rearrangements of coding sequences. First the pre-arranged code sequence, then its implementation. In other words, it is a top-down process, not the bottom-up process that evolutionists such as Richard Dawkins assert.
Now it matters not whether you apply this to the evolution of the genome or to the development of our senses, the same principle holds: ALL data must be preceded by meta-data to be comprehensible as information. In general, messages or other forms of communication can be considered as transactions which require a conceptual and contextual framework to provide meaning.
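The code-book problem can be sketched with the four-symbol (W, X, Y, Z) alphabet used earlier (the two books and their entries are invented for illustration): a message that is structurally valid under both versions still miscommunicates when sender and receiver hold different versions of the book.

```python
# Two versions of a code book for three-letter groups of W, X, Y, Z.
BOOK_V1 = {"XYW": "advance", "ZWZ": "halt", "YYX": "turn", "WXY": "report"}
BOOK_V2 = {"XYW": "halt", "ZWZ": "advance", "YYX": "report", "WXY": "turn"}

def decode(message, book):
    """Look each three-letter group up in the supplied code book."""
    return [book[group] for group in message.split()]

msg = "XYW ZWZ"                # composed by a sender holding BOOK_V1

print(decode(msg, BOOK_V1))    # ['advance', 'halt']  - intended meaning
print(decode(msg, BOOK_V2))    # ['halt', 'advance']  - miscommunication
```

Note that nothing in the message itself signals which book is correct: the meta-data lives entirely outside the data stream.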

Understanding Input Devices


We have five physical senses: sight, hearing, smell, taste, and touch, and each requires unique processes, whether physical and/or chemical. What may not be obvious from an evolution standpoint is that for the brain to process these inputs, it must first know about them (concept) and how to differentiate amongst them (context). Early computers used card readers as input devices, printers for output, and magnetic tape for storage (input & output). When new devices such as disk drives, bar code readers, plotters, etc were invented, new programs were needed to “teach” the central processor about these new senses. Even today, if you attach a new type of reader or printer to your computer, you will get the message “device not recognised” or similar if you do not preload the appropriate software. It is axiomatic that an unknown input device cannot autonomously teach the central processor about its presence, its function, the type of data it wishes to transmit, or the protocol to be used.
The same lessons apply to our five senses. If we were to hypothesise that touch was the first sense, how would a primitive brain come to understand that a new sensation from say light was not just a variation of touch?
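The “device not recognised” behaviour can be sketched as follows (the device names and handlers are invented for illustration): the processor can only dispatch input from devices whose handlers were preloaded beforehand.

```python
drivers = {}   # the processor's table of preloaded device handlers

def register_driver(device_type, handler):
    """Preload a handler ("driver") so the processor knows this device."""
    drivers[device_type] = handler

def process_input(device_type, payload):
    """Dispatch input; an unknown device cannot teach us about itself."""
    if device_type not in drivers:
        return f"device not recognised: {device_type!r}"
    return drivers[device_type](payload)

# Only the card reader's driver has been preloaded.
register_driver("card_reader", lambda data: f"read card: {data}")

print(process_input("card_reader", "PAYROLL-042"))   # read card: PAYROLL-042
print(process_input("bar_code_reader", "123456"))    # device not recognised
```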

Signal Processing

All communications can be studied from the perspective of signal processing, and without delving too deeply, we should consider just a few aspects of the protocols. All transmissions, be they electronic, visual, or audible, are encoded using a method appropriate to the medium. Paper transmissions are encoded in language using the symbol set appropriate for that language; sound makes use of wavelength, frequency and amplitude; and light makes use of waves and particles in a way that I cannot even begin to understand. No matter, it still seems to work. The issue is that for communication to occur, both the sender and receiver must have a common understanding of the communication protocol: the symbols and their arrangement, and both must have equipment capable of encoding and decoding the signals.
Now think of the eye. It receives light signals containing data about size, shape, colour, texture, brightness, contrast, distance, movement, etc. The eye must decode these signals and re-encode them using a protocol suitable for transmission to the brain via the optic nerve. Upon receipt, the brain must store and correlate the individual pieces of data to form a mental picture, but even then, how does it know what it is seeing? How did the brain learn of the concepts conveyed in the signal such as colour, shape, intensity, and texture? Evolutionists like to claim that some sight is better than no sight, but I would contend that this can only be true provided that the perceived image matches reality: what if objects approaching were perceived as receding? Ouch!

When the telephone was invented, the physical encode/decode mechanisms were simply the reverse of one another, allowing sound to be converted to electrical signals then reconverted back to sound. Sight has an entirely different problem because the encode mechanism in the eye has no counterpart in the brain: the conversion is to yet another format for storage and interpretation. These two encoding mechanisms must have developed independently, yet had to be coherent and comprehensible with no opportunity for prolonged systems testing. I am not a mathematician, but the odds against two coding systems developing independently yet coherently, in any timespan, surely argue against it ever happening.

Data Storage and Retrieval

Continuing our computer analogy, our brain is the central processing unit and, just as importantly, our data storage unit, reportedly with an equivalent capacity of 256 billion gigabytes (or thereabouts). In data structuring analysis, there is always a compromise to be made between storage and retrieval efficiency. The primary difference from an analysis perspective is whether to establish the correlations in the storage structure, thus optimising the retrieval process, or whether to later mine the data looking for the correlations. In other words, should the data be indexed for retrieval, or just randomly distributed across the storage device? From our experience in data analysis, data structuring, and data mining, we know that it requires intelligence to structure data and indices for retrieval, and even greater intelligence to make sense of unstructured data.
Either way, considerable understanding of the data is required.
Now let’s apply that to the storage, retrieval, and processing of visual data. Does the brain store the data then analyse, or analyse and then store, all in real time? Going back to the supposed beginnings of sight, on what basis did the primitive brain decide where to store and how to correlate data which was at that time, just a meaningless stream of symbols? What was the source of knowledge and intelligence that provided the logic of data processing?
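The storage-versus-retrieval trade-off can be shown in miniature (the records are invented for illustration): an index built at storage time makes retrieval direct, whereas unindexed data must be scanned record by record.

```python
# Unstructured storage: a flat list of (key, value) records.
records = [("r1", "apple"), ("r2", "pear"), ("r3", "plum")]

def scan_lookup(key):
    """Unindexed retrieval: linear scan, O(n) per lookup."""
    for k, v in records:
        if k == key:
            return v
    return None

# Indexed storage: extra work (and space) up front, O(1) per lookup.
index = dict(records)

print(scan_lookup("r2"))   # pear (found by scanning)
print(index["r2"])         # pear (found directly via the index)
```

Either way, someone had to understand the data well enough to choose the key on which to scan or index: the structure does not choose itself.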

Correlation and Pattern Recognition

In the papers that I have read on the subject, scientists discuss the correlation of data stored at various locations in the brain. As best as I understand it, no-one has any idea of how or why that occurs. Imagine a hard drive with terabytes of data and those little bits autonomously arranging themselves into comprehensible patterns. Impossible you would assert, but that is what evolution claims. It is possible for chemicals to self-organise based on their physical properties, but what physical properties are expressed in the brain’s neural networks such that self-organisation would be possible? I admit to very limited knowledge here but as I understand it, the brain consists of neurons, synapses, and axons, and in each class, there is no differentiation: every neuron is like every other neuron, and so forth. Now even if there are differences such as in electric potential, the differences must occur based on physical properties in a regulated manner for there to be consistency. Even then, the matter itself can have no “understanding” of what those material differences mean in terms of its external reality.

In the case of chemical self-organisation, the conditions are preloaded in the chemical properties and thus the manner of organisation is pre-specified. When it comes to data patterns and correlation however, there are no pre-specified properties of the storage material that are relevant to the data which is held, whether the medium be paper, silicon, or an organic equivalent. It can be demonstrated that data and information are independent of the medium in which they are stored or transmitted, and are thus not material in nature. Being immaterial, they cannot be autonomously manipulated in a regulated manner by the storage material itself, although changes to the material can corrupt the data.
Pattern recognition and data correlation must be learned, and that requires an intelligent agent that itself is preloaded with conceptual and contextual data.

Facial Recognition

Facial recognition has become an important tool for security and it is easy for us to think, “Wow! Aren’t computers smart!” The “intelligence” of facial recognition is actually an illusion: it is an algorithmic application of comparing data points, but what the technology cannot do is identify what type of face is being scanned. In 2012, Google fed 10 million unlabelled images taken from YouTube videos into a very powerful computer system designed for one purpose: to test whether an algorithm could learn, from a sufficient number of examples, to identify what was being seen; the system famously taught itself to detect cat faces. The experiment was partially successful but struggled with variations in size, positioning, setting and complexity. Once expanded to encompass 20,000 potential categories of object, the identification process managed just 15.8% accuracy: a huge improvement on previous efforts, but nowhere near the accuracy of the human mind.

The question raised here concerns the likelihood of evolution being able to explain how facial recognition by humans is so superior to efforts to date using the best intelligence and technology.

Irreducible Complexity

Our sense of sight has many more components than described here. The eye is a complex organ which would have taken a considerable time to evolve, the hypothesis made even more problematic by the claim that it happened numerous times in separate species (convergent evolution). Considering the eye as the input device, the system requires a reliable communications channel (the optic nerve) to convey the data to the central processing unit (the brain) via the visual cortex, itself providing a level of distributed processing. This is not the place to discuss communications protocols in detail, but very demanding criteria are required to ensure reliability and minimise data corruption. Let me offer just one fascinating insight for those not familiar with the technology. In electronic messaging, there are two basic ways of identifying a particular signal: (1) by the purpose of the input device, or (2) by tagging the signal with an identifier.

A certain amount of signal processing occurs in the eye itself; particular cell types (classes of retinal ganglion cells) have been identified in terms of function: M cells are sensitive to depth and indifferent to color; P cells are sensitive to color and shape; K cells are sensitive to color and indifferent to shape or depth. The question we must ask is how an undirected process could inform the brain about these different signal types and how they are identified. The data is transmitted to different parts of the brain for parallel processing, a very efficient process but one that brings with it a whole lot of complexity. The point to note is that not only does the brain have the problem of decoding different types of messages (from the M, P, and K cells), but it has to recombine this data into a single image, a complex task of co-ordinated parallel processing.
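The second identification strategy mentioned earlier, tagging each signal, can be sketched like this (the channel tags follow the M/P/K description above; the data fields and handlers are invented for illustration): tagged signals are routed to separate handlers in parallel and then recombined into one composite record.

```python
# Incoming signals, each tagged with its channel identifier.
signals = [
    ("M", {"depth": 3.2}),
    ("P", {"colour": "red", "shape": "circle"}),
    ("K", {"colour": "red"}),
]

# One handler per channel type - the receiver must already know
# what each tag means before any signal arrives.
handlers = {
    "M": lambda d: {"depth": d["depth"]},
    "P": lambda d: {"colour": d["colour"], "shape": d["shape"]},
    "K": lambda d: {"colour_check": d["colour"]},
}

# Demultiplex each tagged signal, then recombine into a single "image".
image = {}
for tag, data in signals:
    image.update(handlers[tag](data))

print(image)
```

An untagged signal, or one with an unknown tag, could not be routed at all: the tag-to-handler mapping is meta-data that must exist before the first transmission.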
Finally we have the processor itself which if the evolution narrative is true, progressively evolved from practically nothing to something hugely complex. If we examine each of the components of the sight system, it is difficult to identify a useful function for any one of them operating independently except perhaps the brain. However, absent of any preloaded data to interpret input signals from wherever, it is no more useful than a computer without an operating system. It can be argued that the brain could have evolved independently for other functions, but the same argument could not be made for those functions pertaining to the sense of sight.
As best as I can understand, our system of sight is irreducibly complex.

Inheriting Knowledge

Let us suppose, contrary to all reason and everything that we know about how knowledge is acquired, that a primitive organism somehow began developing a sense of sight. Maybe it wandered from sunlight into shadow and after doing that several times, came to “understand” these variations in sensation as representative of its external environment, although just what it understood is anyone’s guess, but let us assume that it happened. How is this knowledge then inherited by its offspring for further development? If the genome is the vehicle of inheritance, then sensory experience must somehow be stored therein.
I have no answer to that, but I do wonder.

Putting it all together
I could continue to introduce even greater complexities that are known to exist, but I believe that we have enough to draw some logical conclusions. Over the past sixty years, we have come to understand a great deal about the nature of information and how it is processed. Scientists have been working on artificial intelligence with limited success, but it would seem probable that intelligence and information can only be the offspring of a higher intelligence. Even where nature evidences patterns, those patterns are the result of inherent physical properties, but the patterns themselves cannot be externally recognised without intelligence. A pattern is a form of information, but without an understanding of what is regular and irregular, it is nothing more than a series of data points.

We often hear the term, emergent properties of the brain, to account for intelligence and knowledge, but just briefly, what is really meant is emergent properties of the mind. You may believe that the mind is nothing more than a description of brain processes but even so, emergence requires something from which to emerge, and that something must have properties which can be foundational to the properties of that which emerges. Emergence cannot explain its own origins, as we have noted before. 
Our system of sight is a process by which external light signals are converted to an electro-chemical data stream which is fed to the brain for storage and processing. The data must be encoded in a regulated manner using a protocol that is comprehensible by the recipient. The brain then stores that data in a way that allows correlation and future processing. Evolutionists would have us believe that this highly complex system arose through undirected processes with continual improvement through generations of mutation and selection. However, there is nothing in these processes which can begin to explain how raw data received through a light sensitive organ could be processed without the pre-loading of the meta-data that allows the processor to make sense of the raw data. In short, the only source of data was the very channel that the organism neither recognised nor understood.

Without the back-end storage, retrieval, and processing of the data, the input device has no useful function. Without an input device, the storage and retrieval mechanisms have no function. Just like a computer system, our sensory sight system is irreducibly complex.

Footnote:

Earlier I noted that I was surprised that scientists were surprised to find a second language in DNA, but on reflection I considered that I should justify that comment. The majority of my IT career was in the manufacturing sector. I have a comprehensive understanding of the systems and information requirements of manufacturing management, having designed, developed, and implemented integrated systems across a number of vertical markets and industries.
The cell is often described as a mini factory and using that analogy, it seems logical to me that if the genome holds all of the production data in DNA, then it must include not just the Bill of Material for proteins, but also the complete Bill of Resources for everything that occurs in human biology and physiology. Whether that is termed another language I will leave to others, but what is obvious to anyone with experience in manufacturing management is that an autonomous factory needs more information than just a recipe.
Fred Hoyle’s “tornado through a junk yard assembling a Boeing 747” analogy understates the complexity by several orders of magnitude. A more accurate analogy would be a tornado assembling a fully automated factory capable of replicating itself and manufacturing every model of airplane Boeing ever produced.

http://darwins-god.blogspot.com.br/2013/03/william-bialek-more-perfect-than-we.html

Three hundred years ago Gottfried Leibniz said we live in the best of all possible worlds, but today Princeton’s world-renowned theorist William Bialek explains that it is more perfect than we imagined. This video is long and it sometimes dwells on Bialek rather than the slide he is discussing, but those drawbacks are minor compared to what you will learn. If you want to hear an intelligent, thoughtful scientist scratch the surface of creation’s wonders and reflect on what it all means, then this video is for you.

Bialek, for instance, discusses compound eyes of insects such as the fly. These compound eyes have a large number of small lenses packed into an array. A large number of small lenses gives high resolution, just as does a digital camera with a large number of pixels.

But when the lens becomes too small its optics become distorted due to diffraction. So in determining the best lens size there is a tradeoff between resolution and diffraction. In the optimum solution the lens size is roughly proportional to the square root of the radius of the head. And indeed, Bialek shows an old paper surveying the compound eye designs in more than two dozen different insects. That paper shows that for the different size insects, the lens size is proportional, as predicted, to the square root of the head size.
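The trade-off can be made quantitative. Diffraction blurs a lens of diameter d by roughly λ/d radians, while the angle a single facet subtends on an eye of radius R is about d/R; setting the two equal gives an optimal facet diameter d ≈ √(λR), i.e. proportional to the square root of the eye’s radius. A numerical sketch (the wavelength and radii below are illustrative values, not taken from Bialek’s slides):

```python
import math

WAVELENGTH = 0.5e-6   # green light, ~500 nm (illustrative)

def optimal_facet_diameter(eye_radius_m):
    # Diffraction blur ~ lambda/d; geometric resolution ~ d/R.
    # Equating the two gives d = sqrt(lambda * R).
    return math.sqrt(WAVELENGTH * eye_radius_m)

for radius_mm in (0.5, 1.0, 2.0, 4.0):
    d = optimal_facet_diameter(radius_mm * 1e-3)
    print(f"eye radius {radius_mm} mm -> optimal facet ~{d * 1e6:.0f} micrometres")
```

For an eye radius around 1 mm this gives facets of roughly 20 micrometres, which is in the right range for real fly ommatidia, matching the proportionality the surveyed paper reports.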

This is one of Bialek’s half a dozen or so examples showing the optimization of biological designs and, as Bialek assures the audience, there are many, many more. Here is how one science writer explained it:

   Yet for all these apparent flaws, the basic building blocks of human eyesight turn out to be practically perfect. Scientists have learned that the fundamental units of vision, the photoreceptor cells that carpet the retinal tissue of the eye and respond to light, are not just good or great or phabulous at their job. They are not merely exceptionally impressive by the standards of biology, with whatever slop and wiggle room the animate category implies. Photoreceptors operate at the outermost boundary allowed by the laws of physics, which means they are as good as they can be, period. Each one is designed to detect and respond to single photons of light — the smallest possible packages in which light comes wrapped.

   “Light is quantized, and you can’t count half a photon,” said William Bialek, a professor of physics and integrative genomics at Princeton University. “This is as far as it goes.” …

   Photoreceptors exemplify the principle of optimization, an idea, gaining ever wider traction among researchers, that certain key features of the natural world have been honed by evolution to the highest possible peaks of performance, the legal limits of what Newton, Maxwell, Pauli, Planck et Albert will allow. Scientists have identified and mathematically anatomized an array of cases where optimization has left its fastidious mark, among them the superb efficiency with which bacterial cells will close in on a food source; the precision response in a fruit fly embryo to contouring molecules that help distinguish tail from head; and the way a shark can find its prey by measuring micro-fluxes of electricity in the water a tremulous millionth of a volt strong — which, as Douglas Fields observed in Scientific American, is like detecting an electrical field generated by a standard AA battery “with one pole dipped in the Long Island Sound and the other pole in waters of Jacksonville, Fla.” In each instance, biophysicists have calculated, the system couldn’t get faster, more sensitive or more efficient without first relocating to an alternate universe with alternate physical constants.


But there is much more to Bialek’s talk than examples of nature’s optimal designs. In a thoughtful segment Bialek discusses his philosophy of science. At the [16:30] mark he asks “That was fun, but what does it mean?” The answer, he begins, is that nature’s many examples of optimization help to highlight the difference between two ways of doing and thinking about science.

Bialek describes a cartoon in which a family is driving the car over a bridge with a posted weight limit. The son asks the father how they know what is the weight limit. The father responds that they wait until a sufficiently heavy truck destroys the bridge, they then weigh the remains of the truck, rebuild the bridge exactly as it was, and post the sign.

Bialek uses this funny cartoon as a metaphor for evolutionary theory’s reliance on contingency. This trial-and-error approach to understanding and invention is, Bialek explains, a very common view. The species are the way they are because that is the way they happened to evolve.

In fact, Bialek cogently points out, evolution’s promotion of contingency and trial-and-error is not driven by scientific necessity. In the bridge example, we could actually model and compute the load limit, based on the design of the bridge and the types of materials used.

But given the “political context” in which many of these discussions occur, it is understandable why evolution is presented as a process of tinkering and not design. [19:30] In fact, the Princeton biophysicist notes, these arguments are opposed to the idea of an “interventionist designer,” rather than addressing the question of whether there are design principles in biology.

Bialek contrasts this approach with another view—the view that guides so many physicists—which he represents with Galileo’s famous quote that “The book of Nature is written in the language of mathematics.” Physics, Bialek points out, has been remarkably successful using this formula. It is, he notes, an “astonishing achievement” of the human mind over the past four hundred years. Bialek laments the common evolutionary quip that Galileo would never have said such a thing had he known about biology.

Bialek’s point that evolution opposes the idea of an interventionist designer is crucial. For far from reflecting atheism, as so many have charged, and far from being a scientific finding as today’s positivist sentiment wants to believe, this foundation of evolutionary thought is religious.

That is not to say evolution is right or wrong, or true or false. It simply is religious. And until we understand the religion we are immersed in, we will not comprehend its influence on our thinking.

Without this religion, which is ubiquitous, evolution could certainly continue as a theory of mechanical origins. But evolutionary thought would be stripped of its core theoretic and its metaphysical certainty. The theory of evolution would then, rather than be mandated to be a fact, lie exposed to the light of science which shows it to be so improbable.

But it is precisely this distinction, this parsing of the religion from the science, that is so difficult to achieve. When I first began to study the evolutionary literature I was constantly fooled by its intertwining of metaphysics with the empirical science. The evolution literature is rife with religious claims in hiding.

But when properly distinguished and separated, one can immediately see that the conviction of evolution’s truth lies in the non-scientific claims, whereas the empirical evidence, alone, gives us no such confidence.

Religion drives science, and it matters.

1. http://www.pnas.org/content/109/46/18868
2. http://m.pnas.org/content/103/44/16337.full



Last edited by Admin on Tue Oct 17, 2017 8:57 am; edited 1 time in total

View user profile http://elshamah.heavenforum.com

Admin
http://www.evolutionnews.org/2011/05/rebutting_karl_giberson_and_fr046491.html

Light-Sensitive Pigments Aren't the Only Starting Point

Classical explanations for the evolution of the eye assume that the eye can be built via such small, step-by-step changes. Darwin believed the eye could evolve under a scheme of "fine gradations," but standard evolutionary accounts of the origin of the eye fall far short of that mark: they lack details, ignore biochemical complexity, and in fact invoke the sudden and abrupt appearance of key components of eye morphology.

For example, all accounts of eye evolution start with a fully functional eyespot, not mere "light-sensitive pigments." As Mark Ridley's textbook Evolution explains, the commonly-cited model of eye evolution

began with a crude light-sensitive organ consisting of a layer of light-sensitive cells sandwiched between a darkened layer of cells and a transparent protective layer above. The simulation, therefore, does not cover the complete evolution of an eye. To begin with, it takes light sensitive cells as given ... and at the other end it ignores the evolution of advanced perceptual skills (which are more a problem in the evolution of the brain than the eye).

(Mark Ridley, Evolution, p. 261 (3rd Ed., Blackwell, 2004).)

Ridley calls it "not absurd" (p. 261) to assume simple light sensitive cells as a starting point, but evolutionary biologist Sean B. Carroll cautions to "not be fooled by these eyes' simple construction and appearance. They are built with and use many of the ingredients used in fancier eyes." (Sean B. Carroll, The Making of the Fittest: DNA and the Ultimate Forensic Record of Evolution, p. 197 (W. W. Norton, 2006).)

Likewise, after reviewing some of the basic biochemistry underlying the processes that allow vision, Michael Behe (responding to Richard Dawkins) observes:

"Remember that the 'light-sensitive spot' that Dawkins takes as his starting point requires a cascade of factors including 11-cis retinal and rhodopsin, to function.

Dawkins doesn't mention them." (Michael J. Behe, Darwin's Black Box: The Biochemical Challenge to Evolution, p. 38 (Free Press, 1996).)

In fact, no account of the evolution of the eye explains this always-assumed starting point, which is far more complex than a few "light-sensitive pigments."


Other Eye Parts Appear Abruptly

In addition to assuming the abrupt appearance of a fully-functional eyespot, standard accounts of eye-evolution invoke the abrupt appearance of key features of advanced eyes such as the lens, cornea, and iris. Of course the emplacement of each of these features--fully formed and intact--would undoubtedly increase visual acuity. But where did these parts suddenly come from in the first place? As Scott Gilbert put it, such evolutionary accounts are "good at modelling the survival of the fittest, but not the arrival of the fittest." (John Whitfield, "Biological Theory: Postmodern evolution?," Nature, Vol. 455:281-284 (2008).)

As an example of these hyper-simplistic accounts of eye evolution, Francisco Ayala's book Darwin's Gift asserts that, "Further steps--the deposition of pigment around the spot, configuration of cells into a cuplike shape, thickening of the epidermis leading to the development of a lens, development of muscles to move the eyes and nerves to transmit optical signals to the brain--gradually led to the highly developed eyes of vertebrates and cephalopods (octopuses and squids) and to the compound eyes of insects." (Francisco J. Ayala, Darwin's Gift to Science and Religion, p. 146 (Joseph Henry Press, 2007).)

Ayala's explanation is vague and shows no appreciation for the biochemical complexity of these visual organs. Thus, regarding the configuration of cells into a cuplike shape, Michael Behe asks (while responding to Richard Dawkins on the same point):

And where did the "little cup" come from? A ball of cells--from which the cup must be made--will tend to be rounded unless held in the correct shape by molecular supports. In fact, there are dozens of complex proteins involved in maintaining cell shape, and dozens more that control extracellular structure; in their absence, cells take on the shape of so many soap bubbles. Do these structures represent single-step mutations? Dawkins did not tell us how the apparently simple "cup" shape came to be.


(Michael J. Behe, Darwin's Black Box: The Biochemical Challenge to Evolution, pg. 15 (Free Press, 1996).)

Likewise, mathematician and philosopher David Berlinski has assessed the alleged "intermediates" for the evolution of the eye and observes that the transmission of data signals from the eye to a central nervous system for data processing, which can then output some behavioral response, comprises an integrated system that is not amenable to stepwise evolution:

Light strikes the eye in the form of photons, but the optic nerve conveys electrical impulses to the brain. Acting as a sophisticated transducer, the eye must mediate between two different physical signals. The retinal cells that figure in Dawkins' account are connected to horizontal cells; these shuttle information laterally between photoreceptors in order to smooth the visual signal. Amacrine cells act to filter the signal. Bipolar cells convey visual information further to ganglion cells, which in turn conduct information to the optic nerve. The system gives every indication of being tightly integrated, its parts mutually dependent.

The very problem that Darwin's theory was designed to evade now reappears. Like vibrations passing through a spider's web, changes to any part of the eye, if they are to improve vision, must bring about changes throughout the optical system. Without a correlative increase in the size and complexity of the optic nerve, an increase in the number of photoreceptive membranes can have no effect. A change in the optic nerve must in turn induce corresponding neurological changes in the brain. If these changes come about simultaneously, it makes no sense to talk of a gradual ascent of Mount Improbable. If they do not come about simultaneously, it is not clear why they should come about at all.

The same problem reappears at the level of biochemistry. Dawkins has framed his discussion in terms of gross anatomy. Each anatomical change that he describes requires a number of coordinate biochemical steps. "[T]he anatomical steps and structures that Darwin thought were so simple," the biochemist Mike Behe remarks in a provocative new book (Darwin's Black Box), "actually involve staggeringly complicated biochemical processes." A number of separate biochemical events are required simply to begin the process of curving a layer of proteins to form a lens. What initiates the sequence? How is it coordinated? And how controlled? On these absolutely fundamental matters, Dawkins has nothing whatsoever to say.

(David Berlinski, "Keeping an Eye on Evolution: Richard Dawkins, a relentless Darwinian spear carrier, trips over Mount Improbable," Review of Climbing Mount Improbable by Richard Dawkins (W. W. Norton & Company, Inc. 1996), in The Globe & Mail (November 2, 1996).)

In sum, standard accounts of eye evolution fail to explain the evolution of key eye features like:

The biochemical evolution of the fundamental ability to sense light
The origin of the first "light sensitive spot"
The origin of neurological pathways to transmit the optical signal to a brain
The origin of a behavioral response to allow the sensing of light to give some behavioral advantage to the organism
The origin of the lens, cornea and iris in vertebrates
The origin of the compound eye in arthropods

At most, accounts of the evolution of the eye provide a stepwise explanation of "fine gradations" for the origin of essentially one single feature: the increased concavity of eye shape. But that does not explain the origin of the eye.

Giberson and Collins claim that "[o]ver time mutations in DNA can produce novel features ... like ... eyes from light-sensitive pigment." But their vague argument provides us with no citations or discussion of the evidence to back up that claim. In fact, much evidence not cited in their book can be found which challenges their assertion. It seems that they simply want us to take their evolutionary claims about the power of mutation on faith.

Admin
http://creation.com/did-eyes-evolve-by-darwinian-mechanisms

Much disagreement exists about the hypothetical evolution of eyes, and experts recognize that many critical problems exist. Among these problems are an explanation of the evolution of each part of the vision system, including the lens, the eyeball, the retina, the entire optical system, the occipital lobes of the brain, and the many accessory structures. Turner stressed that ‘the real miracle [of vision] lies not so much in the optical eye, but in the computational process that produces vision.’46 All of these different systems must function together as an integrated unit for vision to be achieved. As Arendt concludes, the evolution of the eye has been debated ever since Darwin and is still being debated among Darwinists.47 For non-evolutionists there is no debate.

Admin
God's Intelligently-Designed Mirrors to the Soul: The Eyes...

"Jesus had compassion and touched their eyes. And immediately their eyes received sight, and they followed Him.” [Matthew 20:34.] Just as quickly as He made the first human eyes out of dust, Jesus the Creator fixed two men’s broken vision systems as only a Master Biotechnician could. Today, new inner-eye wonders are regularly uncovered, exposing the eye’s miraculous origin.

One critical vitamin-like eye molecule bears the chemistry-friendly name “11-cis-retinal.” When this molecule is embedded in its partner protein, energy from an absorbed photon straightens its bend at the 11th carbon atom to complete vision’s first step. This altered shape initiates other factors that amplify the visual signal inside the eye cell. Yet, slightly different versions of the retinal molecule—those built to bend at the 9th, 10th, or any other carbon atom—demonstrate little or no optic activity. [Zyga, L. Scientists solve mystery of the eye. PhysOrg. Posted on physorg.com November 17, 2011, accessed November 17, 2011. ] The Lord placed each atomic bond precisely where it needed to be.

Biophysicists have even concluded that certain living systems, including the human visual system, “couldn’t get faster, more sensitive or more efficient without first relocating to an alternate universe with alternate physical constants.” [How the retina works: Like a multi-layered jigsaw puzzle of receptive fields. Salk Institute for Biological Studies news release, April 7, 2009.] For example, researchers discovered that Müller cells inside the retina—that thin, light-sensitive tissue layer at the back of our eyes—perform several tasks to optimize vision:

1. Covering the entire surface of the retina to collect the maximum number of available photons
2. Conducting light from around nerve cells and blood vessels directly to light-sensitive cells
3. Filtering certain harmful radiation
4. Reducing light noise—light waves reflected randomly inside the eyeball
5. Collecting and reorienting different wavelengths of light
6. Providing architectural support for neighboring cells
7. Supplying nearby neurons with fuel
8. Mopping up and recycling waste
9. Managing potassium ion distribution

But sight requires more than just eyes—the brain processes visual input. For example, one program in particular solves the problem of “perceptual stability.” Mental software organizes dizzying, streaky blurs from fast eye movements into coherent visual pictures. Investigation revealed that part of the brain does process the blurred streaks, but like a clutch that disengages an engine from the transmission when car gears shift, it disengages streaky images from the conscious awareness center of the brain. [Rutgers Research: Discoveries Shed New Light on How the Brain Processes What the Eye Sees. Rutgers University news release, June 2, 2009.] Sound wasteful? It’s not. This process tells the brain how far and fast the eye moved so that it can place subsequent images right back in gear.

Lastly, retinas pre-process visual data. Their different cells sort raw visual inputs into 20 different “channels,” or parallel representations, before the data are recompiled and transmitted to the brain. One channel uses parasol ganglion cells to detect motion and flicker. Another uses midget ganglion cells to process spatial information. Ganglion cells are “far from being simple passive ‘cables’ that [evolutionists have long believed] relay the photoreceptor signal from the outer to the inner retinal layers.” [Baden, T. and T. Euler. 2013. Early Vision: Where (Some of) the Magic Happens. Current Biology. 23 (24): R1096-R1098.] Recently, neurologists worked with macaque eyes to discover that retinal pre-processing works by precise placement of voltage-gated channel proteins within each ganglion cell.

Precision never just happens, and even a human engineer cannot achieve the level of precision in eye design. Thus, someone had to ensure proper placement of these channel proteins, mental software, Müller cells, and 11-cis-retinal. “Lift up your eyes on high, and see who has created these things.” [Isaiah 40:26.] Inner eye workings leave no doubt that the Lord of miracles created eyes.

http://www.icr.org/article/8020/

Admin
from various sources :

How did new biochemical pathways, which involve multiple enzymes working together in sequence, originate? Every pathway and nano-machine requires multiple protein/enzyme components to work. How did lucky accidents create even one of the components, let alone 10 or 20 or 30 at the same time, often in a necessary programmed sequence ?

Michael Behe (responding to Richard Dawkins) observes:

"Remember that the 'light-sensitive spot' that Dawkins takes as his starting point requires a cascade of factors including 11-cis retinal and rhodopsin, to function.

Dawkins doesn't mention them." (Michael J. Behe, Darwin's Black Box: The Biochemical Challenge to Evolution, p. 38 (Free Press, 1996).)

The signal transduction pathway is the mechanism by which the energy of a photon triggers a process in the cell that leads to its electrical polarization. This polarization ultimately leads to either the transmission or inhibition of a neural signal that is fed to the brain via the optic nerve. There are nine steps in the signal transduction pathway in the vertebrate eye's rod and cone photoreceptors, all of which must complete in order for transmission to take place. Stop the pathway at any earlier step, and no signal occurs. That seems an irreducibly complex system to me.
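The all-or-nothing character claimed for the nine-step pathway can be sketched as a simple chain model. This is my own illustration of the argument, not a biochemical model: the signal is transmitted only if every step in sequence succeeds.

```python
# Minimal sketch (my own illustration, not from the source) of the claim
# that a nine-step transduction chain is all-or-nothing: the signal reaches
# the brain only if every step in the sequence completes.
def signal_transmitted(steps_ok):
    """steps_ok: list of booleans, one per pathway step, in order."""
    for ok in steps_ok:
        if not ok:
            return False   # pathway halts at the first failed step
    return True

print(signal_transmitted([True] * 9))                         # True: all nine steps complete
print(signal_transmitted([True] * 4 + [False] + [True] * 4))  # False: step 5 blocked
```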

Admin
http://creation.com/refuting-evolution-2-chapter-10-argument-irreducible-complexity

A transparent layer is also far more difficult to obtain than the researchers think. The best explanation for the cornea’s transparency is diffraction theory, which shows that light is not scattered if the refractive index doesn’t vary over distances of more than half the wavelength of light. This in turn requires a very finely organized structure of the corneal fibers, which in turn requires complicated chemical pumps to make sure there is exactly the right water content.
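For concreteness, the half-wavelength criterion can be put into numbers. The wavelengths below are typical values for visible light, not figures from the source.

```python
# Illustrative numbers for the half-wavelength transparency criterion
# described above (typical visible-light wavelengths, assumed values).
wavelengths_nm = {"blue": 450, "green": 550, "red": 650}

for color, wl in wavelengths_nm.items():
    # the refractive index must stay uniform over distances above ~wl/2
    print(f"{color}: index must not vary over distances > {wl / 2:.0f} nm")
```

For green light at 550 nm, the corneal fiber structure would thus need to be organized to within roughly 275 nm.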

Admin
http://www.icsintegrity.org/irreducible-complexity.html

However, biochemists have shown that even a simple light-sensitive spot requires a complex array of enzyme systems. When light strikes the retina, a photon interacts with a molecule called 11-cis-retinal, which rearranges within picoseconds to trans-retinal. The change in the shape of the retinal molecule forces a change in the shape of the protein rhodopsin. The protein then changes to metarhodopsin II and sticks to another protein, called transducin. This process requires energy in the form of GTP, which binds to transducin. GTP-transducin-metarhodopsin II then binds to a protein called phosphodiesterase, located on the cell membrane. This affects the cGMP levels within the cell, leading to a signal that then goes to the brain. The recognition of this signal in the brain and subsequent interpretation involve numerous other proteins and enzymes and biochemical reactions within the brain cells. Thus, each of these enzymes and proteins must exist for the system to work properly. Many other mathematical and logistical weaknesses to the Nilsson example of eye evolution have been uncovered (28). In summary, the eye is incredibly complex. Since it is unreasonable to expect self-formation of the enzymes in perfect proportion simultaneously, eye function represents a system that could not have arisen by gradual mutations.
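The cascade described above can be summarized as an ordered dependency chain. The sketch below uses the component names from the text; the pass/fail logic is a hypothetical illustration of the "each must exist" claim, not a biochemical model.

```python
# Sketch of the phototransduction cascade described above, modeled as an
# ordered pipeline (component names from the text; the dependency logic is
# my own illustration). Removing any component breaks the chain.
CASCADE = [
    "11-cis-retinal",     # photon absorption isomerizes it to trans-retinal
    "rhodopsin",          # shape change driven by the retinal isomerization
    "metarhodopsin II",   # activated form of rhodopsin
    "transducin",         # G-protein that binds GTP
    "phosphodiesterase",  # alters cGMP levels in the cell
    "cGMP signal",        # polarization change; signal sent toward the brain
]

def signal_reaches_brain(present):
    """present: set of component names available in the cell."""
    return all(step in present for step in CASCADE)

full = set(CASCADE)
print(signal_reaches_brain(full))                   # True: every component present
print(signal_reaches_brain(full - {"transducin"}))  # False: one missing part
```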

Modern scientists applying knowledge of the intrinsic complexity within each cell would understand that each sequential mutation in the DNA within the eyeball would require simultaneous mutations in bone structure, nerves, brain function, and hundreds of proteins and cell signaling pathways to make even the smallest change in only one organ system. Such changes would require far more than could be expected from random mutation and natural selection. Since these systems are irreducibly complex and individual mutations in one organ would not be beneficial for the organism, these random mutations in all aspects of vision would need to occur simultaneously. Therefore, the human body represents an irreducibly complex system on a cellular and an organ/system basis.

Admin
THE FOLLOWING ARE EVOLUTIONARY-BASED SCENARIOS THAT ATTEMPT TO EXPLAIN HOW EVOLUTION MIGHT CREATE AN EYE.

http://whoisyourcreator.com/topics/how-does-evolution-supposedly-work/

Note that none of them even touch on the neurotransmitters that process the information in the brain:
“The brain can do nothing and perceive nothing unless thousands of its neurons (nerve cells) communicate in a coordinated fashion with thousands of other neurons.”
http://books.google.com/books?id=1QE1C5cyI4YC&pg=PA341&lpg=PA341&dq=harvard+neurotransmitter&source=bl&ots=4qit_x4vyZ&sig=rJTQgJIdvD4gO7xnz3ghcQ599o0&hl=en&ei=l91NTLK-BsmUnQf-tMHYCw&sa=X&oi=book_result&ct=result&resnum=9&ved=0CDcQ6AEwCA#v=onepage&q=harvard%20neurotransmitter&f=false

Example #1

“The simple light-sensitive spot on the skin of some ancestral creature gave it some tiny survival advantage, perhaps allowing it to evade a predator. Random changes then created a depression in the light-sensitive patch, a deepening pit that made “vision” a little sharper. At the same time, the pit’s opening gradually narrowed, so light entered through a small aperture, like a pinhole camera.
Every change had to confer a survival advantage, no matter how slight. Eventually, the light-sensitive spot evolved into a retina, the layer of cells and pigment at the back of the human eye. Over time a lens formed at the front of the eye. It could have arisen as a double-layered transparent tissue containing increasing amounts of liquid that gave it the convex curvature of the human eye.”
Evolution Library, “Evolution of the Eye”, PBS Online.
http://www.pbs.org/wgbh/evolution/library/01/1/l_011_01.html

Critique of terms used:

a. “Random changes then created a depression in the light-sensitive patch, a deepening pit that made “vision” a little sharper.”
No mention of a likely genetic process.

b. “simple light-sensitive spot”
No explanation for the initial evolution of each complex component that makes up the spot or the response triggers that activate the flagella. Read how complex “spots” are:
“These eyes constitute the simplest and most common visual system found in nature. The eyes contain optics, photoreceptors and the elementary components of a signal-transduction chain. Rhodopsin serves as the photoreceptor, as it does in animal vision. Upon light stimulation, its all-trans-retinal chromophore isomerizes into 13-cis and activates a photoreceptor channel which leads to a rapid Ca2+ influx into the eyespot region. At low light levels, the depolarization activates small flagellar current which induce in both flagella small but slightly different beating changes resulting in distinct directional changes. In continuous light, Ca2+ fluxes serve as the molecular basis for phototaxis. In response to flashes of higher energy the larger photoreceptor currents trigger a massive Ca2+ influx into the flagella which causes the well-known phobic response.”
http://www.ncbi.nlm.nih.gov/pubmed/9431675
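The quoted two-regime behavior — small flagellar beating changes at low light, a phobic avoidance response to bright flashes — can be caricatured as a threshold model. Everything here, including the threshold values, is a hypothetical illustration, not data from the paper.

```python
# Toy illustration (my own, not from the source) of the two-regime eyespot
# response in the quote: modest light steers the cell (phototaxis), while
# bright flashes trigger an avoidance (phobic) response.
def eyespot_response(light_level, taxis_threshold=0.1, phobic_threshold=0.8):
    """light_level: normalized stimulus intensity in [0, 1]; thresholds are assumed."""
    if light_level >= phobic_threshold:
        return "phobic response"  # massive Ca2+ influx into the flagella
    if light_level >= taxis_threshold:
        return "phototaxis"       # small, asymmetric flagellar beating changes
    return "no response"

print(eyespot_response(0.05))  # no response
print(eyespot_response(0.4))   # phototaxis
print(eyespot_response(0.9))   # phobic response
```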

c. “ … evolved into a retina,”
No explanation for the evolution of the components of a fully formed retina, the optic nerve, or the independent specific mental and neural capacity required for interpreting the information. The following describes the components of a retina:
“The retina is a highly specialized tissue lining the innermost portion of the eye. It contains millions of specialized photoreceptor cells called rods and cones that convert light rays into electrical signals that are transmitted to the brain through the optic nerve. Rods provide the ability to see in dim light while cones allow for sharp and color vision. The macula, located in the center of the retina, is where most of the cone cells are located. It is very small (500µ or about the size of a ballpoint). The fovea, a small depression in the center of the macula, has the highest concentration of cone cells. In front of the retina is a chamber called the vitreous body, which contains a clear, gelatinous fluid called vitreous humor.”
http://eyerepublic.com/retina-vitreous/retina-faq/29-anatomy-retina.html

Example #2

“This ancient animal probably had very simple eye spots with no image-forming ability, but still needed some diversity in eye function. It needed to be able to sense both slow, long-duration events such as the changing of day into night, and more rapid events, such as the shadow of a predator moving overhead. These two forms arose by a simple gene duplication event and concomitant specialization of association with specific G proteins, which has also been found to require relatively few amino acid changes. This simple molecular divergence has since proceeded by way of the progress of hundreds of millions of years and amplification of a cascade of small changes into the multitude of diverse forms we see now. There is a fundamental unity that arose early, but has been obscured by the accumulation of evolutionary change. Even the eyes of a scorpion carry an echo of our kinship, not in their superficial appearance, but deep down in the genes from which they are built.”
PZ Myers, “Eyeing the Evolutionary Past”, March 6, 2008, Seed Online.

http://www.seedmagazine.com/news/2008/03/eyeing_the_evolutionary_past.php?page=3

Critique:

a. “ … but still needed some diversity in eye function. It needed to be able to sense …”
An organism senses a need? This suggests that a particular need produces change:
“Contrary to a widespread public impression, biological evolution is not random, even though the biological changes that provide the raw material for evolution are not directed toward predetermined, specific goals.”
“Science, Evolution, and Creationism,” 2008, National Academy of Sciences (NAS), The National Academies Press, 3rd edition, page 50.
http://www.nap.edu/openbook.php?record_id=11876&page=50

b. “ … very simple eye spots,”
Refer to above “Example #1.”

c. “ … simple gene duplication event”
There is NO scientific proof that gene duplication can create genes with more complex functions. Research papers reflect this admission by using words such as “most likely”:
“Duplicate gene evolution has most likely played a substantial role in both the rapid changes in organismal complexity apparent in deep evolutionary splits and the diversification of more closely related species. The rapid growth in the number of available genome sequences presents diverse opportunities to address important outstanding questions in duplicate gene evolution.”
http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0020206&ct=1&SESSID=9999360a804131d0f0009da33ced0db9
An erroneous example cited is the claim that, over 100 million years ago, two genes of the yeast S. cerevisiae supposedly evolved from one gene of another species of yeast (K. lactis).
Refer to:
http://www.nature.com/nature/journal/v449/n7163/abs/nature06151.html
What is the evidence for their claim? Nothing but the presupposition that Darwinism is true, so the very existence of two genes that together perform the same functions as the one gene supposedly proves that they must have evolved from it:
”The primary evidence that duplication has played a vital role in the evolution of new gene functions is the widespread existence of gene families.”
http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0020206&ct=1&SESSID=9999360a804131d0f0009da33ced0db9
Also, what Darwinists fail to present is a feasible step-by-step scenario for how each gene could:
- split its functions in a precise manner so that neither function would be disabled until ‘random chance’ completed the event;
- become fixed in the population during each new step:
“A duplicated gene newly arisen in a single genome must overcome substantial hurdles before it can be observed in evolutionary comparisons. First, it must become fixed in the population, and second, it must be preserved over time. Population genetics tells us that for new alleles, fixation is a rare event, even for new mutations that confer an immediate selective advantage. Nevertheless, it has been estimated that one in a hundred genes is duplicated and fixed every million years (Lynch and Conery 2000), although it should be clear from the duplication mechanisms described above that it is highly unlikely that duplication rates are constant over time.”
http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0020206&ct=1&SESSID=9999360a804131d0f0009da33ced0db9
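The quoted point that "fixation is a rare event, even for new mutations that confer an immediate selective advantage" follows from a standard population-genetics result: Kimura's diffusion approximation for the fixation probability of a new mutant. The sketch below uses one common form of that formula; the population size and selection coefficient are illustrative values, not from the source.

```python
import math

# Fixation probability of a single new mutant (Kimura's diffusion
# approximation, one standard form). N and s below are assumed
# illustrative values, not figures from the quoted paper.
def p_fix(N, s):
    """Diploid population of effective size N, selection coefficient s."""
    if s == 0:
        return 1 / (2 * N)  # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
print(f"neutral mutation:     {p_fix(N, 0):.6f}")     # 1/(2N) = 0.000050
print(f"1% selective benefit: {p_fix(N, 0.01):.4f}")  # roughly 2s, about 0.02
```

Even a mutation with a 1% selective advantage fixes only about 2% of the time, which is the "rare event" the quote refers to.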

For more information on gene duplication, go to:
http://whoisyourcreator.com/gene_duplication.html

d. “concomitant specialization”
This apparently means that two genes have similar yet specialized functions. Evolutionists devise all sorts of redundant and scientific-sounding terms when they want to make something sound complicated. This term adds nothing to describe what caused the genetic process to occur.

e. “of association with specific G proteins”
Because of the split in function between the two genes, the molecular switch (G protein) must also be modified to coincide with the specific regulation needed to precisely regulate the new gene. There is NO explanation of how that might occur:
• “Moreover, in order for the organism to respond to an ever-changing environment, intercellular signals must be transduced, amplified, and ultimately converted to the appropriate physiological response.”
http://edrv.endojournals.org/cgi/content/full/24/6/765
See movie on G-proteins: http://www.youtube.com/watch?v=NB7YfAvez3o&feature=related

Example #3

The following two links are video presentations that attempt to explain the evolution of an eye. They both use the same progressive steps but fail to mention how the components appeared or what genetic change created each new feature.

Video narration by Richard Dawkins on YouTube:
http://www.youtube.com/watch?v=lEKyqIJkuDQ
‘Scientific’ highlights from the video:
“Skin cells like these often have a little light sensitive pigment to start with, so something interesting could happen …”
“Let’s drop ourselves lightly into a shallow pit and things begin to get better …
“I brought in a simple in-home camera … now I can resolve images much more accurately.”
“Let’s go on and now just imagine some of those cells happen to secrete a little mucus. It collects into a blob … and lodges in the pinhole. Real progress, I’ve got a crude lens, now the incoming light can be focused.”

Video narration by NCSE’s executive director, Eugenie Scott on YouTube:
http://www.youtube.com/watch?v=fOtP7HEuDYA&feature=related
Ms. Scott uses the same steps but says ‘next’ repeatedly while pulling component parts out of thin air. Her most scientific moment comes when she states, “If it can grow, it can evolve …”

Upon discovering that none of the known genetic mechanisms can account for how evolution supposedly occurs, evolutionists are now devising even more absurd fables. This new mechanism is called “preadaptation”:

“The process by which parts accumulate until they’re ready to snap together is called preadaptation. It’s a form of ‘neutral evolution,’ in which the buildup of the parts provides no immediate advantage or disadvantage. Neutral evolution falls outside the descriptions of Charles Darwin. But once the pieces gather, mutation and natural selection can take care of the rest, ultimately resulting in the now-complex form of TIM23 …
“You look at cellular machines and say, why on earth would biology do anything like this? It’s too bizarre,” he said. “But when you think about it in a neutral evolutionary fashion, in which these machineries emerge before there’s a need for them, then it makes sense.”
Brandon Keim, “More ‘Evidence’ of Intelligent Design Shot Down by Science,” August 27, 2009, Wired Science based on “The reducible complexity of a mitochondrial molecular machine,” Yale University, Proceedings of the National Academy of Sciences, Vol. 106 No. 33, August 25, 2009.
http://www.wired.com/wiredscience/2009/08/reduciblecomplexity/
So, complex parts with absolutely NO purpose miraculously assemble themselves, and then “snap” together to form a complex cellular machine? They’re kidding, right?


Admin
Simple beginnings

http://web.archive.org/web/20090126042355/http://darwinspredictions.com/#_1.3_Evolution’s_falsifications

Introduction

It is no secret that biology is full of complex designs. Molecules are intricately designed to perform their function, apparently simple bacteria are in fact extraordinarily complicated, and multi-cellular organisms reveal an intricate network of organs and structures. Biology presents to us a seemingly endless list of phenomenally intricate designs and, as Darwin found with the eye, it is not always straightforward to imagine how such designs could have evolved.

Prediction

The eye gave Darwin shudders, but as Fig. 6 suggests, he argued that it could have evolved from a gradual sequence of simpler designs. For evolution to work, it needs to start with simple structures which can be imagined to spontaneously assemble. From there increasingly complex designs are imagined to accumulate, as Darwin hypothesized for the eye.


This evolutionary prediction of a historical lineage, from simple to complex, has strongly influenced evolutionists. In Chapter 10 of the Origin of Species Darwin tried to reconcile this expectation with the apparent abruptness of the fossil record. Of course it could be argued that the fossil record was highly imperfect: older, simpler organisms could have existed without leaving a trace in it. But in addition to that argument, any examples of lower organisms in the older strata would help demonstrate the expected lineage.

The Eozoön canadense fossil, or “dawn animal of Canada,” was one such example Darwin could use. Canadian geologists announced the finding in 1864 and Darwin introduced it in the next edition of the Origin as an early species from the lower strata. He described the species as belonging to the most lowly organized of all classes of animals. [1]

Falsification

The only problem with Eozoön was that it was so simple that it was not entirely clear it actually was a biological organism. Could it have been nothing more than a mineral formation? Some scientists argued exactly that, in the face of staunch resistance from evolutionists for whom Eozoön nicely fulfilled expectations. Eventually, however, they had to concede: evolution or no evolution, Eozoön was simply a mineral formation. [2]

Long after the Eozoön affair, this evolutionary view that the earliest multi-cellular animals were simple has persisted, and it continues to meet with surprises. One example is Funisia dorothea, an ancient tubular organism. Funisia shows evidence of the same growth and propagation strategies used by most of today’s invertebrate organisms. Funisia clearly shows, remarked one researcher, “that ecosystems were complex very early in the history of animals on Earth.” Another researcher agreed that “the finding shows that fundamental ecological strategies were already established in the earliest known animal communities, some 570 million years ago.” [3,4]

This early complexity is also implied by genome data of the lower organisms. As one researcher observed, the genomes of many seemingly simple organisms sequenced in recent years show a surprising degree of complexity. [5,6] In fact, what we consistently find in the fossil record and genomic data are examples of high complexity in lineages where evolution expected simplicity. As one evolutionist admitted:

It is commonly believed that complex organisms arose from simple ones. Yet analyses of genomes and of their transcribed genes in various organisms reveal that, as far as protein-coding genes are concerned, the repertoire of a sea anemone—a rather simple, evolutionarily basal animal—is almost as complex as that of a human. [7]

Early complexity is also evident in the cell’s biochemistry. For instance, kinases are a type of enzyme that regulate various cellular functions by transferring a phosphate group to a target molecule. Kinases are so widespread across eukaryote species that, if evolution is true, they must have arisen far down the evolutionary tree. Yet the similarity across species of the kinase functions, and of their substrate molecules, means that these kinase substrates must have remained largely unchanged for billions of years. The complex regulatory actions of the kinase enzymes must have been present early in the history of life. [8]

This is by no means an isolated example. Histones are a class of eukaryote proteins that help organize and pack DNA, and the gene that codes for histone IV is highly conserved across species. Again, if evolution is true, the first histone IV must have been very similar to the versions we see today.

Years ago it seemed obvious to evolutionists that the first eukaryote evolved from the simpler prokaryote (bacteria) type of cell. This would nicely fulfill the evolutionary expectation of a simple-to-complex lineage, but this too appears difficult to reconcile with the evidence (see Section 3.2). As one team of evolutionists admitted:

Nevertheless, comparative genomics has confirmed a lesson from paleontology: Evolution does not proceed monotonically from the simpler to the more complex. [9]

And Darwin’s concern with the eye is now known to be appropriate. Darwin worried that it was too complex, but hoped that it could have evolved from simpler intermediates. It is true that there are different types of eyes in nature which might be aligned in a sequence. But when examined closely even what appear to be simple types of eyes are now known to be phenomenally complex.

Consider, for instance, the so-called third eye which merely provides for light sensitivity in some species. In fact, the third eye contains the same cellular signal transduction pathway that is found in image-forming eyes. As Fig. 7 illustrates, that pathway begins with a photon interacting with a light-sensitive chromophore molecule (11-cis retinal). The interaction causes the chromophore to change configuration and this, in turn, influences the large, trans-membrane rhodopsin protein to which the chromophore is attached.


Figure 7 Illustration of the remarkable cellular signal transduction vision pathway which begins with light interacting with the retinal chromophore molecule.

The chromophore photoisomerization is the beginning of a remarkable cascade that causes electrical signals (called action potentials) to be triggered in the optic nerve. In response to the chromophore photoisomerization, rhodopsin causes the activation of hundreds of transducin molecules. These, in turn, cause the activation of cGMP phosphodiesterase (by removing its inhibitory subunit), an enzyme that degrades the cyclic nucleotide, cGMP.

A single photon can result in the activation of hundreds of transducins, leading to the degradation of hundreds of thousands of cGMP molecules. cGMP molecules serve to open non-selective, cyclic nucleotide-gated (CNG) ion channels in the membrane, so a reduction in cGMP concentration serves to close these channels. This means that millions of sodium ions per second are shut out of the cell, causing a voltage change across the membrane. This hyperpolarization of the cell membrane causes a reduction in the release of neurotransmitter, the chemical that interacts with the nearby nerve cell, in the synaptic region of the cell. This reduction in neurotransmitter release ultimately causes an action potential to arise in the nerve cell.

All this happens because a single photon entered the fray. In short order, this light signal is converted into a structural signal, more structural signals, a chemical concentration signal, back to a structural signal, and then back to a chemical concentration signal leading to a voltage signal which then leads back to a chemical concentration signal.
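The amplification described above can be put into rough numbers. The sketch below is only a back-of-the-envelope illustration: the per-stage figures (500 transducins per photon, 1,000 cGMP molecules per transducin) are assumed values chosen to match the “hundreds” and “hundreds of thousands” in the text, not measured constants.

```python
# Back-of-the-envelope sketch of the two-stage amplification in the
# phototransduction cascade. The per-stage gains below are illustrative
# assumptions consistent with the rough figures quoted in the text.
transducins_per_photon = 500   # "hundreds of transducin molecules"
cgmp_per_transducin = 1000     # yields "hundreds of thousands" of cGMP

photons = 1
cgmp_degraded = photons * transducins_per_photon * cgmp_per_transducin
print(cgmp_degraded)  # 500000 cGMP molecules degraded from a single photon
```

With these assumed gains, one photon is amplified half a million fold in chemical terms before the signal is ever converted to a voltage change.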

This incredible cellular signal transduction design is the biochemical foundation of image-forming eyes. But it is also found in the third eye. In fact, the third eye includes two antagonistic light signaling pathways in the same cell. Blue light causes the hyperpolarizing response described above, but green light causes a depolarizing response. How is this done? By inhibition of the cGMP phosphodiesterase enzyme. Specifically, there are two opsins: one sensitive to blue light, which activates the cGMP phosphodiesterase enzyme, and another sensitive to green light, which inhibits it. These appear to be two initially separate pathways that converge at the point of influencing the cGMP phosphodiesterase enzyme. [10]

Yet another example of early complexity in eyes is found in the long-extinct trilobite. It had eyes that were perhaps the most complex ever produced by nature. One expert called them “an all-time feat of function optimization.” [11] Reviewing the fossil and molecular data, one evolutionist admitted that there is no sequential appearance of the major animal groups “from simpler to more complex phyla, as would be predicted by the classical evolutionary model.” [12]

Reaction

The evolutionary expectation of a lineage progressing from simple to complex designs has become optional for evolutionists. Where hints of simplicity can be found they are employed, but clear and obvious cases of intricate complexity early in the evolutionary tree are typically said to be the result of an early, rapid evolution process.

But these explanations have become increasingly problematic and some evolutionists are dropping altogether the idea of a simple-to-complex evolutionary trend. Instead, a new evolutionary hypothesis is that early on in the history of life, near the bottom of the evolutionary tree, there appeared a small organism with a Universal Genome that encodes all major developmental programs essential for every animal phylum. In other words, all the important DNA sequences were present in an ancient organism. From there, new species emerged depending on which genes were activated, inactivated, or deleted. [12]

The feasibility of this hypothesis may be difficult to determine, but what we do know is that another prediction of evolution has failed and consequently the theory has become much more complex.

References

1. Charles Darwin, On the Origin of Species, 6th ed. (1872; reprint London: Collier Macmillan, 1962), 332.

2. J. Adelman, “Eozoön: debunking the dawn animal,” Endeavour 31 (2007): 94-98.

3. M. L. Droser, J. G. Gehling, “Synchronous aggregate growth in an abundant new ediacaran tubular organism,” Science 319 (2008): 1660-1662.

4. University of California – Riverside, “Rethinking early evolution: Earth's earliest animal ecosystem was complex and included sexual reproduction,” ScienceDaily 20 March 2008.

5. University of California – Berkeley, “Genome of marine organism tells of humans’ unicellular ancestors,” ScienceDaily 20 February 2008.

6. Yale University, “Trichoplax genome sequenced: ‘Rosetta stone’ for understanding evolution,” ScienceDaily 8 September 2008.

7. U. Technau, “Evolutionary biology: Small regulatory RNAs pitch in,” Nature 455 (2008): 1184-1185.

8. S. H. Diks, K. Parikh, M. van der Sijde, J. Joore, T. Ritsema, et al., “Evidence for a minimal eukaryotic phosphoproteome?,” PLoS ONE 2 (2007).

9. C. G. Kurland, L. J. Collins, D. Penny, “Genomics and the irreducible nature of eukaryote cells,” Science 312 (2006): 1011-1014.

10. C. Su, et al., “Parietal-eye phototransduction components and their potential evolutionary implications,” Science 311 (2006): 1617-1621.

11. Riccardo Levi-Setti, Trilobites, 2nd ed. (Chicago: University of Chicago Press, 1993), 29-74.

12. M. Sherman, “Universal genome in the origin of metazoa: Thoughts about evolution,” Cell Cycle 6 (2007): 1873-1877.


Admin
http://www.apologeticspress.org/apcontent.aspx?category=9&article=1412

In his book, The Wonder of Man, Werner Gitt explains how the retina is a masterpiece of engineering design.

One single square millimetre of the retina contains approximately 400,000 optical sensors. To get some idea of such a large number, imagine a sphere, on the surface of which circles are drawn, the size of tennis balls. These circles are separated from each other by the same distance as their diameter. In order to accommodate 400,000 such circles, the sphere must have a diameter of 52 metres... (1999, p. 15).
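Gitt’s sphere illustration is straightforward to check. The sketch below assumes a tennis-ball diameter of 6.7 cm and simple square packing (each circle plus its one-diameter gap occupies a square two diameters on a side); both are assumptions, and the result lands near, though not exactly at, his 52-metre figure.

```python
import math

# Hedged check of Gitt's illustration: 400,000 tennis-ball-sized circles,
# spaced one diameter apart, spread over the surface of a sphere.
n = 400_000                     # circles to accommodate
d = 0.067                       # assumed tennis-ball diameter in metres
area_per_circle = (2 * d) ** 2  # circle plus one-diameter gap, square packing
surface = n * area_per_circle   # required sphere surface area in m^2

# Surface area of a sphere is pi * D^2, so solve for the diameter D.
diameter = math.sqrt(surface / math.pi)
# diameter comes out around 48 m, in the neighbourhood of Gitt's 52 m;
# the exact figure depends on the ball size and packing arrangement assumed.
```

The point of the illustration survives the arithmetic either way: packing one square millimetre of retina at tennis-ball scale requires a sphere roughly fifty metres across.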

Alan L. Gillen also praised the design of the retina in his book, Body by Design.

The most amazing component of the eye is the “film,” which is the retina. This light-sensitive layer at the back of the eyeball is thinner than a sheet of plastic wrap and is more sensitive to light than any man-made film. The best camera film can handle a ratio of 1000-to-1 photons in terms of light intensity. By comparison, human retinal cells can handle a ratio of 10 billion-to-1 over the dynamic range of light wavelengths of 380 to 750 nanometers. The human eye can sense as little as a single photon of light in the dark! In bright daylight, the retina can bleach out, turning its “volume control” way down so as not to overload. The light-sensitive cells of the retina are like an extremely complex high-gain amplifier that is able to magnify sounds [sic] more than one million times (2001, pp. 97-98, emp. added).
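The two dynamic-range figures quoted above can be compared directly. The sketch below takes the ratios at face value from the quote and expresses them in photographic “stops” (doublings of light intensity), a conventional unit for dynamic range.

```python
import math

# Compare the dynamic ranges quoted in the text, taken at face value.
film_ratio = 1_000               # "1000-to-1" for the best camera film
retina_ratio = 10_000_000_000    # "10 billion-to-1" quoted for retinal cells

advantage = retina_ratio / film_ratio    # retina spans 10 million times more
film_stops = math.log2(film_ratio)       # ~10 stops for film
retina_stops = math.log2(retina_ratio)   # ~33 stops for the retina
```

On those numbers, the retina’s quoted range exceeds film’s by a factor of ten million, or roughly 23 additional stops.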


Admin
The Eye and Irreducible Complexity - Creationism Debunked

John Connolly: So you spoke a little bit about flagella. The other argument that is always drawn up is the mammalian eye, so could you maybe dispel that a little bit for us, the idea that the eye is too complex of a system to have evolved, that it must have had some intelligence involved?

Eugenie Scott: You know, if you read the creationist literature, and I don’t want to wish that on anyone, but if you do you’ll find that they are very fond of quoting a statement that Darwin made in the Origin of Species where, and I haven’t memorized it but it’s something like “It is really quite preposterous to imagine something like the vertebrate eye, it’s so snazzy (he didn’t say snazzy), it’s got all these parts that work together to bring light to the eye and form an image, and nobody would think it would be possible for natural selection to produce this”, and the creationists all say “see, Darwin himself says that the eye can’t evolve”. But they’ve never really looked at the book, because they just keep quoting each other, and if you actually go to the Origin of Species and find that passage and continue reading it, the very next sentence is “But I can assure you that that’s not the case, that I can do this”, and then he goes on with this wonderful description of how it’s quite possible to take a very simple structure and, with very few modifications, improve its ability to assist an organism; in other words, in Darwin’s own terms, it had adaptive value.
And he then does this wonderful thing, which Darwin did all his life of course [as] he was a wonderful naturalist: he went out to nature and looked at it and said “there’s something that’s kind of like what I’m talking about”, and if you look at the eye of a snail it’s hardly more than just a slight pigmented spot on the surface of the skin there, but having a pigmented spot does allow you to tell light from dark, so that’s adaptive to a snail; that would actually help a snail get along better, so any ancestral primitive snail or creature that had this light-sensitive spot would be at an advantage, and so it would live longer and, as we say today, would pass on its genes more than a creature of the same species that didn’t have that. And then he goes on and says “Well you know, here’s another kind of creature, another little invertebrate creature, the limpet, that has that pigmented spot, but it also has kind of a little bit of an indentation on the skin where that pigmented spot occurs, and that’s an advantage; that’s actually better than that snail eye, because having an indentation as well as that pigmented spot allows you to get an idea of what direction the light is coming from, so that’s even better than being able to tell light from dark.” And by the way, if you look at the physiology of this, being able to tell light from dark is useful for many creatures, I mean lots and lots and lots of organisms, for setting the biological clock for certain physiological reactions that happen.
Being able to tell what direction the light is coming from is very useful because that might help you navigate toward food or away from heat or away from other kinds of phenomena that you might want to avoid or be attracted to. And then Darwin goes on and finds another animal, and he points to it as having not only some wiring down here at the bottom and this cup-shaped thing, but actually the cup is formed almost to a pinhole, and it’s kind of the equivalent of the old-fashioned pinhole cameras that I know people had in the early 20th century. Nobody has them now of course because we’ve all gone far beyond that, but a pinhole camera is a big advantage over just having a cup, because a pinhole camera actually can allow an image to focus on the back of the eye. So anyway, he [Darwin] builds up this system step by step by step, and actually on NCSE’s website we’ve got a little video talking about the evolution of the eye in the same fashion. And then you add a lens and that’s an improvement as well, so what Darwin does is look at the final product, the vertebrate eye, which is a very snazzy kind of organ, it’s really good at getting images to the eye and getting that information to the brain, but he shows you how from very, very simple beginnings there is an adaptive value to each step until you finally build up to the final product. Now what the intelligent design folks want to do is they want us to start there, they want us to start at that final complex, snazzy, multi-component form and say “[it] couldn’t possibly form by natural causes”, but actually it’s very possible for it to form.
And what’s kind of interesting about the eye story is that there were some Swedish scientists (and I am sorry, I am having a senior moment, I don’t remember the exact reference) who did some computer modeling of how long it would take, given such and such a mutation rate, to change the surface of the skin, form the cup and the pinhole eye, and form the lens from crystalline structures that are already there; that is, how long it would take to evolve an eye from something like Darwin’s original pigmented patch. They found that it could be done in something like 100 million years or so, which, geologically speaking, is a drop in the bucket.
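The Swedish study Scott is reaching for is presumably Nilsson and Pelger’s 1994 model, which treated eye evolution as a long chain of tiny modifications of 0.005% per generation. The sketch below reproduces only their headline compounding arithmetic; the fold-change figure is their published estimate, and note that their published answer was a few hundred thousand generations, considerably less than the “100 million years” remembered in the transcript.

```python
import math

# Compounding arithmetic in the style of Nilsson & Pelger (1994):
# how many generations of 0.005% change produce the full eye sequence?
step = 1.00005             # assumed 0.005% morphological change per generation
total_change = 80_129_540  # overall fold-change in their published sequence

generations = math.log(total_change) / math.log(step)
# ~364,000 generations; at one generation per year, under half a million years
```

The striking feature of the calculation is how fast tiny compounding changes accumulate: an eight-order-of-magnitude transformation needs well under a million generations.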

