Now THIS is really interesting. Quotable:
I've had a suspicion for a while that despite the astonishing success of the first generation of computer scientists like Shannon, Turing, von Neumann, and Wiener, somehow they didn't get a few important starting points quite right, and some things in the foundations of computer science are fundamentally askew. In a way I have no right to say this, and it would be more appropriate to say it once I've actually got something to take its place, so let me just emphasize that this is speculative. But where might things have gone wrong?

The leaders of the first generation were influenced by the metaphor of the electrical communications devices that were in use in their lifetimes, all of which centered on the sending of signals down wires. This started, oddly enough, with predecessors of the fax machine, continuing in a much bigger way with the telegraph, which turned into the telephone, and then proceeded with devices that carried digital signals that were only machine readable. Similarly, radio and television signals were designed so that they could be relayed along a single wire even if part of their passage was wireless. All of us are guided by our metaphors, and our metaphors are created by the world around us, so it's understandable that signals on wires would become the central metaphor of their day.
If you model information theory on signals going down a wire, you simplify your task in that you only have one point being measured or modified at a time at each end. It's easier to talk about a single point in some ways, and in particular it's easier to come up with mathematical techniques to perform analytic tricks. At the same time, though, you pay by adding complexity at another level, since the only way to give meaning to a single point value in space is time. You end up with information structures spread out over time, which leads to a particular set of ideas about coding schemes in which the sender and receiver have agreed on a temporal syntactical layer in advance.
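To make that contrast concrete, here is a minimal sketch of my own (a toy illustration in Python with NumPy, not anything from the quoted text): the same patch of sensor data treated first as a stream on a wire, where meaning depends on a temporal syntax agreed in advance, and then as a surface, where every point is available at once.

    # Illustrative only: contrasting "wire" and "surface" views of the same data.
    import numpy as np

    # A 4x4 "surface" of simultaneous measurements, e.g. light on a patch of retina.
    surface = np.random.rand(4, 4)

    # Wire view: flatten the surface into a temporal stream, one value per tick.
    # Its meaning now depends on a syntax agreed in advance (row-major order, 4x4 shape).
    stream = surface.flatten()
    reconstructed = stream.reshape(4, 4)  # the receiver must already know the protocol

    # Surface view: operate on all points at once; no temporal protocol is needed.
    brightest = np.unravel_index(surface.argmax(), surface.shape)

    assert np.array_equal(surface, reconstructed)
    print("stream length:", stream.size, "| brightest point at:", brightest)

Neither view is wrong; the point is only that the wire view buys analytic convenience by pushing structure into time and into a protocol both ends must share.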
If you go back to the original information theorists, everything was about wire communication. We see this, for example, in Shannon's work. The astonishing bridge that he created between information and thermodynamics was framed in terms of information on a wire between a sender and a receiver.
This might not have been the best starting point. It's certainly not a wrong starting point, since there's technically nothing incorrect about it, but it might not have been the most convenient or cognitively appropriate starting point for human beings who wished to go on to build things. The world as our nervous systems know it is not based on single point measurements, but on surfaces. Put another way, our environment has not necessarily agreed with our bodies in advance on temporal syntax. Our body is a surface that contacts the world on a surface. For instance, our retina sees multiple points of light at once.
We're so used to thinking about computers in the same light as was available at the inception of computer science that it's hard to imagine an alternative, but an alternative is available to us all the time in our own bodies. Indeed, the branches of computer science that incorporated interactions with the physical world, such as robotics, probably wasted decades trying to pretend that reality could be treated as if it were housed in a syntax that could be conveniently encoded on a wire. Traditional robots converted the data from their sensors into a temporal stream of bits. Then the robot builders would attempt to find the algorithms that matched the inherent protocol of these bits. Progress was very, very slow. The latest, best robots tend to come from people like Ron Fearing and his physiologist cohort Bob Full at Berkeley, who describe their work as "biomimetic". They are building champion robots that in some cases could have been built decades ago were it not for the obsession with protocol-centric computer science. A biomimetic robot and its world meet on surfaces instead of at the end of a wire. Biomimetic robots even treat the pliability of their own building materials as an aspect of computation. That is, they are made internally of even more surfaces.
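And a tiny sketch of that last idea, that a pliable material can itself do part of the computing (again my own toy example, not Fearing or Full's actual work): every contact point on a compliant foot deflects under the terrain beneath it, and mechanical coupling between neighboring points spreads the load, with no serialization or central protocol involved.

    # Illustrative only: "computation in the material" as local, simultaneous response.
    import numpy as np

    terrain = np.array([0.0, 0.3, 0.1, 0.5, 0.2])  # height profile under a compliant foot
    stiffness = 0.8                                # how strongly the material pushes back

    # Every contact point responds at once; the neighbor-coupling kernel stands in
    # for the compliance that spreads load across the surface.
    deflection = stiffness * terrain
    load = np.convolve(deflection, [0.1, 0.8, 0.1], mode="same")

    print("per-point load:", np.round(load, 3))

Nothing here is decoded from a stream; the "answer" is simply the shape the surface takes.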
And this:
I think the idea that von Neumann and others were misled by technological metaphors gets things the wrong way around. It is clear from von Neumann's speculations in the First Draft of a Report on the EDVAC that he was using the then state-of-the-art computational neurobiology — McCulloch and Pitts' (1943) results on Turing equivalence for computation in the brain — as grounds for the digital design of the electronic computer. In other words, it was theoretical work in neural computation that influenced the technology, not the other way around. While much has been made of the differences between synchronous serial computation and asynchronous neural computation, the really essential point of similarity is the nonlinearity of both neural processing and the switching elements Shannon explored, which laid the foundation for McCulloch and Pitts' application of computational theory to the brain.

Posted by Greg Ransom