Rosenblatt’s perceptron began to garner quite a bit of attention, and one person in particular took notice. Marvin Minsky, often thought of as one of the fathers of AI, sensed that something was off with Rosenblatt’s perceptron. Minsky is quoted as saying:
However, I started to worry about what such a machine could not do. For example, it could tell ‘E’s from ‘F’s, and ‘5’s from ‘6’s—things like that. But when there were disturbing stimuli near these figures that weren’t correlated with them the recognition was destroyed.
Along with the double-PhD-wielding Seymour Papert, Minsky wrote a book entitled Perceptrons … They showed that the perceptron was incapable of learning the simple exclusive-or (XOR) function. Worse, they proved that it was theoretically impossible for it to learn such a function, no matter how long you let it train. This isn’t surprising to us now, as the model implied by the perceptron is linear and the XOR function is not, but at the time it was enough to kill off research on neural nets and usher in the first AI winter.
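The failure is easy to see empirically. Below is a minimal sketch (my own illustration, not from Minsky and Papert’s proof) of a classic single-layer perceptron with a step activation and the Rosenblatt update rule, trained on AND versus XOR. AND is linearly separable, so training drives the error to zero; XOR is not, so some inputs stay misclassified no matter how long we train:

```python
# Minimal Rosenblatt-style perceptron: step activation, update rule
# w += lr * (target - prediction) * x. Function names are my own.
import itertools

def train_perceptron(data, epochs=100, lr=0.1):
    """Train a single-layer perceptron; return misclassifications after training."""
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    # count how many inputs are still misclassified
    return sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) != t
        for x, t in data
    )

inputs = list(itertools.product([0, 1], repeat=2))
AND = [(x, int(x[0] and x[1])) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

print(train_perceptron(AND))  # 0: converges on a separable problem
print(train_perceptron(XOR))  # nonzero: no line separates XOR's classes
```

No choice of weights and bias can make a single line separate (0,1) and (1,0) from (0,0) and (1,1), which is exactly Minsky and Papert’s point.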
This is also why it is impossible to solve the binding problem/hard problem of consciousness, in the sense of writing down on paper what you are. The being function, f(b), is not moving through a sequential landscape where it can stumble upon sequential knowledge that maps to its own existence.
Lines indicate the binding of eternal events in special relativity’s fabric.
These do not compose a discrete observable.
The eternal events form a continuous function that furthermore contains a hardcoded uncertainty, by virtue of being composed of (belief + amplitude distribution) rather than discrete observables.
It is an uphill climb in which Mind can gain more knowledge of its workings but can never map itself onto a complete description from an external God’s-eye view.