Welcome, dear readers, to the first iteration of Engadget's newest series, Hitting the Books. With fewer than one in five Americans reading just for fun these days, we've done the hard work for you by scouring the internet for the most interesting, thought-provoking books on science and technology we can find and delivering an easily digestible nugget of their stories.
Will Computers Revolt?
Charles J. Simon
Consciousness and free will have long been the hallmarks of human intelligence, what sets us apart from "lower" forms of life. But as artificial intelligences grow increasingly advanced, the gap between biological and mechanical minds continues to shrink. The fact that machines can't currently express free will, Charles J. Simon argues in his book "Will Computers Revolt? Preparing for the future of artificial intelligence," doesn't mean they won't be able to in the near future. And what even constitutes free will in the first place? The excerpt below explores his theories of free will and consciousness as they relate to both the human mind and AI/machine-learning systems.
Charles J. Simon, BSEE, MSCS, is a nationally recognized computer software/hardware expert. Mr. Simon's experience includes pioneering work in AI, neuroscience, and CAD. His technical experience includes the creation of two unique artificial intelligence systems along with software for EEGs and other neurological test equipment.
Will future computers be conscious entities? Will they have free will? Or will they just be simulating these capabilities?
A popular argument against computers being able to think in a way analogous to humans goes like this.
We humans are conscious beings with free will, and these are essential to our thinking. Computers are mechanisms that run the same way every time and therefore cannot have free will. Computers are made of materials that cannot possess the essence of consciousness. Therefore, without free will and consciousness, computers will never be able to think.
With the discussion of free will and consciousness, we have reached the pinnacle of human mental processes and also a point of philosophical discussion.
To me, the question devolves into one of whether or not you accept modern physics as describing reality. While there are certainly areas of physics which are yet to be discovered, the essential point is that any physical system can be represented by information which can be replicated in computers. If what we observe in the electrochemistry of the brain includes consciousness and free will, then there is no reason a computer cannot equally possess these capabilities as well. If we believe there is some unobserved essential "magic", then we have a choice. Either the magic will eventually be observed, defined as a part of physics and included in computers OR the magic is beyond the scope of observation, meaning it is outside the scope of any future conceivable physics.
My contention is that human thought is the sum total of a multiplicity of general mental functions working in parallel on an unimaginably large scale. These functions were presented in the last few chapters, and each function can be described and understood. Some of these functions are already working in computers at levels higher than in humans, and those that are not could conceivably be in the near future.
I contend that all the functions presented so far are necessary to AGI and are also necessary to any appearance of consciousness. Without the "sensation" of the world and the modeling and imagination necessary to comprehend it, no AI system could ever put its chess game, mathematical proof or car-driving skill in context—the context being a real-world environment.
I further contend that the functions presented so far are sufficient for AGI to have the appearance of consciousness. With these capabilities, a robot or computer with appropriate peripherals could sense its surroundings, remember previous situations, and learn how various situations affect it. It could simulate several possible actions at any given time and select and perform the one it determined was best.
We already have computers which can speak and understand speech to some extent. Coupled with a robotic "body" with vision, such a system could learn about objects in its environment and appear to reason. It would appear to make reasoned decisions and be able to explain its rationale.
Given its simulation capabilities, your mind is able to generate several choices, play each one out somewhat through simulation, then select the one which results in the most beneficial outcome.
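This simulate-and-select loop can be sketched in a few lines of code. The `simulate` scoring function below is a hypothetical stand-in for the mind's ability to play a choice out and estimate how beneficial its outcome would be; the route example is purely illustrative.

```python
def choose_action(actions, simulate):
    """Pick the action whose simulated outcome scores best.

    `simulate` is a hypothetical stand-in for playing a choice out
    in imagination and estimating how beneficial the outcome is.
    """
    best_action, best_score = None, float("-inf")
    for action in actions:          # examine possibilities one at a time
        score = simulate(action)    # "play out" this choice
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy usage: judge each route by estimated travel time (shorter is better).
routes = {"highway": 25, "back roads": 40, "toll road": 20}
best = choose_action(routes, simulate=lambda r: -routes[r])
```

Note that, as the book observes later, the possibilities are examined one at a time rather than simultaneously.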
If you were to try to prove to yourself that you have free will, you might place yourself repeatedly in situations as close to identical as possible. Then you sometimes make one choice and subsequently a different one. Unfortunately, you can never place yourself in a truly identical situation because, after the first time, your experience, your choice, and the outcome all become part of your mind, so the state of your mind is different on the next try. As a result, we have no method of measuring whether or not free will actually exists, because we can never set up truly identical situations to determine whether we could make different choices.
Here's a demonstration. Raise your right index finger.
Did you raise it? If you did, was it simply to play along with my demonstration in hopes of learning something? Or if you didn't raise it, was it because you wanted to assert your "free will"? I contend that whatever decision you made, it depended completely on your current state of mind, based on your experience with similar demonstrations.
OK, now raise your left index finger. Did you do the opposite of what you did in the previous paragraph? Did you assert your free will? Either way, did you think about raising your finger? I bet you did. I bet that when you read the text, you couldn't avoid thinking about it.
There is no way to prove or disprove free will in either instance. Your first decision is based on your previous experience and your second decision is based on the same experience plus your experience with the first decision.
When computers become learning systems, they will likewise incorporate the experience of a past decision into the process of making a present decision. So a future computer, left to its normal operation, may make a different choice when re-encountering an identical situation, just as you might.
On the other hand, with computers we can set up situations which are truly identical. Computers can be restarted to the specific point of their previous backup so their previous experience need not become a part of their present operation. Restoring a backup can completely erase the experience of the first decision.
So if you were to make a backup of the computer's entire state, have the computer make a decision, reload it from the backup again, put it in the identical situation and let it make the decision again, it would always make the same decision. If it did not, we would consider this a malfunction. One of the convenient things about computer situations is that we can control all the inputs (including access to real-time clocks) to make the situation absolutely identical.
There are theories of human free will and consciousness which rely on complex mechanisms or quantum mechanics and these may eventually be shown to be relevant. The simpler theory, as uncomfortable as it seems, is that the human's free will is just like the learning computer's. It is simply that we can never set up identical situations for ourselves and so we cannot test if the theory is correct. We each only make the choice for the best expected outcome for each situation we encounter.
We see from examining the brain and the operation of neurons that there is nothing observable in the brain which makes it appear detached from the laws of physics. The laws of physics are deterministic until you reach the level of (very small) quantum particles. You might argue that the human brain would make a different choice in a truly identical situation because its computing mechanism is governed by unpredictable quantum mechanics and chaos theory: our synapses may send a few more, or a few fewer, molecules of neurotransmitter, depending on quantum effects, and we might therefore reach a different conclusion in an identical situation. For me, this is an unsatisfying argument because it only replaces the concept of deterministic "free will" with the free will of a roulette wheel. It is disquieting enough to believe that your mind is a deterministic mechanism without also saying it's unreliable (and then going further by claiming the mind's superiority because of its unreliability).
Google search always returns its best search results (ignoring sponsorship). You might disagree with the algorithmic definition of "best," but Google's computers can only do what they are directed to do. However, if users never click on the top search entry, it will eventually be de-rated and appear further down the list. So, for a given search request, you could theoretically get a different result every time the search was run as Google's servers attempt to please you by producing the "best" result. By putting a certain result at the top, Google's computers incorporate the qualitative experience of that result, whether or not it pleased users, into ranking decisions for future searches.
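The feedback loop described here can be sketched as a toy re-ranker. This is emphatically not Google's algorithm; the scoring rule, threshold, and page names are invented for illustration — the point is only that an unclicked top result gets pushed down on the next pass.

```python
def rerank(results, clicks, demote_by=3):
    """Toy click-feedback ranking.

    `results` is the current ordered list of result IDs; `clicks` maps
    each ID to how many clicks it drew. A top result that drew no clicks
    is penalized and slides down the list. (Illustrative rule only.)
    """
    scored = []
    for pos, rid in enumerate(results):
        penalty = demote_by if (pos == 0 and clicks.get(rid, 0) == 0) else 0
        scored.append((pos + penalty, rid))      # score = position + penalty
    return [rid for _, rid in sorted(scored)]    # lower score ranks higher

ranking = ["page-A", "page-B", "page-C", "page-D"]
# Users never click page-A when it is shown on top:
new_ranking = rerank(ranking, clicks={"page-A": 0, "page-B": 12, "page-C": 5})
```

Run again after the demotion, the "identical" query no longer produces an identical result, because the experience of the last ranking is now part of the system's state.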
Are Google's search servers aware that they have free will? Of course not. Do they actually have free will in the same context that humans do?
Think of it this way. Are your decisions different because you believe you have free will? Absolutely. One of your mind's innate objectives seems to be to assert its own individuality. Google's search computers don't consider the possibility of presenting different results in order simply to demonstrate their free will (as you might have with your index fingers above). It seems obvious that one of the inputs to any decision you make is your belief in your ability to make a decision... your belief that you have free will.
Because your belief in free will is another input into the decision process, you would likely make different decisions if you didn't believe you had free will. If you really don't believe you have free will, why would you ponder making any decision at all? You'd be a purely reactive entity.
The reason to ponder a decision (the reason to believe in free will) is the probability of making a better decision by examining the ramifications of different possibilities. Your brain doesn't seem to have the ability to simulate different possibilities simultaneously, so it examines them one at a time. The process of examining different possibilities leads you to the belief in free will and the belief leads you to examine more possibilities.
But belief requires consciousness...
Will Computers Revolt? Preparing for the future of artificial intelligence by Charles J Simon. Copyright 2018 by Charles J Simon. Published by Future AI. Used by permission of the publisher. All rights reserved.