Written by David Tebbutt, MicroScope 12/82 item 02 - scanned
Have you ever wondered where artificial intelligence research might be leading us? We seem determined to create intelligent computers which can learn from their environment and reprogram themselves as additional information comes in. Some people, like Edinburgh's Professor Donald Michie, believe that such computers should provide a 'human window' which enables the user to interrogate and follow the computer's line of thought. This, of course, means that such systems will be hobbled to the speed of the human mind, rather like the early motor cars that were restricted to the walking speed of the man with the red flag.
The problem with this approach, and its benefit I suppose, is that computers will never be able to out-perform their creators and users when it comes to certain decision-making activities such as whether to press the red button (or whatever it is they need to do to start World War III). Of course, if the other side is using its computers to make the decision and 'our side' isn't then we could all be dead before the computer has finished explaining its reasoning. So, do we let computers rip or do we hobble them so that we can continue to understand what they're up to?
Another thing to consider is robotics. Lots of robots are in use all over the world. Most of them are firmly anchored to the ground and they perform repetitive, preprogrammed tasks like building cars, computers and robots. Now that's an interesting thing: robots building robots. It's a bit like humans 'making' humans. What would happen, I wonder, if we gave these robots the sort of unfettered artificial intelligence mentioned earlier? They might redesign the robots they're making so that, using the same raw materials provided, they make mobile intelligent robots. These could be programmed to look for electric sockets and plug themselves in before they run out of power. 'Shakey' - a mobile robot built at the Stanford Research Institute - was taught to seek out dark corners of rooms where it would always find a suitable socket.
Suppose we decide to go ahead and pre-empt the robot's desire to make a mobile version of itself. Let's say that we build a super-intelligent robot with arms and legs and with the ability to derive power from naturally occurring resources. I, for one, would be dead worried that the robots might decide that humans could be processed in some way to make their fuel. So, if this fear is shared, development will come to a halt. Unless, that is, we could find some way of trying the things out at no risk to ourselves.
One answer to the problem would be to put the robots on a suitable planet with an abundant variety of natural resources, where we could keep an occasional eye on them from the safety of a space ship. Since they would be super-intelligent, they'd soon figure out how to obtain power and raw materials; all they'd need would be the motivation to do something rather than just 'die'. This is where the ROMs come in. Each of the robots landed on the planet would have a ROM giving basic instructions like 'you must make more, and ever-improved, robots, roughly in your own image'.
The ROM would also tell the robots to program these instructions into all robots made this way.
Of course, this is all a bit ridiculous. It is hard to imagine it happening and, anyway, we might all lose interest in them if they took too long to get on with the job, so to speak. If this were the case then we could probably safely let such robots loose here on earth without too much of a problem. But what if, somehow, they do manage to get on with things? Since their processors would be capable of much faster thought than their human creators, presumably they'd do things a whole lot quicker than we could. Let's say that they could perform a week's thinking in just one of our seconds. This means that one of our hours would span nearly seventy of their years. In a week we would see six or seven of their 'generations', presuming that each robot accumulates wisdom from its environment for a while before going on to produce its own quota of robots.
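Taking the column's premise literally (one robot-week of thought per human second), the time-scaling arithmetic can be sketched as follows; the 6.5-generation average used at the end is my own illustrative assumption, not a figure from the original:

```python
# Sketch of the time-dilation arithmetic, assuming one robot-week
# of "thinking" elapses per human second.

SECONDS_PER_HOUR = 3600
WEEKS_PER_YEAR = 52
HOURS_PER_WEEK = 24 * 7

# Robot-years of thought elapsed during one human hour.
robot_years_per_human_hour = SECONDS_PER_HOUR / WEEKS_PER_YEAR

# Robot-years of thought elapsed during one human week.
robot_years_per_human_week = robot_years_per_human_hour * HOURS_PER_WEEK

# If six or seven generations pass in one human week, each generation
# would last roughly this many robot-years (taking 6.5 as the average).
robot_years_per_generation = robot_years_per_human_week / 6.5

print(f"One human hour  = {robot_years_per_human_hour:.0f} robot years")
print(f"One human week  = {robot_years_per_human_week:.0f} robot years")
print(f"Each generation = {robot_years_per_generation:.0f} robot years")
```

On these assumptions, one human hour covers roughly seventy robot-years, and a human week covers well over ten thousand.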
We could keep a close eye on things by sending out space probes to watch the robots' activities. We could occasionally land on the planet's surface, pick up a couple of specimens and bring them back for study. We would have to watch out for them getting smart enough to build their own space vehicles, just in case they started coming our way. I suppose that, given sufficient time, they might decide to build their own superfast, super-intelligent mobile robots based on a different technology. Like us, they might be afraid that the new 'beings' would turn on their creators. They might even consider sending their creations to a suitable planet where they, in turn, could be studied at minimal risk.
And let's say that they weren't too keen on the silicon and metal approach we adopted. In their wisdom, they may have figured that a sort of self-renewing cellular approach was better. They might provide a control system, a central processor and a memory driven by very low power electrical impulses. And let's suppose that they chose somewhere like Earth for their experiment. Finally, let's say that they wanted to test different versions of their chosen robot, each slightly different but all programmed to improve the quality of successive generations.
Maybe something like this has already happened. It would explain so many mysteries. Flying saucers may have come from the experimenters, and the Easter Island statues and the Peruvian desert markings could have been put there to see what we (colour-coded robots?) would make of them.
It might even explain why we are so highly motivated to pursue artificial intelligence research and development.