Written by David Tebbutt, MicroScope 04/84
Did you see that dreadful television program Voices, in which our very own Donald Michie faced MIT's Joseph Weizenbaum? Broadly speaking, Michie was there to defend artificial intelligence and Weizenbaum to attack it. No doubt one could have a week-long debate on the precision of that description, but I've only got a few hundred words. Our participants had several thousand words each with which to state their respective cases, but neither of them managed to use them very effectively.
I met Donald Michie recently and found him to be a courteous and charming man. This contrasted sharply with the Professor Michie of the television program. He came across as rather supercilious and more interested in scoring petty points than in moving the great debate forward. He spent a lot of time interrupting Weizenbaum when the poor fellow was clearly having great problems stringing his words together. Weizenbaum looked defeated throughout and Michie looked immensely smug.
To summarise Weizenbaum's case, and this is drawn from his writings as well as the television debate, he questions the wisdom of pursuing technological development for its own sake. He feels that we are creating solutions in search of problems and that, without considering the consequences of our actions, we rely on the marketplace as much as anything else to dictate which developments are adopted or dropped. He sees that computer systems are already growing beyond human comprehension and that this trend will accelerate rather than slow. He is deeply concerned that human values form no part of the deliberations of artificially intelligent machines. As an illustration of how computers can mislead, Weizenbaum has cited the illicit bombing raids on Cambodia during the Vietnam war. The American air force, it appears, bombed Cambodia; when the map coordinates of these targets were input to the local computer, it would spot that they were illegitimate and replace them with legitimate Vietnamese ones before sending battle details to the Pentagon. The Pentagon computer then produced mission summaries which 'proved' that no illicit bombing raids had taken place. Because a computer produced these reports, everyone assumed the information was accurate.
Now we are building knowledge bases around the world and, sooner or later, computers of all shapes and sizes will be accessing this information. The data stored may be correct, it may be accidentally wrong, it may be deliberately wrong or, more likely, it may be wrong by omission. I can see a situation arising where computers rummage around these knowledge bases and recommend actions and decisions based on the available computerised information. The knowledge bases could become a sort of electronic equivalent of a vociferous minority interest group. Weizenbaum is rightly concerned about the lack of balance in such knowledge bases: scientific material which ends up denuded of much of its original meaning. Many decisions which the computer assists affect human beings, yet the essentially human aspects of knowledge are unlikely to be held in the machines.
Weizenbaum reminds us of the consequences of decisions made in the past without due reference to the human dimension: schooling, medicine and urban planning in both America and Britain have failed to live up to expectations, despite all the 'scientific' information to hand when these plans were first approved.
All is not doom and despondency. Some things have been tackled with success - airline reservation systems, space flight and astronomy are three examples. None of these, you will notice, greatly affects the way people actually live. Weizenbaum advocates that we look carefully at who benefits from, and who will be the victims of, the course we are taking. He suggests that we consider carefully the sort of world we are leaving our children. He seems to recognise that we are trapped into going forward simply because our present generation of computers has been given tasks to perform which are quite beyond its design. In particular, NORAD, the American defence system, has mistakenly put America on Red Alert at least once.
Weizenbaum's message is almost one of despair. He sees systems already so complicated that no-one can grasp them. He sees that they cannot be dismantled without leaving something else in their place. This will lead to more advanced systems which may have consequences we cannot foresee, and which may be to humanity's disadvantage. At least if these advanced systems incorporate Donald Michie's suggestion that they should be able to explain their line of reasoning (assuming they are allowed time), then there is a chance that they will be an improvement on present computers.
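To make that suggestion concrete in modern programming terms - and this is my own toy illustration, not anything shown in the debate - here is a minimal sketch of a forward-chaining rule system which records each rule it fires, so that it can print its line of reasoning afterwards. The rules and facts are invented for the example.

    # A minimal sketch of an expert system that can explain its line of
    # reasoning by recording each rule it applies. Rules and facts are
    # invented for illustration.
    RULES = [
        # (name, premises, conclusion)
        ("R1", {"fever", "rash"}, "measles suspected"),
        ("R2", {"measles suspected", "unvaccinated"}, "isolate patient"),
    ]

    def infer(facts):
        facts = set(facts)
        trace = []  # the recorded 'line of reasoning'
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in RULES:
                # fire a rule when all its premises are known facts
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{name}: {sorted(premises)} -> {conclusion}")
                    changed = True
        return facts, trace

    facts, trace = infer({"fever", "rash", "unvaccinated"})
    for step in trace:  # the machine 'explains' itself
        print(step)

Even a trace as crude as this would be an advance on a bare verdict: a human could at least inspect the chain and ask whether each step was justified.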
Whichever way we turn, there seems to be a cleft stick waiting for us to get stuck in. Some will say that an expert system is more likely to be right than a human being because it can process a larger volume of information more swiftly and accurately. Yet the fear is that, underneath it all, the computers have no sense of human values.
I think Weizenbaum has a point. We are being swept along by what he calls 'technological inevitability'. We should be aware of the dangers of developing expert systems and the like that do not have 'human values' as part of their make-up. The bottom line for businesses and governments is all too often 'profit'. Perhaps it's time to add 'quality of life' as an equally important part of that line.