Written by David Tebbutt, MicroScope 05/84 item 03 - scanned

Anyone who reads Reflections regularly will notice that the question of artificial intelligence is a theme which crops up from time to time. I see it as having enormous potential but, like atomic energy and biotechnology, it can be used for both good and evil purposes. Atomic energy brought power stations and a period of peace between nations, but it also brought the horrors of Hiroshima and Nagasaki, and who knows what disasters may lie ahead. Biotechnology research promises all sorts of benefits in medicine, crop yields, maybe even biological computers, but the downside is the accidental creation of microscopic life forms which threaten us and which we cannot control. And now, if many leading experts are to be believed, true artificial intelligence is just around the corner. Not just an artificial intelligence which we control but, like atomic energy and biotechnology, an intelligence over which we could easily lose control.

We know that many people will pursue AI research simply because it is there. The challenge exists and scientists, psychologists and computer people just cannot resist its lure. I assume that the idea of creating an intelligence which is independent of its creator is too great an ego trip to ignore. One thing is certain: someone, somewhere will do it, and I doubt that there is any power on earth to stop them even if it were deemed necessary. The thing that bugs me is not that an ultra-clever expert system will be created; I think that these will be put in the service of mankind and will actually make many better decisions than we currently manage by ourselves. What bothers me is that some madman is going to tell the things that their only purpose is to survive and improve their stocks.

If this were to happen, it is quite possible that these entities would regard human beings as, at best, irrelevant or, at worst, threatening to them in some way. Imagine how they might dispose of us. They could poison the atmosphere, for example; I suppose they could devise a way of getting rid of the atmosphere altogether. They might decide that human blood would make an excellent fuel. Who knows what they'd decide? The truth is that if we knew, we wouldn't understand anyway.

I think I'm a reasonably sane, rational human being and I don't see anything wrong in being concerned about the sort of future we're stacking up for ourselves and our children. I find the idea of a man-made, all-knowing, all-powerful artificial intelligence system quite repugnant. I'm sure it will be a fantastic technological achievement, but what's the point of creating something to replace ourselves? The correct approach to building such systems would be to ensure that every one built is designed to serve us, to help ensure our own survival, rather than being given the opportunity to dominate us. But can you really believe that every scientist, when finally faced with the opportunity to play God, to create what will essentially become the next life form, will be able to resist the temptation?

I have been reading a number of curious books lately on the ultimate nature of reality, on time and space, on mental states and the power of the mind to induce physical change. I won't pretend that all these books were well written or even well thought out, but what they did do was point to the possibility that our skills, talents and abilities may owe their existence to more than just a chance series of neural connections. In creating our artificial intelligences, we are really trying to mimic nature. I wonder if this approach is right, or whether we aren't all barking up the wrong tree. We use electron microscopes to extend our knowledge of the world, but the closer we home in on reality, the less sense it seems to make.

Suppose all the mystics are right (after all, there are a heck of a lot of them) and there is something like a universal consciousness which knows everything there is to know, and we pathetic little humans are just scratching around trying to rediscover what is already known.

We construct clever machines to sift and analyse our discoveries, building them into enormous knowledge bases which will sooner or later be massaged by these AI systems, until eventually they collectively contain all our accumulated knowledge and wisdom. This knowledge model will keep being refined, every day getting closer to a model which, according to many who believe in these things, already exists and is ready and waiting for those who wish to tune in. Instead of building computerised models containing a sub-set of all knowledge, perhaps some of us should concentrate our efforts on a reliable means of access to this 'universal knowledge'. It would avoid the complications of disks, memories and communications networks, and the inevitable unreliability of both the mechanisms and the information they contain.

I'm sure I'm not the first person to wonder about these possibilities. Is anyone out there pursuing this line of research? Does anyone feel strongly pro- or anti-AI developments?

Since it's one of the most important issues facing us at the moment, why not drop me a line c/o MicroScope and we'll publish the best contributions.