Written by David Tebbutt, MicroScope 02/86 - scanned

A lot of people have said to me "Why all the fuss about the explosion of the space shuttle Challenger?" I can think of three reasons immediately.

One, the cynical one, is that since the disaster was broadcast live on television, a swift and visible reaction was necessary.

The second reason is that the explosion of the Challenger was symbolic of the destruction of our hopes and dreams for the future, for to the lay person space travel is one of the clearest manifestations of the advancement of science. The third reason is natural regret at the loss of life, particularly that of the schoolteacher Christa McAuliffe.

I'll come back to Challenger in a minute, but now I'd like to mention something a little nearer home: software bugs. We have seen one or two major software companies squirming recently as bugs have been found in their latest, most complex offerings. Indeed, some have replaced all faulty disks at huge expense.

Now I can understand a software publisher wanting to get a product out the door as quickly as possible but I think these companies long ago realised that to do this was to risk crippling future expense. So why is it still happening?

The answer must lie in the product's complexity and the inability of any software testing to match the sorts of things only users can dream up.

Interestingly, 1986 will see more efforts to make software publishers responsible for their costly bugs. If bills are passed, this will have a profound effect on the software industry. Many players may simply give up rather than run the risk, however remote, of being put out of business by potential bugs in their programs.

The fact is that programming, even at a modest level, is bug-prone, so as programs increase in complexity the danger increases. Multiply these dangers by the extra perils posed by multi-tasking and multi-user activities, and you may be wondering whether there will be any authors and publishers left.

Let's go back to Challenger now and see what went on at mission control. Because so much information needs to be fed into mission control (five computers reporting up to 25 times per second), it is impossible for everything to be monitored by humans. The result is that most of the data is saved on tape. It was by examining this tape the following weekend that NASA discovered the four percent drop in pressure in the booster rocket which tended to confirm the 'blowout' theory of why things went wrong.

Now someone, somewhere, had obviously decided that a four percent fluctuation in pressure on the booster casing was within normal operational limits. This meant that no warnings or alarms were given, which in turn meant that the first mission control knew of the tragedy was when the booster rockets separated prematurely and the fuel tank exploded.
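The decision described above amounts to a simple threshold test: a fluctuation inside "normal operational limits" raises no alarm at all. A minimal sketch of that kind of check follows; the figures, names and logic are my own illustrative assumptions, not NASA's actual telemetry code.

```python
# Toy sketch of threshold-based alarm logic, as described above.
# The 4% limit and the pressure values are illustrative assumptions,
# not real shuttle telemetry parameters.

NOMINAL_PRESSURE = 100.0   # nominal booster casing pressure, arbitrary units
ALARM_THRESHOLD = 0.04     # drops of 4% or less treated as "within limits"

def check_reading(pressure):
    """Return True if this reading should raise an alarm."""
    drop = (NOMINAL_PRESSURE - pressure) / NOMINAL_PRESSURE
    return drop > ALARM_THRESHOLD

# A reading of 96.0 is exactly a four percent drop, so it sits on the
# limit and raises no alarm -- the anomaly only shows up later, when
# the recorded data is examined.
readings = [100.0, 99.5, 96.0]
alarms = [check_reading(p) for p in readings]
```

The point the sketch makes is that whoever sets the threshold has, in effect, decided in advance which failures the system is allowed to notice.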

A NASA cameraman was posted to the north of Cape Canaveral to record the flight and his camera was the only one to see the fault. Now, of course, people are wondering why the boosters and the fuel tank didn't carry their own lightweight video cameras so that mission control could keep an eye on the outside of the assembly, rather in the style of the monitoring cameras used in so many offices, warehouses and factories today. Such a camera would have given a 14-second warning, perhaps enough time to jettison all the external paraphernalia and give the shuttle a chance to glide to earth.

I know, it's easy to be wise after the event. But that leads me to the point of this Reflections: if we can't produce bug-free software and we can't think of everything in advance, what the heck are we doing pursuing SDI, or Star Wars, technology?

It's lovely to think of all that American Defence money sloshing around our universities and hi-tech companies, even though the question of intellectual property rights still hasn't been sorted out. All the bits and pieces of technology required are very attractive - optical systems to scan the earth's aerial activities, light-driven computers. You name it, Star Wars needs it.

But right at the heart of the whole exercise is this reliance on computers and, more seriously, the software that drives them. Anyone who has ever been involved with developing software, especially that written by more than one person, will tell you that only a fool will give a guarantee of absolutely no bugs.

So we have a number of dangers. One is that bug-free Star Wars software is almost certainly unachievable. Another is that those responsible for SDI won't think of everything in advance. A third is that the hardware may go wrong anyway and give the computer duff information. And finally, as with Challenger, the first anyone in control will know is when everything has gone horribly, irreversibly wrong.