Written by David Tebbutt, MacUser, November 1988 (date uncertain)
The world seems to be getting the hots for hypertext right now. You may remember I mentioned it in this column a few weeks ago. At the end of June Britain and America even ran overlapping conferences on the subject. Since Ted Nelson was the keynote speaker at each event, it's just as well they didn't start on the same day.
The UK conference, held at the University of York, involved almost 40 presentations, all jammed into two days. Of course, the arguments, discussions and demonstrations ran well into the night. Sessions were run by a healthy mix of academic and commercial researchers. All dealt with the issues which underpin good hypertext product design and many brought their experience from other fields to bear on the subject. The presentations covered everything from the design of hypertext systems to the psychology of navigation.
Given that Nelson's ideas are fairly clear, it's amazing how many variations on his original theme now exist. The most obvious departure from his dream of a unified global 'docuverse' is reflected by the number of developers who want to store data in their own local formats with their own internal structures. Each has found a different way of storing and accessing information without any thought for the wider requirements of a global storage system which everyone can access. Nelson refers to these closed systems as 'the Balkanisation of information'.
Perhaps developers are scared to buy into the Nelson dream just now. After all, it has been 29 years coming. But the fact is that he's now publishing delivery timetables for Xanadu (his hypertext server system). Here they are: mid-1989, full specifications to developers; first quarter 1990, single-user and LAN versions available (then third-party application programs, he hopes); third quarter 1990, multiple-server version; 1991, addition of the royalty feature; 1992, public-access open hypertext publishing begins.
Of course, you could say that Nelson just wants to impose his own storage method on everyone else. This may be true, but the difference is that he has always striven towards an open system rather than the closed, 'Balkanised' type already mentioned. Another area in which his view differs from many of today's implementations is the way information is presented on the screen. Instead of chopping it into artificial card-like chunks, as HyperCard does, he favours the scrolling model, in which documents scroll up the screen. Further information from other, possibly remote, documents can be fetched on the fly and incorporated into the document on display; Guide, for example, takes this approach.
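To make the contrast concrete, here is a minimal sketch in Python of a scrolling-model document that pulls linked fragments inline as it is displayed. The fragment store, the {{name}} marker syntax and all the identifiers are invented for illustration; this is not how Guide or Xanadu actually work.

    # A toy illustration only: the fragment store, link syntax and
    # function names are assumptions, not Guide's or Xanadu's mechanisms.

    FRAGMENTS = {
        # stand-ins for other, possibly remote, documents
        "grapes": "Chardonnay and Pinot Noir dominate the region.",
        "storage": "Keep bottles cool, dark and on their side.",
    }

    def resolve(text):
        """Expand {{name}} markers in place, so the reader never leaves the page."""
        for name, body in FRAGMENTS.items():
            text = text.replace("{{" + name + "}}", body)
        return text

    page = "On grapes: {{grapes}} On cellaring: {{storage}}"
    print(resolve(page))
    # A card system would instead put each fragment on a separate card,
    # reached by a 'goto' button, forcing the reader to navigate away.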
The card-like approach creates a whole set of problems which do not exist in other systems. A favourite, if hackneyed, accusation is that with card systems you can easily become 'lost in hyperspace'. The scrolling model brings everything to your screen; you don't have to root around other screens through cascades of 'goto' buttons, becoming lost and having to go 'home' to reorientate yourself.
Hypertext, like database theory or programming language design, has become a respectable area of research. HyperCard, for example, was much in evidence despite its imperfections, and from it we have learned a lot about where people click their mouse buttons.
Yes, grown men and women do make studies of exactly where on the screen users click their mice. There's an amazing cluster of near misses just below clickable objects. The reason becomes apparent when you consider that the mouse pointer is a diagonal, upward-pointing arrow whose hot spot is at its very tip: the body of the arrow can overlap an object while the tip rests just below it. To improve the usability of your HyperCard applications significantly, simply extend the active area of your objects downwards by a fraction of an inch.
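Expressed as a sketch, the fix looks something like this. The function, the rectangle format and the six-pixel margin are illustrative assumptions, not HyperCard's actual hit-testing:

    # A sketch of the near-miss fix: names and the size of the extension
    # are assumptions, not HyperCard's own behaviour.

    BOTTOM_SLOP = 6  # extra active pixels below the object

    def hit(rect, click):
        """rect = (left, top, right, bottom); click = (x, y)."""
        left, top, right, bottom = rect
        x, y = click
        # Accept clicks that land a little below the object, where the
        # arrow pointer's hot spot tends to fall on a near miss.
        return left <= x <= right and top <= y <= bottom + BOTTOM_SLOP

    button = (10, 10, 90, 30)
    print(hit(button, (50, 34)))  # True: a click just below still registers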
One interesting study compared a HyperCard stack with paper and wordprocessor equivalents. Its aim was to see how well users could estimate the size of a body of information, understand its contents and find particular information within it. The time spent in the contents/index portion, as a proportion of the total time, was also recorded. The paper document comprised 13 pages and the HyperCard stack contained 53 cards. The users were first given time to familiarise themselves with the document and were then tested.
On estimating document size, the paper users were spot on, not least because the pages were numbered. Users tended to underestimate the wordprocessor document's size by about 10% and to overestimate the HyperCard stack's by 90%.
While HyperCard users finished the tasks in the quickest time, they also ranked second lowest for understanding. To assess comprehension, 12 questions were asked. It turns out that the HyperCard users were four times more likely than the paper users to abandon their quest for answers. Finally, HyperCard users spent five times longer in the index/contents area than users of the paper document.
It's hard to say whether the tests were entirely fair. A larger document and a larger stack might have tilted the scales in favour of HyperCard, or a more interesting subject might have increased users' determination to find answers. (The subject of this trial was wine.) On the face of it, though, paper appears to have HyperCard well and truly beaten.
It's good to know that serious research is taking place around the world on how to improve the usability of hypertext systems. It is vital that some good practical methodologies emerge if we are ever to replace tree-gobbling paper with an effective electronic information delivery system.