Discussion Document for the Allerton Institute
The purpose of this discussion document is to encourage you to think about the issues of user support, to trigger questions in your mind that you would like to raise at the session, and to plead with you to bring the findings, results, experiences, anecdotes, methods and agendas of your own research, that of your colleagues, or work you are aware of in your own field of expertise, and to share them with the interdisciplinary community. The document lays out some issues that I think are interesting and important and merit discussion. I don't necessarily have the answer, or any answers, to them. If you disagree with the issues raised, or think that I have missed the point or completely overlooked a crucial issue, then the document has served its purpose. Bring your objections to the session and we can learn from a multiplicity of perspectives. As it is a discussion document, the style is fairly informal and I haven't bothered to shovel in the references to the work which has inspired this thinking.
One of the strengths I see in work on Digital Libraries is the involvement of researchers from a wide range of academic disciplines. However, it can also be a problem, in that we have to establish a lingua franca to understand each other's aims and needs. It may help in reading this document to know that I am a Computer Scientist, interested in building systems that can more actively support users and facilitate collaboration. I am also interested in studying the use of prototypes to gain insights into how to build a better system. This position paper builds on the issues I summarised in last year's position paper. You can read that, our other work, and loads of screendumps of our prototype system at our web site. This paper addresses three themes: setting the requirements of a system to support users, issues in designing systems to support those needs, and the potential of multidisciplinary research to tackle those issues.
This section considers the kinds of user of the Digital Library and the activities they will undertake, in order to determine what sorts of support users are likely to need. It focuses mainly on helping users to learn how to use the Library.
Who are we supporting, and to do what?
* Learning how to use the DL
* How to use the DL to learn
* Using the DL in the Information Economy
There are various kinds of support that a Digital Library (DL) should provide to its users:
* Teaching users about the whole process of information searching
* Teaching users about the specifics of a particular function / program / database
* Providing an environment for learning (not necessarily the same thing as teaching)
* Offering advice
* Providing information services via an intermediary
* Helping people to help themselves and each other
* Supporting people in the way they do their work
* Providing functionality people never thought they wanted but find useful
Support activities can involve many of the above simultaneously. Some can be done automatically and some may involve various kinds of collaboration with people.
Kinds of user
Despite the dangers of putting people in boxes, and indeed of seeing the 'user' as the problem to be fixed, as an unredeemed engineer I think it can be useful in informing systems design to think about the different kinds of people who might use DL systems, the contexts in which they might do so, and the implications that has for supporting the user. So from the perspective of user support we could come up with different scenarios involving:
* The trainee Information Professional
* The person who wants to learn just enough
* The returning user who has forgotten things and remembers the old system
* The novices:
Paper library expert / computer systems novice
Paper library novice / computer systems expert
Paper library novice / computer systems novice
* The kids of the Nintendo Generation (Sherry Turkle's observations of a group with a very different world-view to the textually oriented scholarly perspective)
* Students and academics
* Workers in the Information Economy
As usual, an individual may belong to several of these categories at once, and to different ones at different times.
The calculus of learning
Users are rational but uninformed in their decision making:
* They handle complexity well by ignoring lots of stuff
* They are unaware of the accessibility of the next level of understanding
* They think the learning costs are high
* They think the initial payoffs are low
* They have many other claims on their time
* So they see no point in learning up to the next level of competence.
Challenges in education / training
* Getting people started.
Finding a minimal subset of skills and functions that will allow users to begin doing something useful as fast as possible.
* Engendering a learning culture
Helping users realise that one does not just learn to use a DL in one lump of training.
Computer systems are now processes, not products. There are always:
new versions of existing systems
new databases / kinds of data
So we need to help people to realise that there is in fact more to learn, to see the benefits of knowing more, and to understand the true costs of learning the next bit.
Supporting bite-sized incremental learning (opportunistic, small-scale lesson-ettes)
For example, Alta Vista's hints and the Microsoft Word 6 Tip of the Day (a sketch of the idea follows this list)
* Teaching transferable skills
Many of the skills you learn in one DL ought to transfer to:
learning the next system
learning the next interface to the same system
learning the next subject area
Such activities involve metalearning: learning how to learn (in this case, about computerised information systems)
* Teaching the teachers (trainee librarians)
* Teaching the really hard topics:
Choosing your databases
Evaluating data quality
Coping with truly immense quantities of data
How to ask the right questions
Managing your time
Sorting out your strategies and tactics
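To make the lesson-ette idea concrete, here is a minimal sketch of an opportunistic tip mechanism in the spirit of Tip of the Day. The tips, feature names and selection policy are all invented for illustration, not taken from any particular system: a small pool of tips is filtered by what the user has already seen and by the function they are currently using.

```python
import random

# A hypothetical pool of bite-sized lessons, each tied to a system feature.
TIPS = [
    {"feature": "search", "text": "You can restrict a search to a date range."},
    {"feature": "search", "text": "Truncate a term with * to match word variants."},
    {"feature": "results", "text": "Results can be sorted by date as well as relevance."},
]

def next_tip(current_feature, seen):
    """Pick an unseen tip, preferring ones relevant to what the user is doing."""
    unseen = [t for t in TIPS if t["text"] not in seen]
    relevant = [t for t in unseen if t["feature"] == current_feature]
    pool = relevant or unseen
    if not pool:
        return None  # the user has seen everything; say nothing
    tip = random.choice(pool)
    seen.add(tip["text"])
    return tip["text"]

seen = set()
print(next_tip("search", seen))
```

The key design point is the opportunism: the tip arrives while the user is doing something related, so the cost of the next small increment of learning is as low as possible.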
Systems Design Issues
Assuming that we have got the requirements right, how do we go about building systems that do support the user? This section looks at some of the broad issues to consider when designing particular support elements.
Collaboration is the solution, now what's the problem?
As you may know from last year's position paper, I believe that many of these problems can be effectively addressed by building systems that actively support collaboration between users. We have been looking at the sharing of a visualisation of the search process as a means of facilitating collaboration about what the user did, what they want to do and how they should go about doing it. In a more general sense we should consider how to provide mechanisms for sharing work, notations and vocabularies for sharing, and interfaces for supporting complex planning of searching (goalstack management).
To add to the problem, we need to allow for the possibility that at least one of the participants in such collaborations may not be an Information Systems professional, nor have any desire to be and so is unwilling to learn and unable to use a complex technical vocabulary dealing with information searching (even though they may be happy to use an equally complex vocabulary relating to the domain of the search). An area we are looking at is how pointing at a search activity process representation can allow the conveying of complex search heuristics without the need to understand a specialist vocabulary. A related interest is whether giving users case histories of other search successes and failures can enable users to abstract out the relevant issues, even when the cases given are not from a subject domain the user is familiar with.
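As a minimal sketch of what such a shareable process representation might look like (the structure and field names here are hypothetical, not our actual prototype), each search action can be recorded as a node with a stable identifier, so that a collaborator can 'point at' a step without needing the technical vocabulary to describe it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchStep:
    """One node in a visualisable record of the search process."""
    step_id: int
    action: str                    # e.g. "query", "refine", "browse results"
    detail: str                    # what the user actually did
    parent: Optional[int] = None   # previous step, so strategies form a tree

class SearchHistory:
    """A shareable history; collaborators refer to steps by step_id."""
    def __init__(self):
        self.steps = []

    def record(self, action, detail, parent=None):
        step = SearchStep(len(self.steps), action, detail, parent)
        self.steps.append(step)
        return step.step_id

    def point_at(self, step_id):
        """What a collaborator sees when they 'point at' a step."""
        s = self.steps[step_id]
        return f"step {s.step_id}: {s.action} -- {s.detail}"

h = SearchHistory()
q = h.record("query", "dolphins AND sonar")
h.record("refine", "added NOT military", parent=q)
print(h.point_at(1))   # -> step 1: refine -- added NOT military
```

Because steps have stable identifiers, "do what you did at step 3, but broaden the terms" becomes sayable by pointing, even when the helper and the helped share no searching vocabulary.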
There are various kinds of collaboration to tackle the range of issues raised in this document. These include:
* groups of more than two
* synchronous and asynchronous
* re-use of others' work, using technologies like Answer Garden (see the sketch after this list)
* recommending, awareness, introducing and filtering
* informal altruistic help
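The Answer Garden idea in particular lends itself to a simple sketch: before a question is forwarded to a scarce human expert, the system checks whether an earlier answer can be re-used, and any new expert answer is grown into the collection. This is a toy illustration of the principle, not the actual Answer Garden implementation:

```python
class AnswerGarden:
    """Toy store of previously given answers, grown over time."""
    def __init__(self, ask_expert):
        self.answers = {}              # question -> stored answer
        self.ask_expert = ask_expert   # the expensive human (or agent) resource

    def ask(self, question):
        key = question.lower().strip()
        if key in self.answers:
            return self.answers[key]        # free: re-use a previous answer
        answer = self.ask_expert(question)  # costly: an expert's time
        self.answers[key] = answer          # grow the garden for the next asker
        return answer

garden = AnswerGarden(ask_expert=lambda q: "ask at the reference desk")
garden.ask("How do I renew a book?")   # goes to the expert
garden.ask("how do i renew a book?")   # answered from the garden
```

A real system would of course need far fuzzier question matching than the exact-string lookup here; the point is the economics, with each expert answer amortised over many later askers.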
Surfing the wave
The surfing metaphor used in relation to the Internet is based on an analogy with channel surfing on TV (so it's a metaphor of a metaphor!). That's a shame, because real surfing is a more powerful metaphor for systems designers: how to create programs, learning activities, social structures etc. that can ride the fluid, ever-changing wave of the underlying functionality, content and interface of the rapidly evolving digital library. Anyone who has installed at least three successive versions of Netscape this year will know the problem.
From a systems development perspective we can perceive user assistance and teaching as a kind of error detection and recovery. We strive mightily to produce systems that will be easy for users to learn by themselves, that will help them to find what they want, and in the case of agents will even go off and find it for them. But what happens when one of these aspects fails? In the traditional library you go and find someone and ask for help. In the digital library, we should provide similar facilities. By providing a human-centred help-giving structure, albeit mediated computationally, we provide a mechanism for users to cope with the novel and the unexpected, and hopefully a mechanism that is self-sustaining.
One of the problems with the expert systems developed during the boom time of AI was that they were brittle; they did what they did very well when all the parameters were in their area of competence, but even a slight straying at the edges produced a catastrophic failure in performance unlike the graceful degradation of human experts. I see the current hype about agents as all too reminiscent of the 1980s hype about AI (also the use of similar analogies, technologies and increasingly extravagant claims). A system that can accommodate agents but can cope when they fail is the one I would want to buy.
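Here is a sketch of the 'cope when agents fail' idea: rather than letting an agent's brittle failure be the end of the story, the system treats the agent as one layer and degrades gracefully to human help when it cannot deliver. All the names here are hypothetical:

```python
def search_with_fallback(query, agent, help_queue):
    """Try the automated agent first; degrade gracefully to human help."""
    try:
        results = agent(query)
        if results:                 # the agent stayed inside its competence
            return results
    except Exception:
        pass                        # brittle failure at the edges: don't crash
    # Graceful degradation: hand the problem, with context, to a person.
    help_queue.append({"query": query, "note": "agent could not help"})
    return []
```

The agent is an optimisation, not a foundation: when it fails, the user still has somewhere to go, and the request carries enough context for the helper to pick it up.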
Cost: There's no such thing as a free librarian
If we are to advocate the development of systems that support collaboration, rather than automating user support or rendering it unnecessary, then we must be concerned with minimising the costs of collaboration. Experts are always scarce and expensive. Cost-related support can involve:
* Helping users to be able to quickly explain what they have already managed to do and what they now need help on
* Creating an environment where users can and will try to solve the problem for themselves before calling in help
* Ensuring that the resulting situation isn't harder to rectify than if the user had asked for help at the outset
* Helping users try to help each other before calling in the experts
* Allowing users to make use of help that was offered to other users and is now generally accessible
* Ensuring that help requests are forwarded to the most appropriate person (or agent, or CAL program; see the sketch after this list)
* Supporting teaching by stealth when ostensibly helping users
* Supporting efficient asynchronous help-giving
* Supporting help giving by more than one person (and in combination with agents)
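For instance, forwarding a help request to the most appropriate helper can be sketched as a simple matching problem between the request's topics and each helper's declared expertise and current load. The scoring and the field names are invented for illustration:

```python
def route_request(request_topics, helpers):
    """Pick the helper (person, agent or CAL program) best matching the request.

    helpers: list of dicts with 'name', 'expertise' (a set of topics) and
    'load' (number of open requests) -- all hypothetical fields.
    """
    def score(h):
        overlap = len(h["expertise"] & set(request_topics))
        return (overlap, -h["load"])   # most expertise first, then least busy
    return max(helpers, key=score)

helpers = [
    {"name": "reference desk", "expertise": {"opac", "citation"}, "load": 3},
    {"name": "CAL tutorial",   "expertise": {"opac"},             "load": 0},
]
print(route_request(["opac"], helpers)["name"])  # -> "CAL tutorial"
```

Even this toy version encodes a cost decision: a cheap automated tutorial absorbs the routine questions, keeping the scarce human expertise for the requests that really need it.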
Another kind of collaboration: interdisciplinary research
This is an interdisciplinary research area. At Lancaster we have considerable experience of the advantages and problems of working closely with just two disciplines, Computing and Sociology: how ethnography can inform systems design and evaluation, and how its results mesh with those of user studies and traditional computing formative evaluation techniques. However, in DL research there are also people from Library and Information Science and Psychology, and maybe others as well. This leads to people using a number of different paradigms, all of which are useful, but which can lead to a lot of talking at cross purposes unless one is aware of where the other participants are coming from. Here is a rough sketch of the approaches:
* software metrics, bibliometrics, sociometrics
* controlled experiments: hypotheses, disprovability
* build and test: proof by construction of prototype
Resource allocation and research agendas
With such a rich set of approaches it is important to decide how to choose between the options, especially how to allocate scarce resources of time and money. All of these approaches could be applied to understanding better how to support library users. It is all too easy to say that we should do all of everything.
An example of the cost trade-off that we face with a computer systems engineering perspective is how to allocate resources between:
a) Background study of user needs, understanding, current activities etc.
b) Developing prototypes to better understand the issues and provide quite new functionalities
c) Evaluation of those prototypes.
Discussions of how to cut the cake can degenerate into endless circular arguments about which is the most important for overall success.
One area to discuss is how to apply low cost methods that provide just enough information to get by. In user interface design and evaluation this is called discount usability engineering. Colleagues in the Sociology Department at Lancaster talk about 'Quick and dirty ethnography'. What are the cost-effective methods that others have used?
Systems developers inevitably have a view of how their system is going to be used that influences the thousands of design decisions that they make. In order to build more effective, robust systems that can facilitate the kinds of learning and collaboration we might like to occur, it helps to know more about how these systems will work in practice, and what users' requirements really are. A useful thing to discuss is where we should allocate research resources in order to address this question.
Where to Study
One way of doing this is to look for analogies with existing structures. It would be useful to know more about:
* Learning, working and collaboration in traditional Paper Libraries
* How users learn existing electronic information systems (OPACs, CD-ROMs, etc.)
* The affordances provided by the innovative architectures of new libraries, such as those at UCLA, Michigan and Limerick, deliberately designed to acknowledge and support collaboration
* What other organisations can teach us, for example the telephone user-support helplines of software companies
* Other studies already done in other disciplines whose results we could use.
* Which other places should be studied?
* What in particular should we look at?
What to Study
To support learning about how to use the system more effectively, it would help to know more about what users are actually thinking. This user diagnosis may reveal certain classic misconceptions or mental models that allow an expert (or even the system itself) to predict certain error patterns, or to diagnose from them and propose the most effective explanation (a sketch of this follows the list below).
Some candidate mental models for an information system's search engine:
* A smart person that goes and gets what you want (the agent ideal)
* A genie / Rumpelstiltskin: fey, mischievous, inconsistent
* Legalistic magic: does literally what you say, with disastrous consequences (the sorcerer's apprentice)
* A criminal to be interrogated (the word is actually used in the literature)
Batter the system with a barrage of questions until it gives in and yields the results.
Some answers are bound to be wrong, irrelevant or deliberately misleading
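A minimal sketch of how such diagnosis might work in practice: observed error patterns are matched against the misconceptions each candidate mental model would predict, and the best-matching model suggests which explanation to offer. The models, symptoms and explanations here are invented for illustration:

```python
# Hypothetical mapping from mental models to the error patterns they predict.
MODELS = {
    "smart person": {"overlong natural-language queries", "no boolean operators"},
    "legalistic magic": {"over-specified queries", "zero-hit searches"},
    "criminal interrogation": {"rapid-fire query variants", "ignores result quality"},
}

EXPLANATIONS = {
    "smart person": "Explain that the system matches words; it doesn't understand.",
    "legalistic magic": "Suggest starting broad and narrowing gradually.",
    "criminal interrogation": "Suggest examining results before re-querying.",
}

def diagnose(observed_errors):
    """Return the mental model whose predicted errors best match observation."""
    best = max(MODELS, key=lambda m: len(MODELS[m] & set(observed_errors)))
    return best, EXPLANATIONS[best]

model, advice = diagnose(["zero-hit searches", "over-specified queries"])
print(model, "->", advice)
```

The interesting research question is the left-hand side of that mapping: which misconceptions actually recur, and which error patterns reliably betray them.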
What Not to Study: Academic Heresy
However, there are just too many things we could study, and neither the resources nor the time to tackle even the really good ones if we want the results to inform DL design now. So what can we get away with not studying? It is all too easy to build elegant observational studies or controlled experiments that provide definitive proof of the blindingly obvious. Maybe we only need such rigour to prove the counter-intuitive. This is a resource allocation problem: how to choose the most important things to study. Here are some examples:
We hold these truths to be self-evident....
Do we? Are they? What do you think?
Is there any evidence? Is there any point collecting evidence?
Is the cost of collecting the evidence too high?
The panel that launched a thousand Masters projects...
Can we treat these as givens and spend our time developing systems that cope with them?
Are there any others?
* Novices don't go to formal training sessions
* Regardless of what we do users will never go to formal training sessions
* Users rarely use the online tutorial
* Users rarely use the online help
* Users rarely read the online examples even when they are on the data entry screen
* Users learn from their friends
* Informal, Situated, Authentic, Collaborative Learning is A Good Thing
* Even self professed computer phobes can fall in love with their OPAC
* Most users have a very minimal competence with the system
* Most users have a very poor understanding / mental model of the system
* Most users learn the basics without too many problems, but they don't improve
* Users find it very difficult to describe what they want to do or what they have done