HUMAN-COMPUTER INTERACTION
SECOND EDITION
There is a counter-argument, however. First of all, there is some evidence to suggest that teaching managers to recognize their speech acts improves their communication. The extrapolation is that making the acts explicit improves communication, but that is a major extrapolation. A more measured claim would be that explicit representation is at least a good tool for training communication skills. The second argument concerns the nature of electronic communication. Although we are all experts at face-to-face communication, with all its subtleties, our expertise is sorely challenged when faced with a blank screen. We lack the facilities to make our intentions implicit in our communications, and thus explicit means will help.
A special case of the linear transcript is the structured message system, such as Coordinator, where not only the order but also the function of each message is determined. At the other extreme, the transcript is presented as a single stream, with no special fields except the name of the contributor. Figure 14.2 shows a screenshot of the York Conferencer system with such a transcript on the left of the screen. On the right is an electronic pin-board, an example of spatially organized text.
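To make the contrast concrete, here is a minimal sketch of the two kinds of message record. It is written in Python, the speech-act names are invented for illustration, and no claim is made that Coordinator or Conferencer stored messages this way:

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class SpeechAct(Enum):
        """Illustrative conversational moves; Coordinator's actual set differs."""
        REQUEST = auto()
        PROMISE = auto()
        DECLINE = auto()
        COUNTER_OFFER = auto()

    @dataclass
    class StructuredMessage:
        """Coordinator-style: the function of each message is an explicit field."""
        sender: str
        act: SpeechAct              # the move this message makes in the conversation
        reply_to: Optional[int]     # index of the message it responds to, if any
        body: str

    @dataclass
    class StreamMessage:
        """Single-stream transcript: nothing but the contributor and the text."""
        sender: str
        body: str

The point of the structured variant is that the system, not just the reader, can see what each message is for, and so can constrain or suggest the sensible replies.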
The group may also divide into subgroups for detailed discussion and then re-form, and tools must be able to support this. For example, early versions of CoLab's software catered only for a single WYSIWIS ('what you see is what I see') screen - that is, it supported only a single group. In later versions the designers were forced to allow subgroups to work independently and then share results. Note that the CoLab meeting room only has room for six people; in larger meeting rooms subgroup working is the norm.
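As a rough illustration of the design issue (not of CoLab's actual architecture), a tool that supports subgroup working needs a working copy of the shared state per subgroup and a way to merge results back into the plenary view; a minimal sketch, with invented names throughout:

    class SharedWorkspace:
        """Toy meeting tool: one plenary view plus optional subgroup views.

        A strict single-screen WYSIWIS tool would offer only `plenary`;
        subgroup support means separate working copies that merge back in.
        """

        def __init__(self):
            self.plenary = []        # items every participant sees
            self.subgroups = {}      # subgroup name -> its private item list

        def split(self, name):
            """Start a subgroup with a copy of the current shared state."""
            self.subgroups[name] = list(self.plenary)

        def add(self, name, item):
            self.subgroups[name].append(item)

        def reform(self, name):
            """Subgroup rejoins: its new items become visible to everyone."""
            for item in self.subgroups.pop(name):
                if item not in self.plenary:
                    self.plenary.append(item)

    ws = SharedWorkspace()
    ws.plenary.append("agenda")
    ws.split("design")
    ws.add("design", "sketch A")
    ws.reform("design")
    print(ws.plenary)                # ['agenda', 'sketch A']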
As well as its obtrusiveness, the orientation of computing equipment can affect group working. If we wish to encourage conversation, as we do in a meeting room, the participants must be encouraged to look towards one another. Meeting rooms have a natural focus towards the screen at the front of the room, but inward-facing terminals can counteract this focus and thus encourage eye contact [151].
The designers of Capture Lab, an eight-person meeting room, considered all these features and many other subtle effects. However, the users still had some difficulty in adapting to the power positions in the electronic meeting room. At first sight, the electronic meeting room is not unlike a normal conference room. If the shared screen were a whiteboard or an overhead projector, then the most powerful position would be towards the front of the room (seat 1 or 6 in Figure 14.7). Managers would normally take this seat as they can then easily move to the whiteboard or overhead projector to point out some item and draw the group's attention.
Unless primed beforehand, managers of groups using Capture Lab took one of these seats, but quickly became uncomfortable and moved. In the electronic meeting room, there is no advantage to being at the front, because the screen can be controlled from any terminal. Instead, the power seat is at the back of the room (seat 3 or 4), as from here the manager can observe other people whilst still seeing the screen. Also the other participants have to turn round when the manager speaks, again adding to the manager's authority over the meeting.
Even in a single-user experiment we may well use several video cameras as well as direct logging of the application (see Chapter 11). In a group setting this is replicated for each participant, so for a three-person group we are trying to synchronize the recording of six or more video sources and three keystroke logs. To compound matters, these may be spread over different offices, or even different sites. The technical problems are clearly enormous. Four-into-one video recording is possible, storing a different image in each quadrant of the screen, but even this is insufficient for the number of channels we would like.
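The bookkeeping half of the task, ordering the separate event logs once their clocks agree, is at least straightforward in software. Here is a minimal sketch, assuming each log is already a time-sorted list of (timestamp, event) pairs on a common clock; reconciling the clocks and the video streams is the hard part, and is not attempted here:

    import heapq

    def tag(source, log):
        """Label each (timestamp, event) pair with the participant it came from."""
        for t, event in log:
            yield (t, source, event)

    def merge_logs(**logs):
        """Merge per-participant event logs into one time-ordered transcript.

        Each keyword argument names a participant and gives a list of
        (timestamp_seconds, event) pairs, sorted and on a shared clock.
        """
        streams = [tag(source, log) for source, log in logs.items()]
        yield from heapq.merge(*streams)   # merge by timestamp

    keys_anna = [(0.12, "key:h"), (0.35, "key:i")]
    keys_bert = [(0.20, "mouse:click"), (0.40, "key:x")]
    for t, who, what in merge_logs(anna=keys_anna, bert=keys_bert):
        print(f"{t:6.2f}s  {who}  {what}")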
Smell provides us with other useful information in daily life: checking whether food is bad, detecting early signs of fire, noticing that manure has been spread in a field. Touch, too, is a vital sense for us: tactile feedback forms an intrinsic part of the operation of many common tools - cars, typewriters, pens, anything that requires holding or moving. It can also form a sensuous bond between individuals, communicating a wealth of non-verbal information. Examples of the use of sensory information are easy to come by (we looked at some in Chapter 1), but the vital point is that our everyday interaction with each other and the world around us is multi-sensory, each sense providing different information that is built up into a whole. Since our interaction with the world is improved by multi-sensory input, it makes sense to ask whether multi-sensory information would also benefit human-computer interaction. Just as we consider ourselves to be disabled if we are without one or more of our senses, an interface that confines itself to a single sensory channel can be seen as similarly impoverished.
Other problems occur when using synthesized speech. Being a transient phenomenon, spoken output cannot easily be reviewed or browsed. It is also intrusive, requiring either an increase in noise in the office environment or the wearing of headphones by the user, either of which may be too high a price to pay for whatever benefits the system offers. There are, however, a few application areas in which speech synthesis has been successful. For blind users in particular, speech offers a medium of communication to which they have unrestricted access, and they are highly motivated to overcome the inherent limitations of current systems. Screen readers, which read the textual display back to the user, are one successful example.
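As a flavor of how little code basic speech output takes today, here is a minimal sketch using the pyttsx3 Python text-to-speech library (one package among several; the spoken string and the speaking rate are arbitrary choices for illustration):

    import pyttsx3  # offline text-to-speech engine wrapper for Python

    engine = pyttsx3.init()              # use the platform's default voice
    engine.setProperty("rate", 170)      # words per minute; arbitrary choice

    # A real screen reader would feed in the text under the user's focus;
    # here we simply speak a fixed string.
    engine.say("Two new messages in your inbox.")
    engine.runAndWait()                  # block until the utterance finishes

Note how the problems described above show up even here: the output is gone as soon as it is spoken, and it is audible to the whole room.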
However, in spite of this, the auditory channel is comparatively little used in standard interfaces. Information is provided almost entirely visually. There is a danger that this will overload the visual channel, demanding that the user attend to too many things at once and pick out the appropriate information from a mass of detail in the display. Reliance on visual information forces attention to remain focused on the screen, and the persistence of visual information means that detail which quickly goes out of date may remain on display after it is no longer required, cluttering the screen further. Careful use of sound in the interface would alleviate these problems. Hearing is our second most used sense and provides us with a range of information in everyday life, as we saw in Chapter 1. Humans can differentiate a wide range of sounds and can react faster to auditory than to visual stimuli. So how can we exploit this capability in interface design?
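One simple possibility is the earcon: a short, structured audio message whose contour carries meaning, for instance a rising sequence of notes for success and a falling one for failure. Here is a minimal sketch that writes two such sounds as WAV files using only the Python standard library; the frequencies and durations are arbitrary choices for illustration:

    import math
    import struct
    import wave

    RATE = 44100  # samples per second

    def tone(freq_hz, dur_s, volume=0.4):
        """Generate one sine-wave beep as a list of 16-bit samples."""
        n = int(RATE * dur_s)
        return [int(volume * 32767 * math.sin(2 * math.pi * freq_hz * t / RATE))
                for t in range(n)]

    def write_earcon(path, freqs):
        """Write a sequence of beeps (a simple earcon) to a mono WAV file."""
        samples = []
        for f in freqs:
            samples += tone(f, 0.12)
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)        # 16-bit samples
            w.setframerate(RATE)
            w.writeframes(struct.pack("<%dh" % len(samples), *samples))

    write_earcon("done.wav", [440, 550, 660])   # rising contour: success
    write_earcon("fail.wav", [660, 550, 440])   # falling contour: error

Because such sounds are brief and non-persistent, they can inform the user without adding anything to the visual clutter described above.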