HUMAN-COMPUTER INTERACTION SECOND EDITION
Dix, Finlay, Abowd and Beale


Search results for screens


Chapter 15 Out of the glass box 15.4 Non-speech sound Page 563

Sound can be used to provide a second representation of actions and objects in the interface, supporting the visual mode and providing confirmation for the user. It can also be used for navigation around a system, either giving redundant supporting information to the sighted user or providing the primary source of information for the visually impaired. Experiments on auditory navigation [196] have demonstrated that purely auditory cues are sufficient for a user to locate up to eight targets on a screen with reasonable speed and accuracy, so there is no excuse for ignoring the role of sound in interfaces on the grounds that it is too vague or inaccurate.
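
As an illustration only, the sketch below shows one plausible way to sonify a screen position so that a target can be located by ear: horizontal position mapped to stereo pan and vertical position to pitch. The mapping and all parameter values are assumptions, not the scheme used in the cited experiments.

    # Hypothetical sonification of a 2D screen position (not the
    # published experimental design): pan encodes x, pitch encodes y.
    def sonify_position(x, y, screen_w=1024, screen_h=768):
        """Return (pan, frequency_hz) for a screen position.
        Pan runs from -1.0 (far left) to +1.0 (far right); pitch rises
        over two octaves from 220 Hz at the bottom to 880 Hz at the top."""
        pan = 2.0 * x / screen_w - 1.0
        frequency = 220.0 * 2.0 ** (2.0 * (1.0 - y / screen_h))
        return pan, frequency

    # A target near the top right sounds high-pitched and panned right.
    print(sonify_position(900, 100))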


Chapter 15 Out of the glass box 15.4 Non-speech sound Page 563

Soundtrack is an early example of a word processor with an auditory interface, designed for visually disabled users [76]. The visual items in the display have been given auditory analogs, made up of tones, supplemented by synthesized speech. Soundtrack's main screen is a grid of four columns and two rows (see Figure 15.3); each cell sounds a different tone when the cursor enters it, and by using these tones the user can navigate around the system. The tones increase in pitch from left to right, while the two rows have different timbres. Clicking on a cell makes it speak its name, giving precise information that can reorient a user who is lost or confused. Double clicking on a cell reveals a submenu of items associated with the main screen item. Items in the submenu also have tones; moving down the menu causes the tone to fall, whilst moving up makes it rise. A single click causes the cell to speak its name, as before, whilst double clicking executes the associated action. Soundtrack allows text entry by speaking the words or characters as they are entered, with the user having control over the degree of feedback provided. It was found that users tended to count the tones in order to locate their position on the screen rather than listen to the tones themselves, though one user with musical training did use the pitch. Soundtrack provides an auditory solution to representing a visually based word processor, though the results do not extend to visual interfaces in general. However, it does show that the human auditory system is capable of coping with the demands of highly interactive systems, and that the notion of auditory interfaces is a reasonable one.
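
A minimal sketch of this navigation scheme follows, under assumed values: pitch rises by a whole tone per column, the two rows are given different waveforms as a stand-in for timbre, and submenu tones fall as the selection moves down. None of the constants below are taken from Soundtrack itself.

    BASE_HZ = 262.0                     # roughly middle C; an assumption
    ROW_TIMBRES = ["sine", "square"]    # one waveform per row, for timbre

    def cell_tone(row, col):
        """Tone for grid cell (row, col): pitch rises left to right,
        and the two rows are distinguished by timbre."""
        freq = BASE_HZ * 2 ** (2 * col / 12)   # a whole tone per column
        return ROW_TIMBRES[row], freq

    def menu_tone(item_index):
        """Submenu tone: moving down the menu lowers the pitch."""
        return BASE_HZ * 2 ** (-2 * item_index / 12)

    for col in range(4):
        print(cell_tone(0, col), cell_tone(1, col))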


Chapter 15 Out of the glass box 15.4.1 Auditory icons Page 564

Auditory icons [93] use natural sounds to represent different types of objects and actions in the interface. The SonicFinder [94] for the Macintosh was developed from these ideas. It is intended as an aid for sighted users, providing support through redundancy. Natural sounds are used because people recognize the source of a sound and its behaviour, rather than its timbre and pitch [250]. They will recognize a particular noise as glass breaking or a hollow pipe being tapped; a solid pipe will give a different noise, indicating not only the source but also the behaviour of the sound under different conditions. In the SonicFinder, auditory icons are used to represent desktop objects and actions. So, for example, a folder is represented by a papery noise, and throwing something in the wastebasket by the sound of smashing. This helps the user to learn the sounds, since they suggest familiar actions from everyday life. However, this advantage also creates a problem for auditory icons. Some objects and actions have no obvious, naturally occurring sounds that identify them. In these cases a sound effect can be created to suggest the action or object, but this moves away from the ideal of using familiar everyday sounds that require little learning. Copying has no immediate analogous sound; in the SonicFinder it is indicated by the sound of pouring a liquid into a receptacle, with the pitch rising to indicate the progress of the copying. These non-speech sounds can convey a great deal of meaning very economically: a file arrives in a mailbox and, being a large file, makes a weighty sound. If it is a text file it makes a rustling noise, whereas a compiled program may make a metallic clang. The sound can be muffled or clear, indicating whether the mailbox is hidden by other windows, while the direction of the sound indicates its position on the screen. If the sound then echoes, as it would in a large, empty room, the system load is low. All this information can be presented in a second or so.
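
A hedged sketch of how such an auditory icon might be parameterized, in the spirit of the mailbox example: attributes of the event (file size, file type, window occlusion) modulate a base sound. The attribute names and scaling below are assumptions for illustration, not details of the SonicFinder.

    # Illustrative parameterization only; not the SonicFinder's actual design.
    from dataclasses import dataclass

    @dataclass
    class AuditoryIcon:
        base_sound: str    # e.g. "rustle" for text, "clang" for a program
        pitch: float       # lower pitch suggests a heavier (larger) file
        muffled: bool      # True when the mailbox window is hidden

    def mail_arrival_icon(size_kb, file_type, window_hidden):
        sound = {"text": "rustle", "program": "clang"}.get(file_type, "thud")
        pitch = max(0.25, 1.0 - size_kb / 4096)  # bigger file, deeper sound
        return AuditoryIcon(sound, pitch, window_hidden)

    print(mail_arrival_icon(2048, "text", window_hidden=True))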


Chapter 15 Out of the glass box 15.4.1 Auditory icons Page 565

Natural sounds have been used to model environments such as a physics laboratory, SharedARK (Shared Alternate Reality Kit) [95], and a virtual manufacturing plant, ARKola [96]. In SharedARK, multiple users could perform physics experiments in a virtual laboratory. Sound was used in three different ways: as confirmation of actions, for status information and as an aid to navigation. Confirmatory sounds use similar principles to the SonicFinder, providing redundant information that increases feedback. Process and state information sounds exist on two levels, global and local. Global sounds represent the state of the whole system and can be heard anywhere, while local sounds are specific to particular experiments and alter when the user changes from one experiment to another. Navigational information is provided by soundholders, which are auditory landmarks. They can be placed anywhere in the system and get louder as the user moves towards them, decreasing in volume as the user moves away. This allows users to wander through an arena much larger than the screen without getting lost, and to return to specific areas easily by moving back towards the relevant soundholder.
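
A minimal sketch of such a soundholder follows, assuming a simple linear falloff of volume with distance; both the falloff law and the audible_radius parameter are assumptions, not details from SharedARK.

    import math

    def soundholder_volume(user_pos, holder_pos, audible_radius=500.0):
        """Volume in [0, 1]: full at the landmark itself, fading to
        silence at audible_radius units away (assumed linear falloff)."""
        distance = math.hypot(user_pos[0] - holder_pos[0],
                              user_pos[1] - holder_pos[1])
        return max(0.0, 1.0 - distance / audible_radius)

    # Moving towards the landmark raises the volume.
    print(soundholder_volume((400, 300), (100, 100)))  # farther: quieter
    print(soundholder_volume((150, 120), (100, 100)))  # nearer: louder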


Chapter 15 Out of the glass box 15.5.1 The technology Page 568

Digitizing tablets have been refined by incorporating a thin screen on top to display the information, producing electronic paper. Advances in screen technology mean that such devices are small and portable enough to be genuinely useful in hand-held organizers such as the Apple Newton. Information written onto the digitizer can simply be redisplayed, or stored and recalled later for reference. However, this has limited use in itself; such systems become far more useful when they can interpret the strokes received and produce text. It is this recognition that we will look at next.


Chapter 15 Out of the glass box 15.6 Gesture recognition Page 569

The Media Room at MIT uses a different approach to incorporating gestures into the interaction. The Media Room has one wall that acts as a large screen, with smaller touchscreens on either side of the user, who sits in a central chair. The user can navigate through information using the touchscreens, by joystick, or by voice. Gestures are incorporated by using a position-sensing cube attached to a wristband worn by the user. The Put-That-There system uses this gestural information, coupled with speech recognition, to allow the user to indicate what should be moved where simply by pointing. This is a much more natural form of interaction than having to specify verbally what has to be moved and describe where it has to go, as well as having the advantage of conciseness. Such a short, simple verbal statement is much more easily interpreted by the speech recognition system than a long and complex one, with ambiguity resolved by interpreting the other mode of interaction, the gesture. Each modality supports the other. An extension to this has used the eyegaze system (see Chapter 2) instead of gesture recognition to control the display.
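
An illustrative sketch of this kind of multimodal fusion: deictic words in the recognized utterance are resolved against whatever the position sensor says the user was pointing at when each word was spoken. The function and data layout below are assumptions; the real system's implementation differed.

    # Hypothetical fusion routine; names and structure are illustrative.
    def resolve_command(words, pointing_targets):
        """words: recognized tokens, e.g. ["put", "that", "there"].
        pointing_targets: what the user was pointing at as each word
        was uttered (None where the pointing data is irrelevant)."""
        obj, dest = None, None
        for word, target in zip(words, pointing_targets):
            if word == "that":
                obj = target      # the object to be moved
            elif word == "there":
                dest = target     # the destination
        return obj, dest

    print(resolve_command(["put", "that", "there"],
                          [None, "blue_square", "map_corner"]))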


Chapter 15 Out of the glass box 15.8 Ubiquitous computing applications research Page 570

What is ubiquitous computing technology? Our general working definition is any computing technology that permits human interaction away from a single workstation. This includes pen-based technology, handheld or portable devices, large-scale interactive screens, wireless networking infrastructure, and voice or vision technology.


Chapter 15 Out of the glass box 15.9 Interfaces for users with special needs Page 576

The rise in the use of graphical interfaces has reduced the possibilities for visually impaired users. In text-based interaction, screen readers using synthesized speech or braille output devices meant that such users had complete access to computers: input relied on touch-typing, with these mechanisms providing the output. However, today the standard interface is graphical. Since it is not possible to use a screen reader or braille output to represent pictures, access to computers, and therefore to work involving computers, has been reduced for visually impaired users rather than expanded. A number of systems attempt to provide access to graphical interfaces for this user group by adding sound to the interface; we have previously discussed Soundtrack, which uses tones to represent menus. Outspoken is a Macintosh application that uses synthetic speech to make other Macintosh applications available to visually impaired users. This has had some success but (in common with Soundtrack) suffers from the sheer amount of information that needs to be represented.


Chapter 15 Out of the glass box 15.10.1 VR technology Page 579

Since the user has to 'see' a new environment, a headset is usually used in a VR setup. With independent screens for each eye to give a 3D image, the headset is often a large, relatively cumbersome piece of head-mounted gear. However, smaller, lighter VR goggles are now available and may soon be little larger than ordinary spectacles. The headset is driven by a pair of very fast graphics computers, one rendering the image for each eye.
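
A minimal sketch of why one image per eye is needed: each eye's view is rendered from a camera offset by half the interpupillary distance, and the brain fuses the two images into a 3D percept. The distance value and the function names below are assumptions for illustration.

    IPD = 0.064  # a typical interpupillary distance in metres (assumed)

    def eye_cameras(head_pos, right_vector):
        """Return (left_eye, right_eye) camera origins, each shifted half
        the interpupillary distance along the head's right vector."""
        half = IPD / 2
        left = tuple(p - half * r for p, r in zip(head_pos, right_vector))
        right = tuple(p + half * r for p, r in zip(head_pos, right_vector))
        return left, right

    # Two render passes, one per eye, produce the binocular disparity
    # that yields the 3D image.
    print(eye_cameras((0.0, 1.7, 0.0), (1.0, 0.0, 0.0)))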


Chapter 15 Out of the glass box 15.10.3 VR on the desktop and in the home Page 580

VR has been made possible by the advent of high-performance computers. Despite the exponential rise in processor speeds, high-resolution immersive VR is still not available for mass-market applications, and many systems remain primarily research projects. Desktop VR is a lower-cost alternative: 3D images are presented on a normal computer screen and manipulated using mouse and keyboard rather than goggles and datagloves. Many readers may have used such systems on personal computers or games consoles: flight simulators, or interactive games such as DOOM or MYST.

