Navigation Patterns and Usability of Overview+Detail and Zoomable User Interfaces for Maps
Department of Computing, University of Copenhagen, DK-2100 Copenhagen Ø, Denmark

Benjamin B. Bederson, Department of Computer Science, Human-Computer Interaction Laboratory, University of Maryland, College Park, MD 20742, USA, +1 301 405 2764

Human-Computer Interaction Laboratory, University of Maryland, College Park, MD 20742, USA, +1 301 405 2768
The literature on information visualizations establishes the usability of overview+detail interfaces, but for zoomable user interfaces, results are mixed. We compare overview+detail and zoomable user interfaces to understand the navigation patterns and usability of these interfaces. Thirty-two subjects solved navigation and browsing tasks on maps organized in one or multiple levels. We find no difference between interfaces in subjects' ability to solve tasks correctly. Eighty percent of the subjects prefer the overview+detail interface, stating that it supports navigation and helps keep track of their position on the map. However, subjects are faster using the zoomable user interface, especially in combination with the multi-level map and when solving navigation tasks. The combination of the zoomable user interface and the multi-level map also improves subjects' recall of objects on the map. Switching between overview and detail windows was correlated with higher task completion time, suggesting that integration of overview and detail windows requires mental and motor effort. We found large individual differences in navigation patterns and usability, but subjects' visualization ability influenced usability similarly between interfaces.
Categories and Subject Descriptors: H.5.2 [Information Interfaces and Presentation]: User Interfaces–Evaluation/methodology, Interaction styles; I.3.6 [Computer Graphics]: Methodology and Techniques–Interaction techniques
General Terms: Experimentation, Human Factors, Measurement, Performance
Additional Key Words and Phrases: Information visualization, zoomable user interfaces, overview+detail interfaces, navigation, usability, maps
Information visualization [Card et al. 1999] has become a successful paradigm for human-computer interaction. Numerous interface techniques have been proposed and an increasing number of empirical studies describe the benefits and problems of information visualizations, e.g. Beard & Walker, Schaffer et al., Hornbæk & Frøkjær, and Chen & Czerwinski. Overview+detail and zoomable user interfaces have been extensively discussed in the literature on information visualization. Overview+detail interfaces [Plaisant et al. 1995] show the details of an information space together with an overview of the entire information space. Overview+detail interfaces can improve subjective satisfaction, e.g. North & Shneiderman, and efficiency, e.g. Beard & Walker. Zoomable user interfaces [Perlin & Fox 1993] organize information in space and scale, and use panning and zooming as their main interaction techniques. Research prototypes of zoomable user interfaces include interfaces for storytelling [Druin et al. 1997], web browsing [Hightower et al. 1998], and browsing of images [Combs & Bederson 1999]. However, few empirical studies have investigated the usability of zoomable user interfaces, and the results of those studies have been inconclusive.
In this article we empirically investigate zoomable user interfaces by comparing them to the successful overview+detail interfaces. Specifically, we investigate:
- How overview+detail and zoomable user interfaces affect usability;
- How overview+detail and zoomable user interfaces influence the way users navigate information spaces;
- How navigation patterns and usability are influenced by different organizations of information spaces; and
- How individual differences, especially differences in visualization ability, shape navigation patterns and usability.
With this work we aim to strengthen the empirical literature on zoomable user interfaces, thereby identifying challenges for researchers and advising designers of user interfaces.
In the next section, we review the literature on overview+detail and zoomable user interfaces. Then, we present our empirical investigation of differences in navigation patterns and usability in overview+detail and zoomable user interfaces. Finally, we discuss the trade-off between time and satisfaction in such interfaces and explain the interaction between differently organized information spaces and usability.
This section summarizes the research questions and empirical findings about overview+detail and zoomable user interfaces. It explains our design decisions and motives for the evaluation, both described in subsequent sections.
Overview+detail interfaces present multiple views of an information space where some views show detailed information about the information space (so-called detail windows), while other views show an overview of the information space (so-called overview windows). Examples of overview+detail interfaces include editors for program code [Eick et al. 1992], interfaces for image collections [North et al. 1995], and commercial programs such as Adobe Photoshop. Overview+detail interfaces have four benefits. First, navigation is more efficient because users may navigate using the overview window rather than using the detail window [Beard & Walker 1990]. Second, the overview window aids users in keeping track of their current position in the information space [Plaisant et al. 1995]. Third, the overview window itself might give users task-relevant information. Fourth, overview+detail interfaces give users a feeling of control [Shneiderman 1998]. A drawback of overview+detail interfaces is that the spatially indirect relation between overview and detail windows might strain memory and increase the time used for visual search [Card et al. 1999, p. 307].
Taxonomies and design guidelines for overview+detail interfaces [Beard & Walker 1990; Plaisant et al. 1995; Carr et al. 1998; Baldonado et al. 2000] present three main findings about overview+detail interfaces. First, the overview and detail windows need to be tightly coupled [Ahlberg & Shneiderman 1994], so that navigation or selection of an information object in one window is immediately reflected in the other windows. Tight coupling of overview and detail views has been found useful in several studies, e.g. North & Shneiderman. Second, for any relation between detail and overview windows, the zoom factor is the ratio between the larger and the smaller of the magnifications of the two windows. For overview+detail interfaces, this factor is recommended to be below 25 [Plaisant et al. 1995] or 30 [Shneiderman 1998]. It is unclear, however, whether the sizes of the detail and overview windows influence the recommended zoom factor. Third, the size of the overview window influences how much information can be seen in the overview and how easy it is to navigate on the overview. However, a large overview window might take screen real estate from the detail window. Plaisant et al. argue that the most usable sizes of the overview and detail windows are task dependent. A large overview window, for example, is required for a monitoring task, while a diagnostic task might benefit from a large detail window.
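The zoom-factor recommendation can be made concrete. The sketch below (illustrative names, not code from any cited system) computes the factor for a pair of windows and checks it against the recommended limit:

```python
def zoom_factor(mag_a: float, mag_b: float) -> float:
    """Ratio between the larger and the smaller of the two windows'
    magnifications, as defined above."""
    return max(mag_a, mag_b) / min(mag_a, mag_b)

def within_recommended(mag_a: float, mag_b: float, limit: float = 25.0) -> bool:
    """Check the factor against the limit of 25 [Plaisant et al. 1995];
    pass limit=30 for Shneiderman's recommendation."""
    return zoom_factor(mag_a, mag_b) < limit
```

For example, a detail window magnified 16 times paired with an unmagnified overview gives a zoom factor of 16, which is within both recommended limits.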
A number of studies have found that overview+detail interfaces improve user satisfaction and efficiency over detail-only interfaces. Beard & Walker investigated the effect of providing an overview window compared to having only scrollbars. In a 280-word ordered tree, subjects used an overview window that allowed dragging a field-of-view and one that allowed both dragging and resizing the field-of-view. For tasks where subjects tried to locate a word in the tree and tasks where they repeatedly went from one side of the tree to the other, the overview window led to significantly faster task completion. North & Shneiderman compared 18 subjects’ performance with a detail-only, an uncoordinated, and a coordinated interface for browsing textual population data. Compared to the detail-only interface, the coordinated interface was 30-80% faster and scored significantly higher on a satisfaction questionnaire. Hornbæk & Frøkjær compared an overview+detail interface for electronic documents with a fisheye and a detail-only interface. Essays produced with the aid of the overview+detail interface were scored significantly higher than essays produced with the aid of the detail-only interface. However, for tasks that required subjects to answer a specific question, the overview+detail interface was 20% slower than the detail-only interface. All but one of the 21 subjects preferred the overview+detail interface.
While zoomable user interfaces have been discussed since at least 1993 [Perlin & Fox 1993], no definition of zoomable user interfaces has been generally agreed upon. In this article, we consider the two main characteristics of zoomable user interfaces to be (a) that information objects are organized in space and scale, and (b) that users interact directly with the information space, mainly through panning and zooming. In zoomable user interfaces, space and scale are the fundamental means of organizing information [Perlin & Fox 1993; Furnas & Bederson 1995]. The appearance of an information object is based on the scale at which it is shown. Most common is geometric zoom, where the scale linearly determines the apparent size of the object. Objects may also have a more complex relation between appearance and scale, as in so-called semantic zooming [Perlin & Fox 1993; Frank & Timpf 1994], which is supported in the zoomable user interface toolkit Jazz [Bederson et al. 2000]. One example of semantic zooming is commonly used on maps, where the same area of the map might be shown with different features and amounts of detail depending on the scale. Constant density zooming [Woodruff et al. 1998a] introduces a more complex relation between scale and appearance, where the number of objects currently shown controls the appearance of objects, so that only a constant number of objects are visible simultaneously.
The second main characteristic of zoomable user interfaces is that the information space is directly visible and manipulable through panning and zooming. Panning changes the area of the information space that is visible; zooming changes the scale at which the information space is viewed. Usually, panning and zooming are controlled with the mouse or the keyboard, so that a change in the input device is linearly related to how much is panned or zoomed. Non-linear panning and zooming have been proposed in three forms: (a) goal-directed zoom, where direct zooming to an appropriate scale is supported [Woodruff et al. 1998b]; (b) combined zooming and panning, where extensive panning automatically leads to zooming [Igarashi & Hinckley 2000]; and (c) automatic zoom to objects, where a click with the mouse on an object automatically zooms to center on that object [Furnas & Zhang 1998; Ware 2000]. Two ways of changing scale in a zoom action are commonly used. In jump zooming, the change in scale occurs instantaneously, e.g. in Pad [Perlin & Fox 1993], in Schaffer et al.'s experimental system, and in commercial systems such as Adobe Photoshop or MapQuest. In animated zooming, the transition from the old to the new scale is smooth [Bederson & Hollan 1994; Pook et al. 2000; Bederson et al. 2000]. An important issue in animated zooming is the duration of and user control over the zooming speed, i.e. the ratio between the zooming time and the zooming factor. Guo et al. provide preliminary evidence that a zoom speed around 8 factors per second is optimal. Card et al. argue that the zoom time should be approximately one second, though in some zoomable user interfaces, e.g. Jazz, the user can control both the zoom time and the zoom factor. Bederson & Boltman investigated whether an animated or jump zoom technique affected 20 subjects’ ability to remember the topology of and answer questions about a nine-item family tree.
Subjects were better at reconstructing the topology of the tree using animated zooming, but no difference in satisfaction or task completion time was found.
However, these characteristics do not sharply define zoomable user interfaces. Increasingly, zoomable user interfaces are combined with other interface techniques, such as transparent overviews [Pook et al. 2000]; some overview+detail interfaces are extended with animated zooming [Ghosh & Shneiderman 1999]; and some effort has been put into extending zoomable user interfaces with navigation mechanisms that supplement direct zoom and pan, see for example Jul & Furnas.
The empirical investigations of zoomable user interfaces are few and inconclusive. Páez et al. compared a zoomable user interface based on Pad++ [Bederson & Hollan 1994] to a hypertext interface. Both interfaces gave access to a nine-page scientific paper. In the zoomable user interface, the scale of the sections and subsections of the paper was manipulated so that the entire paper fit on the initial screen. No significant difference was found between the two interfaces in the 36 subjects’ satisfaction, memory for the text, or task completion time. Schaffer et al. compared 20 subjects’ performance with a zoomable user interface and a fisheye interface. Subjects had to locate a broken link in a telephone network and reroute the network around the link. Subjects used 58% more time to complete the task in the zoomable user interface. Subjects seemed to prefer the fisheye interface, although this is not clearly described in the paper. Hightower et al. present two experiments that compare the history mechanism in Netscape Navigator with a graphical history in a zoomable user interface called PadPrints. In the first experiment, 37 subjects were required to answer questions about web pages. No significant difference in task completion time was found, but subjects preferred the PadPrints interface. In the second experiment, subjects were required to return to already visited web pages. Subjects were around 40% faster using the PadPrints interface and preferred PadPrints to Netscape Navigator. Combs & Bederson compared four image browsers: two commercial 3D interfaces, one commercial 2D interface, and an image browser based on Pad++. Thirty subjects searched for images in an image database that they had just browsed. Subjects were significantly faster using the 2D and the zoomable user interfaces, especially as the number of images in the database went from 25 to 225.
The study also presents some evidence that recall of images is improved in the zoomable user interface, but found no difference in subjective satisfaction between interfaces. Ghosh & Shneiderman compared 14 subjects’ use of an overview+detail and a zoomable user interface to personal histories in LifeLines [Plaisant et al. 1996]. The zoomable user interface was marginally slower than the overview+detail interface. No difference in subjective satisfaction was found.
In general, the experimental results on zoomable user interfaces are mixed, reflecting differences in the interfaces that zoomable user interfaces are compared to, in the organization and size of the information spaces used, and in the implementation of zooming. This lack of clarity motivates us to compare zoomable user interfaces to overview+detail interfaces and to vary the organization of the information spaces used in the experiment described next.
To investigate the differences in navigation patterns and usability between overview+detail interfaces and zoomable user interfaces, we conducted a controlled experiment. In the experiment, subjects used the two interfaces to solve ten tasks on each of two differently organized maps. In addition to the four aims mentioned in the introduction, five hypotheses guided the design of the experiment. We hypothesized (1) that subjects would prefer using the overview+detail interface, because of the added information contained on the overview window and the additional navigation features; (2) that the overview+detail interface would be faster for tasks that required comparison of information objects and scanning large areas; (3) that recall of map locations would be better in zoomable user interfaces; (4) that organization of information in multiple levels leads to faster and more accurate solutions to tasks with the zoomable user interface compared to the overview+detail interface; and (5) that the overview+detail interface would be faster and more accurate for subjects with poor spatial visualization ability.
Thirty-two subjects participated in the experiment, 23 males and 9 females. Subjects were recruited at the University of Maryland and received 15 US dollars for participating in the experiment. The age of the subjects ranged from 18 to 38; the mean age was 23.4 years. Twenty-three subjects were computer science or engineering students, four had other majors, and five were research staff or loosely affiliated with the university. Thirty-one subjects used computers every day. Twenty-three of the subjects had never used zoomable user interfaces, while nine subjects had seen or used a zoomable user interface prior to participating in the experiment. We required that subjects had spent less than two weeks in the states of Washington and Montana, because the experiment used maps of those states.
For the experiment, we constructed a zoomable user interface and an overview+detail interface, both based on the zoomable user interface toolkit Jazz [Bederson et al. 2000]. When users hold down the left mouse button, zooming in begins after a delay of 400 milliseconds. Users zoom out by holding down the right mouse button. The maximum zoom factor is 20, meaning that subjects can view the map at scale 1 through scale 20. Scale 1 shows the initial, unmagnified view of the map; at scale 20 the initial view of the map is magnified 20 times. The zoom speed is eight factors per second, i.e. subjects can zoom from the initial view of the map to the maximum magnification in about 2.5 seconds. Users pan by holding down the left mouse button and moving the mouse opposite the direction they wish to see (i.e. the map follows the mouse). In the lower right corner of both interfaces is an icon showing the four compass points, which are referred to in some tasks. Next to this icon is a button labeled 'zoom out', which when pressed zooms out to the initial view of the map.
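As an illustration, the zoom timing described above can be modeled in a few lines. This is only a sketch under the assumption that scale grows linearly while the button is held; the full-zoom time follows the approximation 20/8 = 2.5 seconds used above, and all names are ours:

```python
ZOOM_DELAY_S = 0.4   # zooming in begins after a 400 ms press
MAX_SCALE = 20.0     # the map can be viewed at scale 1 through 20
ZOOM_SPEED = 8.0     # "eight factors per second"

def time_to_full_zoom() -> float:
    """Seconds from the initial view to maximum magnification,
    using the approximation max_scale / speed."""
    return MAX_SCALE / ZOOM_SPEED

def scale_after(seconds_held: float) -> float:
    """Scale reached after holding the zoom-in button, under an assumed
    linear model: nothing happens during the initial delay, and the
    result is clamped to the permitted range [1, 20]."""
    t = max(0.0, seconds_held - ZOOM_DELAY_S)
    return min(MAX_SCALE, max(1.0, 1.0 + ZOOM_SPEED * t))
```

Holding the button for less than the 400 ms delay leaves the view unchanged; holding it long enough saturates at scale 20.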
The zoomable user interface is shown in Figure 2. Subjects may only interact with this interface using the zoom and pan techniques described above.
The overview+detail interface is shown in Figure 1. In the top-right corner of the interface, an overview window shows the entire map at one-sixteenth the size of the detail window. The current location of the detail window on the map is indicated in the overview window by a 70% transparent field-of-view box. The overview and detail windows are tightly coupled, so that zooming or panning in the detail window immediately updates the overview window, and dragging the field-of-view box changes which part of the map is shown in the detail window. Subjects can also click in the overview window outside of the field-of-view box, which centers the field-of-view box on the clicked point. The field-of-view box can be resized by dragging the resize handle in its bottom right corner. Subjects can also draw a new field-of-view box by holding down the left button and moving the mouse until the desired rectangle has been drawn. Note that the field-of-view box always keeps the same ratio between width and height as the detail and overview windows.
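The tight coupling between the detail and overview windows amounts to a simple coordinate transform. The sketch below is illustrative only (names and signatures are ours, not from the Jazz-based implementation):

```python
def field_of_view_box(view, map_size, overview_size):
    """Project the detail window's visible rectangle, given as
    (x, y, width, height) in map coordinates, into overview-window
    pixels; one direction of the tight coupling."""
    sx = overview_size[0] / map_size[0]
    sy = overview_size[1] / map_size[1]
    x, y, w, h = view
    return (x * sx, y * sy, w * sx, h * sy)

def center_view_on_click(click, map_size, overview_size, view_size):
    """Clicking in the overview outside the box centers the box, and
    hence the detail view, on the clicked overview point."""
    sx = overview_size[0] / map_size[0]
    sy = overview_size[1] / map_size[1]
    cx, cy = click[0] / sx, click[1] / sy
    return (cx - view_size[0] / 2, cy - view_size[1] / 2,
            view_size[0], view_size[1])
```

The same transform run in reverse (overview pixels back to map coordinates) handles dragging and resizing the box.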
The motivation for using maps for the experiment is threefold. First, interfaces for maps constitute an important area of research. Second, maps include characteristics of other, commonly used information structures, for example hierarchical information (nesting of information objects) and network information (connections between information objects). Therefore, results concerning maps may be generalized to other information structures. Third, the direct relation between representation and physical reality aids interpretation of maps compared to the often difficult interpretation of abstract information spaces [Hornbæk & Frøkjær 1999].
We created two maps based on data from the 1995 United States Census. The maps contain eight types of map objects: counties, cities, parks, airports, lakes, railroads, military installations, and other landmarks. Each map object, except railroads, consists of a shape and a label. A distinct color identifies each type of map object. In addition, county names are shown in bold type and city names in italic type. Because we hypothesized that different organization of the maps might influence the navigation and usability measures, we created a multi-level and a single-level map. The maps are organized by placing labels for map objects at different scales, changing the apparent size of the labels as follows:
− The multi-level map shows map objects in the state of Washington at three levels of scale (see Figure 3): county level (scale 1, 39 labels), city level (scale 5, 261 labels), and landmark level (scale 10, 533 labels). At the county level, labels are the same size as a 10-point font when the map is zoomed out (i.e. at scale 1) and larger when the map is magnified. When labels are shown at city or landmark level, they have the size of a 10-point font when the user has magnified the map 5 or 10 times, respectively.
Figure 1—The overview+detail interface showing the single-level map. In the top right corner of the interface is the overview window, which shows an overview of the entire map. The gray area in the overview window is the field-of-view box that indicates which part of the map is currently shown in the detail window. In the bottom right corner of the field-of-view box is the resize handle that allows the user to make the field-of-view smaller or larger, i.e. to zoom in or out. The two buttons in the lower right corner are similar to the buttons in the zoomable user interface. The map is shown at scale four, meaning that the objects in the detail window are magnified four times.
Figure 2—Zoomable user interface showing the multi-level map. The user may zoom and pan to change the area of the map shown. In the lower right corner of the window a button is shown that will zoom out to the initial view of the map. Next to this button is an indication of the four compass points. The colors of the map are reproduced here as different shades of gray. The map is shown at scale one, i.e. the initial view of the map.
− On the single-level map (see Figure 4), all 806 labels are displayed at scale 7, i.e. similar in size to a 10-point font when the map is magnified 7 times. The single-level map shows the state of Montana. To aid visual search, county names are also shown in capital letters.
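Both maps share one label-sizing rule: a label placed at a given level matches a 10-point font when the map is magnified to that level's scale. Assuming labels otherwise scale geometrically (our model of the description above), the rule can be sketched as:

```python
def apparent_label_size(current_scale: float, level_scale: float) -> float:
    """Apparent size, in points, of a label placed at `level_scale`:
    exactly 10 points when the map is magnified `level_scale` times,
    growing or shrinking geometrically with the current scale."""
    return 10.0 * current_scale / level_scale
```

On the multi-level map, county labels (level 1) are 10 points at the initial view; on the single-level map, labels (level 7) are only 10/7 of a point-size unit, roughly 1.4 points, at scale 1, which is why they can appear very small at low scales.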
The main difference between the maps is the organization in levels. The multi-level and single-level maps are similar with respect to the number of map objects (1591 vs. 1540); the area the state occupies (50% vs. 57% of the initial screen); and the information density (measured as the mean distance to the nearest map object: 7.1 vs. 7.8).
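The information-density measure used here, the mean distance from each map object to its nearest neighbor, can be computed with a brute-force sketch (coordinates and units are whatever the map uses; the function name is ours):

```python
import math

def mean_nearest_distance(points):
    """Mean distance from each map object to its nearest neighbor,
    computed by brute force over (x, y) coordinate pairs."""
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        total += min(math.hypot(x1 - x2, y1 - y2)
                     for j, (x2, y2) in enumerate(points) if j != i)
    return total / len(points)
```

For example, four objects at the corners of a unit square each have a nearest neighbor at distance 1, giving a mean of 1.0.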
Figure 3—Four screenshots of the multi-level map. The four screenshots show the multi-level map at scale 1 (top left), scale 3 (top right), scale 7 (lower left), and scale 20 (lower right). On the multi-level map, map objects are labeled at three different levels: county level (39 counties, for example Snohomish in the figure), city level (261 cities, for example Everett), and landmark level (533 landmarks, barely readable in the lower right screenshot).
Figure 4—Four screenshots of the single-level map. The four screenshots show the single-level map at scale 1 (top left), scale 3 (top right), scale 7 (lower left), and scale 20 (lower right). All map objects are labeled at the same scale, i.e. all labels are the same size but can appear very small at low scales. At scale 7, labels are as big as a 10-point font.
Tasks were created to cover a large number of the types of tasks previously discussed in the literature [Plaisant et al. 1995] and to investigate specific hypotheses about when zoomable user interfaces or overview+detail interfaces would be especially useful. We created ten tasks for each map, five navigation tasks and five browsing tasks, which are described in the appendix.
- Navigation tasks required subjects to find a well-described map object. Two navigation tasks required subjects to locate an object on the map, two required subjects to find and compare objects, and one required subjects to follow a route between two places specified in the task.
- Browsing tasks required subjects to scan a larger area, possibly the entire map, for objects fulfilling certain criteria. Two browsing tasks required a scan of the entire map for objects of a certain type; two required subjects to scan an area of the map to find the county with the most cities or the largest cities in the area; and one required subjects to find the first object of a certain type east of some county.
Between the maps, the tasks differed only in the map objects referred to. The answers to the tasks were evenly distributed over the map, and answers were also located at different scales.
Zoomable user interfaces have been speculated to improve understanding of large information spaces [Furnas & Bederson 1995]. Therefore, we also gave the subjects two recall tasks that tested their memory of the structure and content of the map. The first recall task consisted of five small maps showing the outline of the state depicted on the map. For three of these small maps, a part of the map was darkened and the subjects were asked to write down as many objects within the dark area as they remembered. For the other two maps, subjects could themselves mark a county on the map with a cross and write down any map objects they remembered within that county. The second recall task consisted of three county names, each associated with a list of ten cities. Subjects were told to circle all cities within a county and cross out cities they were confident were not located in the mentioned county. The list of cities consisted of the three largest cities within the county mentioned, the three largest cities in counties just next to the county mentioned, and four cities in entirely different areas of the map.
The experiment varied interface type (overview+detail vs. zoomable user interface), map type (multi-level vs. single-level map), and task type (navigation vs. browsing tasks) within-subjects in a balanced, factorial design. Each subject participated in two sessions. In the first session subjects used one interface giving access to one map and solved five navigation and five browsing tasks. In the second session, subjects used the other interface in combination with the not yet explored map. Subjects were randomly assigned to one of the four possible combinations of interface and map type. Within each of these four combinations, subjects were further randomly assigned to one of four permutations of task types in the two sessions. Each of the resulting 16 groups contained two subjects. The order of the five tasks within a task type was the same for all subjects.
We used the following dependent variables for comparing the interfaces:
- Accuracy in answering questions. Accuracy was calculated as the number of correct (all map objects given as answer to a task are correct), partially correct (one correct and one wrong map object), and wrong (all map objects are wrong) answers.
- Recall of map objects. For the recall task that required subjects to mark counties and cities on the map, we counted as correct the number of counties and cities within one centimeter of the actual location of the county or city. For the recall task that required subjects to remember the cities in a county, we measured the number of correct indications corrected with a penalty for guessing (the number of wrong guesses divided by the number of wrong answer possibilities for the question).
- Task completion time. Task completion time was measured as the time subjects could see the map. The time subjects used for the initial reading of the task, as well as the time used for entering answers, was not included.
- Navigation measures. We logged all interaction with the interfaces and used the number of pan and zoom actions in the two interfaces as indicators of navigation patterns. For the overview+detail interface, we further measured the number of resize and move actions on the overview window.
- Satisfaction with interfaces. Satisfaction was measured using seven questions with nine-point semantic differentials. Five of the questions were taken from the Questionnaire for User Interaction Satisfaction [Chin et al. 1988] and two questions were custom made. The wording of the questions appears in Figure 7.
- Preference. Preference was determined from subjects’ indication of which interface they preferred using and from the reasons subjects gave for their indication.
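The guessing correction applied to the second recall task can be written out directly; the sketch below uses our own function and argument names:

```python
def corrected_recall(correct_marks: int, wrong_marks: int,
                     wrong_possibilities: int) -> float:
    """Number of correct indications, penalized for guessing by the
    number of wrong guesses divided by the number of wrong answer
    possibilities for the question."""
    return correct_marks - wrong_marks / wrong_possibilities

# Each list held ten cities, three of them in the named county,
# so there were seven wrong answer possibilities per question.
```

A subject who circles all three correct cities and nothing else scores 3; one who wrongly marks all seven distractors and no correct city scores -1.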
In addition to the above measures, we measured subjects' visualization ability [Ekstrom et al. 1976] and took observation notes during the experiment.
In section 4, this design is analyzed in two ways: (a) for measures related to task completion (accuracy, task completion time, and usage of pan and zoom), we use one solution to a task as an observation. The factors in the models used are interface (1 degree of freedom, df), map type (1 df), the interaction between interface and map type (1 df), session (1 df), task type (1 df), the interaction between task and interface (1 df), subjects nested within interface and map type (60 df), and tasks nested within task type and map type (17 df). This model leaves 553 df for the error term in the multivariate analysis, 556 for univariate analysis; (b) for measures related to sessions (subjective satisfaction and recall), we use as factors interface (1 df), map type (1 df), and the interaction between interface and map type (1 df), leaving 52 and 60 dfs for the multivariate and univariate analysis, respectively.
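The degrees-of-freedom bookkeeping for the task-level model can be checked with a few lines, assuming 32 subjects each solving 20 tasks (10 per session):

```python
# Assumed observation count: 32 subjects x 20 tasks each.
observations = 32 * 20

model_df = (1      # interface
            + 1    # map type
            + 1    # interface x map type
            + 1    # session
            + 1    # task type
            + 1    # task type x interface
            + 60   # subjects nested within interface and map type
            + 17)  # tasks nested within task type and map type

# Total df (observations - 1) minus the model df leaves the error term.
error_df = observations - 1 - model_df
```

This yields 556, matching the error degrees of freedom reported for the univariate analyses.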
The interfaces were run on a 650 MHz Pentium III laptop with an ordinary mouse. The screen was 13 inches with a resolution of 1024×768.
Upon arriving at the lab, subjects filled out a questionnaire about gender, occupation, and familiarity with computers, and answered an electronic version of the VZ-2 test [Ekstrom et al. 1976]. The VZ-2 test measures subjects' spatial visualization ability by having them imagine how a folded piece of paper with holes punched through it would look when unfolded. Then, subjects were introduced to the two interfaces and tried three practice tasks, which lasted on average 11 minutes.
The main phase of the experiment consisted of two sessions. For each of the ten tasks in a session, subjects initially saw a window that covered the entire map. After reading a piece of paper that described the task, subjects clicked on a button to see a zoomed out view of the map. When subjects were ready to answer a task they entered their answer using a tightly coupled text field and list box containing the labels of all objects on the map. For all tasks, subjects were asked to proceed to the next task when they had searched for five minutes. After solving the ten tasks in a session, subjects received the recall task and filled out a satisfaction questionnaire about the interface just used. After a five minute break, subjects began the second session, which used the same procedure as the first session.
After the second session of the experiment, subjects filled out a form about which interface they preferred. On average, the experiment lasted one hour and 30 minutes.
Using a multivariate analysis of variance on the performance measures related to task completion, we find a significant difference at the .05 significance level between interfaces, Wilks' Λ=.77, F(4,553)=41.99; between map types, Λ=.84, F(4,553)=25.56; and for the interaction between map type and interface, Λ=.942, F(4,553)=8.56. For the seven measures of subjective satisfaction and the two measures of recall, significant differences are found for interface, Λ=.73, F(9,52)=2.20; between map types, Λ=.68, F(9,52)=2.77; and for the interaction between map type and interface, Λ=.68, F(9,52)=2.69. In the next sections we use univariate analyses of variance to investigate further the differences between answers to tasks and recall tasks, satisfaction measures, and how subjects navigate.
Figure 5 summarizes the correctness of the answers to the experimental tasks. Using a rank-based test, we find no difference in the correctness of answers between interfaces, F(1,556)=.40, p>.5. Between the two map types, a significant difference in the number of tasks correctly answered can be found, F(1,556)=10.45, p<.001. Tasks solved on the multi-level map are more often answered correctly than tasks solved on the single-level map.
Figure 5—The percentage of wrong, partially correct, and correct answers for the experimental tasks. The figure shows the mean percentage of wrong, partially correct, and correct answers for the zoomable user interface (ZUI) and the overview+detail interface (O+D).
Figure 6 shows the results of the recall task. With the overview+detail interface, subjects do better at the recall task with the single-level map than with the multi-level map. The zoomable user interface shows the opposite pattern. These patterns are confirmed by a rank-based test on the number of marked map objects, which shows a significant interaction between interface and map type, F(1,60)=6.96, p<.05. No such interaction was found for the number of recognized cities, F(1,60)=1.95, p>.1; only a marginally significant difference between interfaces for the multi-level map was found, F(1,60)=3.27, p<.08.
Figure 6—Mean number of correct answers to recall tasks. Panel a shows the mean number of correctly marked map objects (counties and cities); panel b shows the mean number of correctly recognized cities, adjusted for guessing. Error bars show the standard error of the mean.
Large individual differences exist in subjects' ability to answer the browsing, navigation, and recall tasks. One subject correctly answered 19 of the 20 questions; another answered only nine correctly. Subjects' visualization ability was positively correlated with the accuracy of their answers, Spearman's rho=.124, p<.01. In the recall task, one subject marked on average 11 cities or counties on the map; another subject marked none.
Twenty-six subjects stated that they preferred using the overview+detail interface, while six subjects preferred the zoomable user interface. Thus, significantly more subjects prefer the overview+detail interface, χ2(1,N=32)=12.5, p<.001. Subjects explained their preference for the overview+detail interface as follows:
- The overview window provides information about the current position on the map (N=9), for example: “It is easier to keep track of where I am”.
- The overview window supports navigation (N=7), for example: “[It was] easier to navigate in the overview box while looking at the detail map for answers”. Two subjects wrote similar comments at the end of the session in which they had used the overview+detail interface.
- The overview window is helpful when scanning a large area (N=4), for example: “It made surveying a large map less disorienting especially when small landmarks had to be spotted”.
- The overview window is useful for zooming (N=2), for example “The zoom feature in the top right was extremely helpful”.
- The overview window supports comparing objects (N=2), for example: “Easier to move between counties while at the same zoom level -> easier to compare the size of objects”.
The six subjects who preferred the zoomable user interface mentioned that:
- Locating objects felt faster using the zoomable user interface (N=2), for example “I found myself answering my tasks much quicker using the zoomable user interface”.
- One subject preferred the zoomable user interface because the overview window got in the way when using the overview+detail interface: “Overview+detail would seem to be more powerful, but the abundance of features got in the way to the effect of imposing on usability”. Three subjects made similar comments at the end of the session where they used the overview+detail interface. Nevertheless, these subjects preferred the overview+detail interface.
In addition, four subjects commented that they found it hard to resize the field-of-view box; three subjects commented that the map seemed larger using the zoomable user interface; two subjects commented that when using the zoomable user interface it was sometimes unclear where they were on the map; and two subjects commented that it was useful that the overview window gave a visual indication of the current zoom factor.
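The preference split reported above (26 of 32 subjects preferring the overview+detail interface) can be verified with a quick goodness-of-fit computation against an even 16/16 split; this sketch uses only the Python standard library:

```python
import math

# Chi-square goodness-of-fit for the 26-vs-6 preference split,
# against the null hypothesis of an even 16/16 split.
observed = [26, 6]
expected = [16, 16]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 1 degree of freedom, the p-value is erfc(sqrt(chi2 / 2)).
p = math.erfc(math.sqrt(chi2 / 2))
print(chi2, round(p, 5))  # 12.5 0.00041, i.e. p < .001
```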
Figure 7 shows the subjects’ satisfaction with the overview+detail and zoomable user interface. The overview+detail interface scored significantly higher than the zoomable user interface on the dimensions ‘Terrible…Wonderful’, F(1,60)=10.26, p<.01; ‘Rigid…Flexible’, F(1,60)=7.33, p<.01; and ‘Keeping track of objects were difficult…easy’, F(1,60)=9.54, p<.01. Between map types, we find a significant difference for four satisfaction questions, showing that subjects give the interfaces higher satisfaction scores when they use the multi-level map.
Figure 7—Satisfaction with the interfaces. The figure shows the mean score for the seven satisfaction questions in the two interfaces. Error bars indicate the standard error of the mean. The questions were answered on a nine point semantic differential going from 1 (lowest score) to 9 (highest score). Significant differences at the .01 level are marked in the figure with two asterisks (**).
Figure 8, panel a, shows the task completion time with the two interfaces and on the two maps. We found a significant interaction between interface and map type, F(1,556)=6.08, p<.05. Tasks solved with the zoomable user interface on the multi-level map are solved 22% faster (M=68.76, SD=43.38) than tasks solved with the overview+detail interface (M=84.23, SD=59.42). Tasks solved on the single-level map are solved with comparable mean completion times (Zoomable user interface: M=107.81, SD=68.05; overview+detail interface: M=105.85, SD=59.42). A significant difference is also found between interfaces, F(1,556)=4.01, p<.05, indicating that the zoomable user interface is faster overall.
Going into more detailed analysis, we found no significant interaction between task types and interfaces, F(1,556)=1.98, p>.1. However, as can be seen in Figure 8, panel b, the zoomable user interface is significantly faster for navigation tasks (M=86.9, SD=60.4) compared to the overview+detail interface (M=99.1, SD=64.4), F(1,556)=6.01, p<.05. All navigation tasks solved on the multi-level map with the zoomable user interface had faster task completion times than with the overview+detail interface. Contradicting our task-level hypothesis (see section 3), we find that one of the navigation tasks, which required subjects to compare map objects, was solved significantly faster with the zoomable user interface (estimated marginal mean=73.5, SE=11.12) than with the overview+detail interface (estimated marginal mean=113.9, SE=11.12), F(1,556)=6.47, p<.05. On the multi-level map, four of five browsing tasks were completed faster with the zoomable user interface. One of these, a task that required finding the first airport east of a given county, was solved significantly faster using the zoomable user interface (estimated marginal mean=81.81, SE=11.3) than using the overview+detail interface (estimated marginal mean=122.2, SE=11.2), F(1,556)=6.20, p<.05. This also contradicts our hypotheses.
Figure 8—Task completion time in seconds. This figure shows the mean task completion time in seconds. Error bars show the standard error of the mean. Panel a shows the task completion time for the multi-level and the single-level map. Panel b shows the task completion time for navigation and browsing tasks.
For the single-level map, no significant differences for individual tasks were found. This rejects our hypotheses that comparison tasks would be performed faster using the overview+detail interface and that browsing tasks involving scanning the entire map would be solved faster using the overview+detail interface.
Large differences between subjects exist. The slowest subject used on average 169 seconds per task, or 3.4 times as much as the fastest subject. For individual tasks, differences between subjects are as large as 1 to 23. For the overview+detail interface, subjects' visualization ability is negatively correlated with time usage, r=-.123, p<.05; for the zoomable user interface it is not, r=-.054, p>.3.
In the following, we investigate the differences between navigation in the two interfaces and try to provide some data that might explain the differences in task completion time, recall tasks, and satisfaction measures discussed on the preceding pages.
Dragging the field-of-view box is the preferred way of panning in the overview+detail interface: half of the tasks solved with this interface used this way of panning. Since the overview is panned by clicking outside the field-of-view box in only 9% of the tasks, the calculations that follow consider only panning by dragging the field-of-view box. Figure 9, panel a, shows the mean number of pan actions subjects make for each task. Subjects perform a significantly higher number of pans in the detail window using the zoomable user interface compared to the overview+detail interface, F(1,556)=23.31, p<.001. However, in the overview+detail interface subjects also pan extensively by dragging the field-of-view box in the overview window. On average, dragging the field-of-view box moves the detail window 2.8 times as many pixels as a pan action on the detail view. Consequently, subjects who use the overview+detail interface move the detail view 49% more pixels, i.e., a longer distance, even though they use fewer pan actions in doing so.
Figure 9—Panning and zooming in the two interfaces. Panel a shows the mean number of pans per task in the detail and overview windows. The two topmost lines show the mean number of pan actions in the detail window for the zoomable user interface (ZUI) and the overview+detail interface (O+D). The bottom-most line shows the mean number of pan actions on the overview, i.e., dragging of the field-of-view box. Panel b shows the mean number of zoom actions per task in the detail and overview windows. In both panels, error bars show the standard error of the mean.
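The pan-distance comparison above can be made concrete. In this sketch the 2.8× ratio is taken from the text, but the per-task pan counts and the pixel length of a detail pan are hypothetical placeholders (the actual means appear only in Figure 9):

```python
# Comparing total detail-view pan distance in the two interfaces.
# The 2.8x ratio (one field-of-view-box drag moves the detail view 2.8
# times as many pixels as one detail-view pan) is from the text; the
# per-task pan counts and the 100-pixel detail pan are hypothetical
# placeholders chosen for illustration.
OVERVIEW_PAN_RATIO = 2.8

def detail_view_distance(detail_pans, overview_pans, px_per_detail_pan=100.0):
    """Total movement of the detail view, in nominal pixels."""
    return (detail_pans + overview_pans * OVERVIEW_PAN_RATIO) * px_per_detail_pan

zui_dist = detail_view_distance(detail_pans=30, overview_pans=0)
od_dist = detail_view_distance(detail_pans=18, overview_pans=9.5)

# Fewer pan actions overall (27.5 vs. 30), yet a longer distance:
print(od_dist / zui_dist)  # ~1.49 with these illustrative counts
```

With counts like these, the overview+detail interface covers roughly 49% more distance despite issuing fewer pan actions, which is the pattern reported in the text.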
In 28% of the tasks solved with the overview+detail interface, the field-of-view box is resized; it is redrawn in less than 4% of the tasks. Figure 9, panel b, summarizes the zoom actions made by subjects. We find a significant interaction between interface and map type, F(1,556)=33.63, p<.001: a comparable amount of zooming is done in the two interfaces on the multi-level map, but on the single-level map twice as much zooming happens in the zoomable user interface compared to the overview+detail interface (ZUI: M=80.86, SD=78.32; O+D: M=41.43, SD=43.77). Subjects seldom zoom by changing the field-of-view box compared to how often they zoom on the detail view. Taking into account that a zoom action on the overview window on average changes the scale approximately half as much as a zoom action on the detail view, we find that in the zoomable user interface the sum of scale changes is 33% higher than in the overview+detail interface. One reason for these differences might be that subjects who use the zoomable user interface zoom out, pan, and zoom in when navigating between two points on the map (as suggested by Furnas & Bederson [1995]), while subjects who use the overview+detail interface pan using the overview.
In 55% of the 320 tasks solved with the overview+detail interface, subjects actively interacted with the overview window, i.e., they moved or resized the field-of-view box. Tasks in which the overview window was used were frequently solved by first interacting with the detail view, then switching to navigating using the overview, and then possibly back to the detail view. To better understand the benefit of the overview window, we compare the tasks that are solved by actively using the overview window with the tasks solved without using the overview. Tasks solved with active use of the overview are solved significantly slower (marginal mean=103.93, SE=3.98) than tasks where the overview window is not actively used (marginal mean=86.32, SE=4.57), F(1,267)=6.75, p<.01. Another way of understanding the use of the overview window is to look at the transitions between the overview and the detail window. We find that the number of transitions is strongly correlated with time usage, Spearman's rho=.404, p<.001: the more transitions between the overview and the detail window, the longer the task completion time.
Two subjects did not use the overview at all, while three subjects used the overview at least once for all ten tasks solved with the overview+detail interface. We found no correlation between VZ-2 score and the use of the overview window, r= -.05, p>.5.
We use our notes from observations during the experiment to make three points. First, many subjects experienced occasional problems with the combined zoom-and-pan button. Even though subjects practiced this combined button during the training tasks, 18 subjects zoomed at least once when they verbally indicated that they wanted to pan. The delay before zooming begins is sometimes too short, perhaps because subjects begin a pan action before having made up their minds about which direction to pan.
Second, subjects’ habit formation highlighted some limitations in the interfaces. At least eight subjects tried to use a navigation technique belonging to the overview window in the detail window, or vice versa. Some subjects tried to click on the detail window, probably with the intention of jumping to the place where they clicked; this way of navigating seems to be carried over from the overview window, where clicking on a point centers the field-of-view box on that point. Similarly, some subjects tried to zoom in and out while the mouse was over the overview window, mimicking the interaction with the detail view.
Third, we repeatedly observed that at least six subjects experienced what has been called desert fog [Jul & Furnas 1998], i.e. zoomed or panned into an area of the map that contained no map objects. When we observed the desert fog, two of these subjects were using the overview+detail interface, four the zoomable user interface.
Subjects significantly prefer the overview+detail interface. Subjects also score this interface significantly higher on the seven satisfaction questions, and comment that the overview+detail interface helps to keep track of the current position and that the overview window is useful for navigation. This result is consistent with previous empirical work on overviews [North & Shneiderman 2000; Hornbæk & Frøkjær 2001] and with recommendations in the design literature [Shneiderman 1998].
We find that the zoomable user interface is faster than the overview+detail interface, especially for navigation tasks and for tasks solved on the multi-level map. We also find that subjects who actively use the overview window are slower than subjects who only use the detail window. Our results are surprising considering previous studies, e.g. Beard & Walker [1990] and North & Shneiderman [2000], which found overview+detail interfaces to be faster than detail-only interfaces. However, in those studies navigation in the detail-only interface is done with scrollbars. We therefore consider those studies, and the common expectation that overview+detail interfaces improve efficiency over detail-only interfaces, to be weakened by their choice of detail-only interface, in contrast to the direct-manipulation zoomable user interface used in this study. Considering the difference between browsing and navigation tasks, our results are similar to those of Hornbæk & Frøkjær [2001]. In their study, and in ours, a detail-only interface was significantly faster for navigation tasks than an overview+detail interface.
In the context of our evaluation, we consider three explanations of the difference in task completion time between overview+detail and zoomable user interfaces. First, the overview might be visually distracting, continuously catching subjects’ attention and thus affecting task completion time. While we cannot definitively reject this explanation from the data collected, we note that subjects who do not actively use the overview window achieve task completion times comparable to tasks solved with the zoomable user interface (see section 4.4.2). This observation also weakens the explanation that using the overview+detail interface takes more time because this interface presents more information than the zoomable user interface. A second explanation is that switching between the detail and the overview window requires mental effort and time to move the mouse. Our data modestly support this explanation, since the number of transitions between overview and detail window is positively correlated with task completion time. A third explanation is that navigation on the overview window is coarse and that resizing the field-of-view box can be difficult at high zoom factors. The observation that subjects pan 49% longer using the overview+detail interface suggests that the overview window does not support fine-grained navigation and that subjects have to do additional navigation on the detail view. Subjects commented that the field-of-view box is hard to resize. In support of those comments, we note that the overview window used in the evaluation occupies 256×192 pixels; at a zoom factor of 20 the field-of-view box is only 13×10 pixels, which is probably hard for most users to resize and move using the mouse.
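The arithmetic behind the resizing problem is easy to reproduce; the sketch below uses the 256×192-pixel overview window from the evaluation:

```python
# Field-of-view box size at a given zoom factor, for the 256x192-pixel
# overview window used in the evaluation.
OVERVIEW_W, OVERVIEW_H = 256, 192

def fov_box_size(zoom_factor):
    """Field-of-view box dimensions in overview pixels (rounded)."""
    return round(OVERVIEW_W / zoom_factor), round(OVERVIEW_H / zoom_factor)

print(fov_box_size(20))  # (13, 10): hard to grab and resize with a mouse
```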
When using the multi-level map, subjects are faster, more accurate, and score the interface higher on subjective satisfaction measures, irrespective of which interface they use. The result is consistent with the literature on landmarks [Vinson 1999], since the top-level landmarks, for example the labels at the lowest scale on the multi-level map, are visible at all navigational scales. We also find that the zoomable user interface is faster than the overview+detail interface when used with the multi-level map and improves recall for map locations. The reason might be that the richer navigational cues in the zoomable user interface help the subjects to concentrate navigation and attention on the detail window. Feeling lost and having to reorient oneself, possibly by using the overview window, is less common with the multi-level map than with the single-level map. Our data also show that subjects make more scale changes in the zoomable user interface. Thus, using the scale information on the multi-level map in the zoomable user interface might be more natural since subjects habitually use zoom actions to navigate.
We found large individual differences with respect to all performance measures. Subjects' visualization score was correlated with some performance measures, but we did not find different patterns in the use of the overview window between subjects with high and low visualization scores. Thus, we currently do not know why these large differences exist or which interface features would benefit poor and good map-readers.
Overall, we found a trade-off between the two interfaces, with the zoomable user interface being fast and the overview+detail interface leading to higher satisfaction. Our results challenge some of the common criticism of zoomable user interfaces, e.g. that users lose their overview when zooming [Card et al. 1999, p. 634], because zoomable user interfaces were found to be comparable to overview+detail interfaces with respect to accuracy; for navigation tasks and on the multi-level map, the zoomable user interface was faster than the overview+detail interface. However, overview+detail interfaces lead to higher subjective satisfaction. We do not know whether the speed difference observed might diminish when users learn to cope with the complexity of the overview+detail interface.
We recommend that designers closely consider the trade-off in subjective satisfaction and task completion time between interfaces discussed above. The preferred interface type depends on the critical usability parameters in the particular context designed for. We recommend using multi-level maps, independent of which interface is used.
We suggest two challenges for researchers in zoomable user interfaces. First, the interaction method used in the evaluation occasionally causes subjects to zoom instead of panning. Experiments are needed to find a method for interacting with zoomable user interfaces through a two-dimensional input device that is intuitive and supports habit formation. We have used other interaction techniques ourselves, but picked the present interface because we believed it was easier for novices to use. Second, empirical research is needed on interface components that provide navigational cues. Our observations and subjects’ comments suggest that the interface could include cues about the current zoom factor, e.g. Furnas et al., cues about the current position in the information space, and aids for avoiding desert fog, e.g. Jul & Furnas [1998].
For overview+detail interfaces, research should aim at improving the usability of the overview window. Usability might be improved by changing the size of the overview window or by exploring distorted overview windows, which might give users better control over local navigation without losing the possibility of coarse global navigation. We also realize that our interface, and other overview+detail interfaces we have tried, should eliminate navigation commands that are specific to only the overview window or only the detail window, i.e., aim at unifying navigation [Raskin 2000].
Three areas for further research grow out of the limitations of our study. First, we suggest exploring further the visual and mental attention involved in using information visualization interfaces; the tradeoffs between dual information displays, such as the overview+detail interface, and zoomable user interfaces might be better understood that way. Second, lacking guidelines for how to design multi-level environments, the information spaces in this study were designed ad hoc; we suggest that guidelines for designing multi-level environments be developed. Finally, we acknowledge that more complex and longer-term tasks should supplement the present evaluation. While we consider the present selection of tasks to reflect essential tasks in navigating maps, it is not exhaustive. In addition to supporting our findings with respect to different types of information spaces, we suggest further exploring individual differences in navigation and usability preferences.
We compared the navigation patterns and usability of an overview+detail interface and a zoomable user interface. Thirty-two subjects spent an average of one hour and 30 minutes on solving tasks on a single-level and a multi-level map. Our results suggest a tradeoff between the two interfaces in subjective satisfaction and task completion time. Subjects score the overview+detail interface higher on seven subjective satisfaction questions and 80% prefer this interface. In contrast, subjects are faster with the zoomable user interface when used with the multi-level map or for navigation tasks. Subjects prefer using the multi-level map independently of the interface used; they are also significantly faster at completing tasks on this map. We also find large individual differences in subjects' ability to navigate the map, in task completion times, and in accuracy. However, subjects’ spatial ability did not seem to influence usability differently between interfaces. Based on our work, we recommend that the usability of overview+detail interfaces be improved, as should navigational aids for zoomable user interfaces. A better understanding of visual and mental attention in information visualization interfaces would help better explain the usability tradeoff found. Common expectations about difficulties with zoomable user interfaces and the relation between overview+detail and detail-only interfaces were not confirmed in this study. On the contrary, we found that zoomable user interfaces offer certain benefits compared to overview+detail interfaces that should be further explored by researchers and designers.
We thank all members of the Human-Computer Interaction Laboratory at the University of Maryland for valuable help and encouragement. Erik Frøkjær's and Ben Shneiderman's thoughtful comments improved the paper substantially. For thorough advice on statistics, we thank Per Settergren Sørensen. This work was funded in part by DARPA's Command Post of the Future project. The work was done during a six month visit of the first author to the Human-Computer Interaction Laboratory.
1. Which city is closest to the city Colton in Whitman County?
2. Which state park is located north of the city Ione in Pend Oreille County?
3. Which of the following two cities is located most to the north: Shelton in Mason County or Warden in Grant County?
4. Which of the following cities covers the largest area: Sequim in Clallam County, Sumas in Whatcom County, or Deer Park in Spokane County?
5. Which are the two largest parks passed on the railroad going from Westport in Grays Harbor County to Vancouver in Clark County?
1. Which two national parks in Washington are biggest?
2. Find and name two counties in Washington that contain two or more military facilities.
3. Find and name the first airport east of the county Skamania.
4. Which two cities in the counties on the northern border of Washington cover the largest area?
5. Which of the counties on the southern border of Washington contains the most cities?
1. Which city is closest to Baker City in Fallon County (in the eastern part of Montana)?
2. Which city is located west of the city Eureka in Lincoln County (in the north-west part of Montana)?
3. Which of the following two cities is located most to the north: Darby in Ravalli County (western part of Montana) or Columbus in Stillwater County (southern part of Montana)?
4. Which of the following cities in the eastern part of Montana covers the largest area: Wolf Point in Roosevelt County, Glendive in Dawson County, or Ekalaka in Carter County?
5. Which are the two largest cities on the railroad from the city Wibaux in Wibaux County (eastern part of Montana) to the city Red Lodge in Carbon County (southern part of Montana)?
1. Which two lakes in Montana are biggest?
2. Find and name two counties in Montana that contain at least three airports or airfields.
3. Find and name the first state park east of Fergus County (central Montana).
4. Which two cities in the counties on the northern border of Montana cover the largest area?
5. Which of the counties on the southern border of Montana contains the most cities?
Ahlberg, C. & Shneiderman, B. 1994. Visual Information Seeking: Tight coupling of dynamic query filters with starfield displays. In Proceedings of ACM Conference on Human Factors in Computing Systems (CHI '94, Boston MA, Apr. 24-28). ACM Press, New York, NY, 313-317.
Baldonado, M. Q. W., Woodruff, A., & Kuchinsky, A. 2000. Guidelines for Using Multiple Views in Information Visualization. In Proceedings of the 5th International Working Conference on Advanced Visual Interfaces (AVI 2000, Palermo, Italy, May 24-26). Tarantino, L. Ed. ACM Press, New York, NY, 110-119.
Beard, D. B. and Walker, J. Q. 1990. Navigational Techniques to Improve the Display of Large Two-Dimensional Spaces. Behaviour and Information Technology, 9, 6, 451-466.
Bederson, B. B. & Boltman, A. 1999. Does Animation Help Users Build Mental Maps of Spatial Information? In Proceedings of IEEE Symposium on Information Visualization (InfoVis '99, San Francisco, CA, Oct. 24-29). IEEE Press, New York, NY, 28-35.
Bederson, B. B. & Hollan, J. D. 1994. Pad++: A Zooming Graphical Interface System. In Proceedings of the 7th ACM Symposium on User Interface Software and Technology (UIST '94, Marina del Rey, CA, Nov. 2-4). ACM Press, New York, NY, 17-26.
Bederson, B. B., Meyer, J., & Good, L. 2000. Jazz: An Extensible Zoomable User Interface Graphics ToolKit in Java. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology (UIST'00, San Diego, CA, Nov. 6-8). ACM Press, New York, NY, 171-180.
Card, S. K., Mackinlay, J. D. & Shneiderman, B. 1999. Readings in Information Visualization. San Francisco, CA: Morgan Kaufmann.
Card, S. K., Robertson, G. G., & Mackinlay, J. D. 1991. The information visualizer, an information workspace. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '91, New Orleans, LA, Apr. 27-May 2). ACM Press, New York, NY, 181-188.
Carr, D., Plaisant, C., and Hasegawa, H. 1998. Designing a real-time telepathology workstation to mitigate communication delays. Interacting with Computers, 11, 1, 33-52.
Chen, C. and Czerwinski, M. P. 2000. Special Issue on Empirical Evaluation of Information Visualizations. International Journal of Human-Computer Studies, 53, 5.
Chin, J. P., Diehl, V. A., & Norman, K. L. 1988. Development of an instrument for measuring user satisfaction of the human-computer interface. In Proceeding of the ACM Conference on Human Factors in Computing Systems (CHI '88, Washington, DC, May 15-19). Soloway, E., Frye, D., and Sheppard, S. B. Eds. ACM Press, New York, NY, 213-218.
Combs, T. & Bederson, B. B. 1999. Does Zooming Improve Image Browsing? In Proceedings of the ACM Conference on Digital Libraries (DL '99, Berkeley, CA, Aug. 11-14). ACM Press, New York, NY, 130-137.
Druin, A., Stewart, J., Proft, D., Bederson, B., & Hollan, J. D. 1997. KidPad: A Design Collaboration Between Children, Technologists, and Educators. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '97, Atlanta, GA, Mar. 22-27). Pemperton, S. Ed. ACM Press, New York, NY, 463-470.
Eick, S. G., Steffen, J. L., and Sumner, E. E. 1992. Seesoft-A Tool for Visualizing Line Oriented Software Statistics. IEEE Transactions on Software Engineering, 18, 11, 957-968.
Ekstrom, R. B., French, J. W. & Harmon, H. H. (1976). Manual for kit of factor-referenced cognitive tests. Princeton, NJ: Educational Testing Service.
Frank, A. U. and Timpf, S. 1994. Multiple Representations for Cartographic Objects in a Multi-Scale Tree-An Intelligent Graphical Zoom. Computers & Graphics, 18, 6, 823-829.
Furnas, G. W. & Bederson, B. B. 1995. Space-Scale Diagrams: Understanding Multiscale Interfaces. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '95, Denver, CO, May 7-11). Katz, I. R., Mach, R., Marks, L., Rosson, M. B., and Nielsen, J. Eds. ACM Press, New York, NY, 234-241.
Furnas, G. W. & Zhang, X. 1998. MuSE: a multiscale editor. In Proceedings of the 11th Annual ACM Symposium on User Interface Software and Technology (UIST '98, San Francisco, CA, Nov. 1-4). ACM Press, New York, NY, 107-116.
Ghosh, P. & Shneiderman, B. 1999. Zoom-Only vs Overview-Detail Pair: A Study in Browsing Techniques as Applied to Patient Histories. University of Maryland Technical Report, CS-TR-4028, ftp://ftp.cs.umd.edu/pub/hcil/Reports-Abstracts-Bibliography/99-12html/99-12.html.
Guo, Huo, Zhang, Weiwei, and Wu, Jing 2000. The Effect of Zooming Speed in a Zoomable User Interface. Report from Student HCI Online Research Experiments (SHORE), http://otal.umd.edu/SHORE2000/zoom/.
Hightower, R. R., Ring, L. T., Helfman, J. I., Bederson, B. B., & Hollan, J. D. 1998. Graphical Multiscale Web Histories: A Study of PadPrints. In Proceedings of the Ninth ACM Conference on Hypertext (Hypertext '98, Pittsburgh, PA, June 20-24). ACM Press, New York, NY, 58-65.
Hornbæk, K. & Frøkjær, E. 1999. Do Thematic Maps Improve Information Retrieval? In IFIP TC.13 International Conference on Human-Computer Interaction (INTERACT '99, Edinburgh, Scotland, Aug. 30-Sep. 3). IOS Press, Amsterdam, 179-186.
Hornbæk, K. & Frøkjær, E. 2001. Reading Electronic Documents: The Usability of Linear, Fisheye, and Overview+Detail Interfaces. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2001, Seattle, WA, Mar. 31- Apr. 5). ACM Press, New York, NY, 293-300.
Igarashi, T. & Hinckley, K. 2000. Speed-dependent automatic zooming for browsing large documents. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology (UIST 2000, San Diego, CA, Nov. 5-8). ACM Press, New York, NY, 139-148.
Jul, S. & Furnas, G. W. 1998. Critical zones in desert fog: aids to multiscale navigation. In Proceedings of the 11th Annual ACM Symposium on User Interface Software and Technology (UIST '98, San Francisco, CA, Nov. 1-4). ACM Press, New York, NY, 97-106.
North, C. & Shneiderman, B. 2000. Snap-Together Visualization: Evaluating Coordination Usage and Construction. International Journal of Human-Computer Studies, 53, 715-739.
North, C., Shneiderman, B., & Plaisant, C. 1996. User controlled overviews of an image library: A case study of the Visible Human. In Proceedings of the 1st ACM International Conference on Digital Libraries (DL '96, Bethesda, MD, Mar. 20-23). ACM Press, New York, NY, 74-82.
Páez, L. B., da Silva-Fh., J. B., & Marchionini, G. 1996. Disorientation in Electronic Environments: A Study of Hypertext and Continuous Zooming Interfaces. In Proceedings of the 59th Annual Meeting of the American Society for Information Science (ASIS '96, Baltimore, MD, Oct. 19-24). Harding, S. Ed., 58-66.
Perlin, K. & Fox, D. 1993. Pad: An Alternative Approach to the Computer Interface. In Proceedings of the 20th Annual ACM Conference on Computer Graphics (SIGGRAPH '93, Anaheim, CA, Aug. 2-6). Kajiya, J. T. Ed. ACM Press, New York, NY, 57-64.
Plaisant, C., Milash, B., Rose, A., Widoff, S., & Shneiderman, B. 1996. LifeLines: Visualizing personal histories. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '96, Vancouver, Canada, Apr. 13-18). ACM Press, New York, NY, 221-227.
Plaisant, C., Carr, D., & Shneiderman, B. 1995. Image browsers: Taxonomy, guidelines, and informal specifications. IEEE Software, 12, 2, 21-32.
Pook, S., Lecolinet, E., Vaysseix, G., & Barillot, E. 2000. Context and Interaction in Zoomable User Interfaces. In Proceedings of the 5th International Working Conference on Advanced Visual Interfaces (AVI 2000, Palermo, Italy, May 23-26). ACM Press, New York, NY, 227-231.
Raskin, J. 2000. The Humane Interface: New Directions for Designing Interactive Systems. Reading, MA: Addison-Wesley.
Schaffer, D., Zuo, Z., Greenberg, S., Bartram, L., Dill, J., Dubs, S., & Roseman, M. 1996. Navigating Hierarchically Clustered Networks through Fisheye and Full-Zoom Methods. ACM Transactions on Computer-Human Interaction, 3, 2, 162-188.
Shneiderman, B. 1998. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 3rd ed. Reading, MA: Addison-Wesley.
Vinson, N. G. 1999. Design Guidelines for Landmarks to Support Navigation in Virtual Environments. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '99, Pittsburgh, PA, May 15-20). Williams, M. G., Altom, M. W., Ehrlich, K., and Newman, W. Eds. ACM Press, New York, NY, 278-285.
Ware, C. 2000. Information Visualization: Perception for Design. San Francisco, CA: Morgan Kaufmann Publishers.
Woodruff, A., Landay, J., & Stonebraker, M. 1998. Constant Information Density in Zoomable Interfaces. In Proceedings of the 4th International Working Conference on Advanced Visual Interfaces (AVI '98, L'Aquila, Italy, May 24-27). 110-119.
Woodruff, A., Landay, J., & Stonebraker, M. 1998. Goal-Directed Zoom. In Summary of the ACM Conference on Human Factors in Computing Systems (CHI '98, Los Angeles, CA, Apr. 18-23). Karat, C.-M., Lund, A., Coutaz, J., and Karat, J. Eds. ACM Press, New York, NY, 305-306.