Understanding Single-Handed Mobile Device Interaction


Amy K. Karlson, Benjamin B. Bederson
Human-Computer Interaction Lab
Computer Science Department
University of Maryland, College Park, MD

{akk, bederson}@cs.umd.edu

Jose L. Contreras-Vidal
Cognitive-Motor Behavior Laboratory
Kinesiology Department
University of Maryland, College Park, MD

pepeum@umd.edu

 


ABSTRACT

A major challenge faced in the design of mobile devices is that they are typically used when the user has limited physical and attentional resources available. We are interested in the circumstances when a user has only a single hand available. To offer insight for future one-handed mobile designs, we conducted three foundational studies: a field study to capture how users currently operate devices; a survey to record user preferences for the number of hands used for a variety of mobile tasks; and an empirical evaluation to understand how device size, target location, and movement direction influence thumb mobility. We have found that one-handed use of keypad-based phones is widespread, and in general, a majority of phone and PDA users would prefer to use one hand for device interaction. Additionally, our results suggest that device size is not a factor in how quickly users can access objects within thumb reach, but that larger devices have more areas that are out of reach, and thus inappropriate for one-handed access. Finally, regardless of device size, diagonal thumb movement in the NW↔SE direction is the most difficult movement for right-handed users to perform.

Categories and Subject Descriptors

H.5.2 [Information Interfaces and Presentation]: User Interfaces – ergonomics, evaluation, interaction styles.

General Terms

Measurement, Performance, Design, Experimentation, Human Factors.

Keywords

Thumb movement, mobile devices, one-handed designs.

1.     INTRODUCTION

 

 

The handheld market is growing at a tremendous rate; the technology is advancing rapidly and experts project that mobile phone sales will top 1 billion by 2009 [18]. To meet customer demand for portability and style, device manufacturers continually introduce smaller, sleeker profiles to the market. Yet advances in battery power, processing speed and memory allow these devices to come equipped with increasing numbers of functions, features, and applications. Unfortunately these divergent trends are at direct odds with usability: richer content accessed via shrinking input and output channels simply makes devices harder to use. The unique requirements for mobile computing only compound the problem, with use scenarios including unstable environments, eyes-free interaction, competition for attention resources, and varying hand availability [16]. While each of these constraints requires attention in design, it is the final one with which we are currently interested.

Devices that accommodate single-handed interaction can offer a significant benefit to users by freeing a hand for the host of physical and attentional demands common to mobile activities. But there is little evidence that current devices are designed with this goal in mind. Small, light phones that are easy to control with one hand are unfriendly to thumbs due to small buttons and crowded keypads. Larger devices are not only harder to manage with a single hand, they tend to feature more (rather than larger) buttons, as well as stylus-based touchscreens whose rich interface designs maximize information content, but offer targets too small, and/or too distant, for effective thumb interaction.

While it seems obvious which features inhibit single-handed use, there has been relatively little systematic study of enabling technologies and interaction techniques. Most commercial and research efforts in one-handed device interaction have focused primarily on either a specific technology (e.g., accelerometer-augmented devices [5,6,19], touchscreens [4,9]) or task (e.g., media control [8,17], text entry [21]). In the varied landscape of mobile devices and applications, solutions must ultimately extend to a wide range of forms and functions. So, instead, we take the approach of considering the basic human factors involved in one-handed device interaction.

Here we report on three studies conducted to understand different aspects of one-handed mobile design requirements. We first ran a field study to capture the extent to which single-handed use is currently showing up “in the wild”. Second, we polled users directly to record personal accounts of current and preferred device usage patterns. The results from these studies help motivate one-handed interface research, and offer insight into the devices and tasks for which one-handed techniques would be most welcomed. Finally, we performed an empirical evaluation of thumb tap speed to understand how device size, target location, and movement direction influence performance. From these results we suggest hardware-independent design guidelines for the placement of interaction objects. In sum, our findings offer foundational knowledge in user behavior, preference, and motor movement for future research in single-handed mobile design.

2.     RELATED WORK

The physical and attentional demands of mobile device use were perhaps first reported for fieldworkers [10,16], from which design recommendations for minimal-attention and one-handed touchscreen interface designs emerged [16]. Though well suited to the directed tasks of fieldwork, the guidelines do not generalize to the varied and complex personal information management tasks of today’s average user. Research continues on the effects that mobility has on attention and user performance (e.g., [15]), as well as on how these factors can be replicated for laboratory study [2].

Several approaches for one-handed device interaction have been proposed. Limited gesture sets have been explored for mobile application control with both the thumb [8,9,16] and index finger [17], but none have specifically considered ergonomic factors. Since text entry remains the input bottleneck for mobile devices, many are working on improvements, some targeting one-handed use. Peripheral keyboards for one-handed text entry are available (e.g., Twiddler [13]), but the support required by a hand, desk or lap violates our definition of one-handed device control. Text entry on numeric keypads is generally one-handed, but methods to improve input efficiency have focused on reducing the number of key presses required via techniques such as word prediction (e.g., T9 [20]), rather than by optimizing button sizes, locations, or movement trajectories. Accelerometer-augmented devices support spatial orientation as an input channel, and have been shown to support one-handed panning [5], scrolling [19], and text entry [21]. However, the coarse level of control and potential confusion with movement due to normal mobile use limit the viability of tilt for generalized input.

Scientists in the medical community have studied the biomechanics of the thumb extensively for the purposes of both reconstruction and rehabilitation. The structure of the thumb is well understood [1], but only now are scientists beginning to reliably quantify the functional capabilities of the thumb. Strength has been the primary parameter used to assess mechanical ability, and the influence of movement direction upon thumb strength has been established [12]. Unfortunately, only standard anatomical planes have been considered, which excludes movements toward the palm that are typical of mobile device interaction. As a complement to force capabilities, others have looked at the extent of thumb movement. Kuo [11] has developed a model for the maximal 3D workspace of the thumb and Hirotaka [7] has quantified an average for thumb rotation angle. The experimental conditions for these studies, however, do not account for constraints imposed by holding objects of varying size, such as the different forms of handheld devices.

3.     FIELD STUDY


One motivation for our research in single-handed mobile designs was our assumption that people already use devices in this manner. Since current interaction patterns, whether by preference or necessity, are predictive of future behavior, they are likely to be transferred to new devices. This suggests that designs should become more accommodating to single-handed use, rather than less, as the current trend seems to be. To capture current behavior, we conducted an in situ study of user interaction with mobile devices. The study targeted an airport environment for the high potential of finding mobile device users and ease of access for unobtrusive observation.

3.1     Field Study Method

We observed 50 travelers (27 male) at Baltimore Washington International Airport’s main ticketing terminal over a six-hour period during peak holiday travel in November of 2004. Because observation was limited to areas accessible to non-ticketed passengers, seating options were scarce. We expected to observe the use of both PDAs and cell phones since travelers are likely to be coordinating transportation, catching up on work, and pursuing entertainment. Since most users talk on the phone with one hand, we recorded only the cell phone interactions that included both the dialing and talking phases of use. All observations were performed anonymously without any interaction with the observed.

3.2     Field Study Measures

For each user observed, we recorded sex, approximate age, and device type used: candy bar phone, flip phone, Blackberry, or PDA. A “candy bar” phone is the industry term for a traditional-style cellular phone with a rigid rectangular form, typically about 3 times longer than wide. For phone use, we recorded the hand(s) used to dial (left, right or both) and the hand(s) used to speak (left, right or both). We also noted whether users were carrying additional items, and their current activity (selected from the mutually exclusive categories: walking, standing, or sitting).

3.3     Field Study Results

Only two users were observed operating devices other than mobile phones: one used a PDA and the other a Blackberry. Both were seated and using two hands. The remainder of the discussion focuses on the 48 phone users (62.5% flip, 37.5% candy bar). Overall, 74% used one hand to dial. Of the one-handed dialers, 65% had their other hand occupied; 54% were walking, 35% were standing, and 11% were sitting. Figure 1 presents the distribution of users who used one vs. two hands for phone dialing, segmented by concurrent activity (walking, standing, or sitting). The distribution of users engaged in the three activities reflects the airport scenario, where many more people were walking or standing than sitting. It is plain from Figure 1 that the relative proportion of one-handed to two-handed dialers varied by activity; the vast majority of walkers dialed with one hand, about two-thirds of standers dialed with one hand, but more seated dialers used two hands. However, we also noted whether one hand was occupied during the activity, and found that walkers were most likely to have one hand occupied (60%), followed by standers (50%) and sitters (25%); this ordering of hand availability, rather than the activity itself, may be the true reason walkers were more likely than standers to dial with one hand, followed by sitters. Regardless of activity, when both hands were available for use, the percentage of one- vs. two-handed dialers was equal.

3.4     Analysis of Field Study

Although Figure 1 suggests a relationship between user activity and dialing behavior, it is unclear whether activity influences hand use or vice versa. Furthermore, since the percentage of users with one hand occupied correlates with the distribution of one-handed use across activities, hand availability may be the more influential factor in the number of hands used to dial. While use scenario certainly impacts usage patterns, the fact that users were as likely to use one hand as two hands when both hands were available suggests that preference, habit and personal comfort also play a role. Regardless of scenario, we can safely conclude that one-handed phone use is quite common, and thus is an essential consideration in design.

3.4.1     Generalizability

The choice of observation location may have biased our results from those found in the general population since travelers may be more likely to be: 1) carrying additional items; 2) standing or walking; and 3) using a phone vs. PDA. Different environments, information domains, populations, and scenarios will yield unique usage patterns. Our goal was not to catalogue each possible combination, but to learn what we could from a typical in-transit scenario.

4.     SURVEY

While informative as a preliminary exploration, the field study had several shortcomings: a) a lack of knowledge about the motivation for usage style; b) the limited types of devices observed (phones); and c) the limited task types observed (assumed dialing). To broaden our understanding of device use over these dimensions, we designed a brief survey to capture user perceptions of, preferences for, and motivations surrounding their own device usage patterns.

4.1     Survey Method

The survey consisted of 18 questions presented on a single web page which was accessed via an encrypted connection (SSL) from a computer science department server. An introductory message informed potential participants of the goals of the survey and assured anonymity. Notification that results would be posted for public access after the survey was closed provided the only incentive for participation. Participants were solicited from a voluntary subscription mailing list about the activities of our laboratory. In addition, the solicitation was propagated to one recipient’s personal mailing list and a medical informatics mailing list, and a link to the survey was posted on two undergraduate CS course web pages.

4.2     Survey Measures

For each user, we collected age, sex and occupation demographics. Users recorded all styles of phones and/or PDAs owned, but were asked to complete the survey with only one device in mind - the one used for the majority of information management tasks. We collected general information about the primary device, including usage frequency, input hardware, and method of text entry. We then asked a variety of questions to understand when and why people use one vs. two hands to operate a device. We asked users to record the number of hands used (one and/or two) for eighteen typical mobile tasks, and then to specify the number of hands (one or two) they would prefer to use for each task. Three pairs of activities were designed to distinguish between usage patterns for different tasks within the same application, which we differentiated as “read” (email reading, calendar lookup, and contact lookup) vs. “write” (email writing, calendar entry, and contact entry) tasks. Users then recorded the number of hands used for the majority of device interaction and under what circumstances they chose one option over the other. Finally, users were asked how many hands they would prefer to use for the majority of interactions (including no preference), and were also asked to record additional comments.

4.3     Survey Results

Two hundred twenty-nine participants (135 male) responded to the survey solicitation. One male participant was eliminated from the remaining analysis because his handheld device was specialized for audio play only, leaving 228. Median participant age was 38.5 years. Participant occupations reflect the channels for solicitation, with 25% in CS, IT or engineering, 23% students of unstated discipline, 20% in the medical field, 10% in education, and the remainder (21%) from other professional disciplines.

4.3.1     Devices

The three most common devices owned were flip phones (52%), small candy bar phones (23%) and Palm devices without a Qwerty keyboard (20%). Palm devices with an integrated Qwerty keyboard tied with Pocket PCs without a keyboard (14%). Since interaction behavior may depend on device input capabilities, we reclassified each user’s primary device into one of four general categories based on the device’s input channels: (i) keypad-only (51%) are devices with a 12-key numeric keypad but no touchscreen, (ii) TS-no-qwerty (23%) are devices with a touchscreen but no Qwerty keyboard, (iii) TS-with-qwerty (21%) are devices with a touchscreen as well as an integrated Qwerty keyboard, and finally (iv) qwerty-only (5%) are devices with an integrated Qwerty keyboard but no touchscreen. For users with multiple devices, we derived primary device type from the text entry method reported.
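
The reclassification can be summarized as a simple rule over a device’s input channels. The following Python sketch is illustrative only; the boolean parameters are our own naming, not fields from the survey instrument.

def classify_device(has_touchscreen, has_qwerty):
    # Map a primary device's input channels to one of the four categories.
    if has_touchscreen and has_qwerty:
        return "TS-with-qwerty"
    if has_touchscreen:
        return "TS-no-qwerty"
    if has_qwerty:
        return "qwerty-only"
    return "keypad-only"  # 12-key numeric keypad, no touchscreen or Qwerty

# Example: a flip phone has neither a touchscreen nor a Qwerty keyboard.
print(classify_device(has_touchscreen=False, has_qwerty=False))  # keypad-only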

4.3.2     Current Usage

Of the 18 activities users typically perform with devices, 9 were performed more often with one hand, 6 more often with two hands, and 3 were performed nearly as often with one vs. two hands. Figure 2a displays these results, with the shaded backgrounds grouping the activities preferred with one, either or two hands. Upon inspection, all of the “reading” activities were performed more often with one hand (top) and all “writing” activities with two hands (bottom). Considering users’ device types, we notice that with the exception of gaming, owners of keypad-only devices were more likely to use one hand regardless of activity, owners of TS-no-qwerty were more likely to use two-hands for most activities, and those owning Qwerty based devices were more likely to use two hands when performing writing tasks, but not reading tasks.

Overall, 45% of participants stated they use one hand for nearly all device interactions, as opposed to only 19% who responded similarly for two hands. Considering device ownership, however, users of touchscreen-based devices were more likely to use two hands “always” than they were one hand (Figure 3). When participants use one hand, the majority (61%) perceive they do so whenever the interface supports it, a reason cited by only 10% of those who use two hands. Device form dictated usage behavior when the device was too small for two hands, too large for one hand, or when large devices could be supported by a surface and used with one hand. Participants cited task type as a reason for hand choice, primarily as a trade-off between efficiency and resource usage: 14% of users selected one hand only for simple tasks (conserving resources), while 5% selected two hands for entering text, gaming, or otherwise improving the speed of interaction (favoring efficiency). Finally, according to respondents, the majority of two-handed use occurs when it is the only way to accomplish the task given the interface (63%).

4.3.3     Preference

When asked how many hands users preferred to use while performing the same 18 tasks, one hand was preferred overwhelmingly to two hands for all tasks (Figure 2b). The activities with the closest margin between the number of participants who preferred one vs. two hands were playing games (13%) and composing email (16%). With one exception (gaming), the activities for which more than 14% of users stated a preference for two hands were “writing” tasks (i.e., those that required text entry): text entry, contact entry, calendar entry, email writing, and text messaging, in decreasing order. Even so, except for users of TS-with-qwerty devices, the majority of users stated a preference for using one hand, regardless of task or device owned. Users of TS-with-qwerty devices preferred two hands for text messaging, email composition, and text entry. Consistent with these data, 66% of participants stated they would prefer to use one hand for the majority of device interaction, versus 9% who would prefer two hands for all interaction. Twenty-three percent did not have a preference and 6 users did not respond.

4.4     Survey Summary

Considering current usage patterns only, there is no obvious winner between one- and two-handed use. Excluding phone calls, the number of activities for which a majority of respondents use one (7) vs. two hands (6) is nearly balanced. However, device type certainly influences user behavior; users of keypad-only devices nearly always use one hand, while users of touchscreen devices more often favor two hands, especially for tasks involving text entry. But user justifications for hand choice indicate that the hardware/software interface is to blame for much of the two-handed use occurring today. Most use one hand if at all possible and only use two hands when the interface makes a task impossible to do otherwise. Other than gaming, tasks involving text entry are the only ones for which users may be willing to use two hands, especially when the device used provides an integrated Qwerty keyboard. It seems, therefore, that the efficiency gained by using two hands for such tasks is often worth the dedication of physical resources, which is also true of the immersive gaming experience.

While most users can imagine the ideal of single-handed text entry, enabling single-handed input may not be enough - throughput is also important. Ultimately, it is clear that interface designers of all device types should make one-handed usability a priority, and strive to bridge the gap between current and desired usage patterns.

5.     THUMB MOVEMENT STUDY

The third component of our work is an empirical study that examines thumb movement in the context of mobile device interaction. As input technologies and device forms come and go, biomechanical limitations of the thumb will remain. Although the thumb is a highly versatile appendage with an impressive range of motion, it is most adapted for grasping tasks, playing opposite the other four fingers [3]. Hence thumb interaction on the surface of today’s mobile devices introduces novel movement and exertion requirements for the thumb – repetitive pressing tasks issued on a plane parallel to the palm. With this in mind, thicker devices operable via side-mounted buttons, squeezing or tilting may be more ergonomic one-handed designs than those we have today. But the efficiency of the thumb for direct interaction promises to keep it a primary input device for single-handed designs. For this reason, we believe a fundamental understanding of thumb capabilities when holding a device can guide the placement of interaction targets for both hardware and software interfaces designed for one-handed use. Although we can make reasonable guesses about thumb capabilities, empirical evidence is a better guide. Since no strictly relevant studies have yet been conducted, we have developed a study to help us understand how device form and task influence thumb mobility.

Since tapping is the primary interaction method for keypad-based devices, and has also proven useful for touchscreens [9], we focused our investigation on surface tapping tasks. We hypothesized that the difficulty of a tapping task would depend on device size, movement direction, and interaction location. We capture the impact of these factors on user performance by using movement speed as a proxy for task difficulty – the harder the task, the slower the thumb movement.

5.1     Equipment

5.1.1     Device models


For real devices, design elements such as buttons and screens communicate to the user the “valid” input areas of the device. We instead wanted outcomes of task performance to suggest appropriate surface areas for thumb interaction. To remove the bias inherent in existing devices, we modeled four common handheld devices: (1) a Siemens S56 candy bar phone measuring 4.0 x 1.7 x 0.6 in (10.2 x 4.3 x 1.5 cm); (2) a Samsung SCH-i600 flip phone measuring 3.5 x 2.1 x 0.9 in (9 x 5.4 x 2.3 cm); (3) an iMate smartphone measuring 4 x 2.0 x 0.9 in (10.2 x 5.1 x 2.3 cm) and (4) an HP iPAQ h4155 Pocket PC measuring 4.5 x 2.8 x 0.5 in (11.4 x 7.1 x 1.3 cm). We refer to these as simply SMALL, FLIP, LARGE, and PDA. We removed all superficial design features, leaving only the fundamental form. 3D models of each device were printed using Z Corp.’s ZPrinter 310 (http://www.zcorp.com/) rapid prototyping system. Device models were hollow, but weight was reintroduced to provide a realistic feel. Once printed and cured, the models were sanded and sealed to achieve a smooth finish. Device models used in the study are shown at various stages of development in Figure 4.

5.1.2     Target Design

A maximal orthogonal grid of circular targets 1.5 cm in diameter was affixed to the surface of each device (Figure 4e). Circles were used for targets so that the sizes would not vary with direction of movement [14]. The target size was selected to be large enough for the average-sized thumb, while also providing adequate surface coverage for each device. The grid dimensions for the devices were: SMALL (2x5), FLIP (3x4), LARGE (3x7) and PDA (4x6).

5.1.3     Measurement

A typical measurement strategy for tapping tasks would involve a surface-based sensor to detect finger contact. Unfortunately, due to the number and variety of device sizes investigated, no technical solution was found to be as versatile, accurate or affordable as required. Instead we used Northern Digital Inc.’s OPTOTRAK 3020 motion analysis system designed for fine-grained tracking of motor movement. The OPTOTRAK uses 3 cameras to determine the precise 3D coordinates of infrared emitting diodes (IREDs). Three planar IREDs affixed to the surface of each device defined a local coordinate system, and a fourth IRED provided redundancy. The spatial positions of two markers affixed to each participant’s right thumb were then translated with respect to the coordinate system of the device to establish relative movement trajectories. Diode positions were sampled at 100Hz, and data were post processed to derive taps from thumb minima.
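
To illustrate the kind of rigid-body computation involved (a simplified sketch, not the OPTOTRAK vendor pipeline), the following Python code builds a device-local frame from three planar marker positions and expresses a thumb marker position in that frame; the example coordinates are hypothetical.

import numpy as np

def device_frame(p0, p1, p2):
    # Orthonormal device frame from three planar IREDs: p0 is the origin,
    # p1 defines the x-axis, and p2 fixes the device surface plane.
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    z = np.cross(x, p2 - p0)
    z /= np.linalg.norm(z)          # normal to the device surface
    y = np.cross(z, x)              # completes a right-handed frame
    return p0, np.column_stack([x, y, z])

def to_device_coords(point_world, origin, rotation):
    # Express a world-space thumb position relative to the device, so the
    # third coordinate is the thumb's height above the device surface.
    return rotation.T @ (point_world - origin)

origin, rotation = device_frame(np.array([0.0, 0.0, 0.0]),
                                np.array([0.10, 0.0, 0.0]),
                                np.array([0.0, 0.05, 0.0]))
print(to_device_coords(np.array([0.03, 0.02, 0.015]), origin, rotation))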

5.1.4     Software

Data collection and experiment software was run on a Gateway 2000 Pentium II with 256 MB of RAM running Windows 98.

5.2     Participants

Twenty participants were recruited via fliers posted in our Department of Computer Science, with the only restriction that participants be right-handed. Participants (15 male) ranged in age from 18 to 35 years with a median age of 25 years. Participants received $20 for their time.

5.3     Design

For each target on each device (SMALL, FLIP, LARGE, and PDA), users performed all combinations of distance (1 and 2 circles) x direction (N↔S, E↔W, NW↔SE, NE↔SW in compass notation) supported by the geometry of the device. For example, SMALL could not accommodate trials of distance 2 circles in the E↔W, NW↔SE, or NE↔SW directions. Note that the grid layout results in actual distances that differ between orthogonal trials (N↔S, E↔W) and diagonal trials (NW↔SE, NE↔SW), which we consider explicitly in our analysis. For LARGE and PDA, trials of distance 4 circles were included as the geometry permitted. Finally, each device included a NW↔SE and NE↔SW trial to opposite corners of the target grid. For each device, a small number of trials (1 for SMALL, LARGE and PDA, 3 for FLIP), selected at random, were repeated so as to make the total trial count divisible by four. The resulting number of trials for each device were: SMALL (32), FLIP (48), LARGE (108), and PDA (128). Since the larger devices had more surface targets to test, they required more trials.

5.4     Tasks

Users performed reciprocal tapping tasks in blocks as follows. For SMALL and FLIP, trials were divided equally into 2 blocks. For LARGE and PDA, trials were divided equally into 4 blocks. Trials were assigned to blocks to achieve roughly equal numbers of distance x direction trials, distributed evenly over the device. Trials were announced by audio recording so that users could focus attention fully on the device. Users were presented with the names of two targets by number. For example, a voice recording would say “1 and 3”. After 1 second, a voice-recorded “start” was played. Users tapped as quickly as possible between the two targets, and after 5 seconds, a “stop” was played. After a 1.5 second delay the next trial began. Trials continued in succession to the end of the block, at which point the user was allowed to rest as desired, with no user resting more than 2 minutes. Device and block orders were assigned to subjects using a Latin Square, but the presentation of within-block trials was randomized for each user.
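
The trial timing can be summarized in code form. The Python sketch below mirrors the sequence described above; the play() callback stands in for a hypothetical audio player and is not part of the original apparatus.

import time

def run_trial(target_a, target_b, play):
    # One reciprocal tapping trial, following the timing described above.
    play(f"{target_a} and {target_b}")  # announce the target pair
    time.sleep(1.0)                     # 1 s pause before the start cue
    play("start")
    time.sleep(5.0)                     # participant taps for 5 s
    play("stop")
    time.sleep(1.5)                     # inter-trial delay

def run_block(trials, play):
    # Within-block trial order is randomized ahead of time per participant.
    for a, b in trials:
        run_trial(a, b, play)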

5.5     Procedure

Each session began with a brief description of the tasks to be performed and the equipment involved. Two IRED markers were then attached to the right thumb with two-sided tape. One diode was placed on the leftmost edge of the thumb nail, and a second on the left side of the thumb. The orthogonal placement was intended to maximize visibility of at least one of the diodes to the cameras at all times. The two marker wires were tethered loosely to the participant’s right wrist with medical tape.

The participant was seated in an armless chair with the OPTOTRAK cameras positioned over the left shoulder. At this point the participant was given more detailed instruction about the tasks, and informed of the error conditions that might occur during the study: if at any point fewer than three of the device-affixed IREDs, or neither of the thumb IREDs, were visible to the cameras, an out-of-sight error sound would be emitted, at which point he or she should continue the trial as naturally as possible while attempting to make adjustments to improve diode visibility. Next, the participant was given the first device and performed a practice session of 24 trials, selected to represent all condition types and a variety of surface locations. During the practice trials, the administrator intentionally occluded the diodes to give the participant familiarity with the out-of-sight error sound and proper remedies. After completion of the practice trials and indication that the participant was ready, the study proper began. After all trials for a device were completed, users were allowed to rest while the next device was readied, typically 3 to 5 minutes. After completing all trials for the last device, the participant completed a questionnaire, recording demographics and subjective ratings. Total session time was approximately 2 hours.

5.6     Measures

Raw 3D thumb movement data for each 5 second trial were truncated to the middle 3 seconds to eliminate artifacts resulting from initiation lag and anticipated trial completion, phenomena routinely observed by the administrator. In a post-processing phase, taps were identified within the remaining 3 second interval, and a single average tap time was computed as the difference in time between the onset of the first tap and the onset of the last tap, divided by one fewer than the total number of taps detected. In a post-experiment questionnaire, participants assigned an overall rating of one-handed difficulty to each device (1-7, where 1 = most difficult, 7 = most comfortable), and indicated the device regions that were both easiest and hardest to interact with.
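
Concretely, the average tap time is the span from the first to the last detected tap onset divided by one fewer than the number of taps. A minimal Python sketch:

def average_tap_time(tap_onsets):
    # tap_onsets: detected tap onset times (seconds), in increasing order.
    if len(tap_onsets) < 2:
        raise ValueError("need at least two taps")
    return (tap_onsets[-1] - tap_onsets[0]) / (len(tap_onsets) - 1)

# Example: 11 evenly spaced taps over 3 seconds -> 0.3 s per tap.
print(average_tap_time([0.2 + 0.3 * i for i in range(11)]))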

5.6.1     Data Post-Processing

Since the 3D thumb position (x,y,z) was recorded relative to the device held, the z-value represented the thumb height above the device surface. While one might think that taps occurred when the z-distance was 0, the IREDs were mounted on participants’ thumbnails, so they never actually reached the surface of the device. Taps were instead defined as points when both the z-value and change in z-value (velocity) were minimal. For example, plotting z-values over time reveals a wave pattern whose valleys indicate taps (Figure 6).

Raw data were first preprocessed to extract the middle 3 seconds of each trial and to select the thumb diode with the more complete data set (i.e., the fewer missing frames or, in the case of a tie, the more compact windows of missing frames). Linear interpolation was performed on missing frames if the gap was less than 100 ms. Missing frames included those lost due to out-of-sight errors, as well as occasional frames dropped by the collection hardware.
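
As an illustration of the interpolation step (a sketch assuming missing frames are marked as NaN and a 100 Hz sampling rate, as above):

import numpy as np

def fill_short_gaps(signal, sample_rate_hz=100.0, max_gap_ms=100.0):
    # Linearly interpolate runs of missing samples shorter than max_gap_ms;
    # longer gaps are left untouched.
    z = np.asarray(signal, dtype=float).copy()
    max_gap = int(max_gap_ms / 1000.0 * sample_rate_hz)
    i = 0
    while i < len(z):
        if np.isnan(z[i]):
            j = i
            while j < len(z) and np.isnan(z[j]):
                j += 1
            if (j - i) < max_gap and i > 0 and j < len(z):
                z[i:j] = np.interp(np.arange(i, j), [i - 1, j], [z[i - 1], z[j]])
            i = j
        else:
            i += 1
    return z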

The data were then analyzed with the PICKEXTR MATLAB function, which identifies extrema in a signal. This function is provided with the RelPhase.Box MATLAB toolbox for relative phase analysis of oscillatory systems, by Tjeerd Dijkstra. The accuracy of the tap classifier was verified by inspecting a visual representation (Figure 6) of each trial. When required, corrections were made as follows: (1) valid endpoints were preserved, (2) if intermediate taps were missing, they were added, (3) if intermediate taps were incorrect, they were recoded by hand, and (4) if endpoints were invalid, the entire signal was coded by hand. Since average tap time was calculated from the number, not the placement, of intervening taps, this method minimized the bias of human annotation as much as possible. Of the trials considered for statistical analysis, 1.3% were discarded because they could not be encoded by machine or human, or had less than 1.5 seconds of encodable signal.
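
The following Python sketch shows the spirit of the tap classification (valleys where both the height above the device and its velocity are locally minimal); it is a simplified stand-in, not the PICKEXTR routine itself.

import numpy as np

def detect_taps(z, sample_rate_hz=100.0):
    # z: thumb height above the device surface per frame (gaps already filled).
    z = np.asarray(z, dtype=float)
    speed = np.abs(np.gradient(z)) * sample_rate_hz   # |dz/dt| per second
    onsets = []
    for i in range(1, len(z) - 1):
        z_valley = z[i] <= z[i - 1] and z[i] <= z[i + 1]
        slow = speed[i] <= speed[i - 1] and speed[i] <= speed[i + 1]
        if z_valley and slow:
            onsets.append(i / sample_rate_hz)          # tap onset time (s)
    return onsets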

5.7     Results

The goal of our analysis was to understand whether user performance was influenced by device, task region, and movement direction. For maximal comparison across devices, we limited the analysis to trials with distances of 1 or 2 circles, since the geometries of all but the smallest device supported these trials in all four directions. To address the fact that actual movement distance differed between orthogonal and diagonal trials, we analyzed these groups separately. For all analyses, Huynh-Feldt corrections were used when the sphericity assumption was violated, and Bonferroni corrections were used for post hoc comparisons.

5.7.1     Direction

A 2 (distance) x 2 (direction) repeated measures analysis of variance (RM-ANOVA) was performed on mean task time data for both orthogonal (distances: 1, 2; directions: N↔S, E↔W) and diagonal (distances: 1.4, 2.8; directions: NW↔SE, NE↔SW) trials for the three largest devices. Since SMALL did not support distance 2 trials in all four directions, a one-way RM-ANOVA was performed on mean task time for trials of distance 1 and 1.4.
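
For readers who wish to reproduce this style of analysis, a 2 x 2 repeated-measures ANOVA of this form can be run in Python with statsmodels, as sketched below. This is illustrative only (the original analysis was not necessarily run this way), the file and column names are hypothetical, and sphericity corrections such as Huynh-Feldt would still need to be applied separately.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per participant x trial, with columns
# subject, distance, direction, tap_time.
df = pd.read_csv("diagonal_trials.csv")

# Average replicate trials so each subject contributes one value per cell.
cells = df.groupby(["subject", "distance", "direction"],
                   as_index=False)["tap_time"].mean()

result = AnovaRM(cells, depvar="tap_time", subject="subject",
                 within=["distance", "direction"]).fit()
print(result.anova_table)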

SMALL: A main effect of direction was observed for diagonal trials (F(1,19) = 65.1, p<.001). Post hoc analyses showed that trials in the NE↔SW direction were performed significantly faster than those in the NW↔SE direction (0.26 v. 0.28 s, p<.001).

FLIP, LARGE, and PDA: Results were similar across the analyses of the largest three devices. Unsurprisingly, a main effect of distance was observed for both orthogonal and diagonal trials, with shorter trials significantly faster than longer trials. There were no further effects of direction or interaction between direction and distance for orthogonal trials. However, for diagonal trials, a main effect of direction was observed, with trials in the NE↔SW direction significantly faster than those in the NW↔SE direction for all devices. In addition, a distance x direction interaction showed that performance differences between the diagonal trials were more pronounced for longer trials than shorter trials (Table 1).

5.7.2     Device

To determine if device size impacted comparable tasks across devices, we analyzed all trials performed in the lower right 3x4 region of the three largest devices using a 3 (devices) x 43 (trials) RM-ANOVA. While a main effect of trial was observed, this was expected, as trials of every distance and direction were included for analysis. Yet no effects of device or device x trial were found.

5.7.3     Target Location

To determine if target location affected performance, we analyzed task time for the shortest tasks on each device. We chose short tasks because they provide high granularity for discriminating among device locations. Since direction was shown to affect task time for diagonal trials, only orthogonal tasks could be considered. For each device, a one-way RM-ANOVA was performed on mean trial time, with the number of trials varying by device.

A main effect of target location was observed for SMALL (F(8.6, 163.3) = 2.1, p=.032), FLIP (F(11.5, 218.4) = 3.5, p<.001) and PDA (F(9.8, 188.1) = 3.9, p<.001), but not for LARGE. However, in post hoc analyses, only PDA had a reasonable number of trials that differed significantly from one another. Since it is difficult to draw helpful conclusions from specific pairs of trials, we explored two aggregation techniques.

5.7.3.1     Data-Derived Regions

For each device we ordered tasks by mean tap time, and then segmented them into seven groups. If the number of trials was not divisible by 7, the remaining trials were included in the middle group. A one-way RM-ANOVA on mean group task time was performed for each device. A main effect of group was found for FLIP (F(5.5, 105.1) = 11.3, p<.001), LARGE (F(3.1, 58.7) = 8.4, p<.001), and PDA (F(4.8, 91.0) = 22.0, p<.001). From these results, groups were labeled fastest and slowest such that all groups in fastest were significantly faster than all groups in slowest, according to post hoc analyses. Trials in these groups are shown visually in the rightmost column of Figure 6. Mean task times for fastest v. slowest trials for each device were FLIP (0.26 v. 0.28 s), LARGE (0.25 v. 0.28 s), and PDA (0.26 v. 0.29 s).
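
The grouping step can be made precise as follows (a small Python sketch of the procedure just described):

import numpy as np

def seven_groups(trial_means):
    # Order trials by mean tap time and split into seven groups of equal
    # size, folding any remainder into the middle (fourth) group.
    order = np.argsort(trial_means)
    base, remainder = divmod(len(order), 7)
    sizes = [base] * 7
    sizes[3] += remainder
    groups, start = [], 0
    for size in sizes:
        groups.append(list(order[start:start + size]))
        start += size
    return groups   # groups[0] holds the fastest trials, groups[6] the slowest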

5.7.3.2     Subject-Derived Regions

Based on subjective opinions of which regions were easiest for each device, we divided tasks into 3 groups: (E)asy, (M)edium, and (H)ard. Tasks for SMALL and FLIP were assigned to only the E and M groups. A one-way RM-ANOVA on mean group task time was performed for each device. A main effect of group was found for FLIP (F(5.5, 105.1) = 11.3, p<.001), LARGE (F(3.1, 58.7) = 8.4, p<.001), and PDA (F(4.8, 91.0) = 22.0, p<.001). Post hoc analyses showed all groups differed significantly from each other for FLIP and PDA. For LARGE, E and M were significantly faster than H, but were indistinguishable from each other, and so were collapsed as E (Figure 6).

5.7.4     Subjective Preferences

After completing all trials, users were presented with diagrams of each device similar to those in Figure 6 and asked to identify the targets they found most easy and most difficult to interact with. Aggregating results across users yielded a preference “heat map” for the least and most accessible targets of each device (columns 1 and 2 of Figure 6), with darker regions indicating more agreement among participants. We see that for each device the two representations are roughly inverses of one another.
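
Aggregation amounts to counting, per target, how many participants marked it; a brief Python sketch (the target identifiers are hypothetical):

from collections import Counter

def agreement_map(markings):
    # markings: one iterable of marked target ids per participant.
    counts = Counter()
    for marked in markings:
        counts.update(set(marked))   # count each participant once per target
    return counts                    # higher counts -> darker heat-map cells

# Example: three participants marking targets as most difficult.
print(agreement_map([{1, 2}, {2}, {2, 5}]))   # Counter({2: 3, 1: 1, 5: 1})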

In addition to region marking, we asked users to rate the overall difficulty of managing each device with one hand on a 7 point scale (7 = most comfortable). Average ratings from most to least comfortable were as follows: SMALL (6.4), FLIP (5.4), LARGE (4.1) and PDA (3.0).

5.8     Thumb Movement Summary

The findings from our analysis of thumb movement suggest the following guidelines. First, thumb movement in the NW↔SE direction is difficult for users regardless of device size, and thus should be avoided, especially for repetitive tasks such as text or data entry. Presumably the difficulty arises from the considerable flexion required to perform these types of tasks. Under this reasoning, the mirror-image movement (NE↔SW) would impede left-handed operation, so conservative designs should constrain repetitive movement to the N↔S and E↔W directions.

Second, device region affects both task performance and perceived difficulty. Not only did slowest trials correspond to those regions users found most difficult, but fastest trials also matched those regions users found most easy (Figure 6). In general, regions within easy reach of the thumb were fastest and most comfortable, favoring those toward the midline of the device, a “sweet spot” that required movement primarily from the base of the thumb. The lower right corners of the devices present an exception in that they are biomechanically awkward to reach because they are “too close” rather than “too far”.

It is important to recognize that the absolute time differences between the fastest and slowest regions are not so great as to provide the basis for our design recommendations. Rather, the significant slowdown (7%-12%) suggests that a mechanical and/or physical encumbrance is to blame, which is of concern primarily from an ergonomics perspective. In fact, we believe that the slowdowns we found should be thought of as optimistic, since they capture only localized movement and exclude movements that required substantial changes in user grip; subjective opinion, user observations and practical experience indicate that designers should be cautioned against using the entire surface for thumb interaction, especially for larger devices. We instead recommend placing interaction objects centrally to accommodate both left- and right-handed users, or offering configurable displays. Since hand size and thumb length will differ by individual, designs should strive to support a range of users.

Finally, the result that users performed trials in the lower right 3x4 sub-grid of the three largest devices equally well suggests that large devices do not inherently impede thumb movement. Rather, larger devices simply have more areas that are out of thumb reach, and so have more regions that are inappropriate for object placement in one-handed designs.

6.     CONCLUSION

In an effort to understand the one-handed interaction needs of mobile device users, we looked at a broad range of device use. Our field study showed that for at least one class of user (travelers), mobile phones are often used with one hand, and that this behavior seems to correlate with activity, such as walking or holding items in the other hand. Our survey revealed that the vast majority of users want to use one hand for interacting with mobile devices, but that current interfaces, especially for touchscreens, are not designed to support dedicated single-handed use. Finally, an empirical evaluation of thumb interaction on varying-sized devices suggests that 1) mid-device regions are easiest to access; 2) the position of a target with respect to the thumb impacts performance more than device size; and finally 3) NW↔SE movement is difficult for right-handed users and becomes more so with movement distance.

7.     ACKNOWLEDGEMENTS

This work was supported in part by Microsoft Research.

8.     REFERENCES

[1]     Barmakian, J.T. Anatomy of the joints of the thumb. Hand Clin, 8, 4 (1992), 681-691.

[2]     Barnard, L., Yi, J.S., Jacko, J.A. and Sears, A. An empirical comparison of use-in-motion evaluation scenarios for mobile computing. Int. Jour. of Human-Computer Studies, 62 (2005), 487-520.

[3]     Bourbonnais, D., Forget, R., Carrier, L. and Lepage, Y. Multidirectional analysis of maximal voluntary contractions of the thumb. Journal of Hand Therapy, 6, 4 (1993), 313-318.

[4]     Brewster, S.A., Lumsden, J., Bell, M., Hall, M. and Tasker, S. Multimodal 'Eyes-Free' interaction techniques for mobile devices. Proc. CHI 2003, ACM Press (2003), 473-480.

[5]     Dong, L., Watters, C. and Duffy, J. Comparing two one-handed access methods on a PDA. Proc. Mobile HCI 2005, ACM Press (2005), 235-238.

[6]     Hinckley, K., Pierce, J., Sinclair, M. and Horvitz, E. Sensing techniques for mobile interaction. Proc. UIST 2000, ACM Press (2000), 91-100.

[7]     Hirotaka, N. Reassessing current cell phone designs: using thumb input effectively. Ext. Abstracts CHI 2003, ACM Press (2003), 938-939.

[8]     iPod, www.apple.com/ipod/, Apple Computer, 2006.

[9]     Karlson, A.K., Bederson, B.B. and SanGiovanni, J. AppLens and LaunchTile: two designs for one-handed thumb use on small devices. Proc. CHI 2005, ACM Press (2005), 201-210.

[10]  Kristoffersen, S. and Ljungberg, F. Making place to make it work: empirical exploration of HCI for mobile CSCW. Proc. GROUP 1999, ACM Press (1999), 276-285.

[11]  Kuo, L., Cooley, W., Kaufman, K., Su, F. and An, K. A kinematic method to calculate the workspace of the TMC joint. Proc. Inst. Mech. Eng. H, 218, 2 (2004), 143-149.

[12]  Li, Z.-M. and Harkness, D.A. Circumferential force production of the thumb. Medical Eng. & Physics, 26 (2004), 663-670.

[13]  Lyons, K., Starner, T., Plaisted, D., Fusia, J., Lyons, A., Drew, A. and Looney, E.W. Twiddler typing: one-handed chording text entry for mobile phones. Proc. CHI 2004, ACM Press (2004), 671-678.

[14]  MacKenzie, I.S. and Buxton, W. Extending Fitts' Law to two-dimensional tasks. Proc. CHI 1992, ACM Press (1992), 219-226.

[15]  Oulasvirta, A., Tamminen, S., Roto, V. and Kuorelahti, J. Interaction in 4-second bursts: the fragmented nature of attentional resources in mobile HCI. Proc. CHI 2005, ACM Press (2005), 919-928.

[16]  Pascoe, J., Ryan, N. and Morse, D. Using while moving: HCI issues in fieldwork environments. Trans. on Computer-Human Interaction, 7, 3 (2000).

[17]  Pirhonen, P., Brewster, S.A. and Holguin, C. Gestural and audio metaphors as a means of control in mobile devices. Proc. CHI 2002, ACM Press (2002), 291-298.

[18]  Pittet, S., Hart, T.J., Cosimo, C., Garofano, G., Ingelbrecht, N. and Liew, E. Forecast: Mobile Terminals, Worldwide, 2000-2009. Gartner, Inc. (2005).

[19]  Rekimoto, J. Tilting operations for small screen interfaces. Proc. UIST 1996, ACM Press (1996), 167-168.

[20]  Tegic, Inc. www.tegic.com, 2005.

[21]  Wigdor, D. and Balakrishnan, R. TiltText: using tilt for text input to mobile phones. Proc. UIST 2003, ACM Press (2003), 81-90.

 
