Determining Causes and Severity of End-User Frustration

Irina Ceaparu1, Jonathan Lazar2, Katie Bessiere3, John Robinson3 and Ben Shneiderman1


1Department of Computer Science, Human-Computer Interaction Laboratory,

Institute for Advanced Computer Studies & Institute for Systems Research

University of Maryland, College Park, Maryland 20742

 

2Department of Computer and Information Sciences &

Center for Applied Information Technology

Towson University, Towson, Maryland, 21252

 

3Department of Sociology
University of Maryland, College Park, Maryland 20742

 

(irina@cs.umd.edu, jlazar@towson.edu, kbessiere@socy.umd.edu, robinson@socy.umd.edu, ben@cs.umd.edu)

 

 

Revised: May 6th, 2003

 

Keywords: user frustration, user interfaces, user experience, time delays, errors, user perception

 

Abstract

While computers are beneficial to individuals and society, users frequently encounter frustrating experiences when using them. This study of 111 participants measures the frequency, cause, and severity of frustrating experiences. The data showed that frustrating experiences happen regularly. The applications in which frustrating experiences happened most often were web browsing, e-mail, and word processing. The most-cited causes of frustrating experiences were error messages, dropped network connections, long download times, and hard-to-find features. The time lost due to frustrating experiences ranged from 47% to 53% of time spent on a computer, depending on the location and study method. After discarding extreme cases, the time lost was still above 38%. These disturbing results should be a basis for future study.

 


Determining Causes and Severity of End-User Frustration

 

Computers have many beneficial impacts, but unfortunately, frustration is a universal experience for computer users.  The annoyances of losing work when a crash occurs, struggling to understand an error message, or spending too much time to clear spam and viruses have become symbolic of the struggles associated with modern technologies. Computers can be the cause of many problems, often at the worst possible time.

Some problems stem from the users’ lack of knowledge, poor training, or unwillingness to read instructions or take tutorials.  Often frustration results from flaws in the computer hardware, software, or networking, from troubling interactions among components supplied by diverse manufacturers, or from malicious actions by other users. 

 A number of preliminary research steps are necessary to guide developers who are working on the goal of making computer usage less frustrating for users. A first step is to gain a better understanding of what frustrates users of computers. Then taxonomies of frustrating experiences can be developed and means to measure their severity and frequency can be identified.  These three steps should lead to solutions with enough supporting evidence so that requests for improvements will be well received by all parties involved. 

 

 

Background Research

 

The literature on user frustration is just emerging, but there are already a number of research directions related to errors, time delays, and emotional reactions to problematic situations.

 

Errors

Certainly, there is a lot of overlap between errors and frustration, as users tend to find errors very frustrating. Frustration, however, is a broader topic than errors. Errors produce frustration when users perceive that something is in an incorrect state, regardless of whether it is their fault, a design flaw, or an implementation bug (Lazar, Meiselwitz and Norcio, 2003). Many things can frustrate users (such as pop-up advertisements, viruses, and spam mail) even when the computer is operating in a correct state and users perceive it to be so. However, since errors are a major cause of user frustration, the research on errors provides useful background for research on frustration.

While there is no universally agreed definition of an error, an error can be broadly defined as occurring when users perceive that something in the computing system is not providing the desired outcome, and they are therefore unable to reach their task goals (Norman, 1983). This might be due to a hardware or software failure (such as a crash) that is not directly caused by the actions of the users. Alternatively, an error might be caused by the users themselves: for example, users either choose the wrong commands to reach their task goals, or choose the correct commands but enter them in an incorrect manner (such as a spelling error or a mode error).

Errors can be especially frustrating experiences for novice users, who are unable to fully understand the cause of the error, are unable to understand how to appropriately respond to the error, and therefore, may perform actions that compound the severity of the error (Carroll and Carrithers, 1984; Lazar and Norcio, 2000; Lazar, Meiselwitz, and Norcio, 2003). Even expert users may have trouble responding to errors if the system’s feedback is poor. In addition, error messages tend to be inconsistent, unclear and confusing, which does not help users respond to the error, but more likely frustrates them (Shneiderman, 1998; Lazar and Huang, 2003).

 

Time Delays

While users generally prefer a shorter response time, the appropriate response time is related to the users’ past experiences, users’ knowledge level related to technology, cost of an error, and outside time pressures. For instance, novice users may be willing to wait longer than expert users for a computer to respond (Shneiderman, 1998). In addition, the importance of the task and the related time pressure to complete a task may influence users’ expectations and frustration related to time delays. Frustration can be reduced when delays are predictable and users are made aware of the estimated time until they can move on with their task. Recent research on time delays has focused on the internet and web environment. Time delays are especially frustrating on the web, when users are typically requesting content from a remote site. In these situations, the delay can be caused by numerous factors and components (Sears and Jacko, 2000), and is inherently unpredictable (Johnson, 1995; Johnson, 1998). A number of studies have found that time delays are problematic on the web. As the time delay increases, users may find the content less interesting (Ramsay, Barbesi, and Preece, 1998), and of a lower quality (Jacko, Sears, and Borella, 2000). A long time delay can make it harder for users to remember what they were doing, and the related context in which they had made the request (Shubin and Meehan, 1997). In addition, web pages that take a very long time to load may also cause users to believe that an error has occurred, because the computer has not responded in an appropriate amount of time (Lazar and Norcio, 2000; Lazar and Norcio, 2002).

 
Emotional Reactions

Another related area of research is that of emotional reactions to computing technology (Reeves and Nass, 1996). Schleifer and Amick conducted a study that analyzed the effects of computer system response time (slow vs. rapid) and method of pay (incentive vs. nonincentive) on mood disturbances and somatic discomfort (Schleifer and Amick, 1989). Regardless of method of pay, slow response time generated higher ratings of frustration and impatience than did rapid response time. In addition, ratings of rush and tension were higher with incentive pay than without incentive pay, regardless of system response time. Mood disturbances and somatic discomfort increased linearly with the amount of time spent performing a data entry task over the course of the workday. This effect was independent of system response time or method of pay. The results indicate that computer systems which incorporate features such as rapid response times reduce work stress, while the motivational advantages of computer-based incentive pay programs must be balanced against the stress effects of this method of pay.

Another study had as its goal the development of a computer system trained to sense a user's emotional state via the recognition of physiological signals (Riseberg, Klein, Fernandez, and Picard, 1998). The researchers designed a controlled study in which participants played a vision-oriented computer game using a (seemingly) traditional graphical user interface. The game consisted of a series of puzzles. The researchers created incentives (a $100 prize) for the participants to play the game as fast as possible and achieve a good score. They also created seemingly random obstacles to attaining a good score (at specific but irregular intervals during game play, the software interface was designed to simulate the mouse failing or “sticking”), so that the participants would experience frustration. The study found a correlation between physiological signal patterns (skin conductivity, blood volume pressure, and muscle tension) and game events. The method proved useful for solving some of the problems in building a computer that can recognize affect.

Other contributors to negative emotional responses are system complexity and poorly crafted interfaces, which lead to experiences of confusion, frustration, and failure (Baecker, Booth, Jovicic, McGrenere, and Moore, 2000).  Such experiences may be most strongly felt by novice users who often are confronted with instructions, menu choices, and dialog boxes that they cannot understand.  One of the key challenges in making information and communications technologies universally usable is to bridge the gap between what users know and what they need to know, thereby leading to a more successful, less frustrating user experience.

 

User Satisfaction and Frustration

User satisfaction has been utilized in previous studies as a dependent variable, being used as an affective measure of the success of a technology (Olaniran, 1996; Zviran, 1992; Collins et al., 1999).  From the socio-psychological literature, satisfaction is also defined as the completion of a goal or task, and goal-directed behavior is aimed at the satisfaction of some need, desire, or want.  Frustration occurs at an interruption or inhibition of the goal-attainment process, where a barrier or conflict is put in the path of an individual (Dollard et al., 1939).  Sigmund Freud defined frustration as both external and internal barriers to goal attainment and internal obstacles blocking satisfaction (Freud, in Strachey 1958).  In other words, a person is frustrated if they are prevented from achieving an expected satisfying result (Berkowitz, 1978).  However, users can still achieve satisfaction in their tasks despite the presence of frustration in the path of task achievement. The Technology Acceptance Model identifies usefulness and ease of use as the two biggest influences on the user acceptance of technology (Davis, 1993). This model suggests that, even with a computer application that is not easy to use, users will persevere in their attempts to reach a task goal if it is important to them.

One large study of user frustration was sponsored by Compaq. A survey of 1,255 workers in the United Kingdom assessed their frustrations with information technology (Compaq, 2001). Of those who had their own personal computers at work, nearly half have felt frustrated or stressed by the amount of time it takes to solve problems. Two in five blame computer jargon for exacerbating their frustration, while three quarters of respondents who suffer daily problems with their computers say that their colleagues “swear at their monitors” out of frustration. The survey also analyzed the business cost of computer frustration. Nearly a quarter (23%) of respondents said that their work was interrupted daily due to computer crashes and other faults. Two in five who suffer daily breakdowns claim that this delay has caused them to miss deadlines, while one in ten have felt like criticizing their company to clients as well as friends because of frustration with the ineptness of their information technology departments. This is despite the fact that one in six admit that their problems are normally due to their own lack of knowledge and understanding.

An important contribution to the collection and analysis of the sources of frustrating experiences (application and system crashes) has been made by Bugtoaster (www.bugtoaster.com). The Bugtoaster software consists of a client program installed on a computer and a web site that work in concert to capture, track, and display information about the crashes that affect the computer. Normally Bugtoaster sits silently on a user’s system and waits for an application to crash. When it does, it captures the details relating to the crash. The details of the crash are packaged up and stored on the user’s computer hard disk. Periodically, crash details are sent to the Bugtoaster database server where they are compared and correlated with the crashes of other Bugtoaster community members. Summaries of crashes can be viewed on the web site, along with large collections of statistical data regarding the top 50 applications that cause crashes, which operating systems and which vendors are involved in the most crashes, and which bugs have been repaired.
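To make this capture-and-aggregate architecture concrete, a rough sketch of such a crash reporter is given below. This is an illustrative approximation only, not Bugtoaster's actual code; the directory name, server URL, and function names are hypothetical.

# Rough sketch of a Bugtoaster-style crash reporter (illustrative only; all
# names, paths, and the server URL are hypothetical, not Bugtoaster's own).
import json
import time
import urllib.request
from pathlib import Path

QUEUE_DIR = Path.home() / ".crash_reports"            # local store for captured crashes
SERVER_URL = "https://crash-db.example.com/reports"   # hypothetical aggregation server

def capture_crash(application: str, details: dict) -> None:
    """Package the details of a crash and store them on the local disk."""
    QUEUE_DIR.mkdir(exist_ok=True)
    report = {"application": application, "timestamp": time.time(), "details": details}
    path = QUEUE_DIR / f"crash_{int(report['timestamp'])}.json"
    path.write_text(json.dumps(report))

def upload_pending_reports() -> None:
    """Periodically send queued crash reports to the server, where they can be
    compared and correlated with crashes from other users."""
    for path in QUEUE_DIR.glob("crash_*.json"):
        data = path.read_text().encode("utf-8")
        request = urllib.request.Request(
            SERVER_URL, data=data, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request):
            path.unlink()  # drop the local copy once the server has accepted it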

 

Research Methodology

 

To learn more about what users find frustrating, data was collected about hundreds of frustrating experiences. First, a pilot study was conducted in a Computer Science class at the University of Maryland, in which 37 participants were asked to describe, in written form, frustrating experiences with computers. From an analysis of the data, a list of problem categories and the frequency with which they appear was developed. The 5 categories were internet problems, application problems, operating system problems, hardware problems, and other problems.  Table 1 lists the top 3 frustrations for each of the 5 categories.

 

 

 

Table 1

Top sources of frustration from 37 student reports in pilot study

Internet: pop-up ads (7), long download time (5), slow/dropped connection (3)
Applications: Windows blue screen of death (5), “illegal operation” error message in Windows Explorer (3), Excel problems (2)
Operating System: freezes (16), low memory (5), booting problems (3)
Hardware: installation incompatibilities (4), mouse problems (3), printer problems (2)
Other: spam (1), viruses (1), file problems (locate, open) (1)

After the pilot study, a number of instruments were developed for use in the research study: frustrating experience reports, a pre-session survey, and a post-session survey.  The time diary method was adopted. Traditional time diaries require users to keep a journal in which they log each activity and its duration throughout the day. The modified time diaries in this study required users to log each frustrating experience as it occurred during their session.  This is an improvement over retrospective survey questions, because estimates from memory often lead to inflated or incorrect answers.  In addition, modified time diaries enable researchers to capture the session length and the time lost due to frustrating experiences with reasonable accuracy.

A key determinant of frustration is the importance of the users’ goal. Research indicates that individuals are more committed to goals when the goals are important to them than when they are not (Locke, 1996).  For this reason, users were asked to record their frustrations during a time when they would be using a computer for their personal use as opposed to tasks assigned to them. 

Subjects were asked to spend at least an hour using a computer, and report their frustrating experiences via the frustrating experience reports (see Appendix A). No specific tasks were assigned or expected. Rather, users were simply asked to carry on with their normal tasks and report experiences that were frustrating. This approach to collecting data was more likely to yield data representative of the actual tasks that users perform. A pre-session survey (to be filled out before beginning the session) and a post-session survey (to be filled out after the session) were also administered.  Subjects had to fill out a time sheet on the frustrating experience reports, recording the start time and stop time of each session (one session if they did not take any breaks) and the number of frustrating experiences per session.
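For concreteness, the information captured by each frustrating experience report (see Appendix A) can be sketched as a simple record; the class and field names below are ours, chosen for illustration, and are not the schema of the actual data-collection web site.

# Illustrative record of one frustrating experience report; the field names
# mirror the questions in Appendix A but are otherwise hypothetical.
from dataclasses import dataclass

@dataclass
class FrustratingExperienceReport:
    task_description: str      # what the user was trying to do
    task_importance: int       # 1 (not very important) .. 9 (very important)
    problem_source: str        # e.g. "web browsing", "email", "word processing"
    solution_taken: str        # e.g. "I knew how to solve it because it happened before"
    frequency: str             # e.g. "several times a week", "first time it happened"
    frustration_level: int     # 1 (not very frustrating) .. 9 (very frustrating)
    minutes_to_fix: float      # time spent fixing this specific problem
    minutes_lost_other: float  # additional time lost (waiting, replacing lost work)
    could_work_on_other: bool  # whether the user could work on something else meanwhile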

The pre-session survey (see Appendix B) asked for demographic information, computer experience and attitudes, level of computer anxiety, and mood.  Previous research indicates that level of computer experience or perception of computer self-efficacy can affect subsequent user behavior (Brosnan 1998; Murphy, Coover, & Owen 1989).  Our questions were chosen after reviewing previous research on the Computer Attitude Scale, assessing computer attitudes, computer anxiety/confidence, and computer liking (Loyd & Gressard 1984; Nash & Moroz 1997).  These studies suggested that prior experience and level of perceived knowledge would affect an individual’s level of frustration as well.  In addition, the overall state of participants was assessed with three questions dealing with life satisfaction, general mood, and how often the individual gets upset. 

The post-session survey (see Appendix C) consisted of questions assessing mood after the session, how frustrated overall the individual was after the session, how these frustrations affect the rest of the day, and the frequency and typical nature of the frustrating experiences during the session. The pre- and post-session surveys were then tested with students and with a number of people in the HCI field to improve the clarity of the questions.

Once the surveys and frustrating experience report had been developed on paper, they were implemented on the web. A database-driven web site on user frustration was developed at Florida Institute of Technology (FIT) to collect the pre-session and post-session surveys, as well as the frustrating experience report. The web is an accepted method for collecting surveys for research, and there are established ways to enhance the validity of data collected (Lazar and Preece, 2001).

 

The scenario for data collection included:

 

  1. Users go to the FIT web site, register, and fill out the pre-session survey
  2. Users perform their typical tasks for an hour or more
  3. When users encounter a frustrating experience, they fill out a paper copy of the frustrating experience report (note: filling out the report online at this point would have taken more time and distracted users from the task at hand; we felt the task environment was better modeled if users could quickly fill out a paper form and continue with their tasks)
  4. After completing an hour or more of typical computer task work, users would log into the FIT web site, and fill out the post-session survey. After completing the post-session survey, the users would transfer their paper-based frustrating experience reports onto the web-based database. This took place outside of the pre-or-post-session surveys, as well as the hour-long session.

 

Two data collection phases were required: self-report diaries and observation of another user. These dual approaches were used to determine if there were differences in results when users reported their own experiences vs. when they observed others.  Self-reports might intrude on user work and lead to inflated estimates, whereas observations had the benefit of an external observer who might be more objective.  Since subjects at the University of Maryland and Towson University took part in both phases, we kept their data separate to see if these groups had different outcomes. Knowledgeable undergraduates at both institutions were available for participation as part of their courses in human-computer interaction and information technology.

 

Self-report phase

 

In the self-report phase, 33 computer science undergraduate students at the University of Maryland (UMD) and 26 computer information systems undergraduate students at Towson University (Towson) reported personal frustrating experiences. The participants were 37 males and 22 females, with an average age of 22.7 years (S.D. = 3.8). As discussed in the research methodology section, the participants had to go to the User Frustration Project web site and register. They filled out a pre-session survey on demographic data and their experience with computers. Once the registration was completed, they were asked to report at least 3 frustrating experiences that took place when they were performing their common tasks. Then they had to log in to the web site, answer a short post-session questionnaire intended to capture their mood after the frustrating experiences, and fill out a form for each frustrating experience.  Almost half of the users spent an hour or a little longer, but 31 users spent between 100 and 450 minutes documenting frustrating experiences.

 

Observation phase

 

In the observation phase, the subjects observed someone performing their usual computer tasks, and asked the person to fill out the pre-session survey, note their frustrations, and fill out the post-session survey. Essentially, the method for the observation phase was the same as the method for the self-reporting phase of the study. The subjects had to ask the person they observed to go to the User Frustration Project web site and register. The person observed had to fill out a pre-questionnaire regarding demographic data and their experience with computers. Once the registration was completed, the subjects had to sit beside the person observed and fill out (on paper) at least 3 frustrating experience reports. The subjects were asked to encourage the persons observed to think aloud and describe what they were trying to do, and to ask questions if they were not sure whether the person was experiencing frustration.  Think-aloud studies are a common strategy in usability research and are believed to have minimal impact on user performance (Shneiderman 1998). When they were done with the observation, they had to ask the person observed to log in to the web site and fill out a post-questionnaire form intended to capture their mood after the frustrating experiences. 

The subjects were responsible for transferring the frustrating experience reports from paper to the online database. For this phase, the subjects from UMD observed 31 participants, and the subjects from Towson observed 21 participants. The participants were 21 males and 31 females, with an average age of 26.1 years (S.D. = 13.1). Half of the users spent an hour or a little longer, but 26 users spent between 100 and 515 minutes documenting frustrating experiences.

 

Results

 

Tables 2 and 3 contain data collected from the self-reports and observations in terms of problem source and solution taken:

Table 2

Problem source for self-reports and observations for UMD and Towson universities (N=number of subjects, FE=number of frustrating experiences)

Problem source | Self UMD (N=33, FE=120) | Observation UMD (N=31, FE=108) | Self Towson (N=26, FE=79) | Observation Towson (N=21, FE=66) | Total
web browsing | 34 | 32 | 31 | 25 | 122
email | 14 | 18 | 9 | 8 | 49
system (OS) | 14 | 11 | 1 | 4 | 30
other internet use | 12 | 4 | 6 | 4 | 26
video/audio software | 10 | 6 | 4 | 0 | 20
word processing | 5 | 20 | 10 | 10 | 45
chat and instant messaging | 7 | 6 | 5 | 1 | 19
file browsers | 7 | 2 | 1 | 0 | 10
programming tools | 7 | 4 | 4 | 3 | 18
spreadsheet programs | 2 | 2 | 1 | 3 | 8
graphic design programs | 4 | 0 | 2 | 4 | 10
presentation software | 1 | 1 | 2 | 1 | 5
database programs | 2 | 0 | 2 | 0 | 4
hardware | 1 | 2 | 1 | 3 | 7

 

 

Table 3

Solution taken for self-reports and observations for UMD and Towson universities (N=number of subjects, FE=number of frustrating experiences)

Solution taken | Self UMD (N=33, FE=120) | Observation UMD (N=31, FE=108) | Self Towson (N=26, FE=79) | Observation Towson (N=21, FE=66) | Total
I knew how to solve it because it happened before | 44 | 36 | 11 | 11 | 102
I figured out a way to fix it myself | 17 | 17 | 12 | 7 | 53
I was unable to solve it | 16 | 18 | 16 | 10 | 60
I ignored the problem or found an alternative | 16 | 8 | 9 | 9 | 42
I tried again | 7 | 5 | 6 | 6 | 24
I restarted the program | 3 | 4 | 3 | 4 | 14
I consulted online help | 5 | 2 | 3 | 3 | 13
I asked someone for help | 8 | 13 | 9 | 10 | 40
I rebooted | 3 | 5 | 10 | 4 | 22
I consulted a manual or a book | 1 | 0 | 0 | 2 | 3

 

The analysis of data confirmed the first findings from the pilot study and helped better define the categories of problems, frequency with which they appear, cost they involve, and frustration they provoke.        

The 5 categories (internet, applications, system, hardware, other) found in the pilot study appeared both in the self-reports and in the observation reports. However, subcategories that might be helpful in finding specific solutions to specific problems were defined (Table 4).

 

Table 4

The 5 categories of problems with their subcategories

Internet: timed out/dropped/refused connections (32), long download time (23), web page/site not found (17), email failures (not sent/received or attachments not opening) (15), pop-up ads (13)
Applications: error messages (35), freezes (24), missing/hard-to-find features (23), crashes (13), not opening/closing (13), response inconsistent with action (13), annoying features (12), unexpected message boxes (6), unrecognized file type (4), “blue screen of death” (3)
Operating System: crashes (16), response inconsistent with action (10), slow response (8), unexpected message boxes (6), low resources (4), missing software (4), unexpected/improper shutdowns (3), virus problems (2), upgrading software (1), insufficient help (1)
Hardware: installation/update incompatibilities (8), mouse problems (5), printer problems (5)
Other: typing errors (4), spam (1)

 

The frequency chart (Figure 1) indicates that most frustrating experiences had happened before, as frequently as several times a month, week, or even several times a day.

 

Figure 1. Frequency with which problems occurred for the 4 groups of subjects studied.

 

In terms of frustration, on a scale from 1 to 9, 1 being least frustrating and 9 being most frustrating (Figure 2), the results collected for all the frustrating experiences reported show a high level of frustration.

Figure 2. Level of frustration experienced by subjects from the 4 groups studied.

 

The cost of the frustrating experiences, measured in minutes lost, ranged from 0 to 1537 minutes (avg. 21, st. dev. 49 – skewed because of the outliers) (Tables 5 and 6). The users’ comments show that the minimum cost usually occurred when a web page simply needed to be reloaded, or when users were not doing something important and just abandoned their tasks. The maximum cost usually occurred when users had to install or reinstall software or clean the computer of viruses. For installation alone, users lost a total of 713 minutes (of which 300 minutes were spent by one user installing a new operating system). The majority of users reported costs of 3-30 minutes.

Table 5

Total minutes lost, number of frustrating experiences, and average time lost per frustrating experience, comparing UMD and Towson (N=number of subjects, FE=number of frustrating experiences). UMD: 9485 usage min, N=64. Towson: 7968 usage min, N=47.

Problem source | UMD total min. lost | UMD # of FE | UMD avg per FE | Towson total min. lost | Towson # of FE | Towson avg per FE
system (OS) | 877 | 25 | 35.1 | 353 | 5 | 70.6
email | 902 | 32 | 28.2 | 294 | 17 | 17.3
web browsing | 568 | 66 | 8.6 | 1537 | 56 | 27.4
other internet use | 319 | 16 | 19.9 | 202 | 10 | 20.2
word processing | 280 | 25 | 11.2 | 281 | 20 | 14.1
file browsers | 320 | 9 | 35.6 | 15 | 1 | 15.0
video/audio software | 356 | 16 | 22.2 | 200 | 4 | 50.0
programming tools | 126 | 11 | 11.4 | 134 | 7 | 19.1
graphic design programs | 215 | 4 | 53.7 | 101 | 6 | 16.8
database programs | 48 | 2 | 24.0 | 260 | 2 | 130.0
chat and instant messaging | 134 | 13 | 10.3 | 85 | 6 | 12.6
presentation software | 32 | 2 | 16.0 | 36 | 3 | 12.0
hardware | 30 | 3 | 10.0 | 70 | 4 | 17.5
spreadsheet programs | 44 | 4 | 11.0 | 108 | 4 | 27.0
Total | 4251 | 228 | | 3676 | 145 |

 

 

Table 6

Total minutes lost, number of frustrating experiences, and average time lost per frustrating experience, comparing self-reports and observations (N=number of subjects, FE=number of frustrating experiences). Self: 10658 usage min, N=59. Observations: 6795 usage min, N=52.

Problem source | Self total min. lost | Self # of FE | Self avg per FE | Obs. total min. lost | Obs. # of FE | Obs. avg per FE
system (OS) | 613 | 15 | 40.9 | 617 | 15 | 41.1
email | 537 | 23 | 23.3 | 659 | 26 | 25.3
web browsing | 1408 | 65 | 21.7 | 697 | 57 | 12.2
other internet use | 384 | 18 | 21.3 | 137 | 8 | 17.1
word processing | 259 | 15 | 17.2 | 302 | 30 | 10.0
file browsers | 320 | 8 | 40.0 | 15 | 2 | 7.5
video/audio software | 296 | 14 | 21.1 | 260 | 6 | 43.3
programming tools | 194 | 11 | 17.6 | 66 | 7 | 9.4
graphic design programs | 257 | 6 | 42.8 | 59 | 4 | 14.7
database programs | 308 | 4 | 77.0 | 0 | 0 | 0.0
chat and instant messaging | 122 | 12 | 10.2 | 97 | 7 | 13.8
presentation software | 13 | 3 | 4.3 | 55 | 2 | 27.5
hardware | 35 | 2 | 17.5 | 65 | 5 | 13.0
spreadsheet programs | 56 | 3 | 18.7 | 96 | 5 | 19.2
Total | 4802 | 199 | | 3125 | 174 |

 

 

 

 

 

Discussion of the results

 

Our findings can be discussed in three broad topic areas: causes of frustration, frequency of frustration, and time lost.

 

Causes of Frustration

     The three task applications that caused the most frustrating experiences were web browsing (122 frustrating experiences), e-mail (49), and word processing (45). This by itself does not necessarily identify the greatest causes of frustration in general, nor does it identify these applications as the greatest offenders; rather, it reflects some of the most popular task applications for the users who participated in the study. We felt that it was more powerful to let users perform tasks that were relevant and important to them, rather than using pre-assigned tasks chosen by the researchers. With pre-assigned tasks, users might not correctly report their true level of frustration, since they might view the pre-assigned tasks as unimportant.

     The specific causes of frustration may cross task applications, and they are important to look at when discussing possible solutions. The specific causes of frustration most often cited (from Table 4) were error messages (35), timed out/dropped/refused connections (32), freezes (24), long download time (23), and missing/hard-to-find features (23). Some of these frustrating problems are challenging to solve (such as freezes and dropped connections). However, some of these frustrating problems are well documented, and the pathway to improvement is clear. Guidelines for clear, positive error messages appear in the research literature as early as 1982 (Shneiderman, 1982); however, many computer applications continue to incorporate error messages that are poorly worded and confusing. Long download times can be improved by having web designers write web pages that are smaller and have fewer graphics, and by having users upgrade their personal connection speeds to the Internet (Lazar, 2001). Improved interface design can assist in helping users find features that are not immediately obvious.

 

Frequency of Frustration

     Frustration is a common event. The data indicates that frustrating experiences happen on a regular basis (Figure 1). Most participants indicated that the frustrating experiences they encountered during the testing had occurred before (74.3% of frustrating experiences had occurred before), as frequently as several times a month (10.7%), several times a week (14.5%), or even several times a day (16.1%). This shows that users must deal with frustrating experiences on a frequent basis.

In terms of how to respond to a frustrating experience, participants most frequently indicated that they “knew how to solve it because it happened before” (27.3%), they “were unable to solve it” (16.1%), or they “figured out a way to fix it myself” (14.2%). Participants reported (Figure 2) that most of the frustrating experiences were highly frustrating (74% of the frustrating experiences were rated 6-9 on the frustration scale). Furthermore, new types of frustrating experiences that have not previously occurred (and which users might not be able to respond to) can cause large amounts of time to be wasted, if the users can even complete their tasks. The amount of time wasted is discussed in the next section. The least commonly adopted solutions were consulting a manual (3), consulting online help (13), and restarting the program (14) (Table 3). This supports the assertion that providing post-hoc assistance by way of electronic or paper manuals is not a sufficient solution to the problem of user frustration.

 

Time Lost Due to Frustrating Experiences

     One of the most surprising findings was that, in terms of minutes lost, one third to one half of the time spent in front of the computer was lost due to frustrating experiences. This is true regardless of how the data is analyzed: comparing UMD and Towson participants, or comparing self-reports and observation reports. The total time (in minutes) was defined as the total time in front of the computer (recorded by the participant in the modified time diary). Minutes lost was defined as:

 

  minutes lost = (minutes spent to solve the problem) + (minutes spent to recover from any work loss due to the problem)
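A minimal sketch of this bookkeeping (the function names and the example numbers are ours, not the study's analysis code) is shown below. Counting only the minutes spent solving the problem, as in the more conservative approach discussed later, amounts to dropping the second term.

# Minimal sketch of the time-lost bookkeeping (illustrative names and numbers).

def minutes_lost(minutes_to_solve: float, minutes_to_recover_work: float) -> float:
    """Time spent solving the problem plus time spent recovering lost work."""
    return minutes_to_solve + minutes_to_recover_work

def percent_of_session_lost(reports, session_minutes: float) -> float:
    """Share of one user's session lost to the frustrating experiences reported."""
    total_lost = sum(minutes_lost(solve, recover) for solve, recover in reports)
    return 100.0 * total_lost / session_minutes

# Example: two experiences (10 min to solve plus 5 min of lost work; 3 min to solve)
# during a 60-minute session amount to 30% of the session lost.
print(percent_of_session_lost([(10, 5), (3, 0)], 60))  # -> 30.0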

 

Figure 3 illustrates findings in terms of minutes lost.

Figure 3. Minutes lost compared to total minutes of usage and top 3 time-consuming problems.

Examining the data in Table 5 in detail, we normalized the data for length of sessions and found that the average time lost per individual was 47.8% for UMD reports and 53.1% for Towson reports.  Similarly, pursuing the distinction between study methods in Table 6, our analysis yielded an average time lost of 50.1% for self-reports and 49.9% for observations. These small differences suggest that the results are robust across location and study method.
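The normalization step can be sketched as follows (the data structure and the numbers are invented for illustration; this is not the study's actual analysis script): each user's lost time is first expressed as a percentage of that user's own session length, and those percentages are then averaged over the group.

# Sketch of the per-individual normalization (illustrative data, not study data).

def average_percent_lost(users):
    """Average, across a group, of each user's percentage of session time lost."""
    percents = [100.0 * u["minutes_lost"] / u["session_minutes"] for u in users]
    return sum(percents) / len(percents)

# Hypothetical group of three users
group = [
    {"session_minutes": 60, "minutes_lost": 30},  # 50% of the session lost
    {"session_minutes": 90, "minutes_lost": 36},  # 40% lost
    {"session_minutes": 75, "minutes_lost": 45},  # 60% lost
]
print(round(average_percent_lost(group), 1))  # -> 50.0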

 

Some applications caused a small number of problems, but each problem was costly in terms of minutes lost (e.g., databases, which caused only 4 frustrating experiences, but with an average of 77 min lost each). Other applications caused a large number of frustrating experiences, but each problem was less costly in terms of time lost (such as web browsing). In some cases, when there was a system crash, the participant reported as the problem source all applications that were open at the time of the crash. Another way of viewing the data is to examine the minutes lost by each of the 111 users, out of a total of 17453 min of usage (Figure 4).

Figure 4. Minutes lost for each of the 111 users.

 

Since there were a few outliers in the data, the top 5 outliers were examined individually. The user that reported the most minutes lost (633) was chatting online when the connection was dropped. The user reported 600 min lost, arguing that the internet provider “has changed from a static connection to a dynamic one and thus is a terrible ISP as a result,” and therefore accounting for all the time lost since the change. However, the user reported that the problem itself lasted just 20 min.

The second of the top 5 users who reported extended lost time was attempting, as a first task, to boot a Microsoft operating system; since the blue screen appeared each time, he spent 300 min installing Linux instead. The second task the user attempted was to defragment the hard drive. An error message appeared, and the user reported 200 min lost because of the inability to perform other tasks until the problem was fixed.

The third user was attempting to add multiple IPs to his internet account. He got to the same error page 4 times. He reported 45 min as the time needed to fix the problem, and 300 min lost because he could not do a class assignment. He also reported 1 hr lost because, while he was trying to download music off the internet, the computer rebooted 10 times. 

The fourth user reported 240 min lost while trying to get rid of a computer virus, and another 60 min because the internet was not working anymore and he had to wait for a friend to come and fix it.

Finally, the fifth user reported 240 min lost because, in trying to access a site that was important for one of his school assignments, he forgot the password, and the site had no password-retrieval function, so he had to go home and look through his notes to find the password.

After discarding the 5 users with the highest lost times reported, the numbers for the minutes lost change in the following way: for the UMD groups, the percentage of individual time lost drops from 47.8% to 37.9%, and for the Towson groups, the percentage of time lost drops from 53.1% to 43.5%. Likewise, for the self-reports, the percentage of individual time lost drops from 50.1% to 38.9%, and for the observations, the percentage of time lost drops from 49.9% to 41.9%.
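The outlier-discarding step can be sketched in the same spirit (again purely illustrative; it reuses the hypothetical per-user records from the sketch above):

# Sketch of discarding the top reporters before re-averaging (illustrative only).

def drop_top_reporters(users, n=5):
    """Remove the n users who reported the most minutes lost."""
    ranked = sorted(users, key=lambda u: u["minutes_lost"], reverse=True)
    return ranked[n:]

# average_percent_lost(drop_top_reporters(all_users)) would then give the
# trimmed group averages reported in the text (all_users is hypothetical).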

A more conservative approach would be to count as minutes lost only the minutes spent to solve the problem that occurred, without including the minutes spent to recover work lost due to the problem. In this case, the percentage of time lost at UMD changes from 47.8% to 30.1% and the percentage at Towson changes from 53.1% to 26.2% (Figure 5). Likewise, the percentage of time lost in the self-reports changes from 50.1% to 27.8% and the percentage in the observations changes from 49.9% to 29.2% (Figure 6).

 

Figure 5. Minutes lost at UMD and Towson including (version 1) and excluding (version 2) minutes spent to recover from work loss.

Figure 6. Minutes lost in self-reports and observations including (version 1) and excluding (version 2) minutes spent to recover from work loss.
     Regardless of how the data is viewed or analyzed, it is clear that a lot of time is lost by users who encounter frustrating experiences. This lost time has a value. Improved usability in information systems can be measured in time saved, and the value of that time can be quantified in monetary terms (Bias & Mayhew, 1994). Similarly, the substantial value of time lost due to frustrating experiences can be measured in monetary terms.

 

Conclusions and future work

 

Based on the data, it is clear that user frustration is a serious problem. The subjects reported high levels of frustration, as well as large quantities of wasted time. This wasted time comes at a financial cost. In addition, increased levels of frustration can affect how users interact with other people during the day. We are currently analyzing the demographic and emotional responses in the pre- and post-session surveys (Bessiere et al., 2002).  These analyses will examine more of the socio-psychological issues in user frustration: for instance, is the level of user frustration tied to the level of self-efficacy and similar perceptions of users? How does a frustrating experience affect users’ interactions with other people for the rest of the day? Does computing experience affect frustration levels?

 

     The data collected in this study answer some questions but raise others.  We are planning further studies:

  1. To examine frustration in workplaces: Are the frustrations of students different from those of professional users? How does the level of frustration relate to the perceived importance of the task?
  2. To examine how different user populations react to frustrating experiences: For instance, will frustration levels be higher or lower with younger or older users? What about users with disabilities? It is well documented that younger users, older users, and users with disabilities have different needs and responses relating to errors, response time, and animation. As universal usability in information technology becomes a more widely accepted goal (Shneiderman, 2000), researchers must understand how to prevent or provide remedies for different user populations.

  3. To develop metrics for measuring user frustration: we want to measure frustrating experiences over time, to determine whether progress is being made by software developers, trainers, and users. It would also be helpful to measure the monetary costs of frustrating experiences.

  4. To develop strategies for reducing the frequency of user frustration: more reliable software, superior user interfaces, clearer instructions, and improved training could help prevent problems.

  5. To develop methods for coping with user frustration so that the time wasted is reduced: these include help desks, knowledge bases, online help, and social forms of help by email, chat, instant messaging, or online communities.

 

 

Acknowledgements

 

We appreciate partial support from National Science Foundation
grant for Information Technology Research (#0086143) Understanding the Social
Impact of the Internet: A Multifaceted Multidisciplinary Approach.  We
appreciate the devoted efforts of Prof. Shirley Anne Becker of the Florida
Institute of Technology and her students Ali Al-Badi and Madhan Thirukonda in
preparing the web site for data entry.

 

 

References

 

Adams, W., Brown, J., Rapeepun, D., & Williams, W. (2001). The Effectiveness of Online Help Systems. Working paper, downloaded on April 9, 2002, from http://www.otal.umd.edu/SHORE2001/help/index.html.

 

Baecker, R., Booth, K., Jovicic, S., McGrenere, J. and Moore, G. (2000). Reducing the Gap Between What Users Know and What They Need to Know. Proceedings of the ACM 2000 International Conference on Intelligent User Interfaces, 17-23.

 

Berkowitz, L. (1978). Whatever Happened to the Frustration-Aggression Hypothesis? American Behavioral Scientist, 21(5), 691-708.

 

Bessiere, K., Ceaparu, I., Lazar, J., Robinson, J., & Shneiderman, B. (2002). Understanding Computer User Frustration: Measuring and Modeling the Disruption from Poor Designs. University of Maryland Computer Science Technical Report 4409. Submitted for publication.

 

Bias, R., and Mayhew, D. (1994). (eds.) Cost-Justifying Usability. San Francisco: Academic Press.

 

Brosnan, M. (1998). The Impact of Computer Anxiety and Self-Efficacy Upon Performance. Journal of Computer Assisted Learning, 3(14), 223-234.

 

Carroll, J., & Carrithers, C. (1984). Training Wheels in a User Interface. Communications of the ACM, 27(8), 800-806.

 

Collins, C., Caputi, P., Rawstorne, P., & Jayasuriya, R. (1999).  Correlates of End-User Performance and Satisfaction with the Implementation of a Statistical Software Package. Proceedings of the 10th Australasian Conference on Information Systems, 223-234.

 

 

Compaq, Inc. (2001). Rage Against the Machine – a Compaq survey. Compaq Press Centre, United Kingdom, Downloaded on: April 9, 2002 (URL no longer available).   

 

Davis, F. (1993). User acceptance of information technology: system characteristics, user perceptions, and behavioral impacts. International Journal of Man-Machine Studies, 38(3), 475-487.

 

Dollard, J., Doob, L., Miller, N., Mowrer, O. & Sears, R. (1939). Frustration and Aggression.  New Haven: Yale University Press.

 

Freud, S. (1958). Types of Onset of Neurosis. In J. Strachey (ed.) The Standard Edition of the Complete Psychological Works of Sigmund Freud vol. 12.  London: Hogarth Press.

 

Jacko, J., Sears, A. & Borella, M. (2000). The effect of network delay and media on user perceptions of web resources. Behaviour and Information Technology, 19(6), 427-439.

 

Johnson, C. (1995). Time and the Web: Representing and Reasoning about Temporal Properties of Interaction with Distributed Systems Time and Space. Proceedings of the HCI'95 Conference on People and Computers, 39-50.

 

Johnson, C. (1998). Electronic Gridlock, information saturation, and the unpredictability of information retrieval over the World Wide Web. In P. Palanque and F. Paterno (eds.) Formal Methods in Human-Computer Interaction (pp. 261-282) London: Springer.

 

Lazar, J. (2001). User-Centered Web Development. Sudbury, MA: Jones and Bartlett Publishers.

 

Lazar, J. & Huang, Y. (2003). Improved Error Message Design in Web Browsers. In J. Ratner (ed.), Human Factors and Web Development (2nd ed.), 167-182. Mahwah, NJ: Lawrence Erlbaum Associates.

 

Lazar, J., Meiselwitz, G., & Norcio, A. (2003). Novice User Perception of Error on the Web: Experimental Findings. Paper under review.

 

Lazar, J., & Norcio, A. (2000). System and Training Design for End-User Error. In S. Clarke & B. Lehaney (Eds.), Human-Centered Methods in Information Systems: Current Research and Practice (pp. 76-90). Hershey, PA: Idea Group Publishing.

 

Lazar, J. & Preece, J. (2001). Using Electronic Surveys to Evaluate Networked Resources: From Idea to Implementation. In C. McClure & J. Bertot (Eds.), Evaluating Networked Information Services: Techniques, Policy, and Issues . Medford, NJ: Information Today, 137-154.

 

Loyd, B.H., & Gressard, C. (1984).  Reliability and Factorial Validity of Computer Attitude Scales.  Educational and Psychological Measurement, 44(2), 501-505.

 

Murphy, C., Coover, D. & Owen, S. (1989).  Development and Validation of the Computer Self-efficacy Scale. Educational and Psychological Measurement, 49(4), 893-899.

 

Nash, J.B. & Moroz, P.A. (1997).  An Examination of the Factor Structures of the Computer Attitude Scale.  Journal of Educational Computing Research, 17(4), 341-356.

 

Norman, D. (1983). Design Rules Based on Analyses of Human Error. Communications of the ACM, 26(4), 254-258.

 

Olaniran, B. (1996). A Model of Group Satisfaction in Computer-Mediated Communication and Face-to-Face Meetings. Behaviour and Information Technology, 15(1), 24-36.

 

Ramsay, J., Barbesi, A. & Preece, J. (1998). A Psychological Investigation of Long Retrieval Times on the World Wide Web. Interacting with Computers, 10(1), 77-86.

 

Reeves, B. & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge, UK: Cambridge University Press.

 

Riseberg, J., Klein, J., Fernandez, R. & Picard, R. (1998). Frustrating the User On Purpose: Using Biosignals in a Pilot Study to Detect the User's Emotional State. Proceedings of ACM 1998 CHI: Conference on Human Factors in Computing Systems, 227-228.

 

Schleifer, L. & Amick, B. (1989). System Response Time and Method of Pay: Stress Effects in Computer-Based Tasks. International Journal of Human-Computer Interaction, 1(1), 23-39.

 

Sears, A. & Jacko, J. (2000). Understanding the Relation Between Network Quality of Service and the Usability of Distributed Multimedia Documents. Human-Computer Interaction, 15(1), 43-68.

 

Shneiderman, B. (2000). Universal Usability: Pushing Human-Computer Interaction Research to Empower Every Citizen. Communications of the ACM, 43(5), 84-91.

Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd ed.). Reading, MA: Addison-Wesley.

Shneiderman, B. (1982). System message design: Guidelines and experimental results. In A. Badre & B. Shneiderman (eds.), Directions in Human/Computer Interaction (pp. 55-78). Norwood, NJ: Ablex Publishing.

Shubin, H. & Meehan, M. (1997). Navigation in Web Applications. ACM Interactions, 4(6), 13-17.

 

Zviran, M. (1992). Evaluating User Satisfaction in a Hospital Environment: An Exploratory Study. Health Care Management Review, 17(3), 51-62.

Appendix A

 Frustrating Experience Report

 

Please fill out this form for each frustrating experience that you encounter while using your computer during the reporting session.  This should include both major problems such as computer or application crashes, and minor issues such as a program not responding the way that you need it to.  Anything that frustrates you should be recorded.

 

What were you trying to do?

 

 

 

On a scale of 1 (not very important) to 9 (very important), how important was this task to you?

Not very important  1     2     3     4     5     6     7     8     9   Very Important

 

What software or program did the problem occur in? If the problem was with the computer system, please check the program that you were using when it occurred (check all that apply).

 

__email

__file browsers

__presentation software (e.g. powerpoint)

__ chat and instant messaging

__spreadsheet programs (e.g. excel)

__multimedia (audio/video software)

__web browsing

__graphic design

__other __________________

__other internet use

__programming tools

 

__ word processing

__database management/searching software

 

 

Please write a brief description of the experience:

 

 

 

 

How did you ultimately solve this problem? (please check only one)

 

__ I knew how to solve it because it has happened before

__ I ignored the problem or found an alternative solution

__ I figured out a way to fix it myself without help

__ I was unable to solve it

__ I asked someone for help.  Number of people asked ___

__ I tried again

__ I consulted online help or the system/application tutorial

__ I restarted the program

__ I consulted a manual or book

__I rebooted

 

Please provide a short step by step description of all the different things you tried in order to resolve this incident.

 

 

 

How often does this problem happen? (please check only one)

   ___ more than once a day   ___ one time a day   ___ several times a week ___ once a week  

   ___ several times a month   ___ once a month   ___ several times a year    ___ first time it happened

 

On a scale of 1 (not very frustrating) to 9 (very frustrating), how frustrating was this problem for you?

 

Not very frustrating  1     2     3     4     5     6     7     8     9   Very frustrating

 

Of the following, did you feel: 

___ angry at the computer  ___ angry at yourself  ___ helpless/resigned

___ determined to fix it   ___neutral  ___  other: ___________

 

How many minutes did it take you to fix this specific problem?  (if this has happened before, please account only for the current time spent) _____________________________

 

Other than the amount of time it took you to fix the problem, how many minutes did you lose because of this problem?    (if this has happened before, please account only for the current time lost; e.g. time spent waiting or replacing lost work). ____________

Please explain:

 

Until this problem was solved, were you able to work on something else?

____Yes     ____No

Please explain:

Appendix B

 

 Pre-session Survey

 

Email: ________________

 

Section 1: Demographic Information

1. Age:  

2. Gender:  __Female     

     __Male

3. Education:         __High School Graduate

                __Freshman/Sophomore in College

__Junior/Senior in College

__College Graduate

__Advanced Degree

4. Employment:   __Student

__Professional/Managerial

__Technical

__Administrative

__Other

 

Section 2: Computer Experience and Attitudes

1. How many years have you been using a desktop or laptop computer for home or work use?  

2. How many hours per week do you use a desktop or laptop computer?  

3. What type of Operating System is installed on the computer that you are currently using?  

DOS                        Windows NT

MacOS                   Windows ME

Unix/Linux             Windows 2000

Windows 95          Windows XP

4. What type of applications and programs do you typically use? (check all that apply)

__Email                                                                  __Graphic Design Programs

__Web Browsing                                                 __Word Processing

__Chat and Instant Messaging                         __Programming Tools

__Other Internet Use                                          __Presentation Tools (PowerPoint)

__Spreadsheet Programs (Excel)                       __Database Management/Searching

__Other                                                                  __Multimedia (audio/video software)

 

5. How many years have you been using the internet?  

6. How many hours per week do you spend online? Please indicate the amount of time that you are actually using the computer while online, not simply the amount of time you are connected to the internet. 

7. At work, do you have

__a permanent connection to the Internet    

__dial-in through a modem

8. Which of the following do you do when encountering a problem on the computer or application that you are using?

__Try to fix it on my own 

__Ask a friend/relative for help 

__Consult a manual or help tutorial

__Ask Help Desk or a Consultant for help

__Give up or leave it unsolved 

 

9. How sufficient is your computer software and/or hardware for the work that you need to do?

                 Not at all   1   2   3   4   5   6    7   8   9   Very

 

Section 3: For the following questions, please choose the number that best corresponds to your feelings

1. Computers make me feel:

 

Very Uncomfortable  1   2   3   4   5   6   7   8   9    Very Comfortable

 

2. When you run into a problem on the computer or an application you are using, do you feel:

 

                   Anxious   1   2   3   4   5   6   7   8   9   Relaxed/Indifferent

 

3. When you encounter a problem on the computer or an application you are using, how do you feel about your ability to fix it?

 

                                    Helpless   1   2   3   4   5   6    7   8   9  Confident that I can fix it

 

4. How experienced do you think you are when it comes to using a computer?

 

  Very Inexperienced  1   2   3   4   5   6   7   8   9  Very Experienced

 

5. When there is a problem with a computer that I can't immediately solve, I would stick with it until I have the answer.

 

     Strongly Disagree  1   2   3   4   5    6   7   8   9  Strongly Agree

 

6. If a problem is left unresolved on a computer, I would continue to think about it afterward.

 

     Strongly Disagree  1   2   3   4   5    6   7   8   9  Strongly Agree

 

7. Right now, how satisfied with your life are you?

 

                       Very Unsatisfied  1   2   3   4    5   6   7   8   9  Very Satisfied

 

8. How often do you get upset over things?

 

                         Not Very Often  1   2    3   4   5   6   7   8    9    Very Often

 

9. Right now, my mood is:

 

                         Very Unhappy  1   2    3   4   5   6   7   8    9  Very Happy

Appendix C

 

Post-Session Survey

 

 

For the  following questions, please choose the number that best corresponds to your feelings. 

1. Right now, my mood is:

 

                         Very Unhappy  1   2   3   4   5  6   7   8   9  Very Happy

 

2. We asked you to record your frustrating experiences. Overall, how frustrated are you after these experiences?

 

Not Frustrated at All  1   2   3   4   5   6   7   8   9  Very Frustrated

 

3. How will the frustrations that you experienced affect the rest of your day?

 

                  Not at All  1   2   3   4   5   6    7   8   9  Very Much

 

4. Are the incidents that occurred while you were recording your experiences typical of your everyday computer experience?

 

Yes      No

 

5. In general, do you experience more or less frustrating incidents while using a computer on an average day?

 

                                          Less   1   2   3   4   5   6    7   8   9  More

 

6. Did these frustrating experiences impact your ability to get your work done?

                                No impact  1   2   3   4   5   6    7   8   9  Severe impact

 

7.  Did these frustrating experiences impact your interaction with your co-workers?

                                 No impact  1   2   3   4   5   6    7   8   9  Severe impact

 

 

 

Please enter the time increments that you used in your self-reporting sessions. This information is important in order to determine how many incidents occurred in each session, so please be as accurate as possible. 

Start Time:      Stop Time:       Number of Incidents:  

Start Time:      Stop Time:       Number of Incidents:  

Start Time:      Stop Time:       Number of Incidents:  

Start Time:      Stop Time:       Number of Incidents: