CS-TR-4319                                                                                       November 2001

UMIACS-TR-2002-1

HCIL-TR-2001-26

LAP-TR-2001-2

 

Implementation of Conditional Branching in Computerized Self-Administered Questionnaires

 

Kent L. Norman

 

Laboratory for Automation Psychology and Decision Processes

Department of Psychology

Human-Computer Interaction Laboratory

Institute for Advanced Computer Studies

University of Maryland

College Park, MD 20742-4411

 

 

 

Abstract

On-line surveys can automate conditional branching in self-administered questionnaires and make the task easier for the respondent.  This paper discusses the types of branching that can be automated and different techniques for designing the interface.  Design factors take into consideration the level of automation pre-programmed into the survey versus the amount of control afforded to the respondent, the cognitive complexity of the interface versus ease of use, and the degree of context provided by showing surrounding items on the questionnaire versus a focus on single questions.

 

Keywords

Questionnaires, Surveys, Conditional Branching, User Interface, World Wide Web, CSAQ.


 

Implementation of Conditional Branching in Computerized Self-Administered Questionnaires

 

Kent L. Norman

Laboratory for Automation Psychology and Decision Processes

Department of Psychology

University of Maryland

College Park, Maryland 20742-4411

 


Introduction

 

Many questionnaires and surveys involve branches that lead to follow-up questions or that by-pass items that are not applicable.  Branches help navigate the respondent through the questionnaire efficiently so that he or she does not need to read and answer all of the questions; in effect, they tailor a general survey to the profile of a specific respondent.  In paper-and-pencil questionnaires, branches are accomplished by giving explicit instructions (e.g., "if ..., skip question ...", or "if you answered ..., go to question ...").  However, respondents often misinterpret or ignore these instructions.  They will skip questions that they should have answered and answer questions that they should have skipped.  Omissions of answers can render the survey useless, and answers to questions that are not applicable raise issues about the reliability and validity of answers to those and other questions on the survey.

 

On-line questionnaires have the potential of avoiding some of the problems caused by conditional branching by automating the logic of the conditional statements and by automating the jump to follow-up questions or the jump around inappropriate questions.  Although most of the conditional logic of questionnaires is fairly straightforward (e.g., if A, then B), respondents can still misread, misinterpret, or simply draw the wrong conclusion.  It has been reported that people make mistakes about 25 percent of the time even with simple conditional syllogisms (Rips & Marcus, 1977; Staudenmayer, 1975).  Moreover, in cases where the logic is more complicated (e.g., if A and not B; if A, or if B and C), the error rate is even higher.  Unfortunately, such convoluted logic is not uncommon in financial, tax, medical, and health care questionnaires.

 

Even when the respondent has correctly determined that a jump is appropriate, it may not be executed correctly.  To perform a jump, the respondent may have to read and remember an item number and then search down through the questionnaire for that item.  The respondent may jump down too far past items that should have been answered or not far enough.  Although a number of graphic (highlighting and bold font) and layout (boxes and arrows) techniques can be used to assist the respondent, there is no assurance against mistakes.

 

Automated conditional branching is accomplished by reading the respondent's answers to the questions, testing the logic of the branching statements, and automatically presenting the next appropriate question.  Automated branching is most easily accomplished when the on-line system presents questions in an item-based or section-based method.  When one question is presented at a time, the system can evaluate the answer as well as all previous answers, and determine the next question to present.  When small sections are presented one at a time, the system can evaluate the set of answers, and determine the next section to present.  In either case, the system can check for the completion and logical consistency of the items and require the respondent to fill in missing answers or correct problematic answers.
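 

To make this concrete, the sketch below shows one way such an evaluation loop might be written for an item-based presentation.  This is a minimal illustration only; the question ids, answer types, and rule format are assumptions for the sketch and are not taken from any particular survey system.

```typescript
// Hypothetical item-based survey engine: each question carries a rule that
// inspects the answers recorded so far and names the next question to present.
type AnswerMap = Record<string, string>;

interface Question {
  id: string;
  text: string;
  alternatives: string[];
  // Returns the id of the next question, or null when the survey is complete.
  next: (answers: AnswerMap) => string | null;
}

const questions: Record<string, Question> = {
  q1: {
    id: "q1",
    text: "Do you have any discretionary funds?",
    alternatives: ["Yes", "No"],
    // Follow-up q2 is asked only on "Yes"; otherwise q2 is skipped.
    next: (a) => (a["q1"] === "Yes" ? "q2" : "q3"),
  },
  q2: {
    id: "q2",
    text: "On which of the following do you spend the most money?",
    alternatives: ["Entertainment", "Travel", "Possessions"],
    next: () => "q3",
  },
  q3: {
    id: "q3",
    text: "Final question (placeholder for the remaining linear items).",
    alternatives: ["A", "B"],
    next: () => null, // end of survey
  },
};

// The presentation loop: show one item, store the answer, evaluate the rule.
function runSurvey(ask: (q: Question) => string): AnswerMap {
  const answers: AnswerMap = {};
  let current: string | null = "q1";
  while (current !== null) {
    const q = questions[current];
    answers[q.id] = ask(q);     // record the respondent's answer
    current = q.next(answers);  // the system, not the respondent, decides the jump
  }
  return answers;
}
```

In a sketch like this the respondent never sees the jump: the rule attached to each question reads the stored answers and names the next item or section to present.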

 

In general, item-based questionnaires are a little more difficult to navigate,  require more mouse clicks, and take a little more time owing to successive writing to the screen and longer user orientation and reading times.  Form-based questionnaires are preferred because they are often easier to navigate by scrolling up and down and because questions are presented in the context of similar questions (Norman, Friedman, Norman, & Stevenson, 2000).  Moreover, respondents can see their answers to previous questions and achieve higher levels of consistency particularly for items using rating scales. 

 

When questionnaires include conditional branching, context may be important; that is, it may be informative to see the items that one would skip by answering a question one way or the other, or to see the follow-up questions that follow an affirmative answer to an item.

 

The empirical question is whether the cognitive difficulties of conditional branching and the cognitive advantages of form-based presentation are great enough to warrant the use of a form-based presentation over an item-based presentation.  The design question then is how to implement automated conditional branching in a form-based questionnaire in order to off-load the conditional logic onto the system while maintaining the semantic context of the whole-form presentation.

 

Previous research on human/computer interfaces suggests a similarity between conditional branching in questionnaires and menu design (Norman, 1991).  A selection at one level of a menu hierarchy branches the user to the next set of options.  The main difference is that in menu hierarchies branching occurs at nearly every point, whereas in surveys it occurs only occasionally in an otherwise linear series of questions.  Research in menu selection suggests that a broad listing of many items is better than a conditional listing of a few items at a number of successive levels, despite the fact that many irrelevant items may be listed on the screen (Norman, 1991).  This result applies when the items (even up to 256) can all be listed on the screen at one time in an organized manner.  This will not be the case with surveys when the items may be rather lengthy questions with many alternatives.  Nevertheless, the lesson from menu selection research has been to avoid the game of hide-and-seek by listing alternatives before they are selected rather than successively revealing them over time.  In the same way, this suggests that follow-up questions and branches should be shown to the respondent before he or she commits to an answer.

 

Several factors may operate when a respondent sees the questions in a follow-up branch.  On the positive side, the items may help to interpret the meaning or implications of the alternatives in the branching question.  When the meaning of terms or the consequences of actions are unknown or vague, the follow-up questions can help to define their meaning and establish conceptual links.  For example, consider the question, "Do you have any discretionary funds? Yes or No."  If the respondent does not know what the phrase "discretionary funds" means, the follow-up question "On which of the following do you spend the most money:  Entertainment, Travel, or Possessions?" may help.

 

On the negative side, knowing that a series of time-consuming follow-up questions awaits if one answers one way and not the other can influence the respondent's answer.  The answer may be more valid if the respondent is kept in the dark or if there are an equal number of follow-up questions down either branch.  Alternatively, knowing what the follow-up questions are may lead respondents to answer the branching question so that they can express an opinion about some issue in the follow-up.  Again the answer may be more valid (a) if the respondent does not see the follow-up questions, (b) if there is an equally attractive set of follow-up questions down another alternative, or (c) if the respondent has the option of answering questions on branches that would otherwise have been excluded in an automated system.

 

Structures of Conditional Branching Questionnaires

 

The extent to which the cognitive factors mentioned above influence the validity of the answers on the branching questionnaire will depend on a number of structural and semantic aspects of the branching.  We will first consider the structural aspects.  Surveys in essence are data structures and as such organize information (i.e., questions) in a particular form.  We will consider linear, linear skip, linear branch, looping, tabular, hierarchical, and network patterns.

 

Linear Pattern.  Most surveys are linear.  Respondents proceed from Question n to Question n + 1 irrespective of how they answer any particular question.  This straight linear pattern is shown in Figure 1.

 

Figure 1.  A linear survey structure.  (Ovals represent questions and squares represent the alternative answers to each question.  In each case, answers lead to the next question in the linear sequence.)

 

Linear Skip Pattern.  When a skip pattern is introduced, it means that one may go from Question n to Question n + m, where m is the number of questions skipped.  Although follow-up questions and skipping questions seem to be cognitively different, they are formally the same.  One may think of Question 2 as being skipped when Alternative A of Question 1 is answered.  Or one may think of Question 2 as a follow-up to Alternative B of Question 1.  There are two parameters for conditional skips in linear surveys: the number of skip points and the number of questions skipped.  An increase in either of these parameters is expected to increase the difficulty the respondent has in following the instructions and the potential for problems in completing the survey.

Figure 2.  Linear skip pattern.  (Ovals represent questions and squares represent the alternative answers to each question.  Alternative A causes Question 2 to be skipped.)

 

Linear Branching Pattern.  In the skip or follow-up pattern, a set of questions is either added or omitted.  In more general conditional branching, respondents may take different linear paths through different sets of questions depending on the conditional.  Figure 3 shows a branch such that if Alternative A of Question 1 is chosen, respondents go to Question 2.1, but if Alternative B is chosen, they go to Question 2.2.  Again there are two parameters for conditional branches in linear surveys: the number of branch points and the lengths of the branches.  An increase in either of these parameters is expected to increase the difficulty the respondent has in following the instructions and the potential for problems in completing the survey.

Figure 3.  Linear branch pattern. (Ovals represent questions and squares represent the alternative answers to each question.  Alternative A of Question 1 leads to Question 2.1 whereas Alternative B leads to Question 2.2.)
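 

As an illustration of the formal equivalence noted above, a hypothetical per-alternative jump table can encode the skip pattern of Figure 2 and the branch pattern of Figure 3 in exactly the same way.  The ids and field names below are assumptions made for the sketch.

```typescript
// Sketch: a skip and a follow-up are the same rule written from two directions.
// Each alternative simply names the next question id, so "skip Question 2 on A"
// and "Question 2 follows up Alternative B" collapse into one table.
interface BranchingQuestion {
  id: string;
  // next question id for each alternative the respondent may choose
  nextByAlternative: Record<string, string>;
}

// Linear skip pattern of Figure 2: A skips Question 2, B leads into it.
const q1Skip: BranchingQuestion = {
  id: "q1",
  nextByAlternative: { A: "q3", B: "q2" },
};

// Linear branch pattern of Figure 3: A and B lead down different paths.
const q1Branch: BranchingQuestion = {
  id: "q1",
  nextByAlternative: { A: "q2.1", B: "q2.2" },
};
```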

 

Looping Pattern.  Repeat loops occur when the same set of questions is repeated for each element of some set, for example, for each member of the family or for each visit to a doctor.  The respondent must decide how many times to repeat the loop and must attend to the beginning and ending points of each loop.  There are three parameters for looping surveys:  the number of loops, the length of each loop, and the number of repeats in each loop.  The difficulty of the questionnaire is expected to increase with the number of loops and the length of each loop.  On the other hand, the number of repeats may decrease the difficulty as the respondent becomes more and more familiar with the set of questions.

 

Figure 4.  Looping pattern.  (The triangle represents the question pertaining to whether there remains an element in the set for which the looping questions are appropriate.)
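 

One way a looping section might be represented and expanded is sketched below; the block structure and naming scheme are illustrative assumptions rather than a prescribed format.

```typescript
// Sketch: a loop block repeated once per element of a set (e.g., one pass per
// doctor visit). An engine can expand the loop into concrete question instances.
interface LoopBlock {
  questionIds: string[];   // questions asked on every pass (loop length)
  elements: string[];      // one pass per element (number of repeats)
}

function expandLoop(block: LoopBlock): string[] {
  const expanded: string[] = [];
  for (const element of block.elements) {
    for (const qid of block.questionIds) {
      // e.g. "visit2.q_reason" -- one concrete item per element/question pair
      expanded.push(`${element}.${qid}`);
    }
  }
  return expanded;
}

// Two doctor visits x three questions -> six concrete items for this respondent.
const items = expandLoop({
  questionIds: ["q_date", "q_reason", "q_cost"],
  elements: ["visit1", "visit2"],
});
```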

 

Tables.  Questions in some surveys are laid out as tables listing rows as elements and the columns as types of information to be entered (or a transpose of rows and columns).  The respondent enters the information into the cells of the table.  Tabular surveys are graphic representations of looping surveys since the respondent goes through a series of columns (questions) for each member of the set listed (rows).  In general, tabular surveys are an efficient use of space and help to convey the overall context to the respondent.  They also allow greater freedom in the order of answering questions since respondents can loop either across rows or columns or fill in the cells in any order.  There are two parameters of tabular surveys:  the number of rows (elements) and the number of columns (questions).  The difficulty of the questionnaire is a function of both of these parameters.

 

 

Figure 5. Table layout.  Items and questions pertaining to them are arranged in a 2-way table.
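 

One way a tabular section might be represented is sketched below, assuming a simple row-by-column answer grid; the type and field names are illustrative.

```typescript
// Sketch: a tabular section as a two-dimensional answer grid. Rows are the
// elements, columns are the questions; respondents may fill cells in any order,
// and completeness is simply "every cell has a value".
interface TableSection {
  rows: string[];                                 // elements, e.g. household members
  columns: string[];                              // questions asked about each element
  cells: Record<string, Record<string, string>>;  // cells[row][column] = answer
}

function isComplete(t: TableSection): boolean {
  return t.rows.every((r) =>
    t.columns.every((c) => (t.cells[r]?.[c] ?? "") !== "")
  );
}
```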

 

Hierarchical Pattern.  Some surveys involve a complex design that channels different respondents down different paths depending on the answers to questions at some or even at every level.  Figure 6 shows a schematic of a hierarchical survey structure in which the answer to each question at each level leads down a different path.  In this case, there are six levels with two alternatives at each branch, resulting in 63 questions for the survey, of which the respondent actually answers only six.  The parameters of the hierarchical pattern are the number of levels (depth) and the number of branches at each level (breadth).

Figure 6.  Hierarchical branching pattern.  (Ovals represent questions, squares represent alternative answers to each question.  Lines show the paths from the answers to the next question.)
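 

A short sketch of the arithmetic behind this example, assuming a fixed branching factor at every level (the function name is illustrative):

```typescript
// Sketch: size of a full hierarchical survey with a fixed branching factor.
// With breadth b and depth d there are b^0 + b^1 + ... + b^(d-1) questions
// authored, but any one respondent answers only d of them.
function totalQuestions(breadth: number, depth: number): number {
  let total = 0;
  for (let level = 0; level < depth; level++) {
    total += Math.pow(breadth, level);
  }
  return total;
}

totalQuestions(2, 6); // 63 questions authored; each respondent answers only 6
```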

Network Patterns.  The most general and most chaotic structure that a survey can have is a network in which any alternative can jump to anywhere else in the survey.  Figure 7 illustrates such a questionnaire.  Although it is rare that a printed questionnaire would ever follow such a sporadic pattern, personal interviews may pursue such a course, jumping around from topic to topic depending on the answers of the respondent.  If a linear form is imposed on the survey, network patterns may include jumps back to previous questions, jumps ahead to future questions, skips past questions, and loops.  The complexity of the network questionnaire is a function of the number of conditional jumps.

 

Figure 7.  Network pattern. (Ovals represent questions, squares represent alternative answers to each question.  Lines show the paths from the answers to the next question.)
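 

A hypothetical sketch of the network pattern as a jump table over (question, alternative) pairs, with the rest of the survey falling back to the imposed linear order (question ids and jumps are invented for illustration):

```typescript
// Sketch: any answer may send the respondent anywhere, including backwards.
// The number of entries that deviate from "next question in order" is one
// rough measure of the questionnaire's complexity.
type Jump = { from: string; alternative: string; to: string };

const jumps: Jump[] = [
  { from: "q7", alternative: "A", to: "q2" }, // jump back to an earlier question
  { from: "q3", alternative: "C", to: "q9" }, // jump ahead, skipping q4-q8
  { from: "q5", alternative: "B", to: "q4" }, // loop back over a short block
];

function nextQuestion(
  current: string,
  alternative: string,
  order: string[]
): string | undefined {
  const jump = jumps.find((j) => j.from === current && j.alternative === alternative);
  if (jump) return jump.to;
  // default: the next question in the imposed linear order
  const i = order.indexOf(current);
  return order[i + 1];
}
```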

 

Complexity of Conditionals

 

As mentioned above, the difficulty of jumps depends on the complexity of the logic involved in the conditional.  Simple conditionals involve only one question, usually the current question being asked.  If the respondent picks one alternative, then he or she is directed to a different question as shown in the first panel of Figure 8.  A more complex situation may arise if the conditional involves a disjunctive logical statement in which one branches if the respondent answers Alternative A OR Alternative B, as shown in Panel 2 of Figure 8.  On the other hand, a conjunctive statement may occur in a pick-list question in which the respondent may check a number of options and branches if Alternative A AND Alternative B are selected, as shown in Panel 3 of Figure 8.  All of these conditionals are intra-question in that the logical predicates involve the alternatives within the same question.  Inter-question conditionals involve the answers across different questions.  So, for example, an inter-question conjunctive conditional would occur if one branches when the respondent answers Alternative A for Question 1 and Alternative B for Question 2, as shown in Panel 4 of Figure 8.  Even more complex conditionals can be generated by compounding intra-question, inter-question, disjunctive, and conjunctive elements in the logical statement.

 

Figure 8.  Types of conditional jumps:  simple, disjunctive, conjunctive, and inter-question.
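 

The four types in Figure 8 might be expressed as predicates over the answers recorded so far, as in the sketch below; the answer map and question ids are assumptions made for illustration.

```typescript
// Sketch of the four conditional types as boolean predicates over the answers.
type Answers = Record<string, Set<string>>; // a pick list may hold several alternatives

const picked = (a: Answers, q: string, alt: string) => a[q]?.has(alt) ?? false;

// Simple: branch if Alternative A of Question 1 was chosen.
const simple = (a: Answers) => picked(a, "q1", "A");

// Disjunctive (intra-question): branch on Alternative A OR Alternative B.
const disjunctive = (a: Answers) => picked(a, "q1", "A") || picked(a, "q1", "B");

// Conjunctive (intra-question, pick list): branch only if A AND B are both checked.
const conjunctive = (a: Answers) => picked(a, "q1", "A") && picked(a, "q1", "B");

// Inter-question conjunction: branch if A on Question 1 AND B on Question 2.
const interQuestion = (a: Answers) => picked(a, "q1", "A") && picked(a, "q2", "B");
```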

 

On-Line Implementation of Conditionals

 

In this section we will consider alternative methods of presenting questionnaires with conditional jumps on-line.  As a point of reference, it is worth noting that paper-and-pencil questionnaires represent one extreme in that all of the questions, whether skipped or not, are printed on the form and the respondent has the opportunity of reading them all.  Moreover, in the paper-and-pencil form, careful instructions must be given to the respondent pertaining to all of the jumps, and the respondent is totally responsible for executing the jumps correctly.  At the other extreme is the guided personal interview.  The respondent is asked only questions that are relevant and is not made aware of any other questions that might have been asked.  The interviewer is totally in control of the jumps, and the respondent may not even be aware of the branching at all.

 

When questionnaires are implemented on-line, a number of design options arise between and including the extremes discussed above.  They vary in (a) item-based versus form-based presentation, (b) automatic versus manual branching, and (c) the presence or absence of error checking.  When the whole form is presented, it may include graphic highlighting to guide the respondent and/or expanding/collapsing sections to control the amount of the questionnaire shown on the screen at any one time.  Table 1 lists a number of these designs along with an indication of the level of automation involved, the degree of cognitive complexity, and the amount of context provided to the respondent.  The level of automation refers to the amount of programming required to perform the conditional branching.  The degree of cognitive complexity refers to the extent to which the respondent must follow instructions, make decisions, and interact with the dynamics of the questionnaire.  Finally, the amount of context pertains to the scope of questions seen by the respondent at any one time on the screen.

 

 

 


 

 

 

 

 

 

 

Table 1

 

Design Options for Layout and Control of Conditional Branching in On-Line Questionnaires

 

Each design is described below, followed by its rated Level of Automation, Degree of Context, and Cognitive Complexity.

 

1. Item-Based with Auto-branching.  Only one item is presented on the screen at a time.  All branching is determined by the system and is not shown to the respondent.  (Automation: High.  Context: Low.  Cognitive Complexity: Low.)

2. Linear Set-Based with Auto-branching.  Linear sets of items that contain no branching are presented in groups on the screen.  All branching is determined by the system and is not shown to the respondent.  (Automation: High.  Context: Moderately Low.  Cognitive Complexity: Low.)

3. Form-Based with Only Respondent Control.  The whole form is presented as a scrolling field.  All branching is given in instructions to the respondent.  (Automation: Low.  Context: High.  Cognitive Complexity: High.)

4. Form-Based with Edit Checks.  The whole form is presented as a scrolling field.  Appropriateness of input is determined by branching logic.  (Automation: High.  Context: High.  Cognitive Complexity: High.)

5. Form-Based with Auto-scrolling.  The whole form is presented as a scrolling field.  All branching is controlled by auto-scrolling, but the respondent may override by scrolling back.  (Automation: High.  Context: High.  Cognitive Complexity: Moderate.)

6. Form-Based with Auto-scrolling and Edit Checks.  The whole form is presented as a scrolling field.  All branching is controlled by auto-scrolling.  Appropriateness of input is determined by branching logic.  (Automation: High.  Context: High.  Cognitive Complexity: Moderate.)

7. Form-Based with Auto-highlighting and/or Graphics.  The whole form is presented as a scrolling field.  Branching instructions are dynamically given to the respondent in the form of highlighting and graphics.  (Automation: Low.  Context: High.  Cognitive Complexity: Moderately High.)

8. Form-Based with Auto-graying Out.  The whole form is presented as a scrolling field.  Branching is dynamically controlled by graying out and de-activating inappropriate items.  (Automation: High.  Context: High.  Cognitive Complexity: Moderately Low.)

9. Selective Revealing, Respondent Controlled.  The form is shown in outline (collapse-expand) form.  Sections are opened and closed by the respondent according to branching instructions.  (Automation: Low.  Context: Moderate.  Cognitive Complexity: High.)

10. Selective Revealing, System Controlled.  The form is shown in outline (collapse-expand) form.  Sections are automatically opened and closed only by conditional branching.  (Automation: High.  Context: Low.  Cognitive Complexity: Moderately High.)

11. Selective Revealing, System Initiated.  The form is shown in outline (collapse-expand) form.  Sections are automatically opened and closed by conditional branching.  The respondent can override branching and open and close conditional sections.  (Automation: High.  Context: Moderate.  Cognitive Complexity: Moderately Low.)

When a survey is implemented on-line, one may go to either extreme: present the whole form with all questions and branching instructions, or totally automate the branching and present only the appropriate items.  When the survey is totally automated, the typical approach is to use an item-based presentation in which only one question is presented on the screen at a time (Design 1, see Figure 9).  Branching is done automatically and no instructions need to be shown to the respondent.  However, items in linear sections of the questionnaire that do not involve branching can be presented on the screen in sets rather than one by one (Design 2, see Figure 10).


 

Figure 9. Item-based presentation with automatic conditional branching.

 

Figure 10.  Set-based presentation with automatic conditional branching.

 

Figure 11.  Form-based presentation with manual conditional branching.

 


At the other extreme, when branching is not automated, the whole-form questionnaire must be available to the respondent (Design 3, see Figure 11).  At one extreme it may be presented in a single scrolling window; at the other, it may be subdivided down to the item level with supporting methods of navigation from one item to another.  The whole-form presentation is generally preferred over the item-based form due to the complexity of navigating through a large number of screens.  When the whole form is shown, the problem, of course, is not only navigation but also following the conditional branching instructions.  Mistakes in branching can be detected by the system, and the respondent can be informed that he or she has answered an inappropriate question or skipped a question that should have been answered (Design 4).  However, automatic branching can be introduced into the whole-form presentation by auto-scrolling to the next question or set of questions (Designs 5 and 6, see Figure 12), by screen highlighting or graphics pointing to the next question or set of questions (Design 7, see Figure 13), or by automatically expanding/collapsing toggles on sets of questions in the branches (Design 11, see Figure 14).


 

Figure 12.  Form-based presentation with auto-scrolling used for conditional branching.

 

 

Figure 13.  Form-based presentation with automatic graying out of skipped questions.

 

 

Figure 14.  Selective revealing (expand-contract) presentation controlled by conditional branching and by the respondent.

 


These approaches allow the respondent to view all questions but off-load the conditional logic onto the system.  They also allow respondents to override the conditional branching when they want to.  On the other hand, forced branching can be implemented by graying out and de-activating questions that are irrelevant (Design 8) or, in the case of expanding and collapsing questionnaires, by forcing the expanding and collapsing automatically with no manual override (Design 10).
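 

A browser-side sketch of how a few of these form-based techniques might be wired up is shown below.  The element ids, section structure, and event handling are illustrative assumptions, not a prescribed implementation.

```typescript
// Assumes each question or conditional section sits inside a <fieldset> with a
// known id. Three techniques from the designs above, in miniature.

// Designs 5/6: auto-scroll to the next appropriate question; the respondent
// can still scroll back and override.
function autoScrollTo(sectionId: string): void {
  document.getElementById(sectionId)?.scrollIntoView({ behavior: "smooth" });
}

// Design 8: gray out and de-activate a section that the branch makes irrelevant.
function grayOut(sectionId: string, inactive: boolean): void {
  const section = document.getElementById(sectionId) as HTMLFieldSetElement;
  section.disabled = inactive;
  section.style.opacity = inactive ? "0.4" : "1.0";
}

// Designs 10/11: collapse or reveal a conditional section (selective revealing).
function setCollapsed(sectionId: string, collapsed: boolean): void {
  const section = document.getElementById(sectionId) as HTMLElement;
  section.hidden = collapsed;
}

// Example wiring: answering "No" to the branching question skips the follow-up.
document.getElementsByName("q1").forEach((input) =>
  input.addEventListener("change", (event) => {
    const skip = (event.target as HTMLInputElement).value === "No";
    grayOut("q2-section", skip);   // or setCollapsed("q2-section", skip)
    autoScrollTo(skip ? "q3-section" : "q2-section");
  })
);
```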

 

Designers of particular on-line implementations then need to decide where to place their questionnaire along these continua.  First, where should it fall between whole-form-based and item-based presentations?  Second, how much automation of conditional branching should be implemented, from none to total?  To what extent does the context of a particular question need to be presented, from none to full?  And how much complexity and responsibility can or should be loaded onto the respondent?  These factors will affect (a) the time to complete the questionnaire, (b) the number of human errors in conditional branching, (c) the validity of answers subject to meaningful context and interpretation, (d) the tendency of the respondent to answer in ways that avoid lengthy sections, (e) the tendency of the respondent to prematurely terminate the questionnaire, (f) the respondent's perception of the length and difficulty of the survey, and (g) the respondent's subjective satisfaction with the process.

 

Design Guidelines

 

When considering the design possibilities, it is not clear that there is any one design that is optimal for all questionnaires and types of respondents.  Rather, a careful analysis is required, taking into consideration the conditions of the survey and the characteristics of the respondent.  In some cases a fully automated, streamlined, item-based questionnaire will be most efficient.  In other cases, when items interrelate and the respondent may need to compare questions and answers, the form-based designs may be preferable.  Time constraints, motivations of the respondents, and complexity of the items all play into the design considerations.  However, the following guidelines may help:

 

1.  Reduce the branching instructions to a minimum to reduce reading time, confusion, and perceived difficulty of the questionnaire.

2.  Automate conditional branching when possible, but allow the respondent to override branching if there is a need or desire to do so on the part of the respondent.

3.  Hide inappropriate and irrelevant questions to shorten the apparent length of the questionnaire, and make such questions available only if the respondent specifically needs or wishes to view them.

4.  When the respondent is allowed to answer all questions, implement logic and consistency checks on conditional branches (see the sketch following this list).

5.  Streamline forward movement through the questionnaire while allowing backtracking and changing of answers.

6.  When context matters, provide form-based views of sections to help clarify the meaning of items and the interrelationships among items.
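 

As referenced in guideline 4, a minimal sketch of such logic and consistency checks is given below, assuming a hypothetical rule format and question ids.

```typescript
// Sketch: when respondents may answer freely, check the completed form
// against the branching rules after the fact.
type FormAnswers = Record<string, string | undefined>;

interface BranchRule {
  questionId: string;                       // the question governed by the rule
  appliesIf: (a: FormAnswers) => boolean;   // should this question have been answered?
}

function checkBranching(answers: FormAnswers, rules: BranchRule[]): string[] {
  const problems: string[] = [];
  for (const rule of rules) {
    const answered = answers[rule.questionId] !== undefined;
    const applies = rule.appliesIf(answers);
    if (applies && !answered) {
      problems.push(`Question ${rule.questionId} should have been answered.`);
    }
    if (!applies && answered) {
      problems.push(`Question ${rule.questionId} does not apply and should be left blank.`);
    }
  }
  return problems;
}

// Example: q2 applies only when q1 was answered "Yes".
checkBranching(
  { q1: "No", q2: "Travel" },
  [{ questionId: "q2", appliesIf: (a) => a["q1"] === "Yes" }]
); // -> ["Question q2 does not apply and should be left blank."]
```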

 

Although these guidelines seem intuitive, they require empirical verification before they are adopted as design principles.

 

Conclusion

 

The implementation of on-line questionnaires, particularly on the World Wide Web, is a new design challenge.  The interface must balance the desire to completely automate the system with the need of respondents to see the context of the questions and to control the sequencing of questions for themselves.  The design options and considerations discussed in this paper should help in the first iteration of the design of an on-line questionnaire.  Nevertheless, it should be emphasized that continued cognitive testing and user evaluation of the questionnaire must be conducted to spot and correct problems. 

 

Finally, we would hope that additional factors will be considered in novel and thoughtful designs of on-line questionnaires.  Simple designs that require minimal actions on the part of the respondent often prove superior to powerful but overly complex and burdensome interfaces.  In the end, however, it will not be the logical or even the aesthetic merits of the design that prevail, but rather the design that results in data that are more complete and reliable and in respondents who are more satisfied with the process.

 

References

 

Dillman, D. A. (2000).  Mail and Internet surveys (2nd ed.).  New York, NY: John Wiley & Sons.

Lazar, J., & Preece, J. (1999).  Designing and implementing Web-based surveys.  Journal of Computer Information Systems, 39, 63-67.

Messmer, D. J., & Seymour, D. T. (1983). The effect of branching on item nonresponse. Public Opinion Quarterly, 46, 270-277.

Norman, K. L. (1991). The psychology of menu selection:  Cognitive control at the human/computer interface. New York: Ablex.

Norman, K. L., Friedman, Z., Norman, K. D., & Stevenson, R. (2000).  Navigational issues in the design of on-line self-administered questionnaires.  Behaviour and Information Technology, 20, 37-45.

Rips, L. J., & Marcus, S. L. (1977). Suppositions and the analysis of conditional sentences.  In M. A. Just & P. A. Carpenter (Eds.), Cognitive processes in comprehension.  Hillsdale, NJ:  Erlbaum.

Staudenmayer, H. (1975).  Understanding conditional reasoning with meaningful propositions.  In R. J. Falmagne (Ed.), Reasoning: Representation and process in children and adults.  Hillsdale, NJ: Erlbaum.

Synodinos, N. E., Papacostas, C. S., & Okimoto, G. M. (1994).  Computer-administered versus paper-and-pencil surveys and the effect of sample selection.  Behavior Research Methods, Instruments, & Computers, 26, 395-401.

 

Acknowledgements

 

This work was funded in part by a grant from the Statistical Research Division of the U.S. Bureau of the Census (Contract #50YABC166008). We thank Kent Marquis, Beth Nichols, Betty Murphy, and others at the Census for their guidance and direction in this research.