Pet Enumeration: Usability Testing of U.S. Census Bureau Data Collection Methods Avoiding Protected Personal Data

Kent Norman and Susan Campbell

Laboratory for Automation Psychology and Decision Processes


The U.S. Bureau of the Census is testing new methods of collecting census information, both online with Web-based self-administered surveys and in person with enumerators who visit each household carrying PDAs. Usability testing has been hampered by the fact that household and personal data are confidential under Title 13 security regulations. To avoid these problems, we propose to collect data that are analogous to household census data but not subject to Title 13, namely, information about household pets. Survey screens for the Web have been reconfigured so that they have all of the same features, functionality, and types of data as those for the household census data but pertain to pets rather than persons. These interfaces can be easily tested in the lab or on the Web without personal data security concerns.


Usability testing is an important component in software development, implementation, and deployment (John & Marks, 1997; Shneiderman & Plaisant, 2003). This is particularly true for Web-based applications such as online surveys. Rapid iterative testing is often required to detect and fix problems with the interface prior to its deployment. Anything that limits or delays user testing can result in major usability and data problems in the final deployment. However, organizations often impose restrictions for security reasons. The result is that additional approvals are required, testing can only be done in secure facilities, and testing personnel must have passed a security background check. The question is whether there is some way around these problems to ensure that user testing can be done in a rapid and timely manner.

Since usability testing pertains primarily to the interface and not to the content of the data, we propose that testing be done on a surrogate interface that is either “sanitized” or is an analogue of the interface that has the same data structure and relationships as the target interface but deals with totally different information. Analogue interfaces can capture the same information about the process but avoid the security problems by referring to completely different information in their content.

As an example, we present the problem faced by the U.S. Census Bureau in doing user testing of interfaces for the upcoming 2010 Decennial Census. In preparation for the 2010 Census, the Bureau is testing Web-based surveys and surveys presented on Personal Digital Assistants (PDAs) for use by enumerators who go door-to-door. In the fall of 2002, a test was conducted comparing three modes of data collection for the 2003 test: standard paper surveys, telephone-based interactive voice response (IVR), and a Web-based survey (Murphy & Norman, 2003). In the fall of 2003, a test was conducted of a Web-based survey of U.S. citizens living abroad for the 2004 Overseas Enumeration Test. Finally, in the spring of 2004, a test was conducted of a PDA interface for enumerators performing the non-response follow-up of households (NRFU). In each case, user testing was conducted by the Bureau prior to actual deployment, but in all cases testing was limited by Title 13 restrictions.

Title 13 Restrictions

Title 13 refers to the section of U.S. law that governs the conduct of the U.S. Census Bureau, including the collection and disclosure of personally sensitive information. Individuals must be cleared to handle personal information, and no version of it is allowed to leave Census Bureau grounds until the embargo period, usually measured in decades, has expired.

To avoid the problem of Title 13 limitations, we suggested that participants in usability testing give bogus information. They could either make it up or we could supply randomly generated names and other information. Unfortunately, this idea did not get past the Census Bureau's strict interpretation of Title 13. It was pointed out that many people actually give false information on the Census forms and that therefore even this bogus information is protected by law. Consequently, any information submitted, whether real or fabricated, is subject to Title 13 restrictions. Or, in the words of one security officer, "Anything that looks or smells like Title 13 is protected by Title 13."

Consequently, the only way not to run afoul of Title 13 is to avoid collecting any form of personal information, real or fictitious. Since most user testing pertains to interface issues rather than the wording or content of the questions, we pursued an analogue solution. In general, testing the wording of questions was outside the scope of usability testing.

2004 Overseas Enumeration Test

In the fall of 2003, the U.S. Census Bureau conducted two rounds of user testing on the Web-based survey to be used for the 2004 Overseas Enumeration Test (Norman & Murphy, 2004). A number of screen shots from this survey are shown in Appendix A. The first round of testing revealed a number of interface problems, which were fixed and retested in the second round. The second round revealed new problems, some of which led to changes in the interface, while others did not.

Although not reported by Norman & Murphy (2004), user testing was hampered in significant ways by Title 13 security restrictions. First, all testing had to be done at U.S. Census Bureau facilities. These facilities can be very inconvenient for participants to reach, and staff were required to admit the participants and escort them from the gate to the usability laboratory. Second, the prototype survey had to be installed on a secure Web server at the U.S. Census Bureau. This entailed considerable time and effort on the part of the software contractor. Moreover, testing had to be suspended on at least one occasion due to a network failure at the secure site and could not be continued on a working development server. Third, all observers and testing personnel had to be cleared under Title 13. Finally, because of the cost of development and testing due to Title 13, a number of interface issues that were discussed by the design team could not be tested or resolved. Only the final prototype could be subjected to usability testing. These issues are listed in the next section to illustrate how they pertain to issues of navigation and data verification rather than data subject to Title 13.

Interface Issues

A number of design decisions are made when implementing online surveys, particularly ones that have a number of sections and that repeat the same questions for multiple cases. The household survey for the U.S. Census Bureau has one question about the number of people living at the household and then a repeating cycle of questions about each member. This cycle is followed by a final screen, which gives the user a chance to review the information and submit the form. Figure 1 shows the screen that asks for the name of the first person in the household; the survey then cycles through seven screens for each person.

Each screen has three main tabs at the top to jump to the household screen, the person section, and the review/submit screen. "Next" and "Previous" buttons appear at the bottom of each screen to allow forward and backward movement through the survey. When the user is in the person section, seven sub-tabs are shown, one for each of its screens. The left side of the screen lists slots for each of the household members. The names are filled in as they are entered on the name screens. The tabs at the top and the names at the side allow the participant to jump to any screen, but only after that screen has been visited in a forward manner using the "Next" button.
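The navigation rule described above can be summarized in a short sketch. This is our own reconstruction for illustration only; the class and screen names are assumptions, not the Bureau's actual code.

```python
# Hypothetical sketch of the survey's navigation rule: the tabs and the
# person list may jump only to screens already reached with "Next".

class SurveyNavigator:
    def __init__(self, screens):
        self.screens = screens          # ordered list of screen ids
        self.current = 0                # index of the screen on display
        self.visited = {screens[0]}     # screens reachable by tab jumps

    def next(self):
        """Advance one screen; the newly reached screen becomes jumpable."""
        if self.current < len(self.screens) - 1:
            self.current += 1
            self.visited.add(self.screens[self.current])

    def previous(self):
        """Move back one screen (always allowed)."""
        if self.current > 0:
            self.current -= 1

    def jump(self, screen):
        """Tab or side-list jump: allowed only for visited screens."""
        if screen in self.visited:
            self.current = self.screens.index(screen)
            return True
        return False                    # jumping ahead is blocked


nav = SurveyNavigator(["household", "p1-name", "p1-age", "review"])
nav.next()                              # reach "p1-name" via "Next"
assert nav.jump("household")            # backward jump allowed
assert not nav.jump("review")           # forward jump blocked
```

The "unrestricted navigation" question in issue 1 below amounts to removing the `visited` check in `jump`.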

In the course of design and user testing, five fundamental interface issues were discussed, but could not be resolved by empirical testing due to the time and budgetary constraints mentioned above. The following sections briefly present these issues.

1. Unrestricted Navigation. Should participants be allowed to jump ahead, via the tabs, to screens they have not yet reached with the "Next" button? Results suggested that participants did not need or expect to jump ahead. However, had this option been available, it might have helped some participants in some situations. We hoped to study whether the option would have been more useful or more confusing for users.

2. Form-Based Versus Item-Based. The current design is item-based, in that one item or topic is dealt with on each screen, rather than form-based with the whole questionnaire in one scrollable screen. Results do not give any evidence as to which alternative would be best. Other studies suggest that for the initial completion of the questionnaire, it does not matter. For review and correction, the form-based view is superior. In fact, the current design actually employs part of the form-based idea on the Review/Submit screen. This screen presents the whole questionnaire in a scrolling window. It does not, however, allow the user to directly enter information, but provides links to the screens where the information can be entered. Research is needed to determine whether this method differs functionally from a true form-based interface and whether there are potential benefits of providing both modes of input to the user.

3. Topic-Based Versus Person-Based. The current design is person-based in that the user deals with one person at a time and answers a series of questions regarding that person. The alternative is topic-based, in which the user answers the same type of question (e.g., SSN) for each person in turn. Results suggested that the person-based approach is better for most users and situations. Interestingly, the current design could be easily modified to allow for both orders of input using the person tabs and the sub-item tabs, if one could jump ahead and if the person tabs did not reset the sub-item tab to the first sub-item. Research is needed to test the potential benefit of this variant design.
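The two orderings differ only in which axis of the person-by-topic question grid is traversed first. A minimal sketch, with illustrative names and topics of our own choosing:

```python
# Illustrative sketch: person-based and topic-based ordering are the two
# traversal orders of the same person-by-topic question grid.

persons = ["Person 1", "Person 2", "Person 3"]
topics = ["name", "age", "gender"]

def person_based(persons, topics):
    """All questions for one person before moving to the next person."""
    return [(p, t) for p in persons for t in topics]

def topic_based(persons, topics):
    """One question asked of every person before the next question."""
    return [(p, t) for t in topics for p in persons]

assert person_based(persons, topics)[:3] == [
    ("Person 1", "name"), ("Person 1", "age"), ("Person 1", "gender")]
assert topic_based(persons, topics)[:3] == [
    ("Person 1", "name"), ("Person 2", "name"), ("Person 3", "name")]

# Both orders cover exactly the same set of screens, which is why a
# single interface could in principle support either.
assert set(person_based(persons, topics)) == set(topic_based(persons, topics))
```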

4. Edit Method. In the current design, users were given data-correction instructions if fields were left blank or if the information did not pass a test of data appropriateness when they first clicked a link to go to another screen. This design caused a number of problems. First, it was difficult to browse screens that were not complete; users had to click twice to escape the edit message. Second, it had the unexpected consequence that the system captured incorrect data on the second click. Clearly, a better method needs to be developed for error checking and for notifying users that editing is required.
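The validate-on-navigate behavior, including the problematic second click, can be sketched as follows. This is our reconstruction of the behavior described above; the field and check are illustrative, not the survey's actual validation logic.

```python
# Sketch of the validate-on-navigate edit method described above.
# A toy appropriateness check stands in for the survey's real edits.

def validate(field_value):
    """Toy check: the field must be non-blank."""
    return field_value.strip() != ""

class Screen:
    def __init__(self, value=""):
        self.value = value
        self.pending_warning = False

    def click_next(self):
        """First click on a failing field shows an edit message; a second
        click escapes it -- and the invalid value is captured anyway."""
        if validate(self.value):
            return ("advance", self.value)
        if not self.pending_warning:
            self.pending_warning = True
            return ("show_edit_message", None)
        return ("advance", self.value)   # the unexpected consequence

screen = Screen(value="")                # blank field
assert screen.click_next() == ("show_edit_message", None)
assert screen.click_next() == ("advance", "")   # bad data slips through
```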

5. Client-Side Versus Server-Side Processing. The current design used server-side processing. This method is slower and creates greater load on the server. It is also less responsive and less versatile. Designs using client-side processing, particularly for edit messages, should be explored. Such designs could solve the problems created by edit messages.

People/Pets Analogue

The analogue interface for the U.S. Census that we generated was a pet enumeration. To our knowledge, household pets are not covered by Title 13, but, like humans, they have a number of analogous attributes: names, registration numbers, relationships to the household, age, gender, and breed.

Appendix A shows the screens for the U.S. Census 2004 Overseas Enumeration Test and its analogue in the Pet Care Survey. The interface that we generated could be hosted on a non-secure server and administered either in our (non-secure) lab or over the Internet. When complete, the server software will have the same functionality as that for the 2004 Overseas Enumeration Test in terms of navigation, data checking, and data collection. In addition, it will have software to log records of user interaction. Once it is operational, it can be used to test the interaction framework of the Census tools without raising privacy concerns.


Many interfaces that are tested are subject to restrictions due to security issues. These restrictions can make user testing difficult and sometimes impossible. Sometimes the interface can be sanitized to remove sensitive information and bogus information inserted in its place. However, even this safeguard is not sufficient to pass the strict privacy safeguards that are part of Title 13. Our approach is to use an analogue interface that has the same functionality but that refers to a totally different set of information. We propose to gather household information about pets rather than people.

The question is whether the information that we get from this site about usability problems will inform us of the same problems with the target site. To the extent that our analogy holds, we believe that it will. Unfortunately, analogies are rarely perfect. With pets, there are a few breakdowns in the analogy:

1. Person 1 in the Census is different from Pet 1 on the Pet Survey, since Person 1 is the person filling out the survey and Pet 1 cannot fill out the survey for the household.
2. Some of the numbers and information about people differ from pet information. Registration numbers have a different number of digits than Social Security numbers, and users are unlikely to have memorized their pets' registration numbers.
3. While there is some relationship between breed and race, the categories and implications are drastically different.

Nevertheless, basic issues of navigation, jumping to screens, cycling through sets of screens, reviewing the information, and making changes will be the same no matter what the content of the questionnaire.

Analogue interfaces can save significant time, effort, and money by bypassing the need for strict security that is a feature of the target sites.

Analogue interfaces can be applied in many areas other than the U.S. Census Bureau. One can develop analogues for military applications, financial transactions, online banking and investing, online gambling, online voting, and online shopping. One wonders whether some sites are not already collecting usability information on games that are analogues of many real systems. For sites where the content is under development or protected by security concerns, analogue interfaces could provide a convenient, low-cost way to implement comprehensive and iterative usability testing.


References

John, B. E., & Marks, S. J. (1997). Tracking the effectiveness of usability evaluation methods. Behaviour and Information Technology, 16(4), 188-202.

Murphy, E. D., & Norman, K. L. (2003). Usability Testing of the 2003 National Census Test Internet Form (Second Round) (Human-Computer Interaction Memorandum #61). Washington, DC: U.S. Census Bureau, Usability Laboratory.

Norman, K. L., & Murphy, E. D. (2004). Usability testing of an Internet form for the 2004 Overseas Enumeration Test: Iterative testing using think-aloud and retrospective report methods. Proceedings of the Human Factors and Ergonomics Society.

Shneiderman, B., & Plaisant, C. (2003). Designing the user interface: Strategies for effective human-computer interaction (4th ed.). Reading, MA: Addison-Wesley.

U. S. Census Bureau. (2002). IT Standard 20.0.0: Design Requirements and Guidelines for Web-Based User Interfaces. Washington, DC: Systems Support Division.


Appendix A: Screen shots of the target and analogue interfaces

Figure 1. Name screens for the U.S. Census Bureau survey (top) and the Pet Care survey (bottom).

Figure 2. Citizenship screen for the U.S. Census Bureau survey (top) and the registration screen for the Pet Care survey (bottom).

Figure 3. Identification screens for the U.S. Census Bureau survey (top) and the Pet Care survey (bottom).