Designing PETS:

A Personal Electronic Teller of Stories

Allison Druin, Jaime Montemayor, Jim Hendler, Britt McAlister,

Angela Boltman, Eric Fiterman, Aurelie Plaisant, Alex Kruskal*, Hanne Olsen*,

Isabella Revett*, Thomas Plaisant Schwenn*, Lauren Sumida*, & Rebecca Wagner*

Human-Computer Interaction Lab, Institute for Advanced Computer Studies
University of Maryland, College Park, MD 20742



ABSTRACT

We have begun the development of a new robotic pet that can support children in the storytelling process. Children can build their own pet by snapping together the modular animal parts of the PETS robot. After their pet is built, children can tell stories using the My PETS software. These stories can then be acted out by their robotic pet. This video paper describes the motivation for this research and the design process of our intergenerational design team in building the first PETS prototypes. We will discuss our progress to date and our focus for the future.


KEYWORDS

Children, design techniques, educational applications, cooperative inquiry, intergenerational design team, PETS, robotics.


INTRODUCTION

Real and imaginary animals fill children’s storybooks, television, film, amusement parks, and zoos. In books, they are drawn with words or ink. In television or movies, they can take the furry form of Muppets, Disney characters, or animated creatures. In amusement parks or zoos, real or imaginary animals can be visited, touched, or ridden. These animal-filled activities continue to sustain children’s attention, fill their imagination, and pique their curiosity [8].

It is interesting to note, however, that studies have shown that in settings where there are both things to observe and things to interact with (e.g., science museums, zoos, aquariums), children show a predictable pattern. Young people are attracted to activities that let them become physically involved. In the zoo, for example, they prefer to interact with pigeons and squirrels rather than with more exotic animals isolated behind bars [9].

Figure 1: The PETS robot prototype

Therefore, our research goal is to take animals out from behind those bars. We are developing new learning opportunities for children to physically explore their animal interests with stories and robots. Storytelling has long been considered an important learning experience. Throughout generations, storytelling has been a way to preserve culture and history, communicate ideas and feelings, and educate learners young and old [2, 7, 11]. Storytelling continues to be a critical part of children’s playtime, school time, and home life [8].

Therefore, we believe that these two important childhood ingredients, storytelling and animals, should be brought to the world of robotics for children. Robots don’t have to be hard and plastic; they can be soft and huggable, like animals. Robots don’t have to be complex and tedious to build; they can be enjoyably created by children. And robots don’t have to live only in factories or fly to Mars; they can simply act out the stories that children tell them.


RELATED WORK

Significant work has been done by researchers at the MIT Media Lab in bringing together the worlds of robotics and children [12]. Computationally-enriched LEGO bricks have been their robotic building blocks. From programmable bricks to crickets, children can build everything from fanciful animals to physical simulations of viruses. While these robotic constructions do not focus on the storytelling experience, other research initiatives at the Media Lab do. SAGE: the Storyteller Agent Generation Environment [14] uses a programmable stuffed animal to tell stories. Children can listen to stories or tell their own with SAGE. However, the robotic storyteller is minimally configurable. Different characters can be "created" when the animal’s hat is changed by the child, or when the sensors from the animal are detached and placed on another object.

Other researchers at the MIT Robotics Lab have been developing KISMET, a robot for social interaction [1]. This robot displays behaviors and emotions, though its purpose is not storytelling but learning. The learning is not necessarily for the person who interacts with the robot but for the robot itself. Similar research, in the form of a four-legged robotic pet named MUTANT, was recently conducted by researchers at the Sony Corporation [6]. Again, storytelling was not a goal of this research, but entertainment and companionship were. On the other hand, researchers at Carnegie Mellon University have been developing OZ, an environment that supports interactive drama [10]. While this is a rich world to explore and tell stories with, it is a virtual one that lives behind a computer screen. In recent years, however, the HCI community has come to recognize the importance of physical interfaces to our computational environments [15]. In particular, stuffed animal interfaces have become more common over the years, from Druin’s NOOBIE in 1987 [3] to Microsoft’s Barney in 1998 [13].


WHAT IS PETS?
Our research in this area has come to be called PETS: A Personal Electronic Teller of Stories (see Figures 1 & 2). Using this robotic storytelling environment, elementary-school-age children can tell stories about how they feel (e.g., excited, sad, lonely). Children can build a robotic pet by simply snapping together special robotic animal parts (e.g., dog paws, a fish tail, duck feet). After a robotic pet is built, children can tell stories with the My PETS software, giving their robot emotions and behaviors throughout the story. An example below was created by a 7-year-old girl in Maryland. This story was entitled Michelle:

"There once was a robot named Michelle. She was new in the neighborhood. She was HAPPY (robot behaves happy) when she first came, thinking she would make friends. But it was the opposite. Other robots threw rocks and sticks. She was SAD (robot behaves sad). No one liked her. One day she was walking down a street, a huge busy one, when another robot named Rob came up and asked if she wanted to have a friend, but then realized she was HAPPY (robot behaves happy). The other robots were ANGRY (robot behaves angry) but knew that they had learned their lesson. Michelle and Rob lived HAPPILY (robot behaves happy) ever after. No one noticed the dents from rocks that stayed on Michelle" (Research notes, August 1998).

Figure 2: The PETS storytelling environment

The PETS robotic animal parts were built with LEGO bricks covered in fur, feathers, felt, etc. Each part can be snapped into place on the body and is also plugged into a plugbox embedded in the animal’s torso. This plugbox is an interface to a Handyboard controller, also in the animal’s body, which controls servos and motors and can read inputs from sensors attached throughout the robot’s body. This controller has a serial connection to a Macintosh computer. On the Macintosh, the application software layer, My PETS, takes a story written by children and translates and transfers it to the system software layer that resides on the Handyboard.
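The handoff from the application layer to the system layer can be pictured as a small pipeline. Everything in the sketch below is an assumption made for illustration: the one-byte opcodes, the OPCODES table, and the translate/send names are invented, and the real serial protocol between My PETS and the Handyboard is not documented in this paper.

```python
# Hypothetical sketch of the application-to-system handoff described
# above; opcode values and function names are invented for illustration.
OPCODES = {"happy": 0x01, "sad": 0x02, "angry": 0x03}

def translate(cues):
    """Turn a sequence of emotion names into the byte stream that the
    application layer would write over the serial link."""
    return bytes(OPCODES[c] for c in cues)

def send(payload):
    # On the real system this would be a write to the Macintosh serial
    # port connected to the Handyboard; here we just return the bytes.
    return payload

frame = send(translate(["happy", "sad", "happy"]))
print(list(frame))  # prints [1, 2, 1]
```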


OUR DESIGN PROCESS

The name, concept, and development of our PETS prototypes came about because of children. We develop new technologies for children, with children, in an intergenerational design team. This team consists of children (ages 7-11 years old) and adult professionals with experience in computer science, education, art, and robotics. Together we use the methodology of cooperative inquiry to understand what children want in new technologies, today, tomorrow, and in the future [4]. From this understanding we have begun to iteratively design and prototype the PETS robotic storytelling environment.

Cooperative Inquiry

We began our work in March of 1998, by examining and exploring a robotics lab at the University of Maryland. Children and adults tried out robots, asked questions of researchers, and took notes about the experience. When we returned to our HCI lab, each child and adult researcher wrote down on Post-It notes what they liked and didn’t like about the robots they saw. Each researcher privately wrote their comments before sharing them with the group. The table below summarizes these results.

What we liked:
- They have sensors
- Seeing the robot examples in the lab
- They move
- They look like robots

What we didn’t like:
- What they looked like (e.g., plastic, ugly, brains showing)
- They moved slow
- They talked funny
- They seem unreliable
Table 1: Summary of what we liked and didn’t after visiting a robotics lab at the University of Maryland

Most interesting was the number of children and adult researchers who described the robots as "ugly." One child wrote on their note, "I don’t like the way the brains show when you look at it." Another child wrote, "They’re plastic and they should be furry like an animal" (Research notes, March 1998).

After our field research and analysis, we began to create numerous low-tech prototypes, using participatory design techniques [5]. Essentially, what we built were animal robots that didn’t move. They were made out of LEGO bricks with fur, feathers, clay, socks and more. They represented our first ideas for animal robots.

In addition, the research team went to a local zoo to conduct research on how real animals look and move. We took notes and pictures of our experience back to the lab.

Iterative Prototyping

At the beginning of August 1998, the team met for two weeks, eight hours a day. During that time, we split into three groups. Each group consisted of two adults and two children, and focused on one important part of the prototype design. The skeleton group was responsible for creating the robotic skeleton of LEGO bricks, gears, motors, and servos. This group developed the plugbox interface to the Handyboard controller and the modular parts that could be put together to create an animal. In addition, they created the connections for the sensors. One of the most difficult parts to build proved to be the neck; it had to be redesigned six times so that it could properly hold the weight of the head. The wheel base also had to be redesigned three times so that it could properly move the weight of the entire animal.

What the skeleton group made, the skins and sensors group covered in fur, feathers, felt and fabric. This group started out by sketching on paper the kinds of animal parts they wanted to make. Ultimately, they decided to make a fish tail, dog paws, cow head, bear body, duck feet, and bird tail. The "skins" they created were either sewn, glued, tied, or attached using velcro to the LEGO bricks. The skins were redesigned many times, primarily due to weight. Heads would fall over, tails would not move, and paws would fall off. The skins and sensors group was also responsible for embedding light sensors into the eyes and touch sensors into the paws.

The third group focused on software. The software group was responsible for designing the software that children could use to tell stories. They began by looking at what software had already been made and decided what they liked and didn’t like about it. Then they brainstormed and discussed what feelings they wanted to give the robot. From this list of feelings, a chart was made that showed all the actions the robot would need to perform to show each feeling. At times, questions were asked of the skeleton group to find out what robot actions were possible given the skeleton they were creating. This group then went out and "tested" these actions by having one child play the part of the robot. If their audience could not guess the feeling from the robot’s actions, then revisions were made in the final database of feelings and actions. Finally, the group sketched on paper the design for the screens. These screens were scanned and used for the initial prototype software.
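The software group’s chart of feelings and actions can be modeled as a simple lookup table that gets revised whenever an acting test fails. The feelings and actions below are illustrative stand-ins, not the team’s actual database.

```python
# Illustrative stand-in for the software group's feelings/actions
# database; the specific actions are invented for this sketch.
FEELINGS = {
    "happy": ["wag tail", "raise head", "spin in place"],
    "sad": ["lower head", "droop tail", "back away slowly"],
    "angry": ["shake head", "stamp paws", "turn away"],
}

def actions_for(feeling):
    """Actions the robot performs to show a feeling."""
    return FEELINGS[feeling]

def revise(feeling, old_action, new_action):
    """If the audience could not guess the feeling from the robot's
    acting, swap the unconvincing action for a new one, as the group
    did after each test."""
    acts = FEELINGS[feeling]
    acts[acts.index(old_action)] = new_action

revise("happy", "spin in place", "hop up and down")
print(actions_for("happy"))  # prints ['wag tail', 'raise head', 'hop up and down']
```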

Each day, the three groups would meet together in the morning and afternoon to go over the progress of each group. Design issues were hashed out, questions were resolved, and directions were discussed. Everything from the weight of the latest head to the possibilities for motion was questioned. At times, team dynamics were reflected upon, and depending upon the issue, team process procedures were developed or changed. For example, one 10-year-old team member raised the issue that the adults in the skins and sensors group were being noisy and disrupting the software team. This was addressed by changing the location of the software team.

At the end of this two-week intensive work session, a working prototype was developed and presented to a group of 40 local daycamp children (ages 7-14 years old). During this presentation, we received positive feedback and suggestions for future directions. Currently, we are working to better integrate the system software with the application software. In addition, we are developing ways to designate different behaviors for different animal parts. Currently only one behavior can be given to the whole animal. At this time, we are also considering replacing the LEGO brick skeleton with a more stable metal one. Our work continues on PETS during the school year, with the team meeting two afternoons a week.


LESSONS LEARNED

It has been almost a year since we began our research team. During that time, we have learned a few things, not just about technology, but about the process of how we make it. As our work together has progressed, three guiding principles have emerged as critical to our design process. They are simple, but difficult to make happen: (1) New power structures between adults and children must be developed; (2) All design partners must have a voice in the design process; (3) A comfortable design environment must be created.

In order to support new power structures or relationships between adults and children, we have attempted to "undo" what schools teach children. An example of this is a rule we have made: no one raises his or her hand to talk. Children seem to raise their hands easily when they want to talk with adults in a group setting. When children raise their hands, it brings on thoughts of school, where teachers are in charge and children are called upon for "right" answers. Instead, children on our team have learned to challenge adults’ ideas, questioning what is done and making suggestions. In addition, we have supported adults in learning to hear what child partners have to say. That does not mean relegating adults to a corner where they sit and take notes about everything a child does. Instead, we believe in facilitating discussions where children and adults each feel comfortable contributing ideas.

Finding ways to give each design partner a voice in the process is no small challenge. Sharing ideas needs to happen in numerous ways, since people, young and old, feel comfortable communicating in different forms. We have found that there are times when drawing, writing, or even building can and should be used to capture ideas. These artifacts become a catalyst and a bridge for discussion in large or small groups.

Not only should communication opportunities be diverse, but the design environment needs to feel comfortable. A common ground cannot truly be found without physical surroundings that accommodate all design partners. One way we have made this possible is by being strict about our informality. No child or adult dresses formally in a skirt or tie. Design experiences take place in informal settings, sitting on the floor or in bean-bag chairs. We capture our ideas in low-tech prototypes made of LEGO bricks and clay rather than on yellow pads and in design specifications. All of this takes a commitment of time and a willingness to change by all design partners. There have been times of frustration, differences among team members, and questioning of goals. It may be those times that have taught us the most.


ACKNOWLEDGEMENTS

This research could not have been accomplished without the generous support of the Institute for Advanced Computer Studies, the Sony Corporation, the Intel Research Council, and the Army Research Laboratory. In addition, we thank Ben Bederson and Catherine Plaisant for their lab resources and continued help over the last year.


REFERENCES

1. Breazeal, C. (1998). A motivational system for regulating human-robot interaction. In Proceedings of AAAI'98. AAAI Press, pp. 126-131.

2. Bruchac, J. (1987). Survival this way: Interviews with American Indian poets. Tucson, AZ: University of Arizona Press.

3. Druin, A. (1987). NOOBIE: The Animal Design Playstation. SIGCHI Bulletin, 20(1), 45-53.

4. Druin, A. (1999). Cooperative inquiry: Developing new technologies for children with children. In Proceedings of Human Factors in Computing Systems (CHI 99). ACM Press.

5. Druin, A., Bederson, B., Boltman, A., Miura, A., Knotts-Callahan, D., & Platt, M. (1999). Children as our technology design partners. A. Druin (Ed.), The design of children's technology. (pp.51-72) San Francisco, CA: Morgan Kaufmann.

6. Fujita, M., & Kitano, H. (1998). Development of an autonomous quadruped robot for robot entertainment. Autonomous Robots, 5(1), 7-18.

7. Gish, R. F. (1996). Beyond bounds: Cross-cultural essays on Anglo, American Indian, and Chicano literature. Albuquerque, NM: University of New Mexico Press.

8. Goldman, L. R. (1998). Child's play: Myth, mimesis, and make-believe. New York: Berg Press.

9. Greenfield, P. M. (1984). Mind and Media. Cambridge, MA: Harvard University Press.

10. Loyall, A. B., & Bates, J. (1997). Personality-rich believable agents that use language. In Proceedings of the First International Conference on Autonomous Agents.

11. Ortiz, S. (Ed.). (1998). Speaking for generations: Native writers on writing. Tucson, AZ: University of Arizona Press.

12. Resnick, M., Martin, F., Berg, R., Borovoy, R., Colella, V., Kramer, K., & Silverman, B. (1998). Digital manipulatives: New toys to think with. In Proceedings of Human Factors in Computing Systems (CHI 98). ACM Press, pp. 281-287.

13. Strommen, E. (1998). When the interface is a talking dinosaur: Learning across media with Actimates Barney. In Proceedings of Human Factors in Computing Systems (CHI 98). ACM Press, pp.288-295.

14. Umaschi, M. (1997). Soft toys with computer hearts: Building personal storytelling environments. In Proceedings of Extended Abstracts of Human Factors in Computing Systems (CHI 97). ACM Press, pp.20-21.

15. Wellner, P., Mackay, W., & Gold, R. (1993). Computer augmented environments: Back to the real world. Communications of the ACM, 36(7), 24-26.