Sensing, Storytelling, and Children:
Putting Users in Control

Jaime Montemayor, Allison Druin, Gene Chipman, Allison Farber, and Mona Leigh Guha

University of Maryland, Human-Computer Interaction Lab

monte@cs.umd.edu or allisond@umiacs.umd.edu

 

 

Over the past few years, researchers have been exploring possibilities for how embedded sensors can free children from traditional interaction strategies with keyboards and mice. In this paper, we consider sensing-based interactions from a child’s perspective: that is, how children can decide to handle sensor data and effect state changes in their environment. We will present this in the context of our research on physical interactive storytelling environments for children, describing the system architecture as well as an empirical study of the technology’s use with 18 children, ages 5-6. We will discuss the challenges and opportunities for kindergarten children to become designers of their own sensing-based interactions.

 

 

1. INTRODUCTION

Today it is quite common for sensors to be embedded in the everyday objects of children. When children play with stuffed animals (e.g., Druin et al. 1999; Maddocks 2000; Strommen 1998; Umaschi 1997), Lego blocks (e.g., Martin et al. 2000), or musical instruments (e.g., Lamb & Buckley 1984; Roh & Wilcox 1995), or even sit on toilets in their bathrooms (e.g., Druin 2002), they may have a sensing-based interaction. In the past few years, researchers have been exploring the possibilities for how embedded sensors can free children from traditional interaction strategies with keyboards and mice (e.g., Back et al. 2001; Frei et al. 2000). These sensing-based interactions combine the power of computation with the familiarity of a child’s world, and what emerges can be anything from exploratory play to storytelling to learning (e.g., Alborzi et al. 2000; Strommen 1998; Wyeth & Wyeth 2001).

However, these sensing-based interfaces are not without technical, design, and social challenges (Weiser 1991; Bellotti and Edwards 2001; Bellotti et al. 2002; Shafer, Brumitt, and Cadiz 2001). Depending on how precise the sensors are, children can have experiences that range from frustrating to scary. For example, automated toilets, which commonly flush when least expected, can scare a young child who is transitioning from diapers to toilet training (Druin 2002). This sensing imprecision, while merely annoying for adults, can lead to a lack of self-confidence and real fears in children. As Strommen (1998) pointed out, even with stuffed animals, children must feel in control. In usability studies, researchers looked at when and how much children could interrupt a peek-a-boo game with Actimates Barney. They found that children wanted the ability to wave their hands over Barney’s eyes and stop any song or story in favor of playing peek-a-boo. When offered control of their sensing-based interaction, no matter how small, children felt the technology supported them in appropriate ways.

Another challenge when developing sensing-based interfaces is the actual physical ability of the young people using the technology. A growing child’s physical coordination is not like an adult’s (Thomas 1980). Developing gross motor coordination makes it difficult to know whether a child will wave her hands precisely over the right spot, or whether she will press hard enough, or press too hard, in other areas. The notion that embedded sensors should be subtle or unobtrusive may also not be appropriate for children; instead, ruggedness, flexibility, and control may matter more.

In this paper, we consider sensing-based interactions from a child’s perspective. That is, the child decides how to handle sensor data and how to effect state changes in her environment. We will present this in the context of our research on physical interactive storytelling environments for children. We will show, through an empirical study, that our design artifacts, such as magic wands and physical icons, combined with a physical programming metaphor, can transform kindergarten children into designers of their own sensing-based interactions.

 

2. PRIOR RESEARCH

Since the late 1980s, researchers have been working on interaction models that can free users from being tethered to the classical computer monitor, keyboard, and mouse (e.g., Weiser 1991; Mackay et al. 1993; Ishii & Ullmer 1997; Montemayor et al. 2002). Regardless of their architectures, these systems must include sensors, actuators, and one or more controllers or watchdogs. In addition to the many well-known technical challenges in scale, context awareness, gesture recognition, networking, location tracking, sharing, and software infrastructure (Weiser 1991; Weiser 1993; Salber, Dey, and Abowd 1998; Abowd and Mynatt 2000; Trevor, Hilbert, and Schilit 2002), researchers have begun to consider intriguing issues specifically related to the sensing component. For example, what effects do sensors have on the interactions between users and the system (Bellotti et al. 2002)? What happens if data does not exist to support a particular social context? How and when should a context-aware system defer to a human for decision-making (Bellotti and Edwards 2001)? These are all critical questions that are just now emerging in the development of technologies that rely on sensing-based interactions.

An area of research where applications depend on embedding sensing technologies into the environment is “ubiquitous computing.” These applications are being developed for a variety of social contexts. Examples can be found in museum tours (e.g., Aoki and Woodruff 2000; Fleck et al. 2002; Semper 1990), conferences (e.g., Dey, Abowd, and Salber 2001), personal hygiene (Druin 2002), and storytelling (e.g., Druin and Perlin 1994; Bobick et al. 1999; Alborzi et al. 2000; Montemayor et al. 2000).

Since the 1970s, researchers have been exploring computational possibilities in a child’s physical world. Leaders at MIT combined the Logo children’s programming language with mechanical turtles, LEGO gears, motors, and programmable bricks. In more recent years, their work has been commercialized in the popular Mindstorms Robotics Invention System (Martin et al. 2000). Other researchers have concentrated on robotic stuffed animals that enable children to listen to stories or tell their own; such research initiatives include the MIT Media Lab’s SAGE (Umaschi 1997) and the University of Maryland’s PETS (Druin et al. 1999). Commercial products have also become commonplace, from Microsoft’s Actimates Barney (Strommen 1998) to Tiger Electronics’ Furby (Maddocks 2000). University researchers have also been developing interactive spaces that make use of sensing-based interactions. While this research has generally been developed for adult audiences, it has become more common to focus on children as users [e.g., NYU’s Immersive Environments (Druin & Perlin 1994), MIT’s KidsRoom (Bobick et al. 1999), and the University of Maryland’s StoryRooms (Alborzi et al. 2000; Montemayor et al. 2002)].

 

3. STORYROOMS AND PHYSICAL PROGRAMMING

We have found that most physical storytelling spaces that rely on sensor-based interaction are the products of adults’ imaginations, not children’s (Montemayor et al. 2002). As participants in an experience, young people are generally only able to choose among a few pre-created options. It is as if children were allowed only to read books, never to tell their own stories. There is educational value in reading what others have written, but the act of authoring can offer children creative, problem-solving opportunities that are also critical to their cognitive development (Given & Barlex 2001; Roschelle et al. 2000). Therefore, when we began our research three years ago in developing enabling technologies for sensing-based interactive environments, our priority was to support children as storytellers and builders from the very start of their physical experience.

The actual StoryRoom technologies and interface experiences have evolved over the years, from screen-based programming languages to an entirely physical metaphor for storytelling (Montemayor et al. 2000). In our current work, to create a StoryRoom story, children start by verbally telling a story. They then make or find props that support the storytelling experience (e.g., stuffed animals, cardboard boxes). Once they have the physical items for their story, they arrange the props in the room along with the physical icons (e.g., sensors and actuators). It should be noted that all sensors and actuators are obvious, non-embedded, physical icons: they can be placed anywhere on top of any prop, and they are not meant to be subtle to the user. Children typically finish authoring by programming the interaction rules with a special “magic wand.”

Figure 1: Examples of props and physical icons used in the “Irene” story. The cottage and a hand icon are in the foreground. The snake and a light icon are in the back.

 

3.1 An example of creating a StoryRoom: The Irene Story

The following story was used as part of our empirical study: In the “Irene Story,” a little girl named Irene is lost in the woods and stops to ask the people in a cottage, a mouse, and a koala bear if they know where her house is. None of them do, but they ultimately lead her to a snake in a cave that is able to help her find her house.

 

3.2 Make and find props

The Irene Story contains a cottage built from cardboard sheets and swatches of felt fabric; a stuffed mouse; a stuffed koala bear; a cave that is a cardboard box with a hole in it; and a snake made out of foam (Figure 1).

 

3.3 Arrange the props and physical icons in a room

A foot icon (touch sensor) is placed next to the cottage. A blinking arrow (actuator) is placed next to the mouse. A hand icon (touch sensor) is placed next to the koala bear prop, and a wind actuator and a light actuator are placed next to the cave (Figure 2). To support the story, we want the foot icon to trigger the blinking arrow, and the hand icon to trigger both the wind and the light.

Figure 2: The completed setup for the story. The props include the cottage, the mouse, the koala bear, and the snake inside the cave. The foot icon is a contact sensor that was programmed to trigger the blinking arrow by the mouse. The hand icon was programmed to trigger the sun icon (light) and the wind icon (fan).

 

3.4 Program the interaction rules

The StoryRoom has two distinct modes: authoring and playback. In the authoring mode, the Programming System captures activities and saves condition-action pairs into a database. In the playback mode, the system monitors sensor events and consults this database to trigger actuators. A child initiates the authoring mode by becoming a wizard: she takes a wizard’s hat and a magic wand from a magic table and puts on the hat. By returning the hat to the magic table, she turns off the authoring mode (Figure 3).

Figure 3: A child creating interaction rules. By wearing the wizard’s hat, she knows that she can create magic. The magic wand gives her the power to create “invisible” wires to connect different icons. Here, she is waving the wand over a physical hand icon.

 

To create relationships among the physical icons, the wizard waves the magic wand over the icons that she wants to be within a group. For example, if the wizard wants a blue light to turn on whenever a red hand is pressed, she first presses the “new-spell” button on the wand (Figure 4). Then, she waves the wand over both the blue light and the red hand. To the child wizard, it is as if she has just created “invisible wires” between these icons, so that the red hand now controls the blue light.

Figure 4: The new-spell button on the magic wand lets children create multiple independent interaction rules. The yes and no sides are modifiers to the wand’s selection action: yes means include an icon’s positive action in a rule, and no means include its negative action. (If an icon is not selected by the wand, it is treated as a don’t-care.) Because of time limits, the children in our study did not explore the no side of the wand.

 

In the case of the Irene story, the child wizard presses the new-spell button, then waves the magic wand over the foot icon and the purple arrow icon. Next, she repeats the same actions, this time with the hand, wind, and light icons. She concludes by returning the hat and the wand to the magic table.
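To make the spell metaphor concrete, the sketch below shows, in Python, how a programming system of this kind might record spells as condition-action pairs in authoring mode and replay them in playback mode. It is a simplification under our own assumptions, not our actual implementation; the class, method names, and event model are hypothetical. Note that a spell containing several sensors fires only when all of its sensors are active at once, a behavior that matters in the case studies reported later.

    class StoryRoom:
        def __init__(self):
            self.rules = []           # database of condition-action pairs ("spells")
            self.active = set()       # sensors currently pressed
            self.authoring = False    # True while the wizard's hat is off the table
            self.current_spell = None

        def hat_taken(self):          # the child puts on the wizard's hat
            self.authoring = True

        def hat_returned(self):       # the hat goes back on the magic table
            self.authoring = False
            self.current_spell = None

        def new_spell_pressed(self):  # the new-spell button starts an independent rule
            self.current_spell = {"sensors": set(), "actuators": set()}
            self.rules.append(self.current_spell)

        def wand_waved_over(self, icon, kind):
            # Waving the wand adds an icon to the current group, creating an
            # "invisible wire" between its sensors and actuators.
            if self.authoring and self.current_spell is not None:
                key = "sensors" if kind == "sensor" else "actuators"
                self.current_spell[key].add(icon)

        def sensor_changed(self, icon, pressed, fire):
            # Playback mode: a spell fires when all of its sensors are active;
            # icons absent from a spell are don't-cares.
            if self.authoring:
                return
            (self.active.add if pressed else self.active.discard)(icon)
            if not pressed:
                return
            for rule in self.rules:
                if icon in rule["sensors"] and rule["sensors"] <= self.active:
                    for actuator in rule["actuators"]:
                        fire(actuator)

    # Programming the Irene story as two independent spells:
    room = StoryRoom()
    room.hat_taken()
    room.new_spell_pressed()
    room.wand_waved_over("foot", "sensor")
    room.wand_waved_over("purple_arrow", "actuator")
    room.new_spell_pressed()
    room.wand_waved_over("hand", "sensor")
    room.wand_waved_over("wind", "actuator")
    room.wand_waved_over("light", "actuator")
    room.hat_returned()
    room.sensor_changed("foot", True, fire=print)   # -> purple_arrow
    room.sensor_changed("hand", True, fire=print)   # -> wind and light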

The final Irene StoryRoom works as follows: When children enter the Irene StoryRoom, they see the icons and props set up in a semi-circle that follows the chronological order of the story. A researcher acts as the narrator and helps them through the environment. First, she turns on the story by flipping the “once-upon-a-time lever.” She then leads the children to the cottage, next to which is the foot icon. She begins, “This story is about Irene, a little girl who is lost in the woods and cannot find her house. Irene asks the people in the cottage if they know where her house is, but they do not. Irene sees a strange foot and pushes on it.” The researcher asks the children to press the foot. This activates the blinking purple arrow light next to a stuffed mouse. The children then see a blinking arrow pointing to the mouse. The researcher continues, “Irene asks Mr. Mouse if he knows where her house is. Mr. Mouse says no, but that she should ask Mr. Koala.” The children run to Mr. Koala, who has the hand icon near him. The researcher says, “Irene then asks Mr. Koala if he knows where her house is. Mr. Koala says no, but that she should ask Mr. Snake in the cave.” The children press on the hand icon, which activates the fan and light placed near the snake prop in the cave. The children run over to the cave and are told, “Irene asks Mr. Snake if he knows where her house is. Mr. Snake says yes, just turn around and go ten feet and there it is.”

The Irene story is a simple example of the kind of story that can be created using physical programming. The underlying technologies that enable this experience will be discussed in the section that follows. This paper will conclude with a summary of how children used these technologies to explore existing StoryRooms and to create their own.

 

4. THE SYSTEM ARCHITECTURE

Physical interactions are fundamental in the StoryRoom, whether they occur 1) between the physical icons and children, 2) among the physical icons themselves, 3) between props and children, or 4) among the children themselves. We designed a system to support the first two cases, which require embedded devices (within the physical icons) and a communication protocol to control them. In doing so, we came to understand that these devices had to be rugged, durable, and predictable in behavior.

4.1 Embedded Devices

The embedded devices, or “icon controllers,” consist of several components (Figure 5):

1. A printed circuit board (PCB) with a micro-controller and various general circuits for communications and sensors.

2. A battery circuit board.

3. A wireless module.

4. A driver circuit board with custom circuits for controlling the sensors and actuators of a specific type of StoryRoom icon.

 

Figure 5: The components of a StoryRoom icon controller (left) and the components of a StoryRoom icon.

 

The multi-layer PCB micro-controller and battery circuit boards were of our own design and were professionally manufactured. The polymer rechargeable battery, with a packaged protection circuit, provides a minimum of 4 hours of operation without recharging. The driver circuit boards were built in our lab as needed from basic electronic components, and use external batteries to drive the higher-power actuators such as lights and motors. These four components stack vertically into a single package less than 1” in height and are enclosed within a 2” × 4” plastic box embedded in a foam iconic shell.

We selected a sophisticated wireless modem, the WIT2410 wireless module from Cirronet, Inc. [www.cirronet.com], primarily because it has extremely low latency. It also offers good bandwidth, reasonable size and power consumption, a package that eliminates most of the RF design challenges, and on-board management of the wireless protocol.

 

4.2 Communication Protocol

A StoryRoom application runs on a single computer; it monitors the activities of the environment and controls the states of the icon controllers. Communication between the StoryRoom application and the physical icons follows a three-layered protocol (Figure 6) similar to the TCP/IP network model (Tanenbaum 1996): 1) the wireless layer, similar to the link and IP network layers in the TCP/IP model, 2) the network layer, similar to the TCP network and transport layers, and 3) the application layer. The WIT2410 modules provide the wireless layer. Network layer software, running on both the icon controllers and the computer with the StoryRoom application, provides delivery of application layer messages. Application layer software in the icon controllers executes incoming application messages and generates outgoing ones as needed.

 

Figure 6: The StoryRoom Network model.

 

4.2.1 The Wireless Layer

The WIT2410 is configured to operate in a point-to-multipoint mode in which the base unit (attached to the computer running the StoryRoom application) can send data to and receive data from each remote (attached to an icon controller). Remotes send and receive data only with the base. When the base transmits a broadcast, every remote unit receives the data. Units share the RF channel using time-division multiple access (TDMA).

4.2.2 The Network Layer

The network layer manages delivery of messages regardless of whether they originate from the application or an icon; however, a different packet structure is used for each direction. Packets originating at the application can carry multiple messages and can be destined for a single icon or broadcast to groups of icons (or all icons). Packets originating at an icon carry only a single message to the application. Both packet types carry a sequence number that counts the packets sent by a specific origin. The sequence number can be used to check the order of arriving messages and is necessary for a future implementation of the network layer that will provide guaranteed delivery.
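As an illustration, the sketch below (in Python) shows the two packet shapes and the role of the sequence number. The byte layout is a hypothetical choice of ours; the actual field sizes and ordering are internal details not reproduced here.

    import struct

    # Application -> icon(s): one header, possibly several messages; the
    # destination may name a single icon, a group, or all icons (broadcast).
    def pack_app_packet(dest, seq, messages):
        header = struct.pack("<BHB", dest, seq, len(messages))
        return header + b"".join(messages)

    # Icon -> application: exactly one message per packet.
    def pack_icon_packet(origin, seq, message):
        return struct.pack("<BH", origin, seq) + message

    # The per-origin sequence number lets the receiver detect missing or
    # out-of-order packets; a future network layer could build guaranteed
    # delivery on top of it.
    def in_order(last_seq, seq):
        return seq == (last_seq + 1) % 0x10000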

4.2.3 The Application Layer

The application layer provides a message format for the application to configure and control icons, and for icons to provide both polled and event-driven information to the application. Messages can contain instruction codes, service codes, and data. The instruction code determines the function of the message. The application can generate instructions for setting or requesting the status of service parameters, setting default service values for an icon, or issuing a reset command to an icon. Icons can generate instructions for registering with the application and for reporting application-requested or icon-generated information. Service codes are used for icon-specific functions.
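The sketch below illustrates this message dispatch on an icon controller. The instruction codes, field names, and Icon class are hypothetical stand-ins for our internal definitions, not the actual values.

    # Hypothetical instruction codes.
    SET_SERVICE, GET_SERVICE, SET_DEFAULTS, RESET = 0, 1, 2, 3  # application -> icon
    REGISTER, REPORT = 4, 5                                      # icon -> application

    class Icon:
        def __init__(self):
            self.services = {}   # current service parameter values
            self.defaults = {}   # power-on default values

        def reset(self):
            self.services = dict(self.defaults)

    def handle_app_message(icon, instruction, service, data):
        # Runs on an icon controller when an application message arrives.
        if instruction == SET_SERVICE:
            icon.services[service] = data                 # e.g., turn a light on
        elif instruction == GET_SERVICE:
            return (REPORT, service, icon.services.get(service))
        elif instruction == SET_DEFAULTS:
            icon.defaults[service] = data
        elif instruction == RESET:
            icon.reset()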

4.3 Icon Controller Hardware and Software

The PIC17C756A micro-controller from Microchip, Inc. [www.microchip.com] is used on the icon controller board. In addition to the micro-controller, the board has several support circuits, including an RS232 driver and two 7-segment displays for debugging. One of the micro-controller’s serial ports is dedicated to the WIT2410 wireless module. The other serial port is used for communication with the application computer, in the case of the base unit, or for control of sensor devices as needed in StoryRoom icons.

The digital input/output of the micro-controller is used for sensors and actuators. The driver circuit board has custom circuits depending on the device to be controlled. A common sensor used in StoryRooms is a switch whose output runs through a latch circuit, which in turn drives one of the micro-controller’s digital inputs. When an input change is detected, the appropriate message is sent to the application and the latch is cleared, ready for the next event. Actuators can be as simple as lights, which the controller drives through a transistor circuit supplying external battery power. Other driver circuits include a motor driver and a circuit to drive glow fiber.
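The logic of this latch-and-report cycle is simple; the sketch below shows it in Python for brevity. The actual firmware runs on the PIC micro-controller, and read_latch, clear_latch, and send_event are hypothetical stand-ins for its input and radio routines.

    def sensor_loop(read_latch, clear_latch, send_event, icon_id):
        while True:
            if read_latch():         # the latch has captured a switch press
                send_event(icon_id)  # report the event to the application
                clear_latch()        # re-arm the latch for the next press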

In order for StoryRooms to be a physically programmable environment, it is necessary to have physical tools that act on physical icons. The most basic tool is the “magic wand,” which allows children to create icon groups (Figure 7); a group is analogous to a programming statement in traditional approaches. The wand must be able to detect the proximity of other icons and identify them. We used a radio frequency identification (RFID) system from SkyeTek, Inc. This system detects and identifies RFID tags, which are inexpensive, passive, credit-card-sized pieces of paper and wire that can be inserted into StoryRoom icons. The system is controlled through the micro-controller serial port, and RFID reader data is translated into a message for the StoryRoom application. The magic wand can detect and identify other StoryRoom icons consistently from a range of about 4”.
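In outline, the wand behaves like the following sketch (Python). Here read_tag and send_to_app are hypothetical stand-ins for the SkyeTek reader interface and the radio link, and the duplicate-read filter is our illustrative simplification.

    def wand_loop(read_tag, send_to_app):
        last = None
        while True:
            tag = read_tag()          # ID of an RFID tag within range, or None
            if tag is not None and tag != last:
                send_to_app(tag)      # the application adds this icon to the spell
            last = tag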

 

Figure 7: The Magic Wand and an underlying RFID reader.

 

5. AN EMPIRICAL STUDY

Over a one-month period in the fall of 2002, we used the StoryRoom technology and physical programming with 18 children (ages 5-6) in an initial empirical study. The children who participated were racially and ethnically diverse, varied widely in their academic ability, and were in the kindergarten program at the Center for Young Children (CYC), an early childhood center located on the campus of the University of Maryland. The children worked in pairs with the StoryRoom technology in the Great Room (a large open space in the middle of the CYC), with our team of four adults, in sessions that lasted approximately 20 minutes. The pairs were diverse in gender, race, and ethnicity and remained the same throughout the research.

The purpose of this study was to determine the effectiveness of StoryRooms as a storytelling tool with young children. To understand this, we asked three questions:

1. Can a child participate (including retelling) in an already created StoryRoom?

2. Can children program using physical programming?

3. Can the children use physical programming to create an original StoryRoom?

These questions came out of a prior pilot study that explored them with “Wizard of Oz” technologies (Montemayor et al. 2002).

For this empirical study, we designed three separate activities using StoryRoom technology in order to answer each of the three questions above.

 

5.1 Can a child participate in an already created StoryRoom?

The activity designed to answer this question provided the children with their first exposure to StoryRooms. An adult told children a story using the StoryRoom (the Irene story described in the “StoryRooms and Physical Programming” section) and then invited the children to retell the story.

We collected data from these sessions by videotaping and taking observational notes. To analyze the data, we developed a coding scheme grounded in what emerged from our work with the children. Two members of our team initially coded the tapes, after which the codes were refined. We established interrater reliability by having two team members code 33% of the data; the single coding discrepancy was resolved, and one team member then coded the rest of the data.

In analyzing the tapes, we first determined if children were engaged – that is, if they listened to and/or observed the storytelling researcher for a majority (more than 50%) of the time while being told the Irene story. We also determined if the children reacted when something “magic” (e.g., the fan and light turning on when the hand was pushed) happened.

Through our analysis, we determined that 100% of the children were able to participate fully in this previously created StoryRoom. This laid the groundwork for the next part of the activity, in which the children were asked to retell the Irene story to a new researcher.

During retelling, we looked for the children’s ability to recall and retell the main events of the story and to use the StoryRoom elements in order to do so. Because the children were functioning as a pair in this activity, points were given when at least one of the children completed a task. Data from one pair of children had to be eliminated from this activity due to poor video quality.

Across the remaining 8 pairs, ability to use the StoryRoom varied much more on retelling than on participating. All pairs showed a basic understanding of how to use the StoryRoom to retell a story, but the range of retelling ability was wide, with pairs scoring from 7 to 14 of the possible 16 points (Figure 8). We attributed this to the fact that children needed varying amounts of adult guidance in retelling, and our codes were based on the presence or absence of that guidance: a pair received two points for completing a task independently, one point for completing it with adult guidance, and no points if the task went uncompleted. Therefore, a pair that received 8 of the possible 16 points retold the story with a great deal of adult guidance, while a pair that received 14 points did so with minimal adult guidance.

Figure 8: The number of total points (out of 16) that each pair scored on retelling the Irene story. The range of scores is from 7 to 14, showing that children were able to retell the story with varying degrees of adult support.

 

It should be noted that adult prompting, such as “What’s next?” was not coded as adult guidance as it was not related specifically to the children’s ability to use the StoryRoom. More specific prompting, such as “What do you do with that foot?” was coded as adult guidance as this prompt was specific to the use of the StoryRoom.

 

5.2 Can children program using physical programming?

In order to answer this question, we placed a large pile of StoryRoom actuators and sensors on the floor along with the “play story” switch. The magic wand and hat were set up on a table. The sessions began with a researcher showing the children how to program the StoryRoom actuators and sensors using the magic hat and wand (as described in the “StoryRooms and Physical Programming” section).

Children were then each given the opportunity to define interactions for a StoryRoom. Because each child was given a turn to define these interactions on his or her own, the coding for this activity was done by individual rather than by pair. One pair was absent, so 16 children participated in this activity.

We again analyzed the videotape of each child to determine if he or she took the necessary steps to program a StoryRoom. These steps included:

1. putting on the magic hat

2. using the wand to connect at least one actuator to one sensor

3. taking off the magic hat

4. turning “on” the play story switch in order to test the program

5. turning “off” the play story switch to end the activity.

We also tried to determine if each child understood the connections that he or she had made. Because every child (A) was working with a peer (B) nearby, B often stepped in and completed a task before A had a chance to do so. In that situation, a k designation was given to A for the part of the programming completed by B, and any task designated k was not figured into the child’s final score. Nearly half (7 of the 16) of the children had at least one k in their score.

All children scored between 44% and 86% on programming (Figure 9). Because on most tasks our coding scheme awards full points for independent completion and partial points for completion with adult guidance, these numbers show that most of the children were able to program with some degree of adult guidance, and that all had some understanding of what to do in order to control the interactions of sensors and actuators in a StoryRoom. Interrater reliability was established for the coding of this activity by having two team members compare 25% of their coded data; there were no discrepancies, so one team member finished the coding.

 

Figure 9: Percentage of possible points that each child scored on physical programming.

 

5.3 Can the children use physical programming to create an original StoryRoom?

This question necessitated a much more in-depth approach than the previous two. In order to create a StoryRoom, each pair would have to come up with a story, create the relevant props out of common art supplies, set up their story in the room, and program the interactions. We chose to work with only two pairs for this activity, but to work with them in depth. Because we found that almost all of the children could program with some degree of adult support, we analyzed results from the retelling portion of activity one in order to select pairs for this case study. The two pairs we chose had the highest and lowest scores on retelling the Irene story. In this way, we hoped to better understand the abilities of children at each end of the spectrum of StoryRooms use.

Because of the case-study nature of this task, we observed each pair in detail. To analyze the children’s stories, we asked five questions about the process of creating a StoryRoom.

1. Can the children create a story with a plot, characters, and a setting?

2. Can the children make appropriate props for their story?

3. Can the children program their StoryRoom?

4. Can the children appropriately integrate the StoryRooms technology into their story?

5. Can the children play or retell the StoryRoom they created by telling a story involving props and aided by the use of the StoryRooms technology?

 

5.3.1 Case Study One: Bobby and Dennis

The first pair chosen was the highest-scoring pair on activity one. These two Caucasian boys, Bobby and Dennis (not their real names), are both age five. Both come from two-parent homes and have no siblings, and it is the third year at the CYC for each boy. They worked with us for four consecutive days, for approximately 45 minutes each day, on the task of creating a story using StoryRooms.

On the first day, Bobby and Dennis came up with the plot for their story, which follows. The letters in parentheses show who conceived each part of the story. Italics indicate a mention of how physical programming devices could be integrated into the story.

A little girl was combing her hair when the sink came on all by itself (D). She knew her dog could help her find what was wrong with the sink (researcher). The dog’s name was Rocket (researcher). Rocket didn’t know what happened (D). We could put the purple arrow next to the sink so everybody will know that the sink is broken (B). There was a bad ghost in the sink (D). The girl scared the ghost away with the mask (D). The ghost ran away to a cave (B).

 

Bobby and Dennis decided that they would need a brush, a ghost, a dog, a sink, a cave, and a mask as props for their story (Figure 10). Using low-tech art supplies, Bobby and Dennis worked with us to construct these props. On the second day, they used their props and StoryRooms icons to set up the story. Both boys were given a chance to set up and program the story, and each initially wanted to arrange the props and icons differently. After coming to a consensus, they programmed the story and practiced telling it to us.

 

Figure 10: Some props in Bobby and Dennis’ story. From left to right, the ghost, the mask for scaring the ghost away, Rocket the dog, the sink, and the comb. Adjacent to the sink are the foot and arrow icons.

 

On the third and fourth days, Bobby and Dennis set up their StoryRoom and told the story to selected classmates and their teachers (Figure 11). The classmates often got involved in the StoryRoom by asking questions about the story (e.g., “Once upon a time what?”) or by pushing icons themselves, and by participating on the floor with Bobby and Dennis. The teachers remained seated in chairs when listening to the story, but were quite engaged with their students’ work.

Figure 11: Bobby and Dennis sharing their story with classmates.

 

5.3.2 Case Study Two: Mary and Shelly

Mary and Shelly (not their real names) were chosen so we could understand the potential for StoryRooms with a pair of children who scored lower on the retelling. Mary and Shelly are both five-year-old girls, and both come from two-parent homes. Mary is Chinese-American, is bilingual, speaks Chinese at home, and has an older brother; it is her second year at the CYC. Shelly was born in Korea, moved to the U.S. with her parents and her younger brother one month before the school year began, and is in the process of learning English; it is her first year at the CYC. We worked with Mary and Shelly for three consecutive days for approximately 45 minutes each day.

On the first day, Mary and Shelly were given the same prompts as Bobby and Dennis and were asked to come up with a story that they could tell using the StoryRooms technology. The story they chose was a retelling of a story that they saw on Dragon Tales, a popular animated television show for children in the United States. Although we asked Mary and Shelly several times to tell an original story, the girls wanted to tell the Dragon Tales story. Here is their story.

Max and Emmy moved to a new house and they found a magic wish in a drawer (M). They made the wish come true by saying “I wish I wish with all my heart to fly with dragons in a land apart” (S). This wish took them to Dragon Land (M). There they went to Dragon School (M). At the dragon school they met lots of dragons like Zack, Weezie and Ord (M). (Note: there was no mention of how physical programming devices could be integrated into the story.)

 

Mary and Shelly decided that in order to tell this story, the props they would need were Max and Emmy’s house, a drawer for a magic box, the dragon school, and dragons. Working with low-tech art supplies and with our help, the girls made the drawer with the magic box, the dragon school, and some dragons.

On day two, Mary and Shelly were given their props and asked to program the StoryRoom. They set up the props and icons around the room, but in two different places: one for the props that they made and one for the StoryRoom icons. There was no apparent connection between the prop group and the icon group. Mary flipped the play story switch and expected the icons to work before either she or Shelly had programmed them. Mary needed explicit prompting from adults to remember that she needed to program the icons. One adult asked, “How are you going to get magic in those (the icons)?” and another hummed the music that plays during programming before Mary remembered that she needed to use the hat and wand to program. When Mary did use the wand to program, she connected all of the sensors and actuators in one command, which meant that only pushing all of the sensors at one time would cause all of the actuators to go off; pushing one sensor alone would not trigger any actuator. Mary did remember to flip the play story switch in order to test the icons, but did not realize that she had connected all of the sensors to all of the actuators. She pushed on one sensor at a time, expecting something to happen. During this time, Shelly was not paying attention to the StoryRooms task.

When she tried programming again, Mary connected two hands, a foot, and two arrows. An adult then asked, “What do you do if you’re done with that spell?” to prompt Mary to use the new-spell button to create a new command. At this point, Mary put the hat and wand away, ending the programming mode. Because of the manner in which Mary connected the sensors and actuators, she would have to push on both hands and the foot at the same time in order to activate both arrows. She tried pushing a hand and the foot, and then tried just a hand. We can therefore assume that Mary did not understand how to “play” the StoryRoom she had just made.

When again prompted to use the props and icons together, Mary placed an arrow pointing to the school prop that the children had made, but also placed another arrow pointing to a foot, which is a StoryRoom icon and not a prop. This is significant because it shows that Mary was not distinguishing between the functions of props and icons.

On day three, Mary and Shelly again had trouble programming. Shelly was more engaged on this day, but when magic was mentioned, she pantomimed sprinkling magic dust on the icons. She also said “abracadabra” when using the magic wand with the icons and said that this was what made them work. Mary and Shelly spent much of this day repeatedly picking up and putting down the magic hat, which turns the programming music on and off, and flipping the play story switch on and off, which caused it to repeatedly say “once upon a time” and “the end.” Shelly also appeared to enjoy it when the StoryRoom gave auditory feedback (such as “yellow foot” when the yellow foot was pressed). When asked again to tell their story, the girls used their props but not the StoryRoom icons to tell a Dragon Tales story, this time telling a different story than the one they had planned.

 

5.3.3 Case Study Analysis

The two pairs of children in the case study performed very differently in their attempts to create StoryRooms. Both pairs created stories with plots and made appropriate props that suggested a setting and characters. However, the disparities between the pairs became apparent when it came to programming and integrating the technology with their narratives. Bobby and Dennis were able to create independent interaction rules; Mary and Shelly were unable to perform this task and instead programmed all of the actuators to go on at once. Furthermore, Bobby and Dennis were able to integrate the StoryRooms technology into their story. Mary and Shelly were not: they programmed the icons separately from the story and did not relate the StoryRooms icons to the props or events in their story. Finally, Bobby and Dennis were able to retell their story to their peers and teachers using the StoryRooms technology. Mary and Shelly did not progress that far in their storytelling experience.

 

6. LESSONS LEARNED

Through this research, we learned valuable lessons that will help direct our future work with StoryRooms and sensing technologies for children. Children may need more prompting when using physical programming technologies such as StoryRooms. Our work with Mary and Shelly taught us that providing more feedback during physical programming can help children be more successful, and future versions of our technologies could support this. For example, the magic wand could provide an audible cue when children finish a spell or start a new one. In addition, the icons could visually show a child whether they had been connected to another icon, or a new tool could let children see which icons had been connected in a programming statement. On the other hand, the novelty of the StoryRoom can sometimes keep children from using it for its intended purpose of telling stories. For example, Mary and Shelly spent a lot of time picking up and putting down the wizard hat in order to make the ambient music start and stop.

We have also learned that for children to understand, predict, and control the interactions in their environment, it may be necessary to expose the system components. Even so, Mary and Shelly had trouble integrating the sensors and actuators with the props into one physical story. This may be due to their inability to abstract certain programming concepts, but it may also have to do with the system’s characteristics. The icons may have been too easily identifiable as, say, a foot or a hand. Some children, such as Mary and Shelly, tended to focus on those surface characteristics (as in, “I see a foot”) and forget that the item had another purpose: to be an interactive proxy for a prop. In some cases, this may have been due to the relationship of the sensing devices to the physical environment. For instance, a child may place a large icon next to a small cottage, making the icon more visually important than the prop.

On the other hand, some exposed components may not have been as obvious as they needed to be. For instance, Mary simply could not relate the new-spell button to the programming activity of creating a new interaction group; this button was perhaps not an appropriate metaphor for more challenged children. This suggests that the visual metaphors for sensors and actuators need to be carefully considered. So while we have revised our physical interfaces many times, further revision of our system is still needed.

Another obstacle for children using these technologies may have been the system’s ruggedness and reliability. There were times when our current RFID system could not respond correctly to children’s natural movements (e.g., heavy pounding on sensors, constant repetition). A lack of timely feedback can lead to unpredictable technology behavior, which we find can confuse children quite quickly. Our research confirmed that these technologies must be extremely rugged and flexible for children to control them in ways that are cognitively and physically appropriate.

In summary, developing technologies for children’s sensing-based interaction can be a challenge. Children demand extremely reliable, rugged, and flexible technologies they can control. In addition, a balance needs to be struck between visible concrete metaphors for these technologies and integrating these technologies into the environment for storytelling. We look forward to understanding future paradigms in programming and physical interaction with children.

 

ACKNOWLEDGMENTS

This work could not have been accomplished without an NSF CAREER grant supporting Dr. Druin’s research on "The Classroom of the Future" and the collaboration of the child design partners in our lab and the children who used the technologies at the Center for Young Children (CYC). We are also grateful to the many teachers we have worked with at the Center and to the Center’s Director, Fran Favretto. In addition, the underlying technologies could not have been developed without the untiring expertise of Sante Simms, Wayne Churaman, Nacer Lataabi, Harry Singh, and Daniel Cabrera. To them we are indebted.

 

REFERENCES

ABOWD, G. D. AND MYNATT, E. D. 2000. Charting past, present and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction, Special issue on HCI in the new Millennium 7, 1 (March), 29–58.

ALBORZI, H., DRUIN, A., MONTEMAYOR, J., PLATNER, M., PORTEOUS, J., SHERMAN, L., BOLTMAN, A., TAXÉN, G., BEST, J., HAMMER, J., KRUSKAL, A., LAL, A., PLAISANT-SCHWENN, T., SUMIDA, L., WAGNER, R., AND HENDLER, J. 2000. Designing storyrooms: Interactive storytelling spaces for children. In Proceedings of Designing Interactive Systems (DIS-2000). ACM Press, 95–104.

AOKI, P. M. AND WOODRUFF, A. 2000. Improving electronic guidebook interfaces using a task-oriented design approach. In DIS'00: Designing Interactive Systems: Processes, Practices, Methods, & Techniques, 319-325.

BACK, M., COHEN, J., GOLD, R., HARRISON, S., AND MINNEMAN, S. 2001. Listen reader: An electronically augmented paper-based book. In Proceedings of Human Factors in Computing Systems.

BELLOTTI, V. M. E., AND EDWARDS, K. 2001. Intelligibility and accountability: human considerations in context-aware systems. Human-Computer Interaction, Volume 16. Lawrence Erlbaum Associates, Inc., 193-212.

BELLOTTI, V. M. E., BACK, M. J., EDWARDS, W. K., GRINTER, R. E., LOPES, C. V., AND HENDERSON, A. 2002. Making sense of sensing systems: Five questions for designers and researchers. In Proceedings of Human Factors in Computing Systems. ACM Press, 415–422.

BOBICK, A., INTILLE, S. S., DAVIS, J. W., BAIRD, F., PINHANEZ, C. S., CAMPBELL, L. W., IVANOV, Y. A., SCHUTTE, A., AND WILSON, A. 1999. The KidsRoom: A perceptually-based interactive and immersive story environment. In PRESENCE: Teleoperators and Virtual Environments. 367–391.

DEY, A. K., ABOWD, G. D., AND SALBER, D. 2001. A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction, Volume 16, Numbers 2-4. Lawrence Erlbaum Associates, Inc., 97–166.

DRUIN, A. July/August 2002. When technology does not serve children. SIGCHI Bulletin, 34(4), 6.

DRUIN, A., MONTEMAYOR, J., HENDLER, J., MCALISTER, B., BOLTMAN, A., FITERMAN, E., PLAISANT, A., KRUSKAL, A., OLSEN, H., REVETT, I., PLAISANT-SCHWENN, T., SUMIDA, L., AND WAGNER, R. 1999. Designing PETS: A personal electronic teller of stories. In Proceedings of Human Factors in Computing Systems. ACM Press, 326–329.

DRUIN, A. AND PERLIN, K. 1994. Immersive environments: A physical approach to the computer interface. In Proceedings of Human Factors in Computing Systems. Vol. 2. ACM Press, 325–326.

FLECK, M., FRID, M., KINDBERG, T., O’BRIEN-STRAIN, E., RAJANI, R., AND SPASOJEVIC, M. 2002. Rememberer: A tool for capturing museum visits. In UbiComp 2002, 48-55.

FREI, P., SU, V., MIKHAK, B., AND ISHII, H. 2000. Curlybot: designing a new class of computational toys. In Proceedings of Human Factors in Computing Systems. ACM Press, 129–136.

GIVEN, N. AND BARLEX, D. 2001. The Role of Published Materials in Curriculum Development and Implementation for Secondary School Design and Technology in England and Wales. International Journal of Technology and Design Education, 11(2).

ISHII, H. AND ULLMER, B. 1997. Tangible bits: Towards seamless interfaces between people, bits and atoms. In Proceedings of Human Factors in Computing Systems. ACM Press, 234–241.

LAMB, M. AND BUCKLEY, V. 1984. New Techniques for Gesture-Based Dialogue. In Proceedings of IFIP INTERACT'84: Human-Computer Interaction, 135-138.

MACKAY, W., VELAY, G., CARTER, K., MA, C., AND PAGANI, D. 1993. Augmenting reality: Adding computational dimensions to paper. Computer-Augmented Environments: Back to the Real World. Special issue of Communications of the ACM 36, 7.

MADDOCKS, R. 2000. Bolts from the blue: How large dreams can become real products. In Robots for kids: New technologies for learning, A. DRUIN, AND J. HENDLER Eds. Morgan Kaufmann, San Francisco CA, 111-156.

MARTIN, F., MIKHAK, B., RESNICK, M., SILVERMAN, B., AND BERG, R. 2000. To mindstorms and beyond: Evolution of a construction kit for magical machines. In Robots for kids: New technologies for learning, A. DRUIN AND J. HENDLER, Eds. Morgan Kaufmann, San Francisco CA, 9–33.

MONTEMAYOR, J., DRUIN, A., FARBER, A., SIMMS, S., CHURAMAN, W., AND DAMOUR, A. 2002. Physical programming: Designing tools for children to create physical interactive environments. In Proceedings of Human Factors in Computing Systems. ACM Press.

MONTEMAYOR, J., DRUIN, A., AND HENDLER, J. 2000. PETS: A personal electronic teller of stories. In Robots for kids: New technologies for learning, A. DRUIN AND J. HENDLER, Eds. Morgan Kaufmann, San Francisco CA, 367–391.

ROSCHELLE, J. M., PEA, R. D., HOADLEY, C. M., GORDIN, D. N., AND MEANS, B. Fall/Winter 2000. Changing How and What Children Learn in School with Computer-Based Technologies. The Future of Children: Children and Computer Technology, 10(2).

ROH, J. H. AND WILCOX, L. 1995. Exploring Tabla Drumming Using Rhythmic Input. In Proceedings of ACM CHI'95 Conference on Human Factors in Computing Systems, 310–311.

SALBER, D., DEY, A., AND ABOWD, G. 1998. Ubiquitous computing: Defining an HCI research agenda for an emerging interaction paradigm. Tech. rep. GIT-GVU-98-01, Georgia Institute of Technology.

SHAFER, S. A. N., BRUMITT, B., AND CADIZ, J. J. 2001. Interaction Issues in Context-Aware Intelligent Environments. Human-Computer Interaction, Volume 16. Lawrence Erlbaum Associates, Inc., 363–378.

STROMMEN, E. 1998. When the interface is a talking dinosaur: Learning across media with ActiMates Barney. In Proceedings of Human Factors in Computing Systems. ACM Press, 288–295.

TANENBAUM, A. S. 1996. Computer Networks, 3rd ed. Prentice Hall.

THOMAS, J. R. 1980. Acquisition of motor skills: information processing differences between children and adults. Research Quarterly for Exercise and Sport, Vol. 51, No. 1, 158–173.

TREVOR, J., HILBERT, D. M., AND SCHILIT, B. N. 2002. Issues in personalizing shared ubiquitous devices. In UbiComp 2002. 56–72.

UMASCHI, M. 1997. Soft toys with computer hearts: Building personal storytelling environments. In Proceedings of Human Factors in Computing Systems. ACM Press, 20–21.

WEISER, M. 1991. The computer for the twenty-first century. Scientific American, 94–104.

WEISER, M. 1993. Some computer science issues in ubiquitous computing. Communications of the ACM 36, 7.

WYETH, P. AND WYETH, G. 2001. Electronic blocks: Tangible programming elements for preschoolers. In Human-Computer Interaction - INTERACT’01. IOS Press, 496–503.