A Collaborative Digital Library for Children:

A Descriptive Study of Children's Collaborative Behavior and Dialogue

 

Allison Druin, Glenda Revelle, Benjamin B. Bederson, Juan Pablo Hourcade,

Allison Farber, Juhyun Lee, Dana Campbell

 

Human-Computer Interaction Laboratory

University of Maryland

College Park, MD  USA

+1 301 405 7406

allisond@umiacs.umd.edu

http://www.cs.umd.edu/hcil/kiddesign/

 

 

Abstract

Over the last three years, we have been developing a collaborative digital library interface where two children can collaborate, using multiple mice on a single computer, to access multimedia information about animals. This technology, called “SearchKids,” leverages our lab’s past work on co-present collaborative zoomable interfaces for young children. This paper describes the differences in children’s collaborative behavior and dialogue when using two different software conditions to search for animals in the digital library. In this study, half the children had to “confirm” their collaborative activities (e.g., both children had to click on a given area to move to that area). The other half used an “independent” collaboration technique (e.g., a single mouse click moved the pair to that area). The participants were 98 second- and third-grade children (ages 7-9 years old) from a suburban public elementary school in Prince George's County, Maryland. The children were randomly divided into two groups and paired with a classmate of the same gender. Each pair was asked to find as many items as possible from a list of 20 items within a limit of 20 minutes. Sessions were videotaped, and the first and last five minutes of each session were coded for discussion type and frequency. The results of our study showed distinct differences between groups in how children discussed their shared goals and collaborative tasks, and in how successfully they found multimedia information in the digital library. These findings suggest various ways educators might use, and technologists might develop, new collaborative technologies for learning.

 

Keywords

Children, Collaboration, Computer-Supported Collaborative Learning, Digital Libraries, Educational Applications, Single Display Groupware (SDG), SearchKids, Zoomable User Interfaces (ZUIs).

 

Introduction

According to the President’s Information Technology Advisory Committee on Digital Libraries (2001), no classroom, group, or person should ever be isolated from the world’s greatest knowledge resources. They envision a time when citizens anywhere, at any time, can use any Internet-connected digital library to search all of human knowledge. They point out, however, that today’s Internet “only hints at the future of digital libraries” (p. 3). “Making digital libraries easier to use will further help realize their power. We need a better understanding of the requirements for specific tasks and classes of users, and we need to apply that understanding along with new technical capabilities to advance the state of the art in user interfaces” (p. 5).

 


When it comes to children, the promise of digital libraries falls short.  Few technology interfaces for digital libraries have been developed that are suitable for younger elementary school learners (ages 5-10 years old).  Children want access to pictures, videos, or sounds of their favorite animals, space ships, volcanoes, and more. However, young children are being forced to negotiate interfaces (often labeled “Appropriate for K-12 Use”) that require complex typing, proper spelling, reading skills, or an understanding of abstract concepts or content knowledge beyond young children’s still-developing abilities (Druin et al., 2001; Moore & St. George, 1991; Solomon, 1993; Walter et al., 1996).  In recent years, interfaces to digital libraries have begun to be developed with young children in mind (e.g., Nature: Virtual Serengeti by Grolier Electronic Publishing, A World of Animals by CounterTop Software). However, while these product interfaces may be more graphical, none of them specifically addresses collaboration, a critical learning experience for children. Structuring collaborative learning experiences has come to be a priority in many classrooms and is emphasized by diverse curriculum standards (Chambers & Abrami, 1991; Cohen, 1994; Fulton, 1997; Johnson & Johnson, 1999; Lou et al., 2001; Slavin, 1996).  Yet few computer technologies have been developed to support co-present collaboration in the information-seeking domain.  Therefore, in the fall of 1999, we began at the University of Maryland to develop a digital library interface that supports young children in collaboratively browsing and searching multimedia information.
This paper discusses the importance of the collaborative learning experience, the digital library technology we created, and the methods we used to understand the differences between collaborative interface technologies, and it suggests possible future directions for educators and technology developers in creating new technologies that support collaborative learning.

 


Collaboration and Children

Research has shown that, under certain conditions, working together to achieve a common goal produces higher achievement and greater productivity than working alone (e.g., Chambers & Abrami, 1991; Johnson & Johnson, 1999; Lou et al., 2001; Slavin, 1996). A recent meta-analysis of 122 research studies conducted between 1966 and 1999, comparing small-group learning with individual learning using technology, showed that on average small-group learning had significantly more positive effects on achievement than individual learning (Lou et al., 2001).

 

The question of how to structure these cooperative learning experiences is still an important area for research.  There is evidence that incentives need to be put in place to motivate collaborative learning (e.g., Cameron & Pierce, 1994; Latane et al., 1979; Meloth & Deering, 1992; Slavin, 1996).  Others suggest that group rewards are important but should be coupled with individual accountability, so that group consequences are based on the work of many rather than a few (e.g., Davidson, 1985; Latane et al., 1979; Shepperd, 1993; Slavin, 1996).  Researchers have also suggested that carefully structuring the interactions among students in cooperative groups can be effective (e.g., Berg, 1993; Lou et al., 2001; Newburn et al., 1994; Palincsar & Brown, 1984; Wood & O’Malley, 1996; see also Kim et al., in this issue).  From the developmental science perspective, it is believed that the questioning or disagreements that arise in discussions between collaborators can offer opportunities for critical understanding and learning (e.g., Damon, 1984; Murray, 1982; Wadsworth, 1984).  Still others feel that it may be a combination of many complex factors that supports cooperative learning (Wood & O’Malley, 1996).

 

By applying this research to the design of collaborative technologies for children, it seems that the following design criteria are critical:

- supporting shared goals,

- structuring interactions between collaborators,

- enabling discussions about the goals,

- supporting achievement outcomes.

 

However, if one examines the design of today’s computers, it is obvious even from the hardware that these technologies often limit children’s collaborative interactions.  Current computers have been designed with one mouse and one keyboard with the underlying assumption that one person will use the computer. In looking at the literature on computer-supported collaborative learning, the majority of software applications support collaboration only when children “take turns” using the mouse or when they collaborate from different locations over the Internet (Inkpen et al., 1995; Inkpen et al., 1999; Stewart et al., 1999; Wang et al., 2001).  However, “Single Display Groupware” (SDG) is an emerging research area that explores innovative technological solutions to support small groups of users collaborating around one shared display (Benford et al., 2000; Bricker et al., 1998; Hourcade & Bederson, 1999; Inkpen et al., 1999; Stanton et al., 2001; Stewart et al., 1999; See also Scott et al., and Stanton et al., in this issue).

 

Within this focus of research, there have been some initial studies that compared the use of one mouse to the use of two mice by pairs of children (Inkpen et al., 1995; Stanton et al., 2001; Stewart et al., 1999).  In those studies, researchers found that using multiple mice at a single display can do a great deal to motivate users, support more successful problem-solving outcomes, and help focus users on the task (Stanton et al., 2001; Stanton et al., in this issue).  On the other hand, researchers did find that shared navigation tasks with multiple mice presented challenges for collaborators.  With other tasks, if simultaneous users did not want to collaborate, they could essentially ignore the other person by, for example, drawing on their own side of the screen.  With shared navigation, however, one child could change the view on the screen, making it difficult for the other child to continue his or her activity of choice (Stanton et al., 2001).   It is this challenge of shared navigation that we address in this paper, within the framework of the digital library interface for children that we developed.

 

 

 

A Collaborative Digital Library

In attempting to explore the importance of collaboration as an educational strategy in the classroom, we began the development of a digital library for children that supports two or more children.  As part of an NSF-funded DLI-2 research initiative, we began building an application we now call SearchKids (Druin et al., 2001; Hourcade et al., 2000).  SearchKids is written in Java and relies on Jazz and MID, Java toolkits we developed in part to support SearchKids.  Jazz supports the development of zoomable user interfaces (Bederson et al., 2000; Bederson & Boltman, 1999), and MID supports the use of multiple input devices (Hourcade & Bederson, 1999; Hourcade et al., 2000; Stewart et al., 1999).  SearchKids uses a custom Microsoft Access database that contains the hierarchical metadata with pointers to local files containing the animal-domain content.  More detailed information about the toolkits is available at http://www.cs.umd.edu/hcil/jazz and http://www.cs.umd.edu/hcil/mid.

 

The Zoomable User Interface (ZUI) of SearchKids gives children a visual, direct manipulation interface to access a digital library of animal media. Multiple mice can be plugged into a single computer, and the SearchKids application uses each mouse to control a separate “hand” cursor (see Figure 1).  SearchKids supports two collaborative interaction styles.  The first, “independent collaboration,” gives each child full, independent control over the interface, so that each can click on and activate any icon in any location at any time.  Each mouse click will change the view on the screen. The second interaction style, “confirmation collaboration,” requires each action to be confirmed by the other child: each mouse click must be confirmed by a subsequent click of the other mouse in order to activate an icon and change the screen view.
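The confirmation rule can be sketched as a small piece of state kept per clickable icon. The Java below is an illustrative reconstruction, not the actual SearchKids source; the class and method names are invented for this sketch:

```java
// Hypothetical sketch of confirmation collaboration: an icon activates only
// after clicks from BOTH mice.  (Names are illustrative, not from SearchKids.)
public class ConfirmationIcon {
    private int pendingMouse = -1; // id of the mouse awaiting confirmation; -1 if none

    // Returns true when this click activates the icon, i.e., when it is the
    // confirming click from the *other* mouse.
    public boolean click(int mouseId) {
        if (pendingMouse == -1) {
            pendingMouse = mouseId;   // first click: wait for the partner
            return false;
        }
        if (pendingMouse == mouseId) {
            return false;             // same child clicking again: still waiting
        }
        pendingMouse = -1;            // partner confirmed: activate and reset
        return true;
    }
}
```

Under the independent condition, by contrast, every click would activate immediately, with no pending state.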

 

 

SearchKids has three areas that children can explore: the world, zoo, and search area.  Figure 1 shows the prototype’s initial screen (left) and the three areas for browsing and searching (right).  The first two areas provide a way to browse a curated subset of the database.  The zoo area provides a way of browsing the contents of our animal database in a familiar setting with virtual animal houses for children to zoom into. For example, to access media about lizards, children can zoom into the reptile house and click on a representation of a lizard.  The world area supports geographic browsing.  It presents children with a globe that they can spin and zoom into to find animals that live in that part of the world. For example, to access media about polar bears, children could zoom into the North Pole and click on a representation of a polar bear.

 

To access the full database, children can enter the search area, which gives them the ability to graphically specify and manipulate queries (Figure 1, far right image).  It also provides a visual overview of query results, which instantly indicates how many items were found.  The initial search area, and more detail with search results, is shown in Figure 2.  Our primary goal has been to enable children to perform moderately sophisticated queries without any text entry or knowledge of Boolean search logic. We did this by creating a fixed vocabulary hierarchy of metadata (approximately 25 items) and annotating our database of 500 pictures, sounds, and drawings of animals with it.  The metadata hierarchy has four top-level nodes, which enable children to search based on what animals eat, where they live, how they move, and what type of animal they are (a biological taxonomy).  Icons were drawn to represent each item in this hierarchy.

Based on this structure, an interactive interface enables children to specify any item in the hierarchy by simply clicking on one of them.  The search kids (see upper left of screens in Figure 2) visually represent the query as it is being formed.  The selected metadata icon slides over to one of the children; the database is queried; and the results are shown in the small area within the red bounding line.

 

In order to form queries with more than a single item of metadata, children can click on more icons.  To navigate to a deeper level of the hierarchy, the child clicks on the shadow under each icon to zoom into the contents of that hierarchy.  All pans, zooms, and object motions are animated to help children understand the effect of their actions.  It should be noted that the software automatically forms either an intersection or a union of the search terms based on what we have discovered to be the most intuitive approach for children.  The application constructs a union of any terms within the same top-level hierarchy, and an intersection between different top-level hierarchies.  For example, clicking on the icons for fish, bird, and “eats meat” would implicitly form the query ((fish OR bird) AND “eats meat”), since fish and bird both belong to the top-level “taxonomy” hierarchy.  While this approach can limit search expressivity, we have found that it works quite well in practice for children.  Young people are able to form the queries they want, and are able to do so in what seems to be an intuitive manner (Revelle et al., 2002).
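The implicit AND/OR rule described above can be sketched in code. This is an illustrative reconstruction in Java; the names `ImplicitQuery` and `matches`, and the term-to-hierarchy map, are ours, not from the SearchKids source:

```java
import java.util.*;

// Illustrative sketch of the implicit query rule described in the text:
// OR between terms within the same top-level metadata hierarchy,
// AND between different top-level hierarchies.
public class ImplicitQuery {
    // termToHierarchy maps each metadata term to its top-level node,
    // e.g. "fish" -> "taxonomy", "eats meat" -> "diet".
    public static boolean matches(Map<String, String> termToHierarchy,
                                  Set<String> selectedTerms,
                                  Set<String> animalTerms) {
        // Group the selected terms by their top-level hierarchy.
        Map<String, Set<String>> byHierarchy = new HashMap<>();
        for (String term : selectedTerms) {
            byHierarchy.computeIfAbsent(termToHierarchy.get(term),
                                        k -> new HashSet<>()).add(term);
        }
        // AND across hierarchies: the animal must satisfy every group...
        for (Set<String> group : byHierarchy.values()) {
            boolean anyMatch = false;
            // ...OR within a hierarchy: one matching term in the group suffices.
            for (String term : group) {
                if (animalTerms.contains(term)) { anyMatch = true; break; }
            }
            if (!anyMatch) return false;
        }
        return true;
    }
}
```

With fish, bird, and “eats meat” selected, an animal tagged {fish, eats meat} matches while one tagged {bird, eats seeds} does not, mirroring the query ((fish OR bird) AND “eats meat”) from the example above.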

 

To see the results of a query in more detail, children can click on the results area, and the view smoothly zooms into that area so it fills the screen.  The images in the results may still be small (if there are many results), so children can continue to click on a picture; the area that was clicked zooms in a bit at a time until, eventually, the full-resolution picture is shown.

 

 

Methods for Evaluation

The Participants and Setting

The participants in this study were 98 second and third-grade children (ages 7-9 years old) from a suburban public elementary school in Prince George's County, Maryland (in the Washington DC metropolitan area).  Approximately 52% of the children were Caucasian, 36% were African American, and 22% were Asian or Hispanic.  The school serves an economically challenged population of children.

 

The children were divided into two groups and paired with a classmate of the same gender. The first group, a total of 50 participants, used the “independent navigation” model for collaboration (as described in the previous section, “A Collaborative Digital Library”).  This group was made up of 24 second graders (14 females and 10 males) and 26 third graders (14 females and 12 males).  The second group, a total of 48 participants, used the “confirmation navigation” model for collaboration.  This group was made up of 22 second graders (12 females and 10 males) and 26 third graders (14 females and 12 males).

 

The Tools and Activities

The children were taken out of their normal classroom and brought to a quiet area in the school library to take part in the study.  Participants used a laptop computer running the SearchKids application.  All of the interface functionality was demonstrated by a researcher, and children were given a free-play period of a few minutes to experiment with clicking on icons to see what happened before the “treasure hunt” began.  Each pair was asked to find as many items as possible from the same printed list of 20 target animals (e.g., monkey, octopus).   They were asked to get as many of these animals into the treasure chest as possible within a 20-minute session.  Each session was videotaped, and a researcher was present to take notes and answer questions.  In addition, the software logged all of the mouse clicks for later analysis.

 

Data Collection and Analysis Methods

The first and last five minutes of each videotaped session were coded for discussion type and frequency.  The coding instrument was developed based upon previous coding instruments designed by our team and other collaborators (Bederson & Boltman, 1999; Stanton et al., 2001).  In addition, the instrument was revised based on its initial use in coding two sample tapes of child pairs.  The final instrument and a definition of the codes can be seen in the Appendixes to this paper. The codes fell into six basic areas: Interaction Style (e.g., explanation, elaboration, new thought), Type of Comment (e.g., agreement, disagreement), Social Interactions (e.g., question, off-topic comment), Task Interaction (e.g., concerning navigating the program, search strategies, animal information), Comment on the Experience (e.g., positive, negative), and Non-verbal Communication (e.g., movement or gesture toward the laptop, a mouse, or the paper). Multiple codes could be used for a given piece of dialogue. These codes were used by five researchers (only one of whom was actually present during the videotaping) to code the first and last five minutes of each pair’s experience.  Before coding began, all researchers did a pilot test on the sample tapes, and their codes were compared to assess inter-rater reliability.  We found an average reliability of 81% between coders.
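As an illustration, a reliability figure of this kind can be computed as simple percent agreement between two coders' code assignments over the same segments. The paper does not specify the exact computation used; the sketch below is a hypothetical example with invented names:

```java
// Illustrative sketch (not the study's actual procedure): percent agreement
// between two coders' code assignments over the same dialogue segments.
public class Reliability {
    public static double percentAgreement(String[] coderA, String[] coderB) {
        if (coderA.length != coderB.length || coderA.length == 0) {
            throw new IllegalArgumentException("code lists must be the same, nonzero length");
        }
        int agree = 0;
        for (int i = 0; i < coderA.length; i++) {
            if (coderA[i].equals(coderB[i])) agree++;  // count matching codes
        }
        return 100.0 * agree / coderA.length;          // agreement as a percentage
    }
}
```

The 81% reported above would then be the average of such pairwise figures across coder pairs.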

 

Once all tapes were coded, an analysis was done to identify the most frequent kinds of dialogue and the largest differences between conditions.  Once these areas were identified, a content analysis of those areas was done to better understand the specific differences in the children’s dialogue.  It was at that time that an additional code was added to the analysis, based on the data content that emerged.  At the same time, an analysis of the data logs was done to examine possible differences in search outcomes: a record of each user’s mouse clicks and a listing of the animals found by each pair were analyzed.  These results were compared with the qualitative analysis of the dialogue to form a descriptive analysis of the differences in the children’s collaboration.

 

Results

Frequency Analysis

In examining all codes in all conditions, we saw that the four most frequent areas of discussion were introductory, descriptive, task, and navigation statements (see Figure 3).  This reflects a consistent pattern of discussion between pairs. Most frequently, the children used an introductory statement to begin a new thought (e.g., “let’s start,” “time for something new”). Following this, they often stated what was happening or what they were about to do or look for (e.g., “there is the elephant”). Next, they talked about the task (e.g., “I think it eats meat.  Let’s go to what it eats”). And depending on the condition, they would discuss the way they needed to navigate (e.g., “click here,” “you click it so wrong”).  Interestingly, the frequency of these types of statements did not change dramatically over the 20-minute session.

Figure 3: Frequency of discussion of all 98 children who participated in the study

 

To further understand these frequencies, we examined the dialogue by condition and found that the biggest differences in frequency were in discussions of task and navigation (see Figure 4).  Children in the independent condition talked more about the task of finding animals, while children in the confirmation condition talked more about navigation issues (e.g., “you have to click the same time as me”). It was on these areas that we decided to focus in understanding content differences (see the Content Analysis section).

 

Figure 4: Frequency of discussion compared by condition

 

We then looked to see if there were differences in discussion between the male pairs and the female pairs as well as between grade level pairs.  What we found was that there were no major differences in the frequency of discussion based on gender or grade level (see Figures 5 & 6).

 

 

Figure 5: Frequency of discussion by gender

 

Figure 6: Frequency of discussion by grade

 

However, when we looked at the children by grade in comparing their performance in the two conditions, we did find some differences between the two grades when children were discussing tasks and navigation (see Figure 7).  Though both grade-groups spoke more about the task in the independent condition and more about navigation in the confirmation condition, these differences were stronger for the second graders than they were for the third graders, which may suggest some developmental differences between the two age groups.

 

Figure 7: Frequency of discussion based on condition and grade level

 

 

Content Analysis

In examining the content of the children’s dialogue, we found two interesting differences by condition.  The pairs in the confirmation condition spoke much more about navigation (e.g., mouse clicks), yet displayed more “shared goals” in their dialogue (see Figure 8).  It seems that while the constraints of the interface focused the children more on the need to navigate, they also kept the pair focused on their shared goals.  In the two examples below, these characteristics are highlighted (navigation words in bold italic; shared goals underlined) for two pairs in the confirmation condition.

Example 1: females/2nd grade/confirmation condition

D: One, two, three

T: Let’s go! You click it so wrong

D: Wait, Let’s count to be able to click

D,T: One, two, three

T: Wait, wait, we’ve got to wait. Come on!

D: Let’s do it one time. Wanna do it?

T: One two three! I wanna go back

T: No, this one

D: This one?

T: No we already picked the one. Now we gotta pick an animal. No let’s pick an animal

D: One two three! What were we looking for?

T: Pig!

D: Pig! What type of animal?

T: Alligator

D: Yeah! Alligator

T: Where is alligator? Alligator, Where is it?  I don’t see

D: Well let’s go this way

 

Example 2: males/2nd grade/confirmation condition

 

A: Let’s check for elephant

A: I think I saw before. So much before. Go up!

T: Ok! Over here.

A: We are really concentrating on cow right now

A: Butterfly

T: Is that a cow?

A: No, that’s a goat. Drag, drag, elephant. I don’t think that elephant is here.

T: Let’s go back

A: There is horse

A: Remember there is horse on the list

A: Anything else requested for?

A: We’re still looking for cow

T: There is a cow

A: Where?

T: I see it

A: Cow ! Click on it!

T: Yep!

 

Under the independent condition, where each child could use his or her mouse to move wherever he or she wanted, navigation discussions were less frequent.  Instead, the children seemed to talk more about the task itself.  Yet with this added flexibility, more individual goals emerged in their dialogue.  In the two examples below, these characteristics are highlighted (task words in bold italic; individual goals underlined) for two pairs in the independent condition.

 

Example 3: males/2nd grade/independent condition

W: Now it’s my turn. I want a snake

M: A snake is amphibian

M: Ok.

W: What they eat?

M: Oh, wait

W: No, go to the “eat”. We need to get rid of this.

M: we captured it.

W: Okay, let’s go to what they eat.

W I wanna have reptiles

W: What they eat?

M: You want that one?

W: We captured it

W: Ok. Snakes

M: Where are the snakes?

W: This one

M: It’s my turn

W: What do you want?

W: Oh! Jaguar!

W: Jaguar is mammal.

W: What are you looking for?

 

Example 4: females/3rd grade/independent condition

T: Who goes first?

D: I will go

D: I think it eats meat

T: What animal do you want to pick?

D: Alligator

D: Where are they supposed to be in there?

T: Alligator!

D: Alligator!

T: Take it

D: Okay

T: Ok. Can I go around?

D: Hold on. Check first

D: You are looking for……

D: How does it look like?

T: Go back

T : Butterfly

D: This is really hard

 

 

Finally, we looked to see if there were differences in the total number of shared goals versus independent goals based on condition.  What we found was that there were many more shared goals present in the dialogue of children in the confirmation condition than in the independent condition (see Figure 8).  When looking at these goals by gender and grade, there were no obvious differences.  What we did find interesting was that while the difference between the total number of goals was quite clear, the actual variance within each condition was quite different.  In the confirmation condition, almost no independent goals appeared in the dialogue; every pair showed many more shared goals than independent goals.  In the independent condition, by contrast, some pairs’ dialogue showed an equal number of shared and independent goals, others showed many more independent goals, and still others showed more shared goals than independent ones.

 

 

Figure 8: Total number of shared goals and independent goals by condition

 

 

Data Logs

In an analysis of the log data from a previous study (see Revelle et al., submitted), we found that there were differences in how successful the children were in their “treasure hunt.”  What we discovered was an interesting interaction of grade and condition.  Children who used the independent condition put more “right” items into the treasure chest than those in the confirmation condition.  However, particularly in the second grade, children in the independent condition put a large number of “wrong” items into the treasure chest as well.  As shown in Figure 9, 75% of the second graders in the independent condition put four or more wrong items in the treasure chest, and 42% entered nine or more.  In fact, second graders in the independent condition placed, on average, as many wrong items (9.8) in the treasure chest as right items.  In that study we concluded that this result points to a developmental difference between the second- and third-grade pairs in the differential usefulness of the two collaboration conditions.  It appears that the second graders need the support of the confirmation condition to help them focus their searches on the “right” items, rather than clicking on lots of items with disregard for task goals.  The third graders, on the other hand, did not appear to need this support.

 

 

                  3rd grade pairs            2nd grade pairs
                  4+ wrong    9+ wrong       4+ wrong    9+ wrong
Independent         .23         .08            .75         .42
Confirmation        .15         .15            .36         .09

Figure 9: Proportion of pairs who had 4 or more WRONG items, and 9 or more WRONG items, in the treasure chest

 

 

Discussion

What emerged from the data was that no single condition best supported collaboration.  Instead, we saw that each condition supported certain aspects of collaboration better than the other (see Figure 10).

 

DIFFERENCES IN…   CONFIRMATION CONDITION             INDEPENDENT CONDITION
Goals             Shared goals                       Individual goals / shared space
Dialogue          Talked less in general;            Talked more in general;
                  talked more about functions        talked more about content
Outcome           More focused searches by           Less regard for task goals by
                  younger children                   younger children

Figure 10: A summary of differences between conditions

 

In referring back to the literature on collaboration and learning, many researchers stressed structuring the collaborative experience for better achievement outcomes (e.g., Chambers & Abrami, 1991; Johnson & Johnson, 1999; Lou et al., 2001; Slavin, 1996), and in fact this is what emerged from our data.  The more structured interface, which asked both children to confirm their actions, seemed to better support focused and accurate search results, particularly for the second-grade children.  We found that the second-grade children who had the more flexible interface of the independent condition seemed to compete over who could click first on a place or icon.  This perhaps led to more “wrong” animals being placed in the treasure chest.

 

In regards to shared negotiated actions, the confirmation condition lent itself to consistently more discussion of shared goals.  However, with this condition, the content of the discussions was more functional in nature and less frequent.  This suggests that the need to “confirm clicks” kept the second graders focused on navigation issues instead of task discussion. The children who used the independent condition had a more flexible interface, and were found to talk more about the strategy of finding their animals and less about the “mouse clicks.”  It seems that when there was no need to consider confirming, the younger children could concentrate on the task of looking for animals.  On the other hand, the independent condition was in some ways more consistent with the literature that stresses group rewards coupled with individual accountability (Davidson, 1985; Latane et al., 1979; Slavin, 1996).  Each child had to be more accountable for his or her actions; however, this accountability did not lead to better search outcomes.  It did lead to more discussion about the process, which has made us wonder whether these pairs might have learned more about general search strategies and animal content than the teams who used the confirmation condition.

 

It is interesting to note that we did not find much non-verbal communication within the coding scheme we chose.  This is consistent with the previous literature (Inkpen et al., 1995; Stewart et al., 1999).  In those studies, which compared the use of one mouse to two mice, there was a significant amount of non-verbal communication when only one mouse existed (e.g., grabbing the mouse, pointing at the screen).  However, when two mice were used, the children tended to use their screen cursors to point and negotiate.  In a future study, it may be appropriate to code the use of screen cursors as a form of indirect non-verbal communication, but at this point, all we can say is that we informally observed that this occurred.

 

In general, it seems that there is no clear-cut “better” interface for collaborative searching.  Each condition offered different strengths that educators may want for their classroom teaching.  If educators are interested in stressing shared negotiated action, then the less flexible interface may be more appropriate.  On the other hand, if educators are interested in stressing the quality of the communication and process, making children more accountable for their actions, then the more flexible interface may be appropriate.  In regards to search outcomes, however, educators might consider the less flexible interface condition.

 

 

Conclusions

This study has shown us that different interfaces may be more appropriate in supporting different aspects of children’s collaboration experience.  For educators, it is critical that they understand that there are trade-offs in what different technologies can support.  The outcomes the classroom learning experience is designed for should dictate what technologies are appropriate for use.

 

With regard to designing new collaborative technologies for children, we have learned that interfaces that enforce collaboration may support only some learning experiences.  On the other hand, non-enforced collaborative interfaces may be better able to support the process of collaboration, but not necessarily its outcomes.  This may mean that we need to design technologies that offer options for both conditions of collaboration in a classroom.

 

In considering the limitations of this research, we recognize the need for future studies that compare what children have learned about searching with their process outcomes.  Given the exploratory nature of our study, we were able to describe some of the complexities concerning which aspects of collaboration may be better supported by different interfaces, but it is hard to generalize from our findings without further targeted quantitative studies.

 

Therefore, our future research includes not only further evaluation of our digital library technologies, but also further development of our interface to support various collaborative behaviors.  We will also be expanding our search content to include digital books on many topics.  We believe this will lead to new challenges for interface design, and even more possibilities for exploring collaborative activities for learning.

 

Acknowledgements

This research was conducted with the generous support of the National Science Foundation’s Digital Libraries Initiative-2 (1999-2002), contract #9909086. Our work could not have been accomplished without the partnership of the second and third grade teachers and students at Yorktown Elementary School in Bowie, Maryland.  We would also like to thank Elizabeth Row and Joyce Maynard, the school’s media coordinators.  In addition, we thank our colleagues in the Human-Computer Interaction Lab and the Institute for Advanced Computer Studies, who continue to support and inspire us as we create new technologies for children.

 

References

Alborzi, H., Druin, A., Montemayor, J., Sherman, L., Taxen, G., Best, J., Hammer, J., Kruskal, A., Lal, A., Plaisant Schwenn, T., Sumida, L., Wagner, R., & Hendler, J. (2000). Designing StoryRooms: Interactive storytelling spaces for children.  Proceedings of ACM Designing Interactive Systems (DIS 2000), New York: ACM, pp. 95-104.

 

Bederson, B. B., Meyer, J., & Good, L. (2000). Jazz: An extensible zoomable user interface graphics toolkit in java.  In Proceedings of User Interface and Software Technology (UIST 2000) ACM Press, pp. 171-180.

 

Bederson, B. B., & Boltman, A. (1999). Does animation help users build mental maps of spatial information? In Proceedings of Information Visualization Symposium (InfoVis 99) New York: IEEE, pp. 28-35.

 

Benford, S., Bederson, B., Akesson, K., Bayon, V., Druin, A., Hansson, P., Hourcade, J., Ingram, R., Neale, H., O'Malley, C., Simsarian, K., Stanton, D., Sundblad, Y., & Taxen, G. (2000). Designing storytelling technologies to encourage collaboration between young children. Human Factors in Computing Systems: CHI 2000, ACM Press.

Berg, K. F. (1993, April). Structured cooperative learning and achievement in a high school mathematics class. Paper presented at the annual meeting of the American Educational Research Association, Atlanta.

Bobick, A., Intille, S., Davis, J., Baird, F., Pinhanez, C., Campbell, L., Ivanov, Y., Schutte, A., & Wilson, A. (1999). The KidsRoom: A perceptually-based interactive and immersive story environment. Presence: Teleoperators and Virtual Environments, 8(4), 367-391.

Bricker, L. J., Baker, M., Fujioka, E., & Tanimoto, S. (1998). Colt: A System for Developing Software that Supports Synchronous Collaborative Activities. University of Washington Technical Report (UW-CSE-98-09-03).

 

Cameron, J. & Pierce, W.D. (1994). Reinforcement, reward, and intrinsic motivation: A meta-analysis. Review of Educational Research, 64, pp. 363-423.

Chambers, B. & Abrami, P. C. (1991). The relationship between student team learning outcomes and achievement, causal attributions, and affect. Journal of Educational Psychology, 83, pp. 140-146.

 

Cohen, E.G. (1994). Restructuring the classroom: Conditions for productive small groups. Review of Educational Research, 64(1), pp. 1-35.

 

Damon, W. (1984). Peer education: the untapped potential. Journal of Applied Developmental Psychology, 5, pp. 331-343.

 

Davidson, N. (1985). Small-group learning and teaching in mathematics: A selective review of the research.  In R. E. Slavin, S. Sharan, S. Kagan, R. Hertz-Lazarowitz, C. Webb, & R. Schmuck (Eds.), Learning to cooperate, cooperating to learn (pp. 211-230). NY: Plenum.

 

Druin, A., Bederson, B., Hourcade, J. P., Sherman, L., Revelle, G., Platner, M., & Weng, S. (2001) Designing a Digital Library for Young Children: An Intergenerational Partnership. Proceedings of ACM/IEEE Joint Conference on Digital Libraries (JCDL 2001) pp.398-405.

 

Fulton, K. (1997). Learning in the digital age: Insights into the issues. Santa Monica, CA: Milken Exchange on Education Technology.

Hourcade, J. P., & Bederson, B. B. (1999). Architecture and implementation of a java package for multiple input devices (MID). Tech Report HCIL-99-08, CS-TR-4018, UMIACS-TR-99-26, Computer Science Department, University of Maryland, College Park, MD.

 

Hourcade, J. P., Bederson, B. B., Druin, A. (2000).  QueryKids: A Collaborative Digital Library Application for Children. ACM CSCW 2000: Workshop on Shared Environments to Support Face-to-Face Collaboration. Philadelphia, Pennsylvania, USA, December 2000. Papers available at http://www.edgelab.sfu.ca/CSCW/workshop_papers.html

 

Inkpen, K., Booth, K. S., Gribble, S. D., & Klawe, M. (1995).  Playing together beats playing apart, especially for girls.  Proceedings of Computer Supported Collaborative Learning (CSCL) '95.

 

Inkpen, K. M.,  Ho-Ching, W., Kuederle, O., Scott, S. D., & Shoemaker, G. (1999). This is fun! We're all best friends and we're all playing: Supporting children's synchronous collaboration. Proceedings of Computer Supported Collaborative Learning (CSCL) '99, December 1999, Stanford, CA.

 

Johnson, D. W. & Johnson, R. T. (1999). What makes cooperative learning work.  In D. Kluge, S. McGuire, D. Johnson, & R Johnson (Eds.) JALT applied materials: Cooperative learning (pp. 23-36). Tokyo: Japan Association for Language Learning.

 

Latane, B., Williams, K., & Harkins, S. (1979). Many hands lighten the work: The causes and consequences of social loafing. Journal of Personality & Social Psychology, 37(6), pp. 822-832.

Lou, Y., Abrami, P. C., & d’Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71 (3), pp.449-521.

Meloth, M. S. & Deering, P. D. (1992). Effects of two cooperative conditions on peer-group discussions, reading comprehension and metacognition. Contemporary Educational Psychology, 17(2), pp. 175-193.

Moore, P., & St. George, A. (1991). Children As Information Seekers: The Cognitive Demands of Books and Library Systems. School Library Media Quarterly, 19, pp. 161-168.

 

Murray, F. B. (1982). Teaching through social conflict. Contemporary Educational Psychology, 7(3), pp. 257-271.

 

Newburn, D., Dansereau, D. F., Patterson, M. E., & Wallace, D. S. (1994). Toward a science of cooperation. Paper presented at the annual meeting of the American Educational Research Association (AERA), New Orleans.

 

Palincsar, A. S. & Brown, A. L. (1984).  Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition & Instruction, 1(2), pp. 117-175.

 

Pejtersen, A. M. (1989). A library system for information retrieval based on a cognitive task analysis and supported by an icon-based interface. In Proceedings of the Twelfth Annual International Conference on Research and Development in Information Retrieval (SIGIR 89), New York: ACM, pp. 40-47.

 

President’s Information Technology Advisory Committee (PITAC): Panel on Digital Libraries (2001, February). Digital Libraries: Universal Access to Human Knowledge. Washington, DC: National Coordination Office for Information Technology Research and Development.

 

Revelle, G., Druin, A., Platner, M., Weng, S., Bederson, B.  Hourcade, J. P., & Sherman, L. (2002) A Visual Search Tool for Early Elementary Science Students. Journal of Science Education and Technology, 11(1), pp. 49-57.

 

Revelle, G., Druin, A., Bederson, B., Hourcade, J. P., Farber, A., & Campbell, D. (Submitted). Software support for collaboration among elementary school children. Journal of Research on Technology in Education.

 

Shepperd, J. (1993). Productivity losses in performance groups: A motivation analysis. Psychological Bulletin, 113, pp. 67-81.

 

Slavin, R. E. (1996). Research on cooperative learning and achievement: What we know, what we need to know. Contemporary Educational Psychology, 21, pp. 43-69.

 

Solomon, P. (1993). Children's Information Retrieval Behavior: A Case Analysis of an OPAC. Journal of American Society for Information Science, 44, pp. 245-264.

 

Stanton, D., Neale, H. & Bayon, V. (2002) Interfaces to support children's co-present collaboration: multiple mice and tangible technologies. Computer Support for Collaborative Learning. (CSCL) 2002. Boulder, Colorado, USA. January 7th-11th.

 

Stewart, J., Bederson, B., & Druin, A. (1999). Single Display Groupware: A model for co-present collaboration. Human Factors in Computing Systems: CHI 99 (pp. 286-293). ACM Press.

Wadsworth, B. J. (1984) Piaget’s theory of cognitive and affective development (3rd ed.) NY: Longman.

Walter, V. A., Borgman, C. L., & Hirsh, S. G. (1996). The Science Library Catalog: A Springboard for Information Literacy. School Library Media Quarterly, 24, pp. 105-112.

 

Wang, X. C., Hinn, D. M., & Kaufer, A. G. (2001). Potential of computer-supported collaborative learning for learners with different learning styles. Journal of Research on Technology in Education, 34(1), pp. 75-85.

 

Wood, D. & O’Malley, C. (1996). Collaborative learning between peers: An overview.  Educational Psychology in Practice, 11(4), pp. 4-9.


APPENDIX A

 

Code Definitions for Collaborative Digital Library Observations

 

CS: Cursor Shape

 

Pers: Person (Which person speaks in the group?)

            #: Kid ID #

 

Interaction Style: How the person interacts

            Intro: Introduction (new thought)

            Expl: Explanation (repeating or expanding own previous thought)

            Elab: Elaboration (building on others’ ideas, responding to others’ questions/comments)

            Requ: Request (“Can you hand me a pencil?”)

 

Type of Comment

            Agre: Agreement

            Disa: Disagreement

 

Social Interactions

            Dire: Directive (“Do this”)

            Sugg: Suggestive (“I think we should…”, “Let’s…”, etc.)

            Ques:  Question

            Offt: Off topic

            Stat: Stating what is happening, stating what you are about to do/look for 

                        (“Here we go”, “We found it”), a descriptive statement

 

Task Interaction: If the interaction is concerned with the task

            Task: About the task (“Next is the heron”, “Let’s look for…”, “I’m gonna check this one off”)

            Nav: Navigation (“Click here”), about the program itself

            Sear: Search strategy (“Elephants are mammals so we should click on mammals”), talking about how the program is organized or about search categories

            Anim: Specific animal/picture content, comments about animals that do not refer to the specific picture or category under which they are organized (e.g., “I like dogs because…”)

            Res: Researcher

            Desc: Description of what’s happening

            Colla: Collaboration (e.g., “We work well as a team.”)

Comments on Experience

            Pos: Positive

            Neg: Negative

 

Non-Verbal Communication

            Pen: Moves/gestures to pen

            Mous: Moves/gestures to mouse

            Pape: Moves/gestures to paper

            Body: Moves/gestures to other child’s body parts

            Lapt: Moves/gestures to laptop
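
The coding scheme above can also be expressed as a simple data structure for validating and tallying coded utterances.  The sketch below is purely illustrative: the category and code names come from Appendix A, but the function and variable names are hypothetical, not part of the study’s actual coding procedure.

```python
from collections import Counter

# Category -> valid codes, taken from the Appendix A scheme.
CODING_SCHEME = {
    "interaction_style": {"intro", "expl", "elab", "requ"},
    "comment_type": {"agre", "disa"},
    "social": {"dire", "sugg", "ques", "offt", "stat"},
    "task": {"task", "nav", "sear", "anim", "res", "desc", "colla"},
    "experience": {"pos", "neg"},
    "nonverbal": {"pen", "mous", "pape", "body", "lapt"},
}

def tally(utterances):
    """Count code frequencies per category for a list of
    (category, code) pairs, rejecting codes not in the scheme."""
    counts = {cat: Counter() for cat in CODING_SCHEME}
    for category, code in utterances:
        if code not in CODING_SCHEME.get(category, set()):
            raise ValueError(f"unknown code {code!r} in {category!r}")
        counts[category][code] += 1
    return counts

# Example: three coded utterances from a hypothetical session.
session = [("social", "sugg"), ("task", "sear"), ("task", "sear")]
print(tally(session)["task"]["sear"])  # -> 2
```

A structure like this makes inter-coder agreement checks straightforward, since two coders’ tallies for the same session can be compared category by category.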

APPENDIX B

 

Name/Kid ID #____________   Age in Months____ Sex __ Grade ___ CS_____

 

 

 

Name/Kid ID #____________   Age in Months____ Sex __ Grade ___ CS_____

 

 

 

Teacher ________________    Date____________  Start Time _____________

 

 

 

 

Pers:                 #
Interaction Style:    intro, expl, agre, disa, elab, requ
Social Interactions:  dire, sugg, ques, off, stat
Task Interaction:     task, nav, sear, anim, res, desc, coll
Com Exp:              pos, neg
Non Verb:             pen, mous, pape, body, lap