EFFECT OF GROUP SIZE, GENDER, AND ABILITY GROUPING ON LEARNING SCIENCE PROCESS SKILLS USING MICROCOMPUTERS

By

Zane Lee Berge

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Counseling, Educational Psychology, and Special Education

1988

ABSTRACT

EFFECT OF GROUP SIZE, GENDER, AND ABILITY GROUPING ON LEARNING SCIENCE PROCESS SKILLS USING MICROCOMPUTERS

By

Zane Lee Berge

This study investigated the effect of group size (individuals, pairs, and quads of students), gender, and ability grouping (high, medium, and low) on student achievement within an environment that uses microcomputers as tools in learning science process skills. A split-plot, multivariate factorial design was used to analyze these factors and the interactions among them. Two response variables, the Test of Integrated Process Skills and a researcher-developed test that more closely matched the style and format of the practice the students had during the intervention, were measured using a pre-test and a post-test. Science education was chosen as the context for the study because it is an important school subject, yet one in which the learning of problem-solving skills, commonly referred to as process skills, is difficult.
Three separate areas of educational research are relevant: 1) student acquisition of problem-solving skills, 2) appropriate use of computer technology in school learning, and 3) students learning in cooperative groups.

Two hundred forty-five seventh- and eighth-grade students were the focus of this study. They were selected from twelve classrooms in three different school districts. Selection of classrooms was based upon the number of computers available and the teachers' willingness to participate in this research.

Analyses indicated that the only statistically significant result was a main effect of ability on both response measures. However, the two post-test measures showed opposite trends in gain scores by low-, middle-, and high-ability students. Speculation concerning this ability-by-post-test interaction is discussed. Other major conclusions included: 1) teams of two and four members working together solved problems as effectively as individuals, 2) the lessons and procedures, implemented in the manner described, generated gender-neutral activities in science, and 3) microcomputers, using a file management program and structured activities, can be used as a tool to promote student learning of science process skills.

To my parents, Iva and Mark Berge, who gave me the education most worth having.

ACKNOWLEDGEMENTS

The completion of a dissertation is usually the result of support from many people. I would like to thank a few of the people who have made the most significant contributions to my efforts:

To my committee chair, Richard McLeod, for your continued encouragement and support editing multiple drafts of the manuscript, and for your help in general throughout my years in the ESD program.

To the members of my committee, Norman Bell, James Rainey, and Stephen Yelon, for your support, encouragement, suggestions, and patience throughout this research effort.

To Andrew Porter, Co-director of the Institute for Research on Teaching, for your suggestions concerning the research design of this project, for recruiting me into the IRT fellowship program, and for your continuing help throughout my years at Michigan State University.

To Kris Morrissey, a friend and fellow ESD student, for your editorial comments on the final draft of this study, and for making several collaborative projects fun.

To each of the classroom teachers, school administrators, and students who participated in this study.

To Nancy, my wife, for your enduring encouragement and support, certainly without which this dissertation would not have been possible.

TABLE OF CONTENTS

PRELIMINARIES
List of Tables.
List of Figures.

CHAPTER ONE. INTRODUCTION.
Introduction to the problem.
Central Questions.
Research questions.
Purpose of the study.
The delimitations.
Significance of study.
Assumptions.
Chapter summary.

CHAPTER TWO. REVIEW OF LITERATURE.
Introduction.
Science education related to problem solving.
Computer technology in school learning.
Microcomputer database research.
Summary of computer related literature.
Gender related differences.
Gender related differences in science education.
Gender related differences in computer education.
Summary of gender related literature.
Learning in groups.
Group size.
Group composition.
Summary of learning in groups.
Chapter summary.

CHAPTER THREE. METHODS.
Population and sample.
Training.
Instrument.
Materials.
Procedures and design.
Assigning students to groups.
Data Analysis.
Chapter Summary.

CHAPTER FOUR. RESULTS.
Description of the subjects.
Description of the data collected.
Research hypotheses.
Design.
Results and discussion.
Summary.

CHAPTER FIVE. SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS
Purpose.
Procedure.
Hypotheses.
Discussion.
Conclusions.
Limitations.
Recommendations for further research.
Other considerations/recommendations.

APPENDIXES
Appendix A. Teacher checklists for students working in groups and individually.
Appendix B. The Processes of Analysis Lessons.
Appendix C. Human Subjects Form.
Appendix D. Researcher Developed Pre-test and Post-test Questions (Part II).

LIST OF REFERENCES

LIST OF TABLES

Table 1. MANOVA results for Part 1 of post-test.
Table 2. MANOVA results for Part 2 of post-test.
Table 3. T-test for Post-test Part 1 vs. Pre-test Part 1.
Table 4. T-test for Post-test Part 1 vs. Pre-test Part 2.

LIST OF FIGURES

Figure 1. Gradient Scale.
Figure 2. Plan to form groups with 2 students.
Figure 3. Plan to form groups with 4 students.
Figure 4. Within-subjects design.
Figure 5. Ability x Post-test Gain Scores.

CHAPTER ONE. INTRODUCTION.

Introduction to the Problem.

This research focuses on students learning certain science inquiry skills, considered to be an important subset of many problem-solving skills, within a microcomputer environment. Science process skills are defined in this study to mean those skills associated with planning, conducting, and interpreting results from scientific investigations. The phrases "process skills", "inquiry skills", and "higher-order cognitive skills" are used synonymously throughout this discussion, unless otherwise stated. The perspective taken here is that science process skills are a subset of problem solving.

Problem solving is a complex process that is recognized by many educators as an important, necessary part of schooling and of an individual's life outside of school (Doerr, 1979; McGuire, 1973; Yinger and Eckland, 1975). Therefore, the teaching of problem solving is frequently incorporated into various school curriculum plans. Definitions of problem solving appear to vary greatly, from simply a description of problem solving as a thinking process to relatively complex sequences of events (e.g., active phase, evaluation). One list of problem-solving steps which may be useful for thinking about the focus of this study was developed by Stapp and Cox (1979): 1) recognize the problem, 2) define the problem, 3) listen with comprehension, 4) collect information, 5) organize information, 6) analyze information, 7) generate alternative solutions, 8) develop a plan of action, 9) implement a plan of action, and 10) evaluate the plan of action. Any given activity and/or instructional objective involving the integrated process skills may concentrate on some subset of these ten steps. However, it appears that the ten steps in the Stapp and Cox model are not mutually exclusive, and it is assumed that all steps are used to some extent by the student in learning science process skills.

Before deciding how to teach problem solving, the question of whether general problem solving can be taught (e.g., Greeno, 1980) should be considered. In practice, this question involves the domain-independence of problem solving (i.e., general problem-solving skills that cut across many disciplines) vs.
domain specificity of problem solving (i.e., problem solving depending on knowledge within a given subject matter area). This debate is not new, of course; at one point the issue was whether students should be taught Latin to "improve their minds" (i.e., to give them general problem-solving skills). Some educators believed that such study trained and disciplined the mind; others thought that this type of discipline did little or nothing to promote the transfer of problem-solving skills to other domains (see e.g., Dewey, 1964; Whitehead, 1929). To date, research has failed, for the most part, to find significant transfer from training on one task to another. This is consistent with the generally accepted modern, experimental psychological position that the learning of problem-solving skills is generally unique to a given task (Newell, 1980).

Over the last few decades, science education as a school subject has taken on the orientation of teaching problem solving within the science domain, and educators have often concentrated on teaching problem solving using an inquiry-based curriculum. This type of curriculum utilizes an inductive approach (i.e., reasoning from the particular to the general, the inference of laws from observation). The materials used in the classroom for this research focus primarily on an inductive learning approach.

Even though few educators agree on the specific characteristics or steps in general problem solving, most authors agree that inquiry skills are a part of the problem-solving process (Bruner, 1961). One instructional strategy that is thought by some educators to help in the teaching of higher-order thinking skills is to have students focus on asking questions instead of seeking one right answer, and to focus on gathering, organizing, and analyzing relevant data rather than only on results (see e.g., Linn, 1986; Tobin and Capie, 1980).

New technologies, including microcomputers, have emerged in the past decade which show promise as valuable tools in the teaching of science process skills. Used in laboratory-type instruction, these tools may aid in the development of scientific skills and positive scientific attitudes in students (White, 1985; Thompson, 1986; Cox and Berger, 1985).

In this study, students used database management software to develop certain problem-solving skills. Unlike computer-assisted instruction (CAI) software, which commonly drills students on particular content matter (e.g., vocabulary), database management software (DBM) can be used as a tool for a diversity of problem-solving activities. Rather than presenting specific material or information, this software has the potential for facilitating the process of organizing, manipulating, and accessing information (Freeman, Hawkins, and Char, 1984).

As microcomputers become available in schools, many questions arise as to how they can best be used. The computer environment provides many opportunities for students to develop and test plans. Newell (1980) suggested that the problem solver first constructs a plan in some abstract or simplified "problem space" and then uses that plan to guide the solution to the problem. The interactive nature of computer learning often allows students to quickly discover whether their plans work (Linn, 1985). The computer need not be viewed as a discipline per se (i.e., computer science and computer programming), but as a unique tool for encouraging activities and skills already stressed in the established subject areas.
This concept of using the computer as a tool to promote inquiry skills already stressed in science education is central to the current research.

To date, only one published study has been identified which investigated students' achievement of process skills using microcomputers and databases. In his dissertation, White (1985) found that students using a computer-based file management program, along with structured activities, achieved higher scores on an information processing test than did students in a control group. The subject area was social studies, but the current study parallels that work in many substantial respects, as will be described in Chapter Two.

Because of the rather rapid introduction of computers into schools throughout the United States, it has become important to examine ways in which computers can be integrated smoothly and effectively into existing curricula. Since computers are expensive and may be somewhat limited in number in the schools, at least in the near term, it is also important to consider the advantages and disadvantages of having students use computers in small groups (Trowbridge and Durnin, 1984). Given the current conditions in schools of far more students than computers (Bitter and Gore, 1986), students often do their work in groups. One of the major lines of investigation in the current study involved comparing the achievement of process skills by students working individually at computers with that of students working in groups at computers.

There is also a growing concern about the effects (i.e., cognitive, social, and attitudinal) that computers have on students compared to other forms of delivering instruction, especially on lower-ability and female students. In the field of science, there is evidence of differential success of males and females that might be attributed at least in part to schooling. The fear is that computer use will create or widen the schism between the "haves" and "have nots" (Lipkin, 1983; MacGregor, 1985; Walker, 1983). By age thirteen, females have begun to slip significantly behind males in science achievement (National Assessment of Educational Progress, 1978). The gap continues to increase significantly through high school and into adulthood.

Although warnings of gender-related differences in computer education have received considerable press (Walker, 1983; McPhail, 1985; MacGregor, 1985), there is a paucity of systematic research on this topic (Anderson, et al., 1983). The research conducted to date involving gender-related differences in computer education examines differences in computer access or usage, but not achievement related to problem solving directly. However, to the extent that inequity in access to computers and computer usage results in achievement differences, these become important issues to study. It is a common concern that all students have equal opportunity and appropriate support for acquiring skills and literacy with new technology (Linn, 1985; Walker, 1983).

The research literature regarding group learning points to strong evidence that cooperative learning strategies can be used to decrease certain equity problems (Slavin, 1983). There is also evidence showing that students working together in cooperation are far more successful in problem-solving achievement than students in competitive or individualistic conditions (Johnson, Johnson, and Stanne, 1986).
A cooperative learning activity is defined in this study as a task performed by two or more individuals "employing common means in a coordinated manner to attain individual goals" (Bar-Tal and Geser, 1980, p. 214). Learning cooperatively is directly related to this study, since it investigated the effects on achievement of science process skills of individual students vs. groups of students using microcomputers as tools in learning, with special consideration given to the male/female equity issue.

There is very little previous research on cooperative learning in microcomputer environments, especially in naturalistic (i.e., classroom) settings. Cox and Berger (1985), in a laboratory study of learning problem-solving skills within a microcomputer environment, concluded that "students work better in teams than alone" (p. 467). They concluded that groups with two to four students were most effective in solving the types of problems used in their study. The Cox and Berger (1985) study provides much of the basis in the current study for determining the number of students per treatment group.

Group size is only one factor that needs investigation, however. The cooperative learning literature suggests an aptitude-by-treatment interaction based on the ability distribution of learners within groups. Some researchers found a significant interaction favoring high achievers in a small group (Peterson and Janicki, 1979). Others (Peterson, Janicki, and Swing, 1981; Webb, 1977) found a curvilinear interaction where high and low achievers learned best in small groups while average achievers learned best working independently. Therefore, the current research compared the gains in achievement of low-, middle-, and high-ability students (as measured by achievement on the pretest) while working either individually or in small groups.

When students learn in groups, individual characteristics and behaviors are important to learning, as are group characteristics (Allen and Feldman, 1976). The current study investigated three variables that may affect learning. The input characteristics focused on in this study were one individual characteristic (i.e., gender) and two group-level characteristics (i.e., group ability composition and group size). Two output variables involving science process achievement were the outcome measures used in this study. When a limited number of variables is studied, many other variables (e.g., age, teacher expertise, locus of control) must be held as constant as possible or controlled for in some way if the study is to be valid. Two important methods utilized in this research to control potentially confounding variables were: 1) randomization (i.e., of classrooms to treatment; of students to within-classroom grouping), and 2) use of a pretest/post-test design. The use of both of these controls is described later in Chapter Three.

Central Questions.

Central to this study was the question: can microcomputers be used to promote the learning of science process skills? More specifically, what are some factors which affect such learning within a microcomputer environment? Three factors were chosen for this research: 1) group size, 2) gender, and 3) ability grouping.
With achievement in learning inquiry skills as the dependent variable, the central questions related to this study are: 1) With the limited number of computers currently in classrooms, can more than one student effectively use a microcomputer at the same time? 2) As microcomputers are used as tools in solving more and more sophisticated problems, and groups of students learn together, does the ability level of a student within any group have an effect? 3) Can cooperative groups be used to decrease inequity in achievement in schools regarding gender? Out of these questions emerges a need for systematic investigation of the cognitive consequences of process-skills learning within a microcomputer environment.

Research Questions.

The central questions translated into three research questions examined in this study. The research questions and their corresponding null hypotheses are:

1) Are cooperative learning groups more effective than individuals working alone in learning science process skills within a microcomputer environment?

H01: There is no significant difference in the learning of science process skills between two-member cooperative learning groups, four-member cooperative learning groups, and individuals who work alone using microcomputers.

2) What is the interaction between high-, medium-, and low-ability students and group size in learning science process skills within a microcomputer environment?

H02: There is no interaction between high-, medium-, and low-ability students and group size on learning science process skills within a microcomputer environment.

3) What is the interaction between female and male students and group size in learning science process skills within a microcomputer environment?

H03: There is no interaction between the gender of the student and group size on learning science process skills within a microcomputer environment.

Purpose of the Study.

This study examined the effects of group size, gender, and ability grouping on learning (i.e., achievement gain). Students used a microcomputer and a database application program as a tool when practicing certain science problem-solving skills (described previously as integrated science process skills). This study also investigated the interactive effects of gender and group size, and of the ability composition of the group and group size. The independent variables were: 1) group size, 2) gender, and 3) ability level. The dependent variables were two outcomes that were measures of student achievement of selected integrated process skills.

The Delimitations.

This study involved only the output performance variables described previously as integrated science process skills. It did not involve other performance outcomes, nor social outcomes, of learning. Therefore, this study cannot be used to make inferences beyond the specific set of problem-solving skills measured as the outcome variables (i.e., relating to the outcome objectives which collectively involve planning, conducting, and interpreting results from investigations, as stated in Chapter Three). Since classrooms were selected based on teachers having access to microcomputers and volunteering to participate in this research, the sample studied cannot be considered a random sample. Students in the classrooms selected may have
All students studied were middle school age (i.e., grades 7 and 8). Since students of other ages may have different experiences and other characteristics (e.g., maturation) which may substantially change the population, this study cannot be used to make inferences beyond the population of middle-school age students. Significance of the Study. An individualistic assumption has been dominant in the instructional use of computers in education (Johnson, Johnson, and Stanne, 1986). The assumption by software designers of one learner to one computer has gone largely unchallenged. The results of this study may have implications for software designers relevant to inclusion of cooperative design for some learning objectives. It may also have implications for policy makers regarding a method for organizing group learning; and how many computers are needed for the particular type of learning described and used in this project. This research extends to microcomputer environments, the work already completed concerning cooperative learning and group composition. It extends the work done by Cox and Berger (1985) on group size in a microcomputer environment to a naturalistic (i.e., regular classroom) setting. Thus, 13 the study may have implications for teachers wishing to structure learning groups within their classroom in which microcomputers are used. It also extends the cooperative learning literature to a non-mathematics subject area (i.e. science achievement), as suggested by Webb (1980); and extends to science education the work completed by White (1985) in social studies. There has been concern in the computer research literature that use of computers in the schools will increase inequity in education (e.g., poor children receiving qualitatively different kinds of instruction than rich children, such as only drill and practice vs. simulations; males dominating programming classes with only a few females enrolling in computer classes). Cooperative learning, on the other hand, has been effective in some cases in decreasing student inequities concerning achievement. Part of this study was designed to investigate the effects of gender and ability composition of groups of students on achievement (i.e. with regard to the development of certain science process skills) within a cooperative learning, microcomputer environment. Therefore, there may be implications for classroom teachers wishing to group their students with regard to ability and/or gender considerations. Assumptions. This study assumes that teaching and learning in schools do not usually take place within a one-to-one 14 interaction between teacher and student. That is, students learn within a network of relationships with peers. It is generally accepted that these student-student interactions are important to the development of, among other things, social competencies, socialization of sex roles, and achievement. It is assumed here that constructive peer relationships are not always formed automatically; and that formation of learning groups can be planned and administered effectively in a classroom setting. Another assumption concerns Vygotsky’s distinction between the social plane of a child’s development and the individual plane. In the social plane, a child’s knowledge is guided by the instruction of others; whereas in the individual plane, the child's learning is under his/her own guidance (Stein & Yussen, 1985). A "zone of proximal development" refers to the difference in functioning between these two planes. 
It is assumed that the child's development is accelerated when social agents promote cognitive activity in excess of that attainable by the child learning in the individual plane. It is also assumed that peer interaction can, under certain conditions exhibited in the procedures of this study, be a catalyst in accelerating an individual student's learning. Forman and Kraker (1985) suggest that peer collaboration may enhance problem solving when students must define and revise their understandings of the task, and monitor and critique their problem-solving strategies in social interactions. Peers may define problems differently, resulting in disagreements which may be resolved in a manner leading to constructive learning. It is assumed that the interactions among assigned groups are not significantly different across the various treatments.

The lessons developed for this research (McLeod, 1987) were not field tested with students of middle school age prior to the current research. Therefore, it was not known that learning would indeed occur as a result of delivering the five lessons that constitute the instructional activities in the main study. The lessons were based on similar materials by McLeod, Hunter, and Finkel (1987) which were tested with middle school students. However, the tested materials were designed for more teacher involvement than was desired in the current study. The lessons for this study were modified to reduce the amount of teacher involvement, and it was expected that these more self-directed lessons would control to a large extent for teacher variations across schools and classrooms. Face validity of the lessons for this study (i.e., the evaluation by experts regarding whether there is a match between the lesson content and the author's stated objectives for the lessons) was established by independent evaluation and agreement of three science educators.

Chapter Summary.

Problem solving continues to be an important part of the general goals of education. Science educators have chosen to direct some of their curriculum development efforts toward teaching inquiry skills as part of problem-solving skills within their subject domain. Unfortunately, recent major reports and articles have criticized the effectiveness of students' learning of inquiry skills, claiming this training is not very successful. One approach that may help in the teaching of these skills is to include the use of computerized databases as tools to promote practice of what appear to be inquiry skills. When integrating computers into the curriculum, however, educators must be concerned with many issues. Among these concerns are equity issues (e.g., equal use by males and females) and, given that there are far more students than computers in the schools, the investigation of group learning with computers. These considerations led to the current research project, which investigated the effects of group size, gender, and ability grouping on student achievement of science process skills within a microcomputer environment.

CHAPTER TWO. REVIEW OF LITERATURE.

Introduction.

In reviewing the literature relevant to the stated research problem, this chapter focuses on three areas of educational research: 1) acquisition of science skills, 2) learning in small groups, and 3) the appropriate use of computers in education.
Particular attention will be focused on the learning of higher-order skills (e.g., integrated process skills) and on studies within computer environments in education. Higher-order skills, for this discussion, are related to Bloom's taxonomy (Bloom, 1981; Bloom, Englehart, Furst, Walker, and Krathwohl, 1956, 1972). This taxonomy is organized using six major classes: 1) knowledge, 2) comprehension, 3) application, 4) analysis, 5) synthesis, and 6) evaluation. These behavioral classes are in a relative hierarchical order, moving from simple to complex and from concrete to more abstract levels. This study focused on the higher-order skills of analysis, synthesis, and evaluation. Of course, the lower-order skills are subsumed in these skills as well. Of these skills, the most central to this research is analysis. Analysis is often used in scientific inquiry, and is believed by some educators to be one of the skills most directly practiced by students using databases (e.g., Bommarito, 1986; Pon, 1984).

The chapter begins by identifying the difficulty of learning process skills within the larger context of problem solving in science education, and proposes support for why this type of learning is important to students. Secondly, the literature concerning learning within microcomputer environments is investigated, emphasizing applications use. Thirdly, two areas that are related to the learning of science in schools are discussed: 1) the problem of gender-related differences, and 2) learning in groups.

Science education related to problem solving.

Problem solving has long been an important part of school learning. The unprecedented pace of scientific and technological innovation since the middle of the century has made insistent demands on science education. The quality of science training, beginning as early as the preschool years, affects how well citizens understand their increasingly complex world and how effectively they cope with change (see e.g., National Science Teachers' Association Position Paper, 1975; Linn, 1986). Pogrow (1983) states that voters will be asked to express preferences on a variety of scientific issues (e.g., acid rain) and on the use of technology in society. Pogrow (1983) goes on to state that there are implications for curricular decisions if a country is to have a well-informed national population. He suggests five major curricular implications inherent in having a population with the flexibility to meet employment and voter demands in an "information society". Two of these are: 1) increase the distribution of higher-order skills among students, and 2) strengthen the mathematics and science curriculum.

As noted earlier, few authors agree on the specific procedures in general problem solving (i.e., generic problem-solving skills that cut across all or many subject matter domains), or whether generic problem-solving skills can be taught. Some definitions describe problem solving only as a thinking process. Others include recognition and thinking processes, and still others include the action and evaluation steps (see e.g., Bruner, 1961a, 1961b; Taba, 1962; Dewey, 1910; Wallas, 1926; Polya, 1957; Simon & Newell, 1971). However, in the past two decades, science educators have focused their curriculum on developing the ability of students to use specific components of a process of problem solving.
Instead of attempting to develop generic skills of problem solving, science educators work within the domain of science to develop skills related to components that many educators believe to be an important part of the problem-solving process. One well-defined list of these components has been developed by the American Association for the Advancement of Science (AAAS, 1967; AAAS, 1976). The AAAS has emphasized developing the skills of students to employ these processes of science to learn about everyday phenomena. In the "Science... A Process Approach" program (SAPA), AAAS identified eight basic processes and five integrated processes as follows:

Basic Processes
1. Observing
2. Inferring
3. Using time and space relationships
4. Using numbers
5. Measuring
6. Communicating
7. Classifying
8. Predicting

Integrated Processes
1. Formulating hypotheses
2. Controlling variables
3. Defining operationally
4. Interpreting data
5. Experimenting

The developers of SAPA believed that the systematic teaching of science process skills would result in the acquisition of scientific literacy for all citizens (Baird, 1985). This line of curriculum development leads to the idea of teaching science as a process (i.e., what scientists do), rather than teaching science as a body of knowledge (i.e., what scientists know).

Linn (1986) expressed the need for integrated process skills in science education:

The information explosion changes the nature of knowing from the ability to recall information to the ability to define problems, retrieve information selectively, and solve problems flexibly. Rapid advance changes the nature of learning from the need to master topics in class to the need to learn autonomously. Educated citizens need to know how to revise their ideas and how to locate and synthesize information (page 13).

Is there reason to believe a process approach to teaching science is effective? Shymansky, Kyle and Alport (1983) reviewed results from 45 studies regarding the effectiveness of SAPA. Their review suggests that student achievement, process skills, perceptions of science, analytic skills, and related science skills increased an average of 0.27 standard deviations in elementary school learning. Furthermore, process skills improved by 1.08 standard deviations over students using other curricula.

Problem-solving skills are thought of as important in school learning. Both "Educating Americans for the 21st Century" (1983) and "A Nation at Risk" (1983) emphasize the need for instruction which fosters problem solving, prepares learners to deal with naturally occurring problems, and encourages students to think critically. Although general problem solving is not well defined, there are skills which have been identified as contributing to problem solving in science. Some of those skills are the focus of this study. More specifically, the integrated process skills that were addressed in this research are: 1) formulating hypotheses, 2) controlling variables, 3) defining operationally, 4) interpreting data, and 5) experimenting.

In the rapidly changing world, it may be assumed that teaching inquiry skills prepares students to meet challenges in a future in which knowledge and "facts" will have changed from what we believe today (Finley, 1983; Walsh, 1985). In particular, it has been argued that an inquiry orientation in science education empowers a person for autonomous, life-long learning (Joyce, 1985; Streibel and Garhart, 1985; Thelen, 1972). Yet, currently, science
Yet, currently, science 22 students are not learning higher order cognitive skills (see e.g., Mitman, Mergendoller, Packer, and Marchman, 1984; Doyle, 1983). In the words of Tobin, (1986): Stake and Easley (1978) found that teachers emphasized learning of facts about science and provided students with few opportunities to develop the higher level thinking skills that most of the courses purported to develop. The courses were very textbook oriented and students tended to lack motivation to learn about applications of science to the world outside of the classroom. (p. 1) Several important studies document the serious problems in science education today (e.g., A Nation At Risk; Educating Americans for the let Century). The National Commission on Excellence in Education reported in "A National at Risk" (1983) a pressing need for educational reform to create a "learning society." The National Science Board in "Educating Americans for the let Century" has state a need for "new basics" or the thinking skills ,_':—o ”‘0.” .._ _, . .4..—r'--—---" required to cope Wlth rapid technological and soientific ""“..I-r changes. These studies also often point out that, ...__ J..- unfortunately, our schools may not be adequately preparing «MW students for problem solving in our rapidly changing world. /in light of this, it may be that the existing curriculum can be changed to include some of the new technologies that may help in promoting practice of desirable skills in many of the subject areas. The next section reviews the literature concerning one of the emerging technologies - the microcomputer - as it relates to school learning and this research. 23 Computer technology in school learning. Within any given classroom of students, there are differences among individuals. These differences may include motivation, ability, attitudes, personality, sex, ethnicity, and socioeconomic class (Peterson, 1981). This diversity presents a challenge to the classroom teacher attempting to meet the needs of individually different students. It has been suggested by many educators that the microcomputer has the potential to increase learning on an individual learner basis. The computer can be a tool used in educating students or an object to be programmed and learned about. In the past, public schooling has been more interested in the latterfg To date, very little empirical research has been done on learning when and how the computer can be used as a tool and integrated throughout the curriculum:) Businesses have for years enjoyed the advantages of "high tech" tools. These advantages include computers that help organize and speed transactions, reduce labor costs, and improve competition. Most of us, in the normal course of our daily lives, are affected in some manner by computers each day. Zinn (1979) proposed that educators need to realize the pervasiveness of computing in a person's everyday life when planning education (Cox, 1980). 9):: [The explosion of new technologies and new information | a , I ’ 1 has challenged science educators to revise traditional approaches and set new priorities. This challenge, for the 24 most part, has not yet been satisfactorily met (Linn, 1986). Computer technologies enable students to perform tasks that are very different from those done in the past. These tasks require new skills such as planning solutions using software instead of manipulating equations. 
The new technologies often reduce the workload for students much as they have done in the business community, by relieving students of the need to focus attention on technical details; and thereby permitting them to concentrate on the problems they are solving. / There is a shortage of empirical research in many areas of computers in education (e.g., simulation). The research on the use of microcomputers in education shows few controlled studies which provide any evidence of more effective learning using computers vs. other instructional delivery methods. (Effectiveness should not be confused with efficiency here. Research on computer assisted instruction has fairly well established a clear image of learning taking less time using computers under many learning conditions than with more traditional instructional delivery methods). As Clark (1983) has suggested, the lack of evidence showing CAI as superior in effectiveness compared with other delivery systems may be because computers are no more or less effective than the other methods per se; but rather it is the underlying instructional design that differs from one treatment to another. 25 Of course, as with any instruction, there are conditions which when met, utilize certain instructional delivery systems and methods more than others. Fisher (1983) summarized the research concerning computer assisted instruction and indicates effectiveness when the following conditions are met: 1) when it is aimed at specific student- body groups (i.e most effective in raising achievement among low-achieving and high-achieving students regardless of whether the "disadvantage" causing the low achievement is physical or social), 2) when it is fully integrated into the regular classroom curriculum (i.e., CAI was found most effective as a supplement to regular classroom instruction), 3) when certain subject areas are selected (CAI was shown to be almost always ffective in the areas of science and foreign language) The current study meets these conditions. I Microcomputer Database Research. Database programs (i.e. programs for data management) refers here to software that allows the user to manage electronically filed data using a computer. Data management involves performing tasks that are similar to recording and manipulating information on index cards; planning how to record the information, recording it, organizing the cards in some order, seeking relationships among variables, finding a particular card in the filebox, reorganizing the cards, updating a card when the information is obsolete, and developing a report based on the information on the cards. 26 Much of what has been written concerning computer—based information technology supports the belief that survival in the modern world will require citizens to access and solve problems using information which has been stored and/or transmitted electronically. This belief follows earlier rationales for teaching processing skills, but focuses on an additional concern about the amount and rate of information flow that is accessible by the citizen (Becker, 1982; White, 1985). The research base concerning database use in education is nearly nonexistent. The only study found to date that investigated certain processing skills in social studies within a microcomputer database environment was a dissertation by White (1985). He used Scholastic Publishing Company’s PFS: Curriculum Data Bases for U.S. History and for U.S. 
Government (Hunter and Furlong, 1985a and 1985b), PFS: File and PFS: Report, a general file-management program and its associated report generator (PFS: File, 1984 and PFS: Report, 1984), and printed support materials as the curriculum base for the research. (PFS: File is similar to, and functions much like, the Appleworks database used in the current research; the U.S. History and U.S. Government databases are similar to the Climate and weather databases in the current study.) These database management systems allow the user to store, manipulate, and retrieve data stored in related files. (In the case of the current study, 27 weather and Climate database files were used by students for these activities.) White (1985) used the database and materials for a two week time period, in which the software was introduced, and then activities where presented to students in grades 8-12. He hypothesized why these type databases, used according to his design, promote learning of process skills. Both the print materials that were used by the students to lead them through a step—by-step group of activities, and the computer program itself, provide practice using the same structure. He states: "The operation of the software imposes structure on the manner in which students enter, organize and retrieve data. In specifying the criteria for data searches, students are required to consider explicitly their information needs with respect to both relevance and sufficiency. Students must also specify in detail how they wish data to be displayed as output; again, explicit consideration of alternative organizations is necessary as part of interacting with the software." (p. 10). White (1985) further comments that information- processing theory, and the related research, is the basis the instructional methods and materials used in his research. He states that computerized databases "are well suited to serve as memory sources for student problem solvers, not only as repositories of information but as models for information storage, retrieval and organization" (p. 32). He goes on to point out that the materials accompanying the database management software must provide a balance between explicit, direct instruction and discovery. 28 The former guides students through efficient search and use of information for specific problems; and the later enhances the possibility of transfer to new problems of the kind most frequently encounter in social sciences. Results from the White (1985) study found students receiving the treatment involving the use of a computer- based file management program, in concert with structured activities, "achieved higher scores on an information processing instrument than their (control group) counterparts" (p. 91). White’s (1985) study supports the view that the microcomputer can be used as a tool in teaching processing skills in social studies. A number of science educators believe the use of databases help students develop and practice the process skills needed by a scientist. McLeod and Hunter (1987) state that scientists often ask questions and construct hypotheses based on their prior knowledge. With the help of research assistants, scientists often design experiments, gather relevant data, organize that data in ways that will help support their hypotheses, reorganized the data, analyze the data, draw inferences, and modify their inference(s). These are the processes a student using a database can utilize when experimenting and seeking patterns or relationships among the data. 
Summary of computer related literature.

The new technologies challenge science educators to find ways to revise curriculum that will utilize the capabilities of those new tools in student learning. Research points to some conditions (e.g., science curricula; low-achieving and high-achieving students) that seem to be better suited for learning with computers than other school situations. In addition, the White (1985) study supports the notion that microcomputers used as tools can positively affect achievement under conditions similar to those used in this research. However, no instructional delivery system to date has been shown to be best in all settings with all students, and computers are no exception.

As might be expected, computers are not seen by every educator as a panacea. The growing use of microcomputers for public school instruction has raised both pedagogical and social policy concerns. On the pedagogical side are questions about the instructional capabilities of computer hardware and software, and how teachers can use the potential of computers successfully. On the social policy side, a concern is for equity of access to microcomputers, particularly for minorities and females. One reason equal access to computers may be important throughout schooling is that early education often influences later educational and occupational opportunities and choices (Stasz, Shavelson, and Stasz, 1985). Along with the promise of better delivery of well-designed instruction using computers come concerns of equity (e.g., how to ensure computer use narrows the male/female science achievement gap rather than widening it), and considerations regarding how to group students when using the limited number of computers currently found in schools. These issues will be reviewed in the last two sections of this chapter, and it will become clear that they are of direct interest to this research project. The next section reviews gender-related differences, first in science education and then in computer education.

Gender Related Differences.

Gender Related Differences in Science Education.

There is a long history of reported differences between boys and girls in interest and achievement in mathematics, science, and related disciplines. Historically, careers in science have been dominated by men (Burton, 1979; Fennema, 1980; Burlin, 1976). There has been much concern expressed recently regarding the unequal numbers of male vs. female students in the various science content areas, and later in life the lack of female scientists. Steinkamp and Maehr (1983) state:

An ongoing argument in educational circles concerns whether one should stress the development of proficiency in the hope that motivation will follow or stress the development of positive feelings in the hope that this will encourage the development of proficiency. This argument takes on a special form in the case of observed male/female differences in science achievement. There is little question that women have not achieved in the area of science to the same degree men have (cf., e.g., Steinkamp & Maehr, in press). A major cause is thought to be attitudinal: Females simply do not like science as well. The implication of this conclusion is that science instruction ought to focus especially on affective outcomes (p. 369).
Unfortunately, that body of literature does not speak with one voice. Using meta-analysis, they conclude that, in accord with other reviews, boys do better in school science than girls do. The differences are slight, but they appear to be reliable. In a re-analysis of the studies in the above cited Steinkamp and Maehr study, Becker and Chang (1986) found much of the gender differences within subsets of the original studies could be explained in part by the science subject matter being tested and also on the type of measure used in the studies. Gender differences for all subject- matter groupings except for studies of general science were found to be consistent, and the average gender differences are less than one half of a standard deviation. In physics and biology, it was concluded that males tend to do significantly better than females by about one-third of a standard deviation for physics and about one sixth of a standard deviation for biology. There was no significant differences between males and females on either geology (e.g., weather and climate) or chemistry. The authors In In 32 suggest that, since the degree of gender differences in achievement varies significantly across subject-matter areas in science, "care should be taken to distinguish between content areas when discussing or researching science achievement and gender" (p. 17). Becker and Chang (1986) subdivided the general—science studies according to the school grade of the subjects. Subgroups of elementary schoolers (grades 1 through 6), junior-high students (grades 7 through 9), senior-high students (grades 10 through 12, and college students, showed only studies of junior—high groups had a common population effect size. It indicated more than a quarter of a standard deviation advantage for males. Other research suggests sex-related differences are dependent on what type of science performance is being measured. When types of science performance were analyzed separately by Kahl, Malone & Fleming, (1982) modest effect sizes were found in favor of middle school boys for application problems only; effect sizes for knowledge, comprehension and higher order processes were minimal for these students. Meehan (1984) also reports modest effect sizes in favor of boys in proportional reasoning, but no sex effects for either propositional logic or combinations tasks. Studies of a 1980 Science in the Schools survey of 13 year old students in England (Schofield, Murphy, Johnson, & Black, 1982); and a 1981 Science in the Schools survey of 11 e m .. k . «L J ‘ (is b. 33 year olds in England, Wales, and Northern Ireland (Harlen, Black, Johnson, & Palacio, 1983) reported few sex differences. However; girls were superior to boys in planning investigations, while boys performed better than girls in the application of knowledge of scientific facts and principles in the physical science area.‘ Gender Related Differences in Computer Education. Gender differences in computer education have received much attention but little systematic research (Anderson, 1983). Studies by Lockheed, Nielsen, and Stone (1983) and Anderson, Klassen, Krohn, and Smith-Cunnien (1982) report that young women in secondary schools are less likely than men to spend time with computers and to enroll in computer classes. In addition, the 1981-82 National Assessment in Science provides data showing a substantial gap between females and males in signing up for computer programming classes. 
Females are less likely to take these courses than are males; 8 percent of the females and 14 percent of the males have enrolled in a programming course for at least one semester. A gap has been evident since 1978. Computers, sometimes referred to as "numbers crunchers", are often associated with mathematics. Research supports the notion that sex-stereotyping leads to females having less confidence in their ability to learn mathematics than their male counterparts. Mathematics serves as a filter into the sciences; therefore, math avoidance prohibits female entrance into these fields (Mathews and PM In 5d- (1) 34 Winkle, 1982). This filtering out of technological and scientific options poses a major barrier to women’s occupational and economic equity. The pattern of avoiding mathematics and science can lead to computer anxiety; e.g., the feeling that computers are too complex to be understood by the average woman. Too frequently, women do not take advantage of opportunities for learning about computers as evidenced by the fact that computer courses at all levels remain predominantly male. One area in which there is some evidence that this pattern of avoidance does not relate to gender is in using computers as a tool to accomplish other tasks. Tool uses of computers at schools or in places of business do not seem to show sex differences (Lockheed, 1985). Some reasons Lockheed (1985) suggests for this are: 1) girls are more interested in tool use than are boys, 2) activities supported by tool software, (i.e., drawing; sending and receiving mail), are not sex stereotyped, and 3) tool use is perceived as more relevant to future activities and occupations than other instructional uses for computers, (e.g., programming). Finally, a study by Webb (1984) investigated the effects of the gender composition of a group on achievement and interaction patterns. The gender composition factor varied the ratio of females to males in the group. Three kinds of mixed-gender groups were studied. The types were: 1) groups with two females and two males, 2) majority-female 35 groups (typically, three females and one male), and, 3) majority-male (typically, one female and three males). Achievement in the Webb (1984) study depended on the ratio of females to males in a group. _The achievement of females and males was nearly identical in the groups with two females and two males. In the majority-female groups and the majority-male groups, however, the males showed higher achievement than the females. Summary of gender-related literature. With regard to gender-related differences, Becker and Chang (1986) were able to explain much of these differences normally found in studies involving gender and science achievement using more sophisticated meta-analysis techniques than earlier reviews used. Gender-related differences favoring males were still found to be significant in middle school students. Therefore, gender was selected as one of the independent variables in the current study, given the pervasive nature of gender—related differences in overall science achievement research favoring boys, especially in middle school students. 
Furthermore, it is important to note that in science boys are superior to girls on tasks that require knowledge of or familiarity with stereotypically male objects or apparatus, but no sex differences are observed when the task is more gender neutral. There seems to be a need for research to report treatments showing no sex-related differences, in hopes of identifying more gender-neutral activities in science.

Even though the opportunities for computer learning in schools are increasing, inequities continue. Low-income, female, and rural students are especially disadvantaged in receiving computer experiences in school. To the extent that computer literacy and computer expertise are necessary for success in getting employment, computer inequity is a serious problem (Anderson et al., 1983). If males and females participate differentially in computer learning environments, this could lead to corresponding differences in cognitive attainments and career access. Hawkins (1985) has hypothesized that because computers are often incorporated into math or science curricula (e.g., Saunders, 1978), there are serious consequences for girls. Since computers are often linked with science/math/technology in educational environments long dominated by males, computers typically enter the classroom with an aura of sex-related inequities that have an impact on learners. The current study is designed to compare male vs. female achievement. One implication of the analysis of this comparison is to discover whether or not support is found for the type of instruction used in this research being gender-neutral.

The other study that had a direct impact on the design of this study is the Webb (1984) study, which showed that when females outnumbered the males, or were outnumbered by the males, in a group learning experience, the females' achievement was lower than the males'. Furthermore, when females and males were of equal number in a learning group, gender-related achievement differences were not significant in the Webb (1984) study. These findings support the use of equal numbers of males and females in the groups with two and four students in the current study.

Learning in groups.

While research findings are not consistent concerning group learning, overall the evidence points to higher achievement by most students in many school situations when students are placed in heterogeneous learning groups, especially when students learn within cooperative groups (Johnson & Johnson, 1978b; Johnson, Skon, & Johnson, 1980; Skon, Johnson, & Johnson, 1981). Conversely, there is evidence to support the position that homogeneous groups seem to be less effective in certain high-level tasks (Lorge and Solomon, 1959). Research does suggest, however, that within heterogeneous small groups, learning may be differentially effective for students with different skills and backgrounds.

When learning is done in small groups, a number of interrelated issues need to be explored. Two important variables are group size and group composition. The following subsections contain a review of relevant literature on these two variables and relate that literature to the current study.

Group Size.

The number of students within a learning group has several important implications for academic achievement, cognitive development, and socialization. Optimum group size depends on the group's task, the composition of its members, the time available, the level of social skills of students, and many other factors.
A number of research efforts have focused on the effects of instructional group size on learning in non-computer environments. The results are not consistent. Students working in groups have shown greater gains than individuals working alone in some cases but not in others (Trowbridge and Durnin, 1984). Klausmeier, Wiersma, and Harris (1963) concluded from their work with groups of various sizes that students working in pairs and quads grasped concepts faster than individuals. However, on transfer tests, individuals learning alone generally showed greater concept retention than students who had learned in groups.

There is some evidence that the computer may promote peer teaching and collaborative learning. Bracey (1984) cites a study by Chernick and White which found that fifth grade students working with computers collaborated three times more often than students using traditional instructional materials. Trowbridge (1982) and Trowbridge and Durnin (1984) reported gains among adolescents working in groups of 2-4 with computers on scientific activities. Trowbridge and Durnin (1984) found that pairs performed more interactions than individuals, triads, or quads among undergraduates using physics simulations. Trowbridge concluded that small groups working with computers constitute a unique system which requires further study before the potential of computer based education is realized. They failed to find evidence of achievement differences between subjects working alone and subjects working in groups of any size. Okey and Majer (1975) also found no significant difference in achievement when comparing students studying alone with students in pairs or students in groups of 3 or 4. Students worked together at the PLATO IV computer assisted instruction terminal and then completed criterion-based tests individually on the materials covered. The researchers did conclude that the time needed to learn the materials was reduced by students working in groups of 3 or 4.

The only direct evidence linking group size with achievement within a microcomputer environment identified in the review of literature was a study by Cox and Berger (1985). In their study of seventh and eighth grade students solving problems, they found that children who worked on computers stayed on task longer and reached a correct solution more quickly than students working without computers. The five skills focused on in their study were: 1) collecting data, 2) organizing information, 3) analyzing information, 4) developing alternatives, and 5) selecting the most appropriate solution. Further, their findings suggested that groups of two or three members produced more correct problem solutions than individuals working alone or in groups of five. They concluded "teams of two to four would seem best suited to work together to solve problems similar to those in this study" (p. 467).

Group composition.

Whenever more than two persons form an instructional group, overall diversity increases. One question facing teachers concerning group composition is whether students should be placed in homogeneous or heterogeneous groups. Traditionally, students have been grouped according to ability into separate classrooms or within the classrooms.
The rationale for this practice is that narrowing the ability range in the classroom, or within a group of students, facilitates the provision of more appropriate learning tasks, makes more teacher time available to students of a given ability level, and stimulates teachers to gear their teaching to the level of the group (Goldberg, Passow, & Justman, 1966). However, many research findings suggest that higher achievement will result when students are placed in heterogeneous, cooperative groups (Johnson & Johnson, 1978a; Johnson, Skon, & Johnson, 1978; Skon, Johnson, & Johnson, 1981; Wodarski, Hamblin, Buckholdt, and Ferritor, 1973). Hoffman and Maier (1961) found that four-member heterogeneous groups consistently scored as high as or higher than homogeneous groups in a study investigating problem-solving tasks. Furthermore, there may be a variety of experiences important for socialization and cognitive development in classrooms where students of various characteristics interact and learn together. In a series of studies, Amaria, Biran, and Leith (1969) found that learning in small mixed-ability groups was better than individual learning, especially for low ability students. They hypothesized that small mixed-ability groups permitted teacher-learner relationships to develop between high ability and low ability students in the groups.

In connection with this strategy of heterogeneous ability grouping, research has suggested that learning may depend on student characteristics, or aptitudes. Aptitude refers here to any characteristic of a student that predicts his/her probability of success in a given instructional approach (e.g., abilities, motivation, attitudes, preferences). That is, differences in a student aptitude may interact with an instructional approach to produce differential achievement. This phenomenon has been called an aptitude-treatment interaction (ATI) (see, for example, Cronbach & Snow, 1977). A study by Webb (1977) found that, overall, learning in small mixed-ability groups was more effective than individual learning. Webb's study also found an ATI which can be summarized as follows: high ability students did equally well after learning in small mixed-ability groups or individually; medium ability students did better after learning individually; and low ability students did better after learning in small mixed-ability groups. Peterson (1981) also concluded, from the results of two studies involving mathematics learning, that there did seem to be evidence for the existence of a curvilinear ATI for ability when the treatments involved having a student work either alone or in a small mixed-ability group. High ability and low ability students benefited from the small-group learning, and medium ability students did slightly better working alone.

Summary of learning in groups.

Several reviews of cooperative learning indicate a growing research interest in instructional uses of peer work groups (Sharan, 1980; Slavin, 1980). In many educational settings, however, peer interaction is eliminated through the use of competitive and individualistic instructional procedures (Skon, Johnson, and Johnson, 1981); such individual instruction is often suggested for learning with computers.
There is evidence that cooperative learning among peer work groups promotes higher achievement than do other types of effort (Aronson, Bridgeman, & Geffner, 1978; Buckholdt & Wodarski, 1978; DeVries & Slavin, 1978; Johnson & Johnson, 1978a, 1978b; Slavin, 1978), especially in the context of higher-order skills learning (Skon, Johnson, and Johnson, 1981). There are issues that need to be clarified. A number of variables suggested in the literature may mediate the relationship between cooperation and achievement. Three of these variables that are hypothesized to be important to the current study are: 1) group size, 2) group ability composition, and 3) gender.

The experimental literature suggests that group investigation methods are particularly appropriate for pursuit of the type of outcomes investigated in this study (i.e., higher cognitive learning goals). Sharan, Ackerman, and Hertz-Lazarowitz (1980) found no significant differences between those who worked individually and those who worked in small groups when learning low level information. When learning of higher level concepts was examined, however, they found that groups did better.

All things considered, the evidence concerning group size indicates that the optimal size of learning groups within the classroom might be from 4 to 6 members. It was hypothesized that such a group is large enough that sufficient resources are present for achievement gains, and small enough that everyone's resources are utilized and everyone can participate (McMillan, 1980). However, when students are very young, or when there is a serious lack of the social skills necessary for working with others, pairs and triads may be more productive.

The review of literature concerning group composition suggested that there seems to be a curvilinear ATI for ability when the treatments involve having a student work either alone or in a small mixed-ability group: high ability and low ability students benefited from the small-group learning, and medium ability students did slightly better working alone. One purpose of this research was to explore further the possibility of aptitude-treatment interactions when different numbers of students form groups and work individually or together in small mixed-ability groups.

Chapter Summary.

Science teaching seems well suited for study, since it emphasizes problem solving and inductive reasoning within a subject domain. Rubinstein (1975) describes problems as one of two kinds: 1) problems requiring synthesis to solve, and 2) problems requiring analysis to solve. The problems requiring analysis focus on the application of known transformation processes to achieve the obscure or hidden solution. The emphasis in this type of problem solving is on a set process from a known initial condition (i.e., initial state). This is the kind of problem most often presented to the scientist. It is also the type of problem hypothesized to be well suited for practice using a microcomputer and a database management application program like the one used in the current study.

As computers enter the school systems, one challenge for educators is to design instruction which utilizes these powerful tools in effective ways to meet their instructional goals. Given that one important goal in science education is the teaching of inquiry skills, evidence exists which shows that computers can be used to meet these goals.
However, as with any innovation, care must be taken to investigate possible harmful (albeit unintended) results (e.g., widening gender equity problems), as well as the intended benefits. In addition, given the limited number of computers currently in our schools, it is important to consider issues related to groups of students learning at computers. The current study is designed to investigate a number of questions with regard to achievement and how it is affected by group size, the gender of the students, and the ability of the students within groups.

The evidence for computer learning in groups is somewhat conflicting and difficult to interpret. With computers, boys were superior to girls in the few studies where performance was assessed; sex differences were small, however (Lockheed, Thorpe, Brooks-Gunn, Casserly, & McAloon, 1985). Although most researchers report anecdotal or intuitive support for positive outcomes resulting from learning in microcomputer environments, it is difficult to find hard data in support of such claims. There is, however, some evidence that microcomputers can provide an environment for the learning of science process skills (Berger, 1982). In general, the perspective used in this study was that the computer is a vehicle used to deliver instruction (Clark, 1983) and, viewed as a tool (Taylor, 1980), has no significant effect on science achievement in and of itself.

With regard to gender differences, it may be unfair to ask girls and women to change their math/science/computer avoidance patterns without changing some of the influences operating on them (Mathews and Winkle, 1982). There exists a need to design and develop materials that will promote opportunities for all students, and especially females, to learn processes of science. This study investigated one way to intervene using microcomputers as tools integrated in the science curriculum, and reports on any sex-related differences found in achievement.

Finally, regarding group size and the current study, it is not as important that students working in groups do better than students working alone as it is that they do equally well.

CHAPTER THREE. METHODS.

This chapter describes the selection of students in the sample and the method of assigning students to treatment. It also describes the training students and teachers received, the measurement instruments and materials used, and the research design in the study.

Population and Sample.

There were twelve science classrooms from three schools in three different school districts, involving 306 students taken from the middle school grades (i.e., 7th and 8th grades). These classes were selected on the basis of the number of computers available in the classroom and the teachers' volunteering to participate in the research. The twelve classrooms were assigned randomly to the three treatments described below. Each treatment received four classrooms.

Training.

Before the science instruction began, teachers and students were given instruction and practice in using the electronic database (McLeod, Hunter, and Finkel, 1987) necessary to complete the lessons that followed. Teachers were also given a checklist of some "do's and don'ts" concerning implementation of the research lessons (e.g., encourage students in a group to discuss the activities; encourage students to make sure all members of their group understand and participate in each activity) (See Appendix A for checklists).
There was a separate checklist to be used by the teacher for classes in which students worked in groups and for classes in which students worked individually.

Instrument.

Assessing student ability in science process skills through observation of laboratory situations can be difficult and time consuming (Burns, Okey, and Wise, 1985). Due to time requirements in public schools, quality tests are needed to measure pupil performance without always making observations of process skills in the laboratory. Initially, efforts to design tests to measure process skills were tied to the inquiry oriented curricula prominent in classrooms in the 1960s and early 1970s (e.g., Walbesser, 1965; McLeod, Berkheimer, Fyffe, and Robinson, 1975; Ludeman, 1975; Riley, 1972). Later, tests were developed that were not curriculum specific, but were targeted toward upper elementary and middle school students (e.g., Molitor and George, 1976; Tannenbaum, 1968). The Test of Integrated Process Skills (TIPS) was developed by Dillashaw and Okey (1980) to respond to the need for a non-curriculum-specific process skills test for middle grade and secondary students.

The Test of Integrated Process Skills (TIPS) and the Test of Integrated Process Skills II (TIPS II) (Burns, Okey, and Wise, 1985; Dillashaw and Okey, 1980; Tobin and Capie, 1982) were used to measure the outcome variable (i.e., integrated science process skills score). Together these two paper and pencil examinations serve as criterion referenced, alternative test forms to measure integrated process skills. Total test reliability using Cronbach's alpha was reported by Burns et al. (1985) as .82 for TIPS and .86 for TIPS II. The researchers summarized their comparison of the two tests by describing them as highly equivalent tests which related to the same objectives and produced highly similar mean scores.

One form of the test was selected randomly to be used as a pretest in all classrooms, and the remaining form of the test was used for the post-test. The pre/post-test design was used for two reasons: 1) the pretest served as the measure to rank order students within classrooms according to ability levels (i.e., high, medium, and low ability in science process skills), and 2) the pretest was used to test the overall effectiveness of learning during the intervention.

Both forms of the test have been content validated against nine objectives. The developers believe that collectively the tests measure skills involved with planning, conducting, and interpreting results from investigations. The lessons developed for this study (McLeod, 1987) (See Appendix B) gave the student practice in the four objectives listed below. Therefore, pretest and post-test items identified by the tests' developers as those which measure these four objectives became the focus of analysis in this research. These four objectives are (Dillashaw and Okey, 1980):

1) Given a description of an investigation, identify the independent, dependent, and controlled variables and the hypothesis being tested.

2) Given a problem with dependent variables specified and a list of possible independent variables, identify a testable hypothesis.

3) Given a problem with a dependent variable specified, identify a testable hypothesis.

4) Given a hypothesis, select a suitable design for an investigation to test it.
These four objectives, as opposed to the entire nine objectives, were selected in part because, in the two-week time period of this study, only a certain number of objectives are likely to be learned by the students.

Materials.

Curriculum. Lessons were developed (McLeod, 1987) to match the four objectives described above for the outcome tests. These lessons were designed to be used by the student with little help from the teacher, thereby reducing teacher variation as much as possible. The lessons were developed to be used with Appleworks (Lissner and Apple Computer, Inc., 1983) and the Climate and Weather databases (McLeod et al., 1987).

Appleworks is an integrated computer program that includes three functions: 1) a word-processing program (e.g., for typing and editing letters and reports), 2) a database program (e.g., for keeping track of information like mailing lists or inventories), and 3) a spreadsheet program (e.g., for creating mechanized worksheets for accounting, or doing other tasks that arrange information into rows and columns). Of primary interest to this study was the Appleworks database program, which allows the user to enter, in an organized way, information such as customer files, elevations of world cities, inventories, or stock portfolios. The user can retrieve that information sorted and arranged in whatever way is needed. For instance, lists can be arranged in alphabetical order or numerical order, and in either ascending or descending order. Searching and sorting can be done using multiple criteria. For example, if trying to determine whether there is a pattern of temperature shifts associated with approaching pressure centers, one of the activities the experimenter might wish to conduct is to select only those days when the pressure change is decreasing and the temperature is increasing. If investigating the relationship between wind direction shifts and approaching pressure centers, the user might choose only those days when the pressure change is decreasing and the wind is from a northerly direction. (See the section later in this chapter on the format used in each lesson for more on how databases can be used to teach process skills.)

Weather is defined in the Weather database as the condition of the atmosphere over a short period of time, as opposed to climate, which is the average weather conditions over a prolonged period of time. Students use weather data for one location, Grand Rapids, MI, USA. The data were obtained from the National Oceanic and Atmospheric Administration (NOAA) and include weather data collected every three hours for the months of July and January. The authors stated that they selected these two months to illustrate the differences in weather between them. A sample of the kinds of weather data collected for the two data files used in the lessons prepared for this research follows.

Each record in the Local database file is for the same location, Grand Rapids, MI, but for a different time and date. For each day, there are eight records (1 am, 4 am, 7 am, 10 am, 1 pm, 4 pm, 7 pm, and 10 pm). Each record contains the measurements for that time period regarding sky cover, sky cover ceiling, wind direction, average wind speed, and precipitation. Each record of the Skytemp database file is for the same location, Grand Rapids, MI, and includes temperature and sky cover readings for a day, the minimum and maximum sky cover for the day, and a calculation to show the temperature variance during the day.
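The multiple-criterion searches described above amount to filtering a table of weather records on two fields at once and then arranging the result. A minimal sketch of that idea is given below, assuming a hypothetical record layout and field names; the students, of course, worked within the Appleworks database program rather than in any programming language.

    # A minimal sketch of the two-criterion searches described above.
    # The record structure and field names are hypothetical.

    weather_records = [
        {"date": "07-01", "time": "1pm", "pressure_change": -0.06,
         "temperature_change": 1.5, "wind_direction": "N"},
        {"date": "07-02", "time": "1pm", "pressure_change": 0.03,
         "temperature_change": -0.8, "wind_direction": "SW"},
        # ... one record for each three-hour observation ...
    ]

    # Select only those observations where the pressure is falling AND the
    # temperature is rising (the first example in the text).
    falling_pressure_rising_temp = [
        r for r in weather_records
        if r["pressure_change"] < 0 and r["temperature_change"] > 0
    ]

    # Select only those observations where the pressure is falling AND the
    # wind is from a northerly direction (the second example in the text).
    falling_pressure_north_wind = [
        r for r in weather_records
        if r["pressure_change"] < 0 and r["wind_direction"].startswith("N")
    ]

    # Sorting on a field, in ascending or descending order, mirrors the
    # database's "arrange" operation.
    by_temperature_change = sorted(
        falling_pressure_rising_temp,
        key=lambda r: r["temperature_change"],
        reverse=True,
    )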
Climate in the Climate database is defined as the average weather conditions over a prolonged period of time, as opposed to weather, which is the condition of the atmosphere over a short period of time. The author of the database states that the locations for the Climate database were first selected so as to represent general latitude considerations. Locations were added to ensure that water and mountain effects would be sufficiently represented to illustrate their contribution to climate. Finally, locations were added to ensure that every state was represented. A sample of the kinds of climate data collected for the two data files used in the lessons prepared for this research includes: geographic information on the latitude and longitude, the normal temperature each month, the precipitation, the wind direction, the yearly average temperature, the minimum temperature, the maximum temperature, and a calculated difference between the minimum and maximum temperatures.

Based on his experience with developing and testing similar instructional materials, the author believed that the curriculum activities accompanying the databases described above allowed students to practice integrated process skills. The author stated that these lessons were specifically designed to have students practice problem solving through tasks which include: 1) determination of the data needed, 2) organization of the data, 3) performance of mathematical operations on the data if necessary, and 4) analysis of the data.

Lessons follow a general format that includes the following sections:

1) Purpose - States the purpose of the experiment. Generally, this is what the experimenter expects to find. For example, "To determine if there is a relationship between temperature and latitude."

2) Hypothesis - A statement of the purpose in a way that suggests what to look for. For example, "As the latitude decreases (as we go South), the temperature will increase."

3) Controlling Variables - Lists (or asks the student to list and question why) variables which may also affect the results in the experiment besides the variables of interest stated in the hypothesis. For example, when attempting to determine if there is a relationship between temperature and latitude, altitude must be controlled for.

4) Determine the Information Needed - Asks the student to list the information needed to test the hypothesis. For example, when attempting to determine if there is a relationship between temperature and latitude, the experimenter needs at least the following information: a) temperature and b) latitude.

5) Arrange the Information - Asks the student to list how the data are to be arranged, or reported. The way the information is arranged on paper is very important in helping interpret the data. Assuming the data are arranged in columns, it becomes easier to find relationships. If latitude is arranged in decreasing order in one column and the experimenter asks for temperature in an adjacent column, then, if the hypothesis is correct, the data in the temperature column should fall into a pattern showing an increase.

6) Analyzing the Results - The experimenter includes a statement of a) whether a relationship was found, b) if possible, whether the relationship seems to be a direct or inverse one, and c) if the relationship is not perfect, some of the exceptions and some of the inferences for them. For example, in the experiment on latitude vs.
temperature, this section would cue the student to analyze the data reports to determine items a through c above, and to focus on the anomalies (e.g., values that are larger due to water effects).

Procedures and Design.

Assigning Students to Groups. Each classroom of students was rank ordered within gender, from low to high, on their ability concerning integrated process skills based on their pretest scores. The research design that follows called for three levels of treatment, using individual students, pairs of students, and groups of four students working at microcomputers. The twelve classrooms used in the study were randomly assigned to treatments, with four classrooms each for individual learning, learning in pairs, and learning in quads. However, each of the three different schools had at least one classroom assigned to each of the three treatments. This random assignment to treatment was designed as one control for teacher, school, and classroom effects. In all cases, a student's sex and ability level were recorded.

Groups of two and four were formed using random stratified sampling on ability. Since the overall weight of the evidence indicates that higher achievement is attained by high, medium, and low achieving learners when they are placed in heterogeneous, cooperative learning groups (Johnson & Johnson, 1978; Johnson, Skon, & Johnson, 1980; Skon, Johnson, & Johnson, 1980; Wodarski et al., 1973), no homogeneous ability groups were intentionally investigated in this study. Furthermore, the review of literature suggested using an equal number of males and females within a cooperative group. Therefore, groups were formed based on equal numbers of boys and girls, randomly chosen as permitted by the natural classroom demographics and the rules for heterogeneous ability grouping discussed below. The following outlines the grouping procedures in more detail (L=low ability; M=medium ability; H=high ability):

[Figure 1 shows the achievement gradient: a percentile scale for the pretest, marked at 50, 75, and 100, divided into the L, M, and H student groups.]
Figure 1. Gradient Scale.

Groups with 2 and 4 members were formed using the method shown in Figures 2 and 3 below. In all cases, G=girl, B=boy, L=low ability, M=medium ability, and H=high ability.

[Figure 2 is a table of six two-member groups; each group pairs one girl (G) and one boy (B), with the two members drawn from the L, M, and H ability columns.]
Figure 2. Plan to form groups with 2 students.

                   Student Ability
Group Number       L        M         H
     1             G        B  G      B
     2             B        B  G      G
     3             G        B  B      G
     4             B        G  G      B
Figure 3. Plan to form groups with 4 students.

Data Analysis. A split-plot, multivariate factorial design was planned to analyze the data for main and interaction effects. The design can be represented by Figure 4.

[Figure 4 shows the design layout: the twelve classrooms (C1-C12) are nested four per treatment (T1-T3), with sex (S) and ability (A) crossed within classrooms.] (S=sex; A=ability; C=classroom; T=treatment)
Figure 4. Within-subjects design.

Chapter Summary.

A total of 306 middle school students made up the sample. Training in the use of the software and database was completed prior to the administration of the five lessons and accompanying material that were the focus of the comparative study. The Test of Integrated Process Skills was used as a pre-test and post-test response variable. Lessons, based on the Scholastic Weather and Climate databases (McLeod et al., 1987), were developed for this research to control for teacher variability across the 12 classrooms. Students in the treatment classes that were designed to have pairs and quads working at microcomputers were assigned according to ability and gender as outlined in the procedures section of this chapter. A split-plot, multivariate factorial analysis was planned to analyze the data.

CHAPTER FOUR.
RESULTS.

This chapter describes the subjects in detail, the data collected, and the methodology used to analyze the data. It also reports the results of the data analyses.

Description of the subjects.

There were twelve science classrooms, selected from three schools in three different school districts at the middle school grades (i.e., 7th and 8th grades), used in this research. The total number of students involved was 306. Selection was based on the number of computers available in the classroom and the teachers' volunteering to participate in the research. The twelve classrooms were assigned randomly to each of the three treatments described below; however, each of the three schools had at least one classroom assigned to each of the three treatments. Of course, a different teacher was found in each of the three different schools. It was assumed that by assigning at least one classroom to each of the three treatments for each school, teacher effects may have been reduced. All three treatments (i.e., individuals, pairs, or quads of students working at the computers) received four classrooms each.

Of the 306 total students, 61 were dropped from the study for various reasons. A summary of the sixty-one dropped students is as follows. Seven students failed to return their completed human subjects form (See Appendix C), which was given to all students; the test scores of those students were dropped from the study. There were missing data (i.e., pre-test or post-test scores) for four students, and three students were absent from school for an extended period of time, either during the intervention or during the time the post-test was administered; these students' scores were also dropped from this study. Therefore, a total of 14 students out of the 306 (i.e., 4.6%) were dropped for reasons outside the control of the researcher. In addition, 43 students, due to the demographics of individual classrooms (e.g., there were more girls than boys in the class), could not be grouped according to the rules of grouping described in Chapter 3, and their scores were not included in this study. Four students and/or their parents declined to allow their test scores to be used in the research, as indicated on the human subjects form. Since the identity of the four students declining to participate in the research was known to the researcher prior to the grouping of students, these persons had no detrimental effect on the design of the research. A total of 245 students were the focus of the data and results presented in this chapter.

Description of the Data Collected.

The data used in this study were collected during individual student pre- and post-treatment tests. There were two parts to each of these tests: one part (I) was the Test of Integrated Process Skills (TIPS), and the other part (II) was a researcher-developed test (See Appendix D).

Prior to collecting the data in the 12 classrooms which were the focus of this research, a number of other classrooms in school districts outside the main research area were selected to test the implementation of the lessons, procedures, and evaluation design. From this pilot study, a number of suggestions were made by the classroom teachers and building administrators, the researcher, and the researcher's guidance committee members to enhance the procedures before implementing the lessons and evaluation in the actual twelve classrooms that became the focus of this research.
There were a number of minor changes that may have contributed to the student learning which took place at the twelve research sites. Four suggestions that appear to be significant are reported here:

1) Grading of the students in the pilot study was not done by the classroom teachers. Because of this, it appeared that a significant number of the students involved in the pilot study did not take the lessons and/or evaluation seriously. It was suggested that students' work at subsequent times (i.e., during the actual research) be graded as a regular unit in their science curriculum.

2) The Test of Integrated Process Skills is a multiple choice paper and pencil test, which measures the objectives stated in Chapter 3. However, this test does not follow the form or style of the practice the students received when using the computer during the research intervention. It was suggested that an additional response measure (i.e., besides the TIPS test) be included in the pre- and post-treatment testing that would be a closer match, at least in form and style, to the practice students would have during the intervention.

3) At more than one of the pilot study sites, the intervention was interrupted by the students' absence from the classroom for scheduled school events (e.g., spelling bees, theatrical plays) or for vacation from school. Through observations by the classroom teachers affected by these interruptions and by the researcher, it appeared that those types of interruptions, or the anticipation of them by students, caused students to take the lessons and/or evaluation process less seriously than students not burdened by these interruptions. It was suggested that the total research intervention, including both pre- and post-tests, be conducted during a period of the schools' calendars that was not interrupted by days off (e.g., in-service training, parent conferences, Easter or Christmas vacations), or by the day or two adjacent to vacations, unless it was absolutely necessary to schedule during those times.

4) Of the 36 items in the Test of Integrated Process Skills, twelve items dealt with objectives not included in the focus of this study (e.g., graphing), as discussed in Chapter 3. It was suggested that the 12 items that were irrelevant to this study be deleted from the test to shorten it and save the students' time and energy.

In response to suggestions one and four above, students were graded as part of their regular school work, and the 12 items on the TIPS test that were not the focus of this research were deleted from the pre- and post-tests.

Concerning suggestion two, subsequent to the pilot study and prior to the formal study, it was believed that the TIPS test (referred to in this chapter as Part I of the pre- and post-tests) may not promote the transfer of skills (i.e., integrated process skills as practiced by the students during the intervention) necessary to show significant learning taking place. Therefore, an additional test (herein called Part II) was developed by the researcher in response to the suggestion for a closer match between practice and testing. Four problem cases were identified by teachers and the researcher from middle school science textbooks and/or teaching experience. One group of problems (i.e., two of the cases) involved, in general, weather/climate content. The other group of problems (i.e., the other two cases) involved topics often found in textbooks directed toward middle school age students, but not in earth science.
The four cases were designed to follow the format and style of the lessons the students practiced during the intervention. Face validity of this part of the test (i.e., the evaluation by experts regarding whether there is a match between the practice students received during the lessons and the test's objectives) was established by independent evaluation and agreement of one instructional designer and two science educators. One question from each of these two groups was randomly assigned as Part II of the pre-test, and the remaining two questions formed Part II of the post-test. While still a paper and pencil test, Part II of the pre- and post-tests matched, in format and style, the practice the students had during the intervention period and was believed to require less transfer of training than Part One required.

In response to the third suggestion from the pilot study, every attempt was made to schedule this research at times in the schools' calendars that were relatively uninterrupted. This goal was achieved as much as possible, with the intervention and tests being conducted on continuous days in each school.

Although the addition of the second response variable (i.e., part two) is consistent with suggestions made during the pilot study of these materials, it may be helpful to the reader if more of the perspective taken concerning transfer is briefly stated here. Basically, the distinction Kelly (1967) made between prerequisite knowledge and beneficial knowledge is useful. Prerequisite knowledge is essential for a certain piece of new learning. Beneficial knowledge may be helpful for the new learning but is not essential. Gagne (1974) makes a similar distinction between vertical transfer and lateral transfer for intellectual skills. Vertical transfer is dependent upon prior learning of simpler skills. Lateral transfer, or near transfer as Mayer (1974) called it, refers to skills that are similar to those taught. It was believed that part one (i.e., the TIPS test) may require the student to show integrated process skills regarding essential knowledge. On the other hand, part two may require the learner to show near transfer skills, or those skills that are more similar to those practiced during the intervention than the skills required by the TIPS. In addition, the assumption is made that all learning involves transfer from prior learning to a greater or lesser degree (Ausubel, 1963; Voss, 1978).

As described in Chapter 3, each of the twelve classrooms was assigned randomly (within school) to one of three treatment groups, so that each treatment had four classrooms total. Treatment One was individuals working at the computer, Treatment Two was pairs of students working cooperatively at the computer, and Treatment Three was quads of students working cooperatively at the computer. Students and groups of students will be referred to by these treatment numbers throughout this chapter.

Students in all treatments were divided within classroom by their sex, and within each sex students were rank ordered by their achievement on the pre-test part one scores. High achievement and low achievement were the upper and lower 25% of students respectively, within each classroom and within the appropriate gender. The middle achievement group consisted of those students scoring in the middle 50% on the pre-test, within each classroom and within gender. Students in treatment groups two and three were assigned to their learning groups based on the rules described in Chapter 3, Methods.
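A minimal sketch of the within-classroom, within-gender ranking rule just described is given below. It is only an illustration: the record layout and field names (classroom, sex, pretest_part1) are hypothetical, and the actual classifications were made from the pretest rankings rather than by a program.

    # A minimal sketch of the ranking rule described above: students are
    # sorted on pretest Part One within each classroom-by-gender cell, and the
    # lower 25%, middle 50%, and upper 25% become L, M, and H respectively.
    # Field names are hypothetical.

    def ability_level(rank_pct):
        """Classify a within-cell percentile rank into L, M, or H."""
        if rank_pct < 25:
            return "L"        # lower 25%: low ability
        elif rank_pct < 75:
            return "M"        # middle 50%: medium ability
        else:
            return "H"        # upper 25%: high ability

    def classify(students):
        """Rank students on pretest Part One within classroom and gender."""
        labeled = []
        for (classroom, sex) in {(s["classroom"], s["sex"]) for s in students}:
            cell = sorted(
                (s for s in students
                 if s["classroom"] == classroom and s["sex"] == sex),
                key=lambda s: s["pretest_part1"],
            )
            for rank, s in enumerate(cell):
                pct = 100.0 * rank / len(cell)
                labeled.append({**s, "ability": ability_level(pct)})
        return labeled

Pairs and quads were then drawn from the resulting L, M, and H pools by stratified random sampling, with equal numbers of boys and girls, following the plans shown in Figures 2 and 3.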
Low achievement was assigned Ability One, middle ability was classified Ability Two, and high achievement was assigned Ability Three; the results are reported using those classifications in this chapter. Males were coded one and females were coded two for analysis purposes.

Research Hypotheses.

The null forms of the research questions of interest are:

1) There is no significant difference in the learning of science process skills between two-member cooperative learning groups, four-member cooperative learning groups, and individuals who work alone using microcomputers.

2) There is no interaction between high, medium, and low ability students and group size on learning science process skills within a microcomputer environment.

3) There is no interaction between the gender of the student and group size on learning science process skills within a microcomputer environment.

Since this study was exploratory in nature and not definitive (i.e., not an exact replication of any prior study, or attempting to draw definitive conclusions), attempts were made to discover relationships beyond these proposed hypotheses that were plausible based on the literature review. Two additional hypotheses seemed promising and were tested for significance:

4) There is no significant difference in the learning of science process skills between males and females.

5) There is no significant difference in the learning of science process skills by those students showing low, middle, or high ability (i.e., based on pretest part one rankings).

Concerning these two additional hypotheses, both factors (i.e., gender and ability) were mentioned in Chapter One as important. Therefore, the two additional hypotheses are of interest to the researcher and supported in earlier chapters.

Design.

Kirk (1982, Chapter 11.11) supplies a split-plot factorial model for viewing the research design. Since the classes were assigned to treatment, the data were analyzed using class mean scores as the raw data (i.e., classroom was the unit of analysis), rather than individual scores on the criterion measures (Page, 1965; Herron, Luce, and Neie, 1976). Within each classroom, the various conditions reflecting two levels of gender and three levels of ability permit a multivariate treatment of the data. A MANOVA procedure, using the multivariate set-up, was used to analyze the scores on both parts of the post-test. SPSSx MANOVA is a generalized multivariate analysis of variance and covariance program which can be used to analyze designs such as the split-plot design used in this study (SPSSX, 1986). This procedure performs univariate and multivariate linear estimation and tests of hypotheses for any crossed and/or nested designs, with or without covariates.

Using classrooms [1] as the unit of analysis, treatment (i.e., group size) is a between-subjects factor, and ability and sex are within-subjects factors (Figure 4). A breakdown of the raw data involving treatment x classroom x gender x ability was used to aggregate the mean scores for the within-subjects factors (i.e., sex x ability) for this procedure. Individual cells contained the mean scores across students.

[1] There is not enough data in this study to separate the classrooms-within-treatment source of variance from the treatment alone (i.e., to use a nested design with classrooms within treatment).
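The aggregation step just described (collapsing individual scores into treatment x classroom x sex x ability cell means before the MANOVA) can be illustrated with a minimal sketch; the record layout is the same hypothetical one used earlier, and the SPSSx MANOVA run itself is not reproduced here.

    # A minimal sketch of the cell-mean aggregation described above, assuming
    # one record per student with hypothetical field names.
    from collections import defaultdict

    def cell_means(students):
        """Mean post-test scores per (treatment, classroom, sex, ability) cell."""
        sums = defaultdict(lambda: [0.0, 0.0, 0])   # part1 total, part2 total, n
        for s in students:
            key = (s["treatment"], s["classroom"], s["sex"], s["ability"])
            sums[key][0] += s["posttest_part1"]
            sums[key][1] += s["posttest_part2"]
            sums[key][2] += 1
        return {
            key: (p1 / n, p2 / n)                   # mean Part One, mean Part Two
            for key, (p1, p2, n) in sums.items()
        }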
A MANOVA procedure using SPSSx was used, without the pretest as a covariate [2], to test the significance of the three factors listed above, and the interactions among those factors, for the two parts of the post-test (see Tables 1 and 2).

TABLE 1. MANOVA results for Part 1 of post-test.

Source of Variation                    SS      DF       MS        F    Sig. of F

Test of significance of between-subjects effects.
  Within Cells                      104.90      9     11.66
  Treatment                           3.09      2      1.55      .13      .877

Tests involving 'Sex' within-subject effect.
  Within Cells                       25.23      9      2.80
  Sex                                 1.70      1      1.70      .61      .457
  Treatment by Sex                    6.65      2      3.32     1.19      .349

Tests involving 'Ability' within-subject effects.
  Within Cells                      140.61     18      7.81
  Ability                           492.50      2    246.25    31.52      .000*
  Treatment by Ability               14.06      4      3.52      .45      .771

Tests involving 'Sex x Ability' within-subject effects. [3]
  Within Cells                      123.35     18      6.85
  Sex by Ability                     10.90      2      5.45      .80      .467
  Treatment by Sex by Ability        17.63      4      4.41      .64      .639

* significant at p < .001

[2] Keppel (1982) describes the use of analysis of covariance as questionable in educational research when intact classrooms are assigned to different treatment conditions (see Keppel, 1982, Chapter 20).

[3] Gender x ability is confounded by the grouping practice used. Since males and females were rank ordered separately within each classroom, there is no absolute criterion for high, medium, or low ability across classrooms or within classrooms. This design does not allow for meaningful interpretation of gender x ability or gender x ability x treatment interaction effects.

TABLE 2. MANOVA results for Part 2 of post-test.

Source of Variation                    SS      DF       MS        F    Sig. of F

Test of significance of between-subjects effects.
  Within Cells                      171.45      9     19.05
  Treatment                          10.18      2      5.09      .27      .771

Tests involving 'Sex' within-subject effect.
  Within Cells                      193.52      9     21.50
  Sex                                19.64      1     19.64      .91      .364
  Treatment by Sex                    6.40      2      3.20      .15      .864

Tests involving 'Ability' within-subject effects. [4]
  Within Cells                      253.29     18     14.07
  Ability                           575.18      2    287.59    20.44      .000*
  Treatment by Ability               20.54      4      5.13      .36      .830

Tests involving 'Sex x Ability' within-subject effects. [3][4]
  Within Cells                      144.11     18      8.01
  Sex by Ability                      7.39      2      3.70      .46      .637
  Treatment by Sex by Ability        26.77      4      6.69      .84      .520

* significant at p < .001

[4] The univariate results of the analysis of repeated measures designs have greater statistical power, and can be used if the conditions of symmetry are met (SPSSX, 1986). If these necessary and sufficient conditions are not met, the multivariate test results should be used in assessing effects. In analyzing the results of the Mauchly sphericity test, the conditions of symmetry appear to be violated in both the 'ability' and the 'sex x ability' within-subjects effects on Part 2. However, the multivariate tests show the same significant results as the corresponding univariate tests (i.e., p < .001 for ability as the only significant result).

Results and Discussion.

To determine if learning occurred during the intervention, a t-test for both parts of the pre- and post-test design was conducted. Each part of the pre- and post-tests was scored with 24 points being the highest possible score. Results for part one (i.e., the TIPS test) showed an average gain between the pre-test and the post-test of 2.2612, which is significant at the .001 level (See Table 3). Results for part two (i.e., the part designed for "near transfer" of skills) showed an average gain between the pre-test and the post-test of 4.6531, which again is significant at the .001 level (See Table 4).
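The comparisons reported in Tables 3 and 4 below are dependent-samples (paired) t-tests on the pre- to post-test differences. A minimal sketch of that computation, assuming two equal-length lists of scores, is given here only for illustration; the reported values were produced with the statistical package, not with this code.

    # A minimal sketch of the paired t-test behind Tables 3 and 4, assuming
    # pre- and post-test score lists of equal length (one entry per student).
    from statistics import mean, stdev
    from math import sqrt

    def paired_t(pre, post):
        """Return the mean gain, its standard error, and the t value."""
        diffs = [b - a for a, b in zip(pre, post)]
        n = len(diffs)
        mean_gain = mean(diffs)
        se = stdev(diffs) / sqrt(n)        # standard error of the mean difference
        t_value = mean_gain / se
        return mean_gain, se, t_value      # degrees of freedom = n - 1

    # Usage: paired_t(pretest_part1_scores, posttest_part1_scores) would yield
    # the kind of summary reported in Table 3 (mean gain, standard error, t).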
Table 3. T-Test for Post-test Part 1 vs. Pre-test Part 1.

                         Mean     Standard Deviation    Standard Error
Post-test Part 1       14.4286          4.516                0.289
Pre-test Part 1        12.1673          4.647                0.297

Number of cases = 245
Mean (difference) = 2.2612     Standard Deviation = 4.275     Standard Error = 0.273
Correlation = 0.565
T Value = 8.28     Degrees of Freedom = 244     2-Tailed Probability (difference in means) = 0.000

Table 4. T-Test for Post-test Part 2 vs. Pre-test Part 2.

                         Mean     Standard Deviation    Standard Error
Post-test Part 2        8.3959          5.475                0.350
Pre-test Part 2         3.7429          3.982                0.254

Number of cases = 245
Mean (difference) = 4.6531     Standard Deviation = 4.905     Standard Error = 0.313
Correlation = 0.499
T Value = 14.85     Degrees of Freedom = 244     2-Tailed Probability (difference in means) = 0.000

It appears there is a main effect on ability, and no other highly significant differences (see Tables 1 and 2 above). Since the literature on the effects of group size on learning is inconsistent, it was not known whether a main effect of treatment would be found for either response variable in this comparative study. Very little research has been conducted within microcomputer environments which reports on group size effects. This research supports the position that groups of two and four students learn certain problem-solving skills as well as individuals working at computers under regular classroom conditions.

Other results that were hypothesized included no significant differences between males and females (i.e., no main effect on gender) and no interaction effects of gender by treatment. In fact, a number of procedures (e.g., an equal number of females and males assigned to groups) were deliberately used to promote this intervention as a gender-neutral activity.

It was expected that high ability and low ability students would be superior to middle ability students in the small-group learning conditions, and that the middle ability students would do significantly better when they worked alone rather than in pairs or quads of students (i.e., an aptitude by treatment interaction was expected). Contrary to expectations, the results of this study showed no ability by treatment interaction.

There were significant main effects on ability for both parts of the post-test. Looking at the ability main effect on Part one of the post-test, the gain scores were highest for the lower ability group (4.63), less for the middle ability group (1.96), and least for the highest ability group (0.53). On the other hand, raw mean gain scores across all individuals on Part two of the pre- and post-tests showed the opposite trend. The mean gain scores were 3.72, 4.63, and 5.98 for the low, middle, and high ability groups respectively (See Figure 5).

[Figure 5 consists of two panels, one for Part 1 and one for Part 2 of the post-test, plotting mean gain score against ability group (A1, A2, A3).]
Figure 5. Ability x Post-test Gain Scores.

Perhaps the most unexpected finding in this study is that on part one of the post-test the trend was for low ability students to gain the most and the high ability students to gain the least, compared to the pre-test, while on part two that trend was reversed. This might be described as a post-test by ability interaction. Usually the term "interaction effect" is used to describe the relationship between two independent variables. However, here the term is used in reporting a phenomenon between an independent variable and two different dependent variables.
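The ability-by-post-test pattern summarized in Figure 5 is simply a table of mean gain scores computed separately for each ability group and each part of the test. A minimal sketch of that summary, under the same hypothetical record layout used in the earlier sketches:

    # A minimal sketch of the gain-score summary behind Figure 5.
    # Field names (pretest_part1, posttest_part1, etc.) are hypothetical.
    from statistics import mean

    def mean_gain_by_ability(students, part):
        """Mean (post - pre) gain for each ability level on one test part."""
        gains = {"L": [], "M": [], "H": []}
        for s in students:
            gain = s[f"posttest_{part}"] - s[f"pretest_{part}"]
            gains[s["ability"]].append(gain)
        return {level: mean(g) for level, g in gains.items() if g}

    # mean_gain_by_ability(students, "part1") and
    # mean_gain_by_ability(students, "part2") would yield the two opposite
    # trends plotted in Figure 5 (gains decreasing with ability for Part 1,
    # increasing with ability for Part 2).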
Since this study was not designed to explore this interaction, the researcher can only speculate about why this interaction occurred.

It may be that low ability students gained the most between part one of the pre- and post-tests because they concentrated on the essential knowledge (i.e., general terms and concepts such as independent and dependent variables, and controlling variables), while the high ability students already understood those general terms and concepts. By definition, the highest ability students were those scoring in the top 25% on Part One of the pretest, within each classroom. Therefore, it is not surprising that not much gain was made by high ability students on part one. Even though there was no ceiling effect on either part of the pre-test or post-test, it is still expected that the high ability students are those who had already shown their competency in the more general skills on the pretest. On the other hand, given that the high ability students had prior knowledge concerning the essential knowledge regarding science process skills, they were able to concentrate their efforts on learning the skills tested on part two of the pre- and post-tests. The low ability students, struggling with the concepts measured on part one, were not able to concentrate on the specific tasks in the lessons given them during the intervention. These low ability students did not yet have the prerequisite knowledge (i.e., basic concepts) needed to understand the more task-specific skills presented in the lessons.

Summary.

This research could be described as using a split-plot, multivariate factorial design for analyzing the data. The independent factors included treatment (i.e., individuals vs. pairs vs. quads of students working at the computer) as the one between-subjects factor, plus gender and ability grouping as two within-subjects factors. Two response variables were measured in both a pre-test and a post-test. One of these response variables was the Test of Integrated Process Skills (i.e., Part One), and the second response measure was a researcher-developed paper and pencil test that more closely matched, in style and format, the practice that the students had during the intervention.

The three null hypotheses that were developed prior to the study were tested for significance. Two additional null hypotheses were stated in this chapter, based on the review of literature; these two hypotheses were also tested for significance. The within-subjects factors were collapsed for the analysis, and a multivariate F test statistic was chosen to test these hypotheses using the MANOVA procedure with SPSSX. This procedure benefited from the balanced design afforded when the within-subjects factors were collapsed, and it was also chosen because it has fewer assumptions associated with its use than do the other test statistics which were considered.

When testing for main and interaction effects as hypothesized, the only statistically significant result shown in the analyses was a main effect on ability for both response measures. On part one, the gain scores between post-test and pre-test were highest for the lower ability group (4.63), less for the middle ability group (1.96), and least for the highest ability group (0.53). On the other hand, analysis of the raw mean gain scores across all individuals on Part two of the pre- and post-tests resulted in the opposite trend. The mean gain scores were 3.72, 4.63, and 5.98 for the low, middle, and high ability groups respectively.
Speculation concerning this ability x post-test interaction was discussed.

CHAPTER FIVE. SUMMARY, CONCLUSIONS AND RECOMMENDATIONS.

This chapter summarizes the perspective taken by the researcher and the procedures used during the research. The findings reported in Chapter Four and the limitations of this study are discussed. Finally, conclusions and recommendations based on this study are made.

Purpose.

The diffusion of new technology, including microcomputers, into the public school system brings with it questions concerning the appropriate use of this technology. Since the acquisition of problem-solving skills, in particular integrated process skills, is an important part of science learning, many educators have spent time considering how best to utilize new technology to help in the teaching of these important skills. The purpose of this study was to empirically explore some of the variables which may influence educators in their practice of science teaching using microcomputers. An emphasis in the design of this experimental research was ecological validity; that is, a naturalistic study characterized by treatment conditions matching the standard classroom environment and conditions.

A review of literature suggested a number of variables which may play an important role in both organizing teaching methods and delivering instruction in science classrooms. This study examined three factors suggested in the literature: 1) individual students working alone at a computer vs. cooperative groups of two and cooperative groups of four students working at a microcomputer, 2) the gender of the students, and 3) the grouping of the students based on their ability to solve problems using the integrated process skills prior to the beginning of the lessons given to them during this research.

Procedure.

Two hundred and forty-five seventh and eighth grade student subjects were the focus of this study. They were selected from twelve classrooms in three different school districts. Selection of classrooms was based upon the number of computers available and the teachers' willingness to participate in this research. Within the three schools, classrooms were randomly assigned to treatment conditions (i.e., individuals vs. pairs vs. four-member groups of students working at the computer). A total of four classrooms across all schools were assigned to each of the three treatment groups. Within classroom and gender, students were randomly assigned to groups based upon their ability (i.e., high, middle, low), according to the grouping rules described in Chapter Three.

A pre-test consisting of two parts (i.e., the TIPS and a researcher-developed test that more closely matched the practice students received during the intervention) was graded as a regular science unit, with each part receiving a maximum of twenty-four points. After the approximately two-week intervention, students were given a post-test consisting of a different form of the two-part pretest. Data were collected on the two response variables for each of the pre-test and post-test, plus the student's gender, ability level, and treatment assignment.

Hypotheses.

A total of five hypotheses were tested for significance with regard to each of the two response variables (i.e., part one and part two of the tests). The multivariate F test statistic and a within-subjects factorial design were used for these analyses. The results are summarized below.
Null Hypothesis 1) There is no significant difference in the learning of science process skills between two-member cooperative learning groups, four-member cooperative learning groups, and individuals who work alone using microcomputers.

Null hypothesis 1 was not rejected for either response variable (p = .877 for part one, and p = .771 for part two).

Null Hypothesis 2) There is no interaction between high, medium, and low ability students and group size on learning science process skills within a microcomputer environment.

Null hypothesis 2 was not rejected for either response variable (p = .829 for part one, and p = .934 for part two).

Null Hypothesis 3) There is no interaction between the gender of the student and group size on learning science process skills within a microcomputer environment.

Null hypothesis 3 was not rejected for either response variable (p = .349 for part one, and p = .864 for part two).

Null Hypothesis 4) There is no significant difference in the learning of science process skills between males and females.

Null hypothesis 4 was not rejected for either response variable (p = .457 for part one, and p = .364 for part two).

Null Hypothesis 5) There is no significant difference in the learning of science process skills by those students showing low, middle, or high ability (i.e., based on pretest part one rankings).

Null hypothesis 5 was rejected for both response variables (p = .001 for part one, and p = .000 for part two).

Discussion.

As stated in the review of literature, research on the effects of instructional group size on learning in non-computer and computer environments does not show consistent results. Some studies conclude that individuals are superior to small groups; other studies conclude the opposite or show no significant difference. The Cox and Berger (1985) study was the only research identified in the literature review linking group size and achievement within a microcomputer environment. In a laboratory setting, they found that seventh and eighth grade students who worked in groups of two or three members solved more problems correctly than individuals working alone or students in groups of five.

In the current study, group size (i.e., treatment) effects were not statistically significant. Although this result is not consistent with the Cox and Berger (1985) conclusion that teams of two to four members would seem best suited to work together to solve problems, the research conditions and activities of the two studies appear significantly different. Conversely, this research does not support the hypothesis that individuals are superior to small groups when working on this type of problem-solving within a microcomputer environment. This may have implications for educators and administrators wishing to design environments in which students are involved in problem-solving. It may also lend support to those educators who believe that, even if a school does not have one computer for each student, microcomputers can be valuable tools in teaching certain problem-solving skills. If group learning is as effective as individual learning, that is valuable information for policy decisions concerning cost efficiency. Further, effective group learning may have implications for how software and courseware designers develop programs that incorporate what is known about collaborative learning of science process skills using microcomputers.

There is a growing body of research on ability by treatment interaction.
Differences in a student's aptitude may interact with an instructional approach to produce differential achievement. The current study did not find the curvilinear ATI that researchers such as Webb (1977) and Peterson (1981) have found (i.e., high ability and low ability students benefit from small-group learning, while medium ability students do slightly better working alone). However, differences between this study and others (e.g., setting, content, operational definitions of variables) may have been substantial, making direct comparisons in the area of ATI difficult.

In Chapter Two, the focus of the review of gender-related literature was that there exists a need to develop and report treatments showing no sex-related differences, in hopes of identifying more gender-neutral activities in science. It was anticipated that the current research would prove to be such an activity. This study found no significant differences between males and females in learning science process skills in a microcomputer environment. In addition, no interaction effects were found for gender by treatment, gender by ability, or gender by ability by treatment. Therefore, the results of this study indicate that the lessons and procedures, implemented in the manner described, generated gender-neutral activities in science.

Concerning the significant results for ability on both parts of the post-test, at least one study (Mayer, 1974) has investigated the phenomenon of different instructional methods producing different learning outcomes. This was indicated by a pattern of post-test performance in which subjects in one instructional group excelled on one kind of transfer post-test item and subjects in another group excelled on another kind, producing what Mayer (1974) called a Treatment x Post-test interaction (TPI). The phenomenon in the current research differs from that in the Mayer (1974) study; instead of differential learning of transfer items between instructional groups, the current research is concerned with different transfer of learning within an instructional method, but between students of different ability levels - an Ability x Post-test interaction (API). At least one of the characterizations Mayer (1974) proposed to describe the differences underlying the TPI may be useful in describing the API as well. Mayer proposed that "different kinds of learning outcomes are due to acquisition processes in which the same content material is encoded within different assimilative sets by different subjects" (p. 644). Viewed from this perspective, which appears to be consistent with the distinction between prerequisite knowledge and beneficial knowledge described in Chapter Four, speculation can be made regarding the API found in this study. It may be that high ability students, who are already knowledgeable about the more general skills tested in part one of the post-test, concentrate their learning on the task-specific skills tested in part two. On the other hand, the lower ability students concentrate their learning on the more general, essential knowledge tested in part one of the pre- and post-tests, and do not have time during the two-week intervention to engage with the more task-specific skills tested in part two of the tests.

Conclusions.

The following conclusions are made concerning this research study:

1) Students working in a microcomputer environment in teams of two and four members were as effective in learning integrated science process skills as students working alone.
2) No significant interaction between ability (i.e., high, medium, and low) and group size (i.e., individuals, pairs, and quads) was found in this study of students learning science process skills while working in a microcomputer environment.

3) Results of this study indicate that the lessons and procedures, implemented in the manner described, generated gender-neutral activities in science.

4) Results of this study indicate that in part one of the post-test the trend was for low ability students to gain the most and high ability students to gain the least, compared to the pre-test, while in part two that trend was reversed. This might be described as an ability by post-test interaction. For subjects who do not have a well integrated set of general experiences with the science process skills, testing on tasks related to specific problems may not measure the learning that may have occurred. Conversely, for students who do have relatively high ability in science process skills, testing on general abilities after they have practiced solving specific problems might not detect the skills they may have learned. In summary, the common practice of evaluating all students in a classroom with a post-test composed of only one type of item after the students have practiced problem solving in science may not detect the learning that has taken place by any given individual, depending upon that individual's prior skill level.

5) The results from this study in science are consistent with those of White (1985) in the area of social studies, indicating that microcomputers using a file management program, along with structured activities, can be used as a tool to promote student learning of process skills.

Limitations.

It was not possible to assign students randomly to treatments. Although no significant differences between the three treatments were found on either part of the post-test, the use of intact classrooms provides no control for certain effects (e.g., teacher, overall differences in mean IQ of classroom members). Therefore, any differences in the between-subjects factor (i.e., treatment) which might have been found in this study could have come from the differential effectiveness of group size, from differences in the classrooms in general, or from both sources. However, in an attempt to reduce classroom differences, four classrooms were assigned to each treatment, and at least one classroom from each school was assigned to each treatment. There were no significant differences between the treatment means on the pre-test. Given the threat to internal validity posed by the use of intact classes in research (Borg and Gall, 1983), there was no evidence that the results would have been different if random assignment of individuals to treatment within classrooms had been possible.

The lessons used to give students practice in science process skills were developed for this research. No effort was made to study the effect of using this courseware under conditions different from those described for this study (e.g., time of intervention, amount of teacher-directed instruction). Therefore, care should be exercised not to assume that the conditions in which these materials were used are optimal. Results of this study may not be applicable to other lessons and/or conditions, even if the other conditions and/or courseware are designed for the same learning objectives.

The effectiveness of the cooperative groups was not formally assessed.
Therefore, the emphasis placed on cooperation within assigned groups by the classroom teacher while instructing and interacting with students may not accurately reflect the level or type of cooperation designed in other cooperative learning group research studies. Treatments were limited to the problem-solving objectives and conditions described in this study. It would be inappropriate to generalize results to other problem- solving situations. The acquisition of science process skills using microcomputers and cooperative learning groups might be more or less effective if other problem-solving conditions are used. The three schools used in this research, while from three different school districts, did not represent a cross- section of different size or socio-economic schools. For instance, no urban school was included. Mainly, the three schools were taken from cities within 15 miles of Lansing, MI; however, all were outside of Lansing. Therefore, given this and the method of selection of the schools used (i.e., based on the number of computers and teacher willingness to participate in the study), generalizations of these results 91 can not be made to other schools in all types of school districts. While significant learning was detected on both parts of the pre- and post-tests, problem solving with microcomputer databases probably cannot be effectively learned in a one-time activity or within one subject area. Using only a one to two week time block, in one subject area at one grade level, does not seem sufficient to teach problem-solving as discussed in this study. Recommendations for further research. This research study has shown that microcomputers can be used as a tool in promoting learning of science process skills. During the research process, different aspects of individuals and small groups of students were explored interacting with microcomputers. This exploration and the resulting analyses may have implications and suggest some recommendations for future research and use. 1) The lessons prepared for this research were successful in promoting student learning of science process skills. However, other lessons and procedures for implementing them may be more successful. Consideration should be given to designing lessons that are more optimal. 2) In this research design, teacher interaction with students was intentionally held to a low level to reduce the teacher effects across different schools as much as possible. Consideration should be given to designing research which explores the mix of teacher-directed vs. 92 student-directed activities in this type of microcomputer environment. 3) Given the ability by post-test interaction effect described in this research, a high priority may be to design research which specifically investigates this phenomena, and/or developing test(s) to use for near transfer that are more rigorously measured for reliability and validity. 4) This research explored only effectiveness as a response variable. Informal questionnaires given to students in this study resulted in some students liking some components of the lessons and procedures used (i.e., their group members, science work using microcomputers) while other students disliked these same and/or other aspects of the study. The cooperative learning literature suggests responses other than effectiveness (e.g., affective responses) as important outcomes. Consideration should be given to designing research which explores outcomes in addition to achievement. 
5) Critical outcomes in this study, and as far as is known in all cited studies, were measured by individuals taking paper-and-pencil tests. Given this, and considering recommendations 3 and 4 above, research should be considered that measures dependent variables in groups, perhaps using the microcomputer as the testing medium.

6) Teams of subjects were formed randomly, given the rules described in Chapter Three, and friction among members developed in some cases. The students whose scores were dropped from this study solely because they could not be grouped according to the rules showed significant overall learning on both parts of the pre- and post-test. What would be the result if students were allowed to form their own groups? Would they choose to work alone, or in groups of varying size? Would they form mixed-gender groups or same-gender groups? Friendship groups may be superior to some other grouping methods because of the greater willingness of friends to work together toward a common goal. Consideration should be given to studying the differences between groups formed by student preference and groups formed in other ways.

7) The results reported here indicate that, overall, the activities were gender-neutral. It is not enough to ignore gender-related issues when developing lessons and implementation procedures for the science curriculum. Research in the areas of science and microcomputer use should report results by gender in an attempt to build knowledge about activities which are gender-neutral.

8) Cooperative group learning with microcomputers is a more efficient use of a scarce resource, at least at this time. Since this research found no strong achievement advantage for individual students working alone, assignment of learners to small groups may be indicated for this type of learning activity. However, more research is needed to replicate these findings, especially across different school districts, before settling on this strategy.

9) All subjects in this study were 7th and 8th graders. Research should be considered which investigates age differences in science process learning. It is interesting to conjecture what the outcome of this research might have been if older (or younger) students, who have different perspectives on mixed-gender cooperative groups and a greater or lesser overall science knowledge base, had been used for this study on achievement. Also, what would the results be if affective outcomes were formally measured?

10) This research used only a quasi-experimental design. Consideration should be given to methodologies beyond those employed in this study (e.g., observation and analysis of students' verbal accounts of their own thought processes while working alone or in groups).

11) The assignment of intact classrooms to treatment completely confounded these two variables. Research should be considered that uses random assignment of students to treatments, if possible, to directly control for differences among classrooms.

Other Considerations/Recommendations.

12) It became clear to the researcher during this intervention that the use of microcomputers by classroom teachers is not an automatic process. Incorporation of new technologies requires teacher in-service and pre-service training. Science teachers need to know when and how it is appropriate to use technology, such as computerized databases, to promote their instructional objectives.
Teacher training that is being referred to here is not the usual "computer literacy" (e.g., how a computer works, programming courses in BASIC). Instead, it is the training in how and when to incorporate computers into existing or developing curriculum - methods driven training. 13) Simply because microcomputers can be used as a tool to promote certain kinds of problem-solving skills, does not mean they will be used for those goals. It appears that schools are investing considerable time and money to develop programs to teach students how computers are put together and how to program computers, with little attention to how the computer can serve students as a tool. Consideration should be given by teachers and school administrators on how to promote computers as tools as well as these other uses. Computers used as tools may be one way to integrate new technology into various subject areas and curriculum. Problem solving is an important part of many school subjects, and all methods, including microcomputers, should be exploited to promote learning of these skills. 14) It is clear that courseware designers should explicitly consider how their programs and lesson plans can be used in small group and whole group instruction, and not simply assume one student to one computer as the way their courseware will be implemented. As suggested by White 96 (1985), curriculum developers must use the best of what is known about problem solving and information processing skills when designing their materials. The advantages of new technology may be lost if the problems we ask science students to solve are trivial. APPENDICES APPENDIX A APPENDIX A TEACHER CHECKLIST FOR STUDENTS WORKING IN GROUPS Tell the students that their grade will be based partly upon their individual improvement between the pretest and the post test. Also, the other part of their individual grade will be based upon how much improvement each member of their group does on the pretest and post test. So, it will help each of them to make sure all members of their group learns the content and practices the lessons. Tell the students, at least each day, that they should work with each other within their group. Tell the students, at least each day, that they should ask each other for help. Let the students know it is each of their jobs to make sure everyone in the group understands the lessons. Tell the students, whenever it is appropriate, that they should consult with the instructor only if no one in their group knows how to proceed. Remind the students, whenever appropriate, to refer to the AIMS if they have a question concerning the use of the database. Instruct the students to take turns at the keyboard. Each student in each group should have approximately equal time entering commands at the computer for the group. It is up to them to see that everyone uses the keyboard. Given you are convinced every one in a group has no reasonable suggestion on how to proceed while they are working on a lesson, direct the students in that group to the part of that lesson, or a previous lesson, that will help them. Give the students encouragement and every opportunity to work through the lessons without direction from you. (Note: If the question involves how to use the database, answer the question directly or direct the group of students to the AIM that will help them. How to use the database is pp; a substantive part of this research.) 
97 98 TEACHER CHECKLIST FOR STUDENTS WORKING INDIVIDUALLY Tell the students that their grade will be based upon the individual improvement between their pretest score and their post test score. Tell the students, whenever it is, that they should consult with the instructor only if they can not determine how to proceed on their own. Remind a student, whenever appropriate, to refer to the AIMs if he/she has a question concerning the use of the database. Given you are convinced a student has no reasonable suggestion on how to proceed while he/she is working on a lesson, direct that student to the part of that lesson, or a previous lesson, that will help him/her. Give all students encouragement and every opportunity to work through the lessons without direction from you. (Note: If the question involves how to use the database, answer the question directly or direct the individual student to the AIM that will help him/her. How to use the database is pp; a substantive part of this research.) APPENDIX B APPENDIX B THE PROCESSES OF ANALYSIS R. J McLeod When scientists conduct an experiment, they perform several processes. These processes may not be performed in the order that we will describe them, but they are all part of the general process of experimenting. You will use these same processes as you do the experiments on Climate and Weather. The processes that you will learn to use in this lesson are: Stating the purpose of an experiment Stating a hypothesis Determining the information needed Controlling variables .Arranging the information in a report so that the hypothesis is easy to determine * Analyzing the results iii-I'l- Each of these processes will be explained in this lesson and you will be given opportunities to practice them. Stating the Purpose: The purpose of an experiment is a general statement of what you expect to find. Many times, the purpose of an experiment is given to you. For example, in Experiment One, the purpose is: To determine if there is a relationship between temperature and latitude. The purpose is almost always a general statement of something that we believe to be true. We may believe it to be true because of some experience or because of other scientific theory. In this case, the purpose seems reasonable because of lots of experiences (in the northern hemisphere, we go south for vacations in the winter, birds fly south for the winter, the weather person on TV usually points to warmer temperatures in the south than in the north, etc). There is also scientific reasons to expect a relationship between temperature and latitude. What is meant by "a relationship between temperature and latitude"? 99 (D (W (\l n) _-_ i—de H (D (I O L-‘ ,A .140" A A- iu..n 1" In 100 In science, a relationship between two variables (for example temperature and latitude) means that when one of the variables changes, the other changes also. This is most easily seen by arranging one variable (either from high to low or from low to high) and observing how the other changes. The variable that we choose to arrange is called the independenp variablg, while the variable that we observe as a result of arranging the independent variable is called the dependent variable. Example 1 * As one variable increases, the other also increases. Variable A Variable B 20 500 30 600 40 700 * In this example, variable A is ordered from the smallest value (20) to the largest (40). Notice that as variable A gets larger, variable B also gets larger. 
Variable A is the independent variable in this case because we chose to order it and look for an effect on variable B. Variable B is the dependent variable.

Example 2

* As one variable decreases, the other also decreases.

Variable B    Variable A
40            700
30            600
20            500

* In this example, variable B is ordered from the largest value (40) to the smallest (20). Notice that as variable B gets smaller, variable A also gets smaller. In this example, Variable B is the independent variable.

1. Can you explain why Variable B is called the independent variable?
2. What is Variable A called?
3. Why?

(Answers are on pages 8 and 9 of this lesson.)

When both variables change in the same direction (increasing or decreasing), there is a relationship between the variables and it is called a direct relationship. Examples 1 and 2 are both direct relationships.

Example 3

* As one variable increases, the other decreases.

Variable A    Variable B
20            700
30            600
40            500

* In this example, variable A is ordered from the smallest value (20) to the largest (40). Notice that as variable A gets larger, variable B gets smaller. They go in opposite directions.

4. What is Variable A called?
5. Why?
6. What is Variable B called?
7. Why?

(Answers are on pages 8 and 9 of this lesson.)

Example 4

* As one variable decreases, the other increases.

Variable A    Variable B
40            500
30            600
20            700

* In this example, variable A is ordered from the largest value (40) to the smallest (20). Notice that as variable A gets smaller, variable B gets larger.

8. What is Variable A called?
9. Why?
10. What is Variable B called?
11. Why?

(Answers are on pages 8 and 9 of this lesson.)

When the variables change in the opposite direction (one increases while the other decreases), there is a relationship between the variables and it is called an inverse relationship. Examples 3 and 4 are both inverse relationships.

Relationships do not need to be perfect. Look at the following:

Variable A    Variable B
50            600
40            500
30            700
20            200
10            100

12. What is Variable A called?
13. Why?
14. What is Variable B called?
15. Why?
16. What kind of relationship does this example show?
17. Why?

(Answers are on pages 8 and 9 of this lesson.)

* In this example, variable A is ordered from the largest value (50) to the smallest (10). In general, as Variable A decreases, so does variable B. However, there is an exception. Can you find it?

In real life, there are almost always exceptions. For example, adult males are usually taller than adult females. If we were to arrange a table of height vs. sex so that the height was arranged from shortest to tallest, we would expect to find that most of the short adults are females and most of the tall adults are males. However, we all know short males and tall females. These are the exceptions, but the relationship between height and sex still exists. In this case, since sex is not numerical, the relationship is neither direct nor inverse. It is just a relationship.

If one variable is arranged so that it increases (or decreases), and the other variable seems to be quite random (neither increasing nor decreasing, even in a general way), then there is said to be no relationship between the two variables.

All of this means that if we want to find out whether there is a relationship between two variables, we should choose one of them to be the independent variable and arrange it so that it increases (or decreases), and then look at the other (the dependent variable) to see if it shows a direct or inverse relationship (or no relationship at all).
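As an aside for readers working through this lesson today, the check just described can be sketched in a few lines of Python. This is purely illustrative and was not part of the original materials, which used an AppleWorks database; the numbers are the toy values from Example 1.

    # Minimal sketch: order the chosen independent variable and
    # see how the dependent variable moves.
    import pandas as pd

    data = pd.DataFrame({"A": [20, 30, 40], "B": [500, 600, 700]})  # Example 1 values

    ordered = data.sort_values("A")        # arrange the independent variable
    diffs = ordered["B"].diff().dropna()   # how the dependent variable changes

    if (diffs > 0).all():
        print("direct relationship")       # both change in the same direction
    elif (diffs < 0).all():
        print("inverse relationship")      # they change in opposite directions
    else:
        print("no perfect relationship; look for exceptions")

The same idea carries over directly to the climate and weather data used in the experiments that follow.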
A computer can do this for us and do it very easily and quickly. Stating a hypothesis. In the northern hemisphere, as the latitude decreases, (as we go South), the temperature will increase. (Inverse relationship) Notice that the hypothesis states the purpose in a way that suggests what to look for. Other hypotheses that are equally good may be: 1. As the latitude increases in the northern hemisphere, (as we go North), the temperature will decrease. (Inverse relationship) We could also state the following hypotheses and they would be OK for the purpose of our experiment. However, our experiences suggest that these hypotheses are pp; true for the northern hemisphere, but might be true for the southern hemisphere. 2. As the latitude decreases, (as we go South), the temperature will decrease. (Direct relationship) 3. As the latitude increases, (as we go North), the temperature will increase. (Direct relationship). 104 Determine the information needed: In order to test the hypothesis, we need: a) Latitude b) July Temperature c) Location’s Name (The hypothesis doesn’t require that we have this information, but it is nice to have additional information like this at times). Controlling variable: Whenever an experiment is conducted to determine the effect of one variable on another (in this case, the effect of latitude on temperature), other variables may also affect the results. In this experiment, an obvious variable is the time of year. We know that the temperature also gets warmer in the summer and colder in the winter for most places in the United Sates. Therefore, if we look at a July for one location and a January for another, we may get very strange results. The answer is to control all of the other variables that we can. In this case, we will control month by selecting the same month for all of our data. Let’s control on month by selecting July. Arranging the Information: The way that the information is arranged on paper is very important in helping us interpret the data. We will assume that the information will be arranged in columns. This makes it easy to find relationships. The hypothesis states that as the latitude decreases, the temperature should increase. This tells us that one column should be the latitude and it would be nice if the one next to it is the temperature for that latitude. It also tells us that it would be very helpful if the data were arranged so that the latitude decreases (or increases). We will then look at the temperature column to see if it also increases or decreases. Since we are using real data, we should expect to find some exceptions, even if there is a relationship. Finally, we might print the name of the location in the third column. The report headings might look something like the following: Latitude Temperature Location 105 Important information about arrangement: .Arrangement includes which data is placed in which columns ppg ordering one of the columns (arranging the data in the column from high to low or low to high) to see how the data in the other column is then arranged. Analyzing the Results: Once you have stated the hypothesis, determined the information needed, the organization of the information, and printed this information out, you must then analyze it. Many times, you can just look at the printout and be able to see if a relationship exists. Other times, it may be necessary to go back one or two steps (or sometimes back to the beginning) and produce another printout. 
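Purely as an illustration (the lessons themselves used the AppleWorks data base file CLIMATETEMP on Apple II computers, not Python), a report like the one just described, controlled on July and ordered on latitude, might be sketched as follows. The column names and the handful of example values are invented for this sketch.

    # Hypothetical sketch of the report described above; values are made up.
    import pandas as pd

    climate = pd.DataFrame({
        "Location":    ["Miami", "Atlanta", "Chicago", "Duluth"],
        "Latitude":    [25.8, 33.7, 41.9, 46.8],
        "Month":       ["July", "July", "July", "July"],
        "Temperature": [83, 80, 74, 66],
    })

    # Control the month variable by selecting only July records (with a full
    # data file this step would drop the other months), then arrange the
    # columns and order on latitude, the independent variable.
    report = (climate[climate["Month"] == "July"]
              .loc[:, ["Latitude", "Temperature", "Location"]]
              .sort_values("Latitude", ascending=False))
    print(report.to_string(index=False))

Selecting a different month, or ordering on a different column, is a one-line change, which is the sense in which the computer makes it quick and easy to produce another printout.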
For example, we suggested that you control on month by selecting only the month of July. You may want to try another printout by controlling on a different month. You may decide that you need to arrange the information in a different way to make your analysis easier. With the computer, this is very quick and easy to do. Your analysis should include a statement of: 1. Whether you found a relationship to exist. 2. If possible, whether the relationship seems to be a direct relationship or an inverse one. 3. If the relationship is not perfect, note some of the exceptions and, if you can, state some inferences for them. Practice Exercises Now let’s see if you understand the processes of experimenting. See if you can help conduct the following experiment. The students in a certain school believed that boys were taller than girls. Their teacher asked them to take a survey of their class to see if this was true. The purpose of gathering the data was to see if there is a relationship between sex and height. Their hypothesis was: Hypothesis: If we compare the heights of boys to girls, the heights of boys . 106 The following data were obtained: NAME AGE SEX HEIGHT (inches) Alma 11 female 52 Andrea 13 female 59 Arthur 11 male 53 Bernard 11 male 57 Bill 13 male 57 Byron 13 male 62 Catherine 13 female 60 Cecil 13 male 57 Daisy 11 female 51 Debbie 13 female 59 Harold 11 male 52 Helga 11 female 56 Irma 13 female 56 Jackie 11 female 53 James 13 male 60 Jean 11 female 63 Jim 11 male 54 Joan 11 female 57 Luke 11 male 55 Nancy 13 female 58 Richard 13 male 59 Rick 11 male 51 Ronald 13 male 58 Sue 11 female 54 Trudy 11 female 56 Victor 13 male 56 Violet 13 female 61 Walter 13 male 61 Although all of the data are collected, it is very difficult to test their hypothesis! Look at the data and answer the following questions: 18. Which variables do you need to test the hypothesis that boys are taller than girls? 19. Which variable should be controlled so that it does not effect the results? 20. Which variable is really not needed for this experiment? 21. How might you organize the data so that the answer is easier to see? 22. 107 What arithmetic operations could be performed on the data to help in the analysis? 10. ll. 12. l3. 14. 15. 16. (Answers are on page 8 and 9 of this lesson). Answers to Questions on The Processes of Analysis Variable B is the independent variable because we chose to order (arrange) this variable and observe changes in another. Variable A is called the dependent variable. Because its values depend on the way we arrange the values of variable B. Variable A is the independent variable. Because we chose to order (arrange) this variable and observe changes in another. Variable B is called the dependent variable. Because its values depend on the way we arrange the values of variable A. Variable A is the independent variable. Because we chose to order (arrange) this variable and observe changes in another. Variable B is called the dependent variable. Because its values depend on the way we arrange the values of variable A. Variable A is the independent variable. Because it is obvious that the data for this variable are ordered from high to low. At first glance, you might think that is true of Variable B also, but look closely. Variable B is the dependent variable. Because its values depend on the arrangement of Variable A. It is clear that the data are not ordered on Variable B because of the value of 700. This is an exception to the general decrease of Variable B. 
If the data would have been ordered on Variable B, the value of 700 would have been first. This is a direct relationship. 17. 18. 19. 20. 21. 22. 108 Because both variables change in the same direction. In this case, when Variable A was ordered so that it decreased, Variable B also decreased. You need both the sex and the height. The age might affect the results. It is possible that at certain ages, girls are taller than boys. At any rate, since we are not sure, we should obtain data for the same ages and, in this way, control for age. The name of the student is of little interest for this experiment. It would be much easier to answer the question if all of the boys were grouped together and then all of the girls. In other words, we should order on sex. It would also be helpful to have sex and height printed in columns next to each other. An average of all of the boys' heights and an average of all of the girls’ heights would make comparison much easier. The following table is the same data except the age has been controlled at 13 years, the names of students have been left out, and the averages for the heights have been computed. Can you use this table to answer the question, "are boys taller than girls"? AGE SEX HEIGHT (inches) 13 female 59 13 female 60 13 female 59 13 female 62 13 female 63 13 female 58 13 female 61 Average height of females 60.3 13 male 57 13 male 62 13 male 57 13 male 60 13 male 59 13 male 58 13 male 61 Average height of males 59.1 109 Which variable is the dependent variable and which is the independent? Why? If you have trouble with this one, think about which variable was chosen to be ordered. Analysis might be even easier if the same data were organized so that the height is first and it is ordered for each sex like the following table. Does this make the picture any clearer? Note that there are some males taller than some females, but on the average, females in this class are taller than the boys. Notice also how much easier it is to identify those males that are taller than females because the data are ordered on height. HEIGHT SEX AGE 63 female 13 62 female 13 61 female 13 60 female 13 59 female 13 59 female 13 58 female 13 Average height of females 60.3 62 male 13 61 male 13 60 male 13 59 male 13 58 male 13 57 male 13 57 male 13 Average height of males 59.1 110 EXPERIMENT ONE The Relationship between Temperature and Latitude Note: All data for these experiments are for a location in the northern hemisphere. In the northern hemisphere, latitude increases as you move north. Purpose: To determine if there is a relationship between temperature and latitude. Hypothesis: As the latitude decreases, (as we go South), the temperature will increase. Can you suggest other hypotheses that would help us conduct this experiment? Controlling variables: Which variables will you control? Why? Determine the Information Needed: List the information that you will need in order to test the hypothesis: a) b) c) Location’s Name (The hypothesis doesn’t require that we have this information, but it is nice to have additional information like this at times) Arrange the Information: Which column will be first? Which column will be second? Which column will be third? Which column will you have the computer order? Which variable will be the independent variable? Why? 111 Which variable will be the dependent variable? Why? Will you order from high to low or low to high? Why? Analyzing the Results: Your analysis should include a statement of: 1. Whether you found a relationship to exist. 
If so, what makes you believe there is a relationship? If there is no relationship, what is your evidence? 2. If possible, whether the relationship seems to be a direct relationship or an inverse one. 3. If the relationship is not perfect, note some of the exceptions and if you can, state some inferences for them. 4. Will this same relationship (or lack of it) be true for another month? Materials Needed: * AppleWorks program disk * Climate data disk. Use Data Base File, CLIMATETEMP. You may use Lab Report Format, Lab 1 MonthlyA, Lab 1 MonthlyB, or you may format your own. 112 EXPERIMENT TWO Climate Experiment Two - The Relationship Between Latitude and the Temperature Difference Between July and January Purpose of the Experiment: In some locations, the temperature is nearly the same all year, while in other locations, there is a great difference between the summer temperature and the winter temperature. In most places, July is the hottest month and January the coldest. Therefore, for this experiment, we will use the difference between the July temperature and the January temperature as one variable. This difference has already been computed and is in the database. The purpose of this experiment is to determine if this difference is related to latitude. In other words, do you expect the difference between January and July temperatures to be greater in the North, the South, or is there no reason to believe that there is a relationship? Hypothesis: As the latitude decreases, (as we go South), the temperature difference (between January and July) will Can you suggest other hypotheses that would help us conduct this experiment? Controlling variables: Which variables will you control? Why? In this experiment, the variables have already been controlled by the selection of temperature difference between July and January. If you had reason to believe that East-West location was related to this temperature difference, you might later control on latitude. How would you do this? Determine the Information Needed: List the information that you will need in order to test the hypothesis: a) b) c) Location’s Name (The hypothesis doesn’t require that we have this information, but it is nice to have additional information like this at times.) .J .. 1 l'\- _ mini-pr i‘ a Fun..- .gJ 113 Arrange the Information: Which variable will be the independent variable? Which variable will be the dependent variable? Which column will be first? Which column will be second? Which column will be third? Which column will you have the computer order? Will you order from high to low or low to high? Why? Analyzing the Results: Your analysis should include a statement of: 1. Whether you found a relationship to exist. If so, what makes you believe there is a relationship? If there is no relationship, what is your evidence? 2. If possible, whether the relationship seems to be a direct relationship or an inverse one. 3. If the relationship is not perfect, note some of the exceptions and, if you can, state some inferences for them. 4. Will this same relationship (or lack of it) be true for another month? Materials Needed: * AppleWorks program disk * Climate data disk. Use Data Base File, CLIMATETEMP. You may use Lab Report Format, Lab 2, or you may format your own. 114 EXPERIMENT THREE The Relationship Between Precipitation and Latitude Purpose: To determine if there is a relationship between precipitation (rain, snow, etc) and latitude. Hypothesis: As the latitude , (as we go South), the precipitation will . 
Can you suggest other hypotheses that would help us conduct this experiment? Controlling variables: Which variables will you control? Why? Determine the Information Needed: List the information that you will need in order to test the hypothesis: a) b) c) Location's Name (The hypothesis doesn’t require that we have this information, but it is nice to have additional information like this at times) Arrange the Information: How will you arrange your columns? Which variable will be the independent variable? Which variable will be the dependent variable? Which column will you have the computer order? 115 Will you order from high to low or low to high? Why? Analyzing the Results: Your analysis should include a statement of: 1. Whether you found a relationship to exist. If so, what makes you believe there is a relationship? If there is no relationship, what is your evidence? 2. If possible, whether the relationship seems to be a direct relationship or an inverse one. 3. If the relationship is not perfect, note some of the exceptions and, if you can, state some inferences for them. 4. Will this same relationship (or lack of it) be true for another month? Materials Needed: * AppleWorks program disk * Climate data disk. Use Data Base File, CLIMATEPRECIP. You may use Lab Report Format, Lab 3 MonthlyA and Lab 3 MonthlyB, or you may format your own. 116 EXPERIMENT FOUR The Relationship Between Relative Humidity, Dry Bulb Temperature, and Wet Bulb Temperature Purpose: The purpose of this experiment is to determine the relationship between relative humidity and wet and dry bulb temperature. Dry bulb temperature is found by the normal means of taking the temperature reading from a thermometer that is dry (and they usually are unless it's raining). Wet bulb temperature is found by first wrapping some cotton around the bulb of a thermometer, soaking it in water, and then causing air to blow past it (for example using a fan to blow on it). Just as you feel cooler when you are wet and a wind is blowing, a wet bulb thermometer will usually have a lower temperature than a dry bulb thermometer. Scientists use the differencg between thg dry bulb and the wet bulp temperatures to determine relative humidity. Relative humidity, in turn, is an indication of how much water is in the air. For example, when it is raining, the relative humidity is 100%. In this experiment, we want to find out what relative humidity is related to. That is, consider relative humidity the dependent variable and determine what other variable is related to it and whether the relationship is direct or inverse. Hypothesis #1: As the , the relative humidity will also Hypothesis #2: Hypothesis #3: Controlling variables: Which variables will you control? Why? Note that the database you will be using for this experiment is all for the same location including temperature and relative humidity readings every three hours for the months of July and January. Do you think that the month may affect your results? How can you control for this? 117 Determine the Information Needed: List the information that you will need in order to test the hypothesis: a) b) Arrange the Information: For each hypothesis, you must produce a report that will test the hypothesis (help you to answer it). As in the other experiments, consider: The arrangement of the columns Which will be the independent and the dependent variables. The need to control variables. How you will order the data. Analyzing the Results: Your analysis should include a statement of: 1. 
Whether you found a relationship to exist. If so, what makes you believe there is a relationship? If there is no relationship, what is your evidence? 2. If possible, whether the relationship seems to be a direct relationship or an inverse one. 3. If the relationship is not perfect, note some of the exceptions and, if you can, state some inferences for them. 4. Will this same relationship (or lack of it) be true for another month? Materials Needed: * AppleWorks program disk * Weather data disk. Use Data Base File, LOCAL. You may use Lab Report Format, Lab 4 Humid, or you may format your own. 118 EXPERIMENT FIVE Are Winter Months Mere Cloudy than Summer Months? Purpose: The purpose of this experiment is to determine if there is a relationship between the amount of cloud cover and the two seasons, winter and summer. That is, is winter more cloudy than summer or is summer more cloudy than winter, or is there no difference. Remember, the data you are using is only for 1985 for the city of Grand Rapids, Michigan. Without more data, you panno; make general statements for all years in Grand Rapids, nor can you say that these same conditions exist in other communities. The investigation is intended to show you how this analysis is done so that, if you want to, you could obtain data for your community and for other years and do a similar report. Hypothesis: The average cloud cover for all of the days of July will be less than Can you suggest other hypotheses that would help us conduct this experiment? Controlling variables: Which variables will you control? Why? Remember that the database you will be using for this experiment is all for the same location. However, you have very detailed information for this location including readings every three hours for the months of July and January. Do you think that the time of day may affect your results? How can you control for this? How will you compare January cloud cover to July cloud cover? Determine the Information Needed: List the information that you will need in order to test the hypothesis: a) b) c) In order to do this experiment, you will need to get the total of the cloud cover for the period you select. If you don’t remember how, review the AIMS modules. Arrange the Information: What will you consider as you arrange the information? 119 Analyzing the Results: Your analysis should include a statement of: 1. Whether you found a relationship to exist. If so, what makes you believe there is a relationship? If there is no relationship, what is your evidence? 2. If possible, whether the relationship seems to be a direct relationship or an inverse one. 3. If the relationship is not perfect, note some of the exceptions and, if you can, state some inferences for them. 4. Will this same relationship (or lack of it) be true for another month? Materials Needed: * AppleWorks program disk * Weather data disk. Use Data Base File, SKYTEMP. You may use Lab Report Format, Lab 5 Cloud, or you may format your own. APPENDIX C APPENDIX C Dear Parent or Guardian, I am preparing to study the effects of the number of students in a learning group on the achievement of certain science process skills as part of the requirements for my degree at Michigan State University. This study involves student lessons that we have prepared, which will be used during the regular school day. It also involves the administration to each student of a 36 item test before and after these lessons which will permit measurement of the effectiveness of the lessons. 
Your child's teacher and school principal have approved this project, and now we are requesting you permit the teacher to release your child’s grade to us for research purposes. Would you help us in learning more about science instruction by giving us permission to include your child in this study? If so, please sign the reverse side of this sheet. Your child’s identity will not be revealed in any of our written reports. If you are interested, I would be happy to make the overall results of this research available to you if you contact me with such a request. Thank you for helping me with my research, and to improve our knowledge of science instruction for the schools. Sincerely, Zane L. Berge Graduate Student 120 121 Michigan State University Department of Educational Systems Development As the legal parent/guardian of , I give my consent to one of the following options: I give my permission for the above named student to participate in the study as has been described. I do not give my permission for the above student’s scores to be released for research purposed in the study as has been described. I am indicating that the research project being conducted by Michigan State University, has been explained to me and that I have been informed about my child’s involvement in this project. I recognize that I have the right to withdraw my permission for my child’s participation at any time prior to the study without penalty. Signed (parent/guardian) Date I am indicating my willingness to participate as indicated above. Signed (student) PLEASE RETURN TO YOUR TEACHER! APPENDIX D APPENDIX D What variables Effect the Strength of a Magnetic Field? The following describes a question that lends itself to research, and asks you to answer some questions about how you might plan for an experiment to help answer that question. Write your answers directly on these pages in the spaces provided. A classroom discussion on magnetism lead to the production of a simple electromagnet. The class used insulated bell wire connected to a dry cell battery(s), and coiled the wire around a nail, (used as the electromagnet’s core in this case), to make an electromagnet. They found this magnet could be used to attract tacks. The idea was presented that the strength of the magnetic field, measured by how many tacks could be attracted, can be controlled by how the electromagnet is constructed. For instance, one class member suggested that, (given more than one battery is used in the construction of the magnet), whether the batteries are connected in parallel or series effects the strength of the resulting magnet. Other suggestions about what affected the strength of the magnet involved the number of turns of the coil (wire) around the core, the number of batteries, and the kinds of materials making up the core of the electromagnet. Purpose: To determine what variables effect the strength of a magnetic field when constructing a simple electromagnet. Hypothesis: The greater the number of turns of the coil, the more tacks the electromagnet will pick up in a string. 1. What type relationship is hypothesized (direct, indirect, no relationship)? Controlling variables: Given the hypothesis above, name two variables you would control so they do not effect the results when testing that hypothesis? 2a. b. 122 a". 123 Determine the Information Needed: List the information (two variables) that is absolutely needed in order to test the hypothesis that the number of turns of a coil increase the strength of the magnet: 3a. b. 
Arrange the Information: 4a. Which variable will be the independent variable? b. Which variable will be the dependent variable? c. Which variable should be ordered (either from highest value to lowest or visa versa), to test the hypothesis given above? Can you suggest two other hypotheses that may help the students answer the question of which variables effect the strength of a magnetic field? 5a. As the is increased, the will increase. b. Analyzing the Results: The analysis of this data should include a statement of whether a relationship exists between the independent and dependent variables, and what evidence there is for such an inference. Can you name at least two other statements the analysis should include? 6a. 124 Effect of Sunlight on Heating various Earth Surfaces The following describes a question that lends itself to research, and asks you to answer some questions about how you might plan for an experiment to help answer that question. Write your answers directly on these pages in the spaces provided. During a classroom discussion about wind, a number of issues were noted by students in a 9th grade class. It was pointed out that as cold air moves in to replace warm air that is rising, winds are created. The discussion turned to how might the heating of the earth’s surface cause convection currents (wind). The experience of some students caused them to guess that sunlight causes materials such as white sand to become hotter than materials such as water. They believed the different temperatures of the earth’s surfaces caused convection currents. Other class members were not so sure these suspicions were justified. To start with, a couple class members recalled that since the earth is tilted, different parts of the earth receive more or less direct rays of the sun. Further, they suspected that direct rays of the sun produce a greater heating effect than slanted rays. However, further discussion lead to ideas that heating of the various surfaces on earth may be affected by such things as: the total amount of materials available to absorb the sun’s radiant heat, the area of the surface exposed to the sun, the length of time of exposure of the surface to the sun, and the color of the surface. The students wanted to scientifically test some of the ideas they suspected to be true. To test one of the notions, the students took two equal size bread pans filled with an equal amount of soil. They stirred the soil thoroughly to ensure equal starting temperatures, and placed one container flat on a window sill in the sunlight. This pan received slanted sunlight. They propped up one end of the second container so that it received the sunlight at right angles (that is, direct rays). They used a thermometer to record the temperature of each soil sample at fifteen-minute intervals for one hour. Purpose: To determine what variables effect the sun’s heating of various surfaces. Hypothesis: As the angle of the earth’s surface and the rays of the sun decreases from 90 degrees (that is, becomes less direct), the rate at which the surface will absorb heat energy will decrease. 7. What type relationship is hypothesized (direct, indirect, no relationship)? 125 Controlling variables: Given the hypothesis above, name two variables you would control so they do not effect the results when testing that hypothesis. 8a. b. 
Determine the Information Needed: List the information (two variables) that is absolutely needed in order to test the hypothesis that the angle the sun’s rays strike a material has an effect on the amount of heat absorbed: 9a. b. Arrange the Information: 10a. Which variable will be the independent variable? b. Which variable will be the dependent variable? c. Which variable should be ordered (either from highest value to lowest or visa versa), to test the hypothesis given above? Can you suggest two other hypotheses that may help the students answer the question of the effect sunlight has on heating various surfaces on earth? 11a. AS the increased, the will increase. b. Analyzing the Results: The analysis of this data should include a statement of whether a relationship exists between the independent and dependent variables, and what evidence there is for such an inference. Can you name at least two other statements the analysis should include? 12a. res YOL que spa 126 What variables Effect the Period of a Pendulum? The following describes a question that lends itself to research, and asks you to answer some questions about how you might plan for an experiment to help answer that question. Write your answers directly on these pages in the spaces provided. Students in a 9th grade classroom have noticed that not all pendulums swing at the same speed. For example, pendulums of different clocks swing at different speeds. The students want to find out what causes this phenomena. Using a thumb tack at the edge of a desk, some lengths of string for the pendulum arm, and different size iron rings for bobs, they fashion simple pendulums and decide to conduct an experiment to discover what effects the number of swings (per unit of time) of a pendulum. These students gathered data on the mass of the bob (they use either 10 grams, 25 grams, or 50 grams weights), the actual time for different numbers of swings of the pendulum, the number of swings (they timed either 50 swings, 100 swings, or 200 swings), and the length (they used either 15 centimeters, 30 centimeters, or 45 centimeters), of the string to the center of the bob from the thumb tack. The students worked in pairs and gathered dozens of records, each record contained the data described above. Following are examples (not the actual data) of the information in three records (each record is the four pieces of information on one line about one experimental trial): TIME MASS NUMBER OF LENGTH OF THE Alseconds) (grams) SWINGS STRING (cm) 145 10 50 15 192 25 200 30 45 50 50 30 The students gathered the information and now wish to plan and organize the data in a way that will let them discover what variables effect the period of a pendulum. Purpose: To determine what variables effect the period of a pendulum. Hypothesis: As the mass of the bob is increased, the period of the pendulum will increase. 1. What type relationship is hypothesized (direct, indirect, no relationship)? )0. «\\ '1‘ -l 8 PC 127 Controlling Variables: Given the hypothesis above and the four pieces of information collected for each trial, name two variables you would control so they do not effect the results, when testing this hypothesis? 2a. b. Determine the Information Needed: List the information (two variables) that is absolutely needed in order to test the hypothesis that the mass of the bob is related to the period of a pendulum: 3a. b. Arrange the Information: 4a. Which variable will be the independent variable? b. Which variable will be the dependent variable? c. 
c. Which variable should be ordered (either from highest value to lowest or vice versa) to test the hypothesis given above?

Can you suggest two other hypotheses that may help the students answer the question of which variables affect the period of a pendulum?

5a. As the _______ is increased, the _______ will increase.

b.

Analyzing the Results: The analysis should include a statement of whether a relationship exists between the independent and dependent variables, and what evidence there is for such an inference. Can you name at least two other statements the analysis should include?

6a.

b.

Effect of Sunlight on Heating Various Kinds of Earth Surfaces

The following describes a question that lends itself to research, and asks you to answer some questions about how you might plan for an experiment to help answer that question. Write your answers directly on these pages in the spaces provided.

During a classroom discussion about weather and the heating effect of sunlight on the world's oceans and land masses, a number of issues were noted. A couple of class members pointed out that surfaces such as roadways and beaches become very warm in sunlight, whereas water does not seem to become so warm. The experiences of some students caused them to guess that sunlight causes white sand to become hotter than water, and that sunlight causes topsoil to become hotter than water. Other students were not so sure about these suspicions. Further discussion led to ideas that heating of the various surfaces on earth may be affected by such things as: the total amount of materials available to absorb the sun's heat, the area of the surface exposed to the sun, the length of time of exposure of the surface to the sun, whether the surface is exposed to direct rays of the sun or slanted rays, and the color of the surface. The students wanted to scientifically test some of the ideas they suspected to be true.

Purpose: To determine what variables affect the sun's heating of various surfaces.

Hypothesis: As the area of the surface exposed to the sunlight is decreased, the temperature of the surface will increase.

7. What type of relationship is hypothesized (direct, indirect, no relationship)?

Controlling Variables: Given the hypothesis above, name two variables you would control so they would not affect the results when testing that hypothesis.

8a.

b.

Determine the Information Needed: List the information (two variables) that is absolutely needed in order to test the hypothesis that the size of the area of the surface exposed to sunlight is related to the temperature of that surface:

9a.

b.

Arrange the Information:

10a. Which variable will be the independent variable?

b. Which variable will be the dependent variable?

c. Which variable should be ordered (either from highest value to lowest or vice versa) to test the hypothesis given above?

Can you suggest two other hypotheses that may help the students answer the question of which variables affect the sun's heating of various surfaces?

11a. As the _______ is increased, the _______ will increase.

b.

Analyzing the Results: The analysis should include a statement of whether a relationship exists between the independent and dependent variables, and what evidence there is for such an inference. Can you name at least two other statements the analysis should include?

12a.

b.
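The following is a minimal sketch, written here in Python, of how the "Arrange the Information" and "Analyzing the Results" steps might be carried out on the three sample pendulum records shown above. It is an illustration only, not part of the original instrument, and the classroom activities in this study used a file management program rather than code of this kind; the field names and the derived period value (time divided by the number of swings) are assumptions made for the sketch.

    # Example pendulum records taken from the sample table above (not actual study data).
    # Each record holds: time for the timed swings (seconds), mass of the bob (grams),
    # number of swings timed, and length of the string (cm).
    records = [
        {"time_s": 145, "mass_g": 10, "swings": 50, "length_cm": 15},
        {"time_s": 192, "mass_g": 25, "swings": 200, "length_cm": 30},
        {"time_s": 45, "mass_g": 50, "swings": 50, "length_cm": 30},
    ]

    # Derive the dependent variable: the period is the time divided by the number of swings.
    for r in records:
        r["period_s"] = r["time_s"] / r["swings"]

    # Arrange the information: keep only records that hold a controlled variable
    # (string length) constant, then order them by the independent variable (mass).
    length_to_hold = 30  # illustrative choice of a controlled value
    subset = [r for r in records if r["length_cm"] == length_to_hold]
    subset.sort(key=lambda r: r["mass_g"])

    # Analyze the results: inspect whether the period rises as the mass rises.
    for r in subset:
        print(f"mass = {r['mass_g']:>3} g  ->  period = {r['period_s']:.2f} s")

With only the three sample records the comparison is not conclusive, but the sketch shows the idea behind the questions: order the records on the independent variable while holding the other variables constant, then examine how the dependent variable changes.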