This is to certify that the dissertation entitled "Instructional Technology Evaluation in K-12 Education Systems in the State of Michigan: A Study of Evaluation Procedures and Results to Determine the Effectiveness of Technology Systems Applied to Instruction," presented by James David Mapes, has been accepted towards fulfillment of the requirements for the Doctor of Philosophy degree in Educational Administration.

INSTRUCTIONAL TECHNOLOGY EVALUATION IN K-12 EDUCATION SYSTEMS IN THE STATE OF MICHIGAN: A STUDY OF EVALUATION PROCEDURES AND RESULTS TO DETERMINE THE EFFECTIVENESS OF TECHNOLOGY SYSTEMS APPLIED TO INSTRUCTION

By

James D. Mapes

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Educational Administration

1996

ABSTRACT

INSTRUCTIONAL TECHNOLOGY NETWORK EVALUATION IN K-12 EDUCATION SYSTEMS IN THE STATE OF MICHIGAN

By James D. Mapes

The researcher's purposes in this study were (1) to assess the roles and responsibilities of persons within Michigan K-12 school systems whose duties influence instructional technology systems; (2) to determine the types of instructional technology evaluation currently in use, as identified by what is evaluated and the purposes for which it is evaluated; (3) to determine the relationship between the data used for evaluation and the goals suggested for the systems at inception; (4) to determine how data gathered during the evaluation process are used; (5) to gauge the perceived availability and adequacy of training provided to system evaluators; and (6) to determine whether system changes have occurred as a result of evaluation.

Survey instruments were sent to both system administrators (intermediate school district superintendents) and system users (networked high school principals and technology directors). Collected data were analyzed to provide answers to the research questions. Given the limitations of the study, the following major conclusions were drawn: (1) instructional technology evaluation is taking place within K-12 systems; however, comprehensiveness and consistency are lacking within the evaluation process; (2) network goals were usually developed prior to network operations, but the role of those goals in the evaluation process is not clear to many users; (3) network goals focused primarily on instruction and learning and secondarily on fiscal advantages; (4) successful instructional networks can take many different forms yet still meet the needs of users; (5) system users are afforded opportunities to provide input into evaluation, but the avenues are informal and not well established; (6) system administrators are given training in evaluation techniques, but system guidelines are insufficient for guidance; and (7) a lack of consistent, clear direction exists when evaluative practices are used within instructional technology systems in the State of Michigan.

Dissertation Director: Dr. Samuel A. Moore II
Copyright by
JAMES DAVID MAPES
1996

ACKNOWLEDGEMENTS

The completion of any major task requires persistence bordering on stubbornness, time, and the assistance of many people. Although it may be impossible to acknowledge all who helped in this project, the support and assistance of those who gave willingly of their knowledge and talents is greatly appreciated.

First I would like to thank my family. My wife, Cindy, spent endless hours proofreading and encouraging, and children Erin, Kelly, and Brendan found me at the computer many times when they would have preferred that I be elsewhere. The patience which they exhibited was gratefully accepted. I would also like to thank my committee chair, Dr. Samuel Moore II. Dr. Moore was most helpful in the guidance he provided and the encouragement which he gave to push me toward completion. I would thank, too, the other members of my committee, Dr. Steven Kaagan, Dr. Richard McLeod, and Dr. James Rainey. Their assistance and counsel played a major role in keeping me on track and provided insights into overlooked avenues of inquiry. Finally, I would like to thank Dr. Murari Suvedi. His willingness to answer my questions about the statistical data was most helpful.

TABLE OF CONTENTS

List of Tables

List of Appendices

Chapter

I. INTRODUCTION
   Researcher's Purpose for Conducting This Study
   Statement of the Problem
   Research Questions
   Definition of Terms
   Definitions of Instructional Technology
   Evaluation of Instructional Technology
   Significance of the Study
   Delimitations
   Limitations
   Outline of Study

II. REVIEW OF RELATED LITERATURE
   Review of Learning Theory
   Definition of Effective Instruction
   Evaluation of Educational Programs
   Evaluation of Effective Instruction
   Distance Learning Techniques
   Delivery of Distance Learning Technologies
   Pace of Technological Change
   Development of Instructional Technology Systems
   Instructional Technology Evaluation Systems
   Commonly Used Instructional Technology Evaluation Systems

III. METHODS AND PROCEDURES
   Research Population
   Development of the Survey Instrument
      Content
      Construction
      Validity
   Procedures for Data Collection
   Procedures for Data Analysis

IV. PRESENTATION OF DATA
   Introduction
   Role of Respondent
   Primary Responsibilities of Respondents
   Other Responsibilities Listed by Respondents
   District/Building Use of Instructional Technology
   System Description
   Operational Description of System
   System Evaluation
   Frequency of System Evaluation
   Goal and Change Data
   Ways in Which Instructional Technology Goals Are Expressed
   Process Evaluation
   Use of Evaluation Data
   Chapter Summary
   Results of Data Analysis for Role and Responsibility
   System Description
   Goals and System Evaluation
   Goals of Instructional Technology Networks
   System Administrator Role
   Description of Evaluation Input Elements
   Use of Evaluation Reports
V. SUMMARY, FINDINGS, CONCLUSIONS AND DISCUSSION, SUGGESTIONS FOR FURTHER RESEARCH, AND REFLECTIONS UPON THE STUDY
   Summary
   Findings
   Conclusions and Discussion
   Implications for Instruction and Learning
   Suggestions for Further Research

APPENDICES

BIBLIOGRAPHY

LIST OF TABLES

1. Role of Respondent
2. Other Roles Reported
3. Primary Responsibility of Respondent
4. Other Reported Responsibilities
5. Primary Responsibility by Role
6. Other Responsibilities Listed by Respondents
7. Description of Systems
8. Operational Descriptions of Systems
9. Methods of Signal Transmission
10. Structured System Evaluation
11. Frequency of System Evaluation
12. Perceptions Regarding Technology Evaluation
13. Selected Technology Goals
14. Evaluation Process Inputs
15. Evaluation Data Use
16. Frequency of Evaluation (Other)

LIST OF APPENDICES

Contact Persons for K-12 Instructional Telecommunications Systems
List of Intermediate School District Superintendents
Survey Instrument
Cover Letter
Letter of Permission to Conduct Study

Chapter I

INTRODUCTION

Not long ago in the history of education, as noted by Anglin (1995), instructional technology was limited in scope to the 16mm film, film strips, typewriters, reel-to-reel tape recorders, and the telephone. Over the decades of the 1950s, 1960s, and 1970s, various attempts to add distance learning to the mix of instructional technologies were embarked upon. These endeavors used several broadcast technologies but were dedicated to a common purpose. That purpose was to change the way teachers teach by delivering instruction via one-way video and one-way audio technology, thereby increasing the fiscal efficiency of instruction. The efforts met with varying degrees of success depending on the goals agreed upon for instructional delivery and the commitment of both instructional providers and users to making the technology work.

Researcher's Purpose for Conducting This Study

While educational technology has been a significant part of instructional delivery for many years, the complexity and rapid rate of change have increased the need for serious study, particularly as it pertains to the way in which instructional technology is evaluated. This researcher's specific purposes in undertaking this study were to (a) determine the types of instructional technology evaluation currently in use, as identified by what is evaluated and for what purpose; (b) determine who conducts the evaluations and the type of data used; (c) determine if data gathered during the evaluation process are used to identify needed changes; (d) determine what system changes have occurred as a result of evaluation; and (e) determine training characteristics and needs of system evaluators.

Educational technology is represented by many differing technologies, some high-tech and others of lesser technical complexity. The operational definition of instructional technology used by this researcher is any technology which supports, enhances, or supplants traditional pedagogical techniques.
Examples of instructional technology include interactive voice/video/data technologies, internet use, e-mail, voice mail, computers as communication devices, computers as tools, computer networks (LAN and WAN), one-way technologies, or any combination, refinement, or variation of the above technologies.

While the definition of instructional technology is very broad, the literature reporting the new technologies and the effect of these technologies on the learner, or instruction in general, is not yet highly developed. A body of research has grown around the use of video, voice, and data technology, however, which makes it possible to examine the effect of this technology on the learner in addition to how the technology and its impact are evaluated. For this reason, this researcher will focus attention on distance technologies, both synchronous and asynchronous. Some of the information thus gained may prove applicable to other instructional technologies.

Statement of the Problem

Public education has had a long tradition of evaluating learning systems, curriculum, student outcomes, administrative effectiveness, teacher effectiveness, parent effectiveness, and board effectiveness. When we look at the educational technology literature, it becomes apparent that various types of evaluation have been used depending upon the values placed on the technology, its use, and the intent of that use. Some evaluators have stressed relative economics as an evaluation basis. Others founded evaluations on the efficiency of a technology-based instructional delivery system versus a traditional instructional delivery system. More recently, others have begun measuring effectiveness in terms of changes in the ways students learn.

It may be assumed, perhaps, that technology represents the capability of bringing about new educational services and service providers, new capabilities and organizations in schools, and new levels of student achievement. All of these potential changes increase the need for effective evaluation. The access that schools gain through new delivery mechanisms, via technologies that weren't previously available in classrooms, requires the analysis, review, and evaluation of the services provided. Are they of any advantage in the learning environment? Has the process of teaching and learning truly changed? Has the product changed? Will the changes be permanent and affect the way knowledge is transmitted? Do initiatives that are regional, state-wide, or local in nature have the same impact when evaluated according to their effect upon the classroom and learner? Do they fit together in a comprehensive and mutually sustainable manner? Does research and development need to receive added emphasis in the schools? Where do vendors of materials, distance learning providers, business communities, publishers, or broadcasters enter the picture? How are schools assessing the new information and services that they are receiving? How will project participation be enhanced with access to partners from widely different geographical areas and cultural backgrounds? How will curricular projects be evaluated which rely increasingly on current primary sources rather than dated secondary sources? How are changes in student and teacher roles evaluated? What will happen to a traditional curriculum when the technology exists to accelerate and expand it? How can technology advance problem solving instruction and practice?
In order to adequately provide a framework for the answers to these questions, one must spend some time and effort examining the literature regarding instruction, its effectiveness, and the evaluation of that effectiveness. Perhaps new ways of determining effectiveness require new tools for evaluation. One should be able to accurately view current realities. One also needs to be able to effectively compare different technology-enhanced curricula and identify advantageous technological features.

Traditional methods of evaluating the various forms of instructional technology systems have involved examining the cost effectiveness of such endeavors. This desire to prove cost effectiveness was evident in the earliest evaluations of instructional technology. While most past studies have shown that cost controls could be instituted in the areas of travel and personnel, little concern was evident with either the changes necessitated in instruction or the effect of technology on the learner. Later studies tended to include data on the types of changes in pedagogy that could be achieved through instructional technology in addition to the data on cost effectiveness. These evaluations were primarily comparative, matching inputs and student outcomes of traditional classrooms with those of technologically enhanced classrooms.

Research Questions

To determine the extent of instructional technology systems evaluation in use within the State of Michigan, the bases for that evaluation, and the rationale behind the choice of evaluation types; to determine any relationship that might exist between the original goals for the development of a system and the resulting evaluation system; and to determine who is responsible for evaluation and to what purpose the evaluation is used, the following research questions were addressed.

To address the specific purpose:

1. To determine the extent of instructional technology evaluation in use with selected delivery systems within the State of Michigan serving primarily K-12 students and the frequency of that evaluation, the following questions were generated:

1.1 Does the instructional technology system(s) currently in use within your network receive regular structured evaluation?

1.2 What is the frequency of the network evaluation, i.e., annually, biannually, quarterly, continuously, other?

To address the specific purpose:

2. To determine the continued influence of the original goals for the instructional technology network and the comprehensiveness of those goals relative to the instructional technology system being evaluated, the following questions were generated:

2.1 Prior to establishment of the instructional technology network, were formal goals established?

2.2 Was the governing body of each participating entity asked to adopt the established goals?

2.3 Was information dissemination adequate to familiarize individual participants with the established goals?

2.4 Were individual users, including students, teachers, and administrators, part of the goal development process?

2.5 How were the goals expressed? Mark all that apply.
   2.5.1 As fiscal outcomes
   2.5.2 As instructional outcomes (changes in teaching methods/technologies)
   2.5.3 As learner outcomes (changes in the ways students learn)
   2.5.4 Other (please specify)

To address the specific purpose:

3. To determine the impact of the original goals on the evaluation system, the following questions were generated:

3.1 Were the established goals made a part of the original evaluation system?
3.2 Has the evaluation system changed since it was first initiated?

3.3 Have system goals changed since they were first established? Was the change formalized?

To address the specific purpose:

4. To determine the position(s) of persons responsible for system evaluation, the following questions were generated:

4.1 What roles and responsibilities are associated with instructional technology in K-12 school districts?

4.2 How do those roles and responsibilities interact with each other?

4.3 Who are the input providers to the evaluation process?

4.4 How is input solicited?

To address the specific purpose:

5. To determine the manner in which the information gleaned from the evaluation process is used and to what end its use is ascribed, the following questions were generated:

5.1 Once evaluation is completed, are reports generated and disseminated?

5.2 To whom are the reports disseminated?

5.3 Does a formal process exist for operationalizing the information from the reports in network operations?

5.4 Is the network administrator responsible for processing the information from the evaluation?

5.5 Who determines what network changes result from the evaluation information?

To address the specific purpose:

6. To determine the training characteristics and needs of system evaluators, the following questions were generated:

6.1 Does a set of evaluation guidelines exist which serves to assist the system evaluator in the task of assessment?

6.2 Does the system evaluator receive regular opportunities for training in areas which will provide knowledge and information useful in the practical application of data generated through evaluation?

Definition of Terms

For the purposes of this study, the following terms were defined:

1. Effective Instruction: Instruction which promotes engaged learning. Learning tasks need to be authentic, challenging, and multidisciplinary. Problem solving is a basic part of the instructional regimen, and the concept of a learning community is integral as an adjunct to purposeful problem solving and collaboration from multiple perspectives.

2. Engaged Learner: An engaged learner is one who is taught to be responsible for their own learning, who is self-regulating, who is able to define goals, and who regulates achievement of those goals. An engaged learner is one who is energized by learning, who can think intuitively, and who can solve problems in a cooperative manner.

3. Instructional Technology (IT): Technology which supports, enhances, or supplants traditional pedagogical technique. Examples include interactive, two-way telecommunications; use of the internet; e-mail; voice mail; computers as communications devices; computers as problem-solving tools; computer networks; and other types of synchronous and asynchronous distance learning.

4. Computer Assisted Instruction (CAI): The electronic version of the standard drill-and-practice method of instruction common in most schools. Its advantages include freeing teachers to do more one-on-one instruction, allowing students to work at their own pace without external pressures, providing patient repetition, providing positive reinforcement, and allowing the design of programs which can easily meet the specifications of standardized testing instruments, thereby making improvements in test scores easy to measure.

5. Computer Managed Instruction (CMI): Student management programs that allow the teacher to see exactly what each student is doing each session on the computer, what skills are being practiced, and with what degree of success.
6. Synchronous Communication: Cheng, Lehman, and Reynolds (1992) define synchronous communication as one-on-one communication via the telephone, videophone, multimedia workstation, or other technique, as well as group learning including audioconferencing and videoconferencing. The participants in a discussion or a tutorial are on-line at the same time, although they may be separated by distance.

7. Asynchronous Communication: One-on-one communication including e-mail, voice mail, and multimedia workstations, and group learning including computer conferencing, Greif's (1988) computer-supported collaborative work (CSCW) environments, and multimedia networking. Participants are separated by time even if not separated by distance.

8. System Evaluation: Any type of educational technology network evaluation of effectiveness, including cost-based, use-based, or learner outcome-based evaluation.

9. Integrated Services Digital Network (ISDN): Malfitano and Cincotti (1993), in their discussion of future networking, define ISDN as a network capable of carrying all types of messages, including audio, video, text, or computer data, through the same channels in the same digital format. This will enable messages to be integrated at end-user terminals into multimedia presentations.

10. Multimedia Teleconferencing: Steinberg (1992) describes multimedia conferencing as teleconferencing via integrated multimedia computer technology used to provide a learning platform which will most resemble real-time, interactive instruction.

11. Audio Conferencing: Telephone contact between two or more sites. Sometimes speakerphones are used with a connection via an audio bridge. No visual communication is possible unless videophones are used.

12. Audiographic Systems: The combination of graphic support with audio conferencing. Using computer-generated visual material (i.e., an electronic blackboard or similar technology) allows for the addition of a video component to two-way audio interactivity.

13. Broadcast Television: Simple transmission of video and sound over common UHF and VHF television frequencies. Use of telephones allows for interactivity between the origination and reception sites.

14. Coaxial Cable: This is standard commercial video cable as installed by cable television providers. This cable is capable of carrying video, audio, and data signals between points. Although not yet common, bandwidths have become sufficiently wide to support two-way video and audio transmission.

15. Direct Broadcast Satellite (DBS): This is a one-way video technology. Full-motion video is transmitted via satellite to the user. Audio interaction must be by telephone.

16. Fiber Optics: Using bundled, ultra-thin glass fibers, full-motion video, audio, and data may be sent with light impulses. Large bandwidths make this a desirable transmission medium.

17. Microwave: Microwave transmission systems can be capable of two-way video and two-way audio transmission. System expenses tend to be high since costly transmitters and receivers are required in addition to towers on which transmission equipment is placed. Transmitters and receivers must be placed in a line-of-sight arrangement, and interaction occurs directly over the microwave system.

18. Instructional Television Fixed Service (ITFS): This is a one-way microwave technology. It is limited geographically since transmission depends upon line-of-sight clearances.
A typical system may be able to transmit twenty to twenty-five miles depending upon the height of the tower from which the signal originates.

19. Wide Area Networking (WAN): Wide area networking is a communication system that connects geographically remote equipment. It is primarily used to connect data, voice, video, and computer systems and can include local area networks (LAN).

20. High Speed Networks (HSN): These are the networks of the future. They will be the equivalent of super WANs, capable of interconnecting widely differing geographic locations with real-time multimedia services.

21. Frame Relay Networking: Frame relay is a wide area networking technology which has been gaining in popularity within recent years. It is a technology that relies upon encapsulating systems and bandwidths ranging from 56 Kbps to 1.544 Mbps.

22. Asynchronous Transfer Mode (ATM): ATM is a layered architecture which allows several services, like voice, video, and data, to be mixed over a network. ATM is not tied to a specific type of transmission medium.

23. E-Mail: E-mail is electronic mail. It is essentially a mail system made possible over a networked system which allows users to communicate with each other individually or, through the use of listservs, collectively. It is a form of asynchronous communication.

24. Voice Mail: Voice mail is another form of asynchronous communication. Using the telephone system, users are able to leave messages for other users when the receiver of the message is not present or is busy. The message is stored, reviewed when possible, and then answered.

25. Internet: A system comprised of scores of independent computer networks (military, academic, and commercial), all interconnected. Its genesis was in the 1960s, when the Department of Defense commissioned the Advanced Research Projects Agency (ARPA) to configure a computer network. That original network was called the ARPAnet, and from it grew today's internet.

Definitions of Instructional Technology

Most definitions of instructional technology identify the primary function of educational technology as improving the efficiency of the process of learning. Percival and Ellington (1988) list three definitions of educational technology which have had common acceptance. The United States Commission on Instructional Technology defines educational technology as "...a systematic way of designing, implementing, and evaluating the total process of learning and teaching in terms of specific objectives, based on research in human learning and communication and employing a combination of human and non-human resources to bring about more effective instruction" (p. 20). The Council for Educational Technology for the United Kingdom has stated that "...educational technology is the development, application, and evaluation of systems, techniques, and aids to improve the process of human learning" (p. 20). A third definition is provided by the National Center for Programmed Learning of the United Kingdom. The National Center defines educational technology as the "application of scientific knowledge about learning, and the conditions of learning, to improve the effectiveness and efficiency of teaching and training. In the absence of scientifically established principles, educational technology implements techniques of empirical testing to improve learning situations" (p. 20). Each of these definitions strongly emphasizes an implicit need for evaluation.
It is important, too, that instructional technology be defined in a manner flexible enough to provide adaptability to new knowledge about the process of human learning. From these three definitions of instructional technology comes the basis for the borrowed definition used by this researcher. That definition states that instructional technology is that which supports, enhances, and supplants traditional pedagogical techniques.

Evaluation of Instructional Technology

Given the description of the engaged learner used by this researcher and the descriptive information regarding instruction which promotes effective learning within that context, several factors need to be included in an evaluation of instructional technology system effectiveness in order to gather information that would determine if the system promoted effective learning. These factors, as identified by Jones (1995), would include

1. Technology Access - Are diverse technologies available to students within their classrooms and within the school setting?

2. Community Access - Does the technology promote access to approaches which are multidisciplinary, multicultural, and which present multiple perspectives?

3. Operability - Can the technology be operated easily so that it promotes learning and experimentation?

4. Organization - Is the technology distributed in such a way that its use is maximized?

5. Functionality - Is the technology comprehensive? Does it prepare students to use a variety of technological tools?

6. Use - Does the instruction provided through the technology system promote collaboration and problem solving?

At the present time, no reliable data exist which would indicate the types of system evaluations currently in use in Michigan, the frequency of program evaluation, who does the evaluation, what types of data are used, and who supplies those data.

Significance of the Study

Currently, about 10,000 studies exist which have been conducted over the past decade dealing with different aspects of distance learning. Of these studies, fewer than 1,000 deal with K-12 populations and, when learning outcomes are included as dependent variables, very few studies remain. There are reasons for this lack of literature. While instructional technologies have received much attention in the media, a relatively small number of educators has been exposed, in an operational way, to the use of the various available technologies. Few educators have yet had a chance to experiment with these technologies in a way comprehensive enough to create the comfort level necessary for promotion of widespread adoption and use. At least as contributory is the financial constraint which is an ever-present part of the K-12 education endeavor. When financial parameters are coupled with the conservative nature of educational practitioners, according to Ackers and McCain (1995), it is easy to see how most studies have involved other populations.

Herman (1994) identifies four questions to which policy makers wish answers. These questions include: (a) What are the effects of technology on student learning? (b) What are the effects of technology on students' workforce readiness? (c) What are the effects of technology on teacher productivity? (d) Is an investment in technology cost effective? Because definitive answers have been very difficult to develop, some policy makers have erroneously concluded that technology is a failure in education reform.
Herman (1994) argues that the fault for these inconclusive results lies not with technology-based innovations but with both the methodologies and the tools that educators have used to assess the effect of technology. Herman also identifies several difficulties in evaluating technology's impact on school reform. The first difficulty lies in the fact that schools are not very good environments for research. It is extremely difficult to control the setting, and other confounding variables are present. Although comparison groups are frequently used, they are rarely comparable. Another difficulty is that implementation strategies differ from evaluation strategies. Since there is no standard treatment to assess, the application of standard outcome measures is unrealistic. Herman also notes that the usual short duration of studies complicates research, even though longitudinal studies are necessary in the typical educational environment. Policy makers have little patience and want demonstrable results in a short period of time. This complicates the attainment of good research results. Herman also notes that insensitive standard measures and changing primary goals are factors which contribute to the difficulty in finding adequate research results when one attempts to seek support for the concept that technology effectively supports school reform. The last factor, changing primary goals, is an especially important one since it brings attention to the fact that as practitioners become more comfortable with new technologies, they become more willing to experiment and be creative. When this occurs, confidence and learning increase, leading to the exploration of new applications and substantial changes in instructional methods and curriculum.

Very little data regarding evaluation of instructional technology systems are available in Michigan. This study will provide specific information regarding the use of instructional technology evaluation, the effectiveness of evaluation, and the needs of evaluators. The information gathered should assist those charged with the development and administration of instructional programs as they determine appropriate methods and directions for instructional technology systems evaluation. Hopefully, increased interest in the concept of instructional technology systems evaluation will be a significant result of the study.

Delimitations

The delimitations of the study were:

1. This study was restricted to currently operating instructional technology systems that were voice and video capable. Systems that relied upon partial technologies were not included in the study.

2. This study was conducted in Michigan, and the information reported may not be generalized outside the State of Michigan.

Limitations

The limitations of the study were:

1. Data for this study were collected by mailed questionnaire; therefore, only reported information is included.

2. The data were collected via mailed questionnaire; therefore, the researcher had to assume that the questionnaire was read and answered honestly.

3. This study was descriptive in nature and subject to the weaknesses inherent in descriptive research. For example, the questionnaire was designed and implemented to measure perceived uses of evaluation without questioning why the perceptions exist in the reported state.
Outline of Study

The introduction, an explanation of the researcher's purposes in this study, and the research questions generated for this study are presented in Chapter I. In Chapter II, the literature is reviewed under the following subheadings: Review of Learning Theory, Definition of Effective Instruction, Evaluation of Educational Programs, Evaluation of Effective Instruction, Distance Learning Techniques, Delivery of Distance Learning Technologies, Pace of Technological Change, Development of Instructional Technology Systems, Instructional Technology Evaluation Systems, and Commonly Used Instructional Technology Evaluation Systems. In Chapter III, the methodology and procedures used in connection with the study are explained. In Chapter IV, the collected data generated by the researcher are presented. In Chapter V, the summary, findings, conclusions, implications, and recommendations drawn from this study are discussed.

CHAPTER II

REVIEW OF RELATED LITERATURE

The literature related to this study is reported in the following areas: (a) review of learning theory as it relates to learning concepts currently being promulgated, (b) review of the development of instructional technology, (c) review of common types of evaluation within the field of education, (d) review of evaluation applicable to instructional technology, and (e) review of types of evaluation systems in current use within the area of instructional technology.

Review of Learning Theory

In order to provide a framework for identification of effective instruction, an overview of the literature regarding theories of instruction has been reviewed. This process reveals elements which provide common threads of understanding throughout the various theories. While the reviewed theoretical frameworks differed in many aspects of their construction, several common elements can be identified which have an impact on the way learning effectiveness and technology effectiveness are evaluated.

Many of the learning theories reviewed identified problem solving as a central aspect of their theoretical construct. This is true of Anderson's (1983) ACT* theory, which focuses on memory processes that support problem solving. Building upon the identification of three types of memory structures (declarative, procedural, and working memory), six principles of essential learning are enumerated. These include (a) identification of the goal structure of the problem space, (b) the provision of instruction in the context of problem solving, (c) the provision of immediate feedback on errors, (d) minimization of working memory load, (e) adjustment of the incremental load of instruction to account for the process of knowledge compilation, and (f) enablement of the student to approach the target skill by successive approximation. Again, according to Anderson (1993), all knowledge begins as declarative information. Procedural knowledge is learned by making inferences from already existing factual knowledge, so that new productions of knowledge are formed by conjunction or disjunction of existing productions.

Landa (1976), in his algo-heuristic theory, concerns himself with identifying mental processes, both conscious and unconscious, that underlie expert learning, thinking, and performance in any area. His methods represent a system of techniques for getting inside the mind of expert learners and performers to uncover the processes involved in learning. His intent is to break down the processes used by expert learners
His intent is to break down the processes used by expert learners 25 into elementary components. Performing a task or solving a problem involves the creation of a system of elementary learning which contains components that are necessary in context and sequence for appropriate problem solving. These algorithms are useful in solving some problems which are amenable to precise and unambiguous instruction. Since not all problems lend themselves to straight forward, unambiguous instruction, the concept of heuristics was developed to provide a working framework for problems which contain a degree of uncertainty. Landa (1983) stresses the importance of using prior knowledge to solve problems. He identifies one type of learning method as “snowballing.” By this, Landa means that a system of cognitive operations is cultivated by inculcating a first operation then teaching a second operation and linking it to the experiences associated with learning the first operation. Problem solving abilities are embedded in learners through the use of this process. According to Landa, students should be taught both knowledge and the algorithms and heuristics of experts. In working with adults, Knowles (1984) developed the theory of andragOgy. The theory makes the following assumptions about the design of learning which apply to all learners: (a) Students need to know why they need to learn something, (b) students need' to learn experimentally, (0) students need to approach learning as problem solving, and (d) students learn best when the topic of instruction is of 26 immediate value. Andragogy means that instruction needs to focus more on the process of learning and less on the content being taught. Strategies such as role playing, simulations, case studies, and self-evaluation are useful with the teacher acting as a facilitator or a resource. The following principles are stressed in andragogy theory: (a) students need to be involved in the planning and evaluation of their instruction; (b) experiences, including mistakes, provide the basis for learning activities; (c) students are most interested and motivated when learning has immediate relevance to their lives; and (d) learning should be problems centered rather than content-oriented. Another applicable theory has been under development by the Cognition & Technology Group at Vanderbilt (CTGV). This group has been active in the development of interactive videodiscs designed to allow students to explore and actively construct new knowledge from existing knowledge. As designed by CTGV, the videodisc serves as the “anchor" or the macro- context for all subsequent learning and instruction. According to Branford (1993) ..."the design of these anchors was quite different from the design of videos that were typically used in education...our goal was to create interesting, realistic contexts that encouraged the active construction 0f knowledge by learners. Our anchors were stories rather than lectures and were designed to be explored by students and teachers" (p. 27). In anchored instruction, 27 learning and teaching activities are designed around an “anchor" which should be some sort of case-study or problem situation. Curriculmm materials are designed to allow exploration by the learner. Cronback and Snow (1977), using theories of multiple intelligences similar to those of Gardner, Guilford, and Sternberg suggest that a multidimensional view of ability is most appropriate. 
As a theoretical framework, aptitude- treatment interaction (ATI) suggests that optimal learning results when the instruction is exactly matched to the aptitude of the learner. This theory has implications for the typical lecture activity which tends to be of the “one size fits all" variety. The following principles were noted in ATI theory: (a) aptitudes and instructional treatments interact in complex patterns and are influenced by task and situation variables; (b) highly structured instructional environments tend to be most successful with students of lower ability and conversely, low structure environments may result in better learning for high ability students; and (c) anxious or conforming students tend to learn better in highly structured instructional environments while non-anxious or independent students tend to prefer low structure. Spiro» Coulson, Feltovitch, and..Anderson (1988), in describing ‘their' cognitive flexibility ‘theory, note four principles ‘which guide the theory. 'These include: (a) learning activities must provide multiple representations of 28 content; (b) instructional materials should avoid oversimplifying the content domain and support context- dependent knowledge; (c) instruction should be case-based and emphasize knowledge construction, not transmission of information; and (d) knowledge sources should be highly interconnected rather than compartmentalized. Spiro and Jehng (1990) described cognitive flexibility as ”the ability to spontaneously restructure one's knowledge, in many ways, in adaptive response to radically' changing situational demands...this is a function of the way knowledge is represented (e.g., processes of schema assembly rather than intact schema retrieval)" (p. 84). The theory is concerned with the transfer of knowledge and skills beyond the point of initial learning. Emphasis is placed on the presentation of information from.multiple perspectives with the provision of the opportunity for learners to develop their own representations of information in order to learn properly. A theory that stresses the idea that learners can exert control by selecting their own instructional strategies in terms of content and presentation is the component display theory. This theory was developed by Merrill (1983). Briefly, its principles include (a) instruction will be more effective if three primary performance forms are present, i.e. remembering, using, and effecting generalization; (b) the primary fonms can be presented by either an explanatory or an inquisitory learning strategy; (c) the sequence of primary 29 forms is not critical, only the provision of all the forms; and (d) students should be given control over the number of instances or practice items they receive. In yet another theory, Gagne (1985), in describing his conditions of learning theory, identifies five major categories of learning: verbal information, intellectual skills, cognitive strategies, motor skills, and attitudes. He stresses that different internal and external conditions are necessary for each type of learning. He posits, for example, that in order for cognitive strategies to be learned, the learner must be provided with an opportunity to practice developing new solutions to problems. Pask, (1975) proposes a scheme called conversion theory which incorporates the fundamental idea that learning occurs through conversations about a subject matter which serves to make knowledge explicit. 
According to conversion theory, the critical method of learning is “teachback” in which one person teaches another what they have learned. The following principles are emphasized in this theory: (a) students must learn the relationships among the concepts in order to learn subject matter, (b) explicit explanation or manipulation of subject matter facilitates understanding (teachback), and (c) individuals differ in their preferred manner of learning relationships. Piaget (1969), in embellishing the understanding of his genetic epistemology theory, first published in 1929, 30 indicated four hypotheses. These are (a) children will provide different explanations of reality at different stages of cognitive develOpment, (b) cognitive development is facilitated by providing activities or situations that engage learners and require adaptation (assimilation and accommodation), (c) learning materials and activities should involve the appropriate level of motor or mental operations for a child of a given age, and (d) teaching methods should be used which~ actively involve students ‘and present challenges. In his theories of lateral thinking, De Bono (1967) describes lateral thinking as the generation of novel solutions to problems. He states that many problems require a different perspective to solve successfully. There are four critical factors associated with lateral thinking. These are (a) the recognition of dominant ideas that polarize perception of a problem, (b) searching for different ways of looking at things, (c) relaxation of rigid control of thinking, and ((1) use of chance to encourage other ideas. Lateral thinking involves viewing low-probability ideas which are unlikely to occur in the normal course of events. Since this theory stresses finding alternative perspectives on problems by breaking up the elements and recombining them to acquire different solutions, De Bono's work has been highly relevant to the concept of creativity. 31 In their work on learning theory, Craik and Lockhart (1972) cite two principles that they believe instruction should be based upon. First, the greater the processing of information during learning, the more it will be retained and remembered. Second, processing will be automatic unless attention is focused on a particular level. Carroll (1990) suggests that all learning tasks should be meaningful and self—contained, learners should be given realistic projects as quickly as possible, instruction should permit self-directed reasoning and improvising by increasing the number of active learning activities, training materials and activities should provide for error recognition and recovery, and there should be a close linkage between the training and actual work. Carroll called his theory minimalist theory. Because minimalist theory was developed from the perspective of the adult learner, it stresses the necessity of building upon the learner's experiences. The central idea of this theory is to minimize the extent to which instructional materials obstruct learning and focus on activities that support learner-directed activity and accomplishment. Carroll presumes that many other theories of learning fail to take advantage of the aspect of using learner errors as learning opportunities. Many of these theories represent a number of aspects associated with constructivist theory. Constructivism is founded on several premises. One is that children invent 32 their own ideas. 
This premise presupposes that children fail to learn effectively in the traditional pedagogy which has been a mainstay of instructional delivery for the past century. Instead of listening to and absorbing ideas spoken to them by teachers, followed by practicing abstractions of those ideas through rote tasks, children are invited to add new information to existing concepts and modify their understanding of the concepts through the new data. As this process occurs, children's ideas gain complexity and they gain insight into what they think about the world in which they live.. A second premise is that play and experimentation are critical to ieffective learning. This idea, gleaned from research in child development (Daiute, 1989 and Garvey, 1977) recognizes the importance of imagined situations and events which are part of play. As children play, they explore with their intellects and work out new understandings of situations which occur, changing their ideas and concepts as they gain knowledge from observation and experimentation. While children experiment, they are able to manipulate and test their ideas while receiving constant, concrete feedback about the accuracy of their understandings. A third premise is that of collaboration. When children collaborate or cooperate, they are able to share the process of formulating and testing ideas in a much more efficient manner than doing so individually. Cooperation allows for the 33 consideration and development of ideas other than their own. Each child becomes a resource to every other child and, by sharing progress and goals, a sense of teamwork develops which provides impetus to collective problem-solving skills. The role of the teacher is not traditional in this environment. Rather than the giver of knowledge, recognized under old pedagogical models, the teacher becomes a facilitator who guides children through a particular task. The teacher's role is to provide the tools to promote problem solving and to guide the inquiry that promotes concept development. Definition of Effective Instruction More recent studies have begun to focus on ways in which students learn as defining the parameters for assessing the effectiveness of instruction. Prior to an examination of how evaluation of instructional technology is, or is not, effective, one must examine several basic questions. These include (a) what is effective instruction, (b) how is effective instruction evaluated, (c) what is meant by instructional technology, and (d) how is instructional technology effectively evaluated in light of our knowledge of what makes instruction effective. Recognizing that society and societal demands have changed dramatically over the past decade and one-half, the case can be made that the definition of effective instruction 34 has also changed. Where not long ago instruction was thought to be adequately effective ‘when students exhibited the ability to follow directions, be on time, have adequate computational skills, and be able to communicate at least verbally, students of today need substantial skills in both verbal and graphic communication. They need good problem solving skills and need to be able to work collaboratively. Instruction, in order to be deemed effective, should take into account changing learner/work/societal needs. 
Evaluation of Educational Programs According to WOrthen and Sanders (1987), “...the practice of evaluating individual performance was evident as early as 2000 B.C., when Chinese officials conducted civil service examinations to measure proficiency of public officials. Greek teachers...used verbally mediated evaluation as part of the learning process" (p. 12). Travers (1983) establishes that in America, however, little existed in the way of formal evaluation before the middle of the nineteenth century. With the arrival of Horace Mann and Henry Barnard in the 18308 and a change in the influence of political and religious beliefs on education, early efforts of evaluation began to formulate. Testing and evaluation reached wide- spread acceptance» during' the first twoi decades of this century due to the work of Edward Thorndike and the advent of the army Alpha and Beta tests. 35 As the current century has progressed, the complexity of evaluation has increased. Wbrthen and Sanders (1987) state that Evaluation is complex. It is not a simple matter of stating behavioral objectives, building a test, or analyzing data, though it may include these activities. A thorough evaluation contains elements of a dozen or more distinct activities, the precise combination influenced by time, money, expertise, the good will of school practitioners, and many other factors. But equally important (and more readily influenced) is the image the evaluator holds of evaluation work: its responsibilities, duties, uniqueness, and similarities ’to related endeavors. (p. 23) Scriven (1973) notes that while evaluation plays many secondary roles in education, its single goal is to determine the worth or merit of whatever is being evaluated. Anderson and Ball (1978) describe six major purposes of evaluation. These include 1. To contribute to decisions about program installation. 2. To contribute to decisions about program continuation, expansion, or certification. 3. To contribute to decisions about program modifications . 36 4. To obtain evidence to rally support for a program. 5. To obtain evidence to rally opposition to a program. 6. To contribute to the understanding of basic psychological, social, and other processes. A r'cultura t ‘cal a Social thr lo ' l roach s Percival and Ellington (1988) identify two distinctly different approaches to educational evaluation. These are the agricultural/botanical approach and the social/anthropological approach. The agricultural/botanical paradigm reflects a scientific approach with its roots in scientific experiments designed to test specific variables relative to plant growth. These types of experiments are characterized by tight control and easily measured outcomes. The result of this paradigm's use in the field of education has provided a traditional, systematic, objectives oriented evaluation procedure. The process is designed to measure the extent to which an educational system has achieved specific goals relating to students' existing skills, knowledge, or both. Output is measured against input and statistically expressed results are usually reported. Extraneous factors such as the environment in.'which learning' occurs, what content and structure look like, who teaches, and how, are considered very little, if at all. 
According to Percival and Ellington (1988), "this general approach has been used when measuring the relative efficiency of different methods in teaching toward a common end, and also to measure the effectiveness of self-instructional programs in achieving stated objectives" (p. 134).

The social/anthropological approach, also called illuminative evaluation, is concerned with the continuous process of education. The techniques used are much more subjective and require, in many instances, personal value judgements regarding the results. Those who favor this type of approach argue that the variables in educational research are neither identified narrowly nor controlled easily. Inputs and outputs are complex, not easily specified, and not readily measured. The evaluator is usually left to work with attitudes and perceptions. The evaluation, therefore, cannot be rigid in its structure. Unlike the agricultural/botanical method of evaluation, which focuses on whether specified goals have been reached, illuminative evaluation is designed to find out what has been achieved and why. Percival and Ellington (1988) indicate several information sources which are useful in providing feedback. These include (a) results from student assessments, student questionnaires, and interviews; (b) observations of instructional systems in programs; and (c) feedback from staff directly involved with the instructional system and feedback from people having an indirect link with the instructional system. These information sources are useful and necessary whether evaluation is formative or summative.

Within these two broadly defined categories of evaluation exist more narrowly delineated approaches. According to Worthen and Sanders (1987), six general approaches to evaluation include: (a) objectives-oriented evaluation, (b) management-oriented evaluation, (c) consumer-oriented evaluation, (d) expertise-oriented evaluation, (e) adversary-oriented evaluation, and (f) naturalistic and participant-oriented evaluation.

Objectives-Oriented Evaluation Approach

The objectives-oriented approach to evaluation has been in place since the 1930s. It was developed by Tyler (1942, 1950) and is now known as the Tylerian evaluation approach. With Tyler as the expositor of the objectives-oriented approach to evaluation, the following steps were used: (a) establish broad goals or objectives, (b) classify the goals or objectives, (c) define objectives in behavioral terms, (d) find situations in which achievement of objectives can be shown, (e) develop or select measurement techniques, (f) collect performance data, and (g) compare performance data with behaviorally stated objectives. Any discrepancies apparent between performance and objectives would invoke modification activities to eliminate the discrepancy. Webb, Campbell, Schwartz, and Sechrest (1966) have pointed out that the attention placed on this approach to evaluation has caused an improvement in tests and testing and made it possible to include unobtrusive, non-paper-and-pencil types of measurements in evaluation. Others, like Wake (1975), have criticized objectives-oriented evaluation approaches for treating education as a technology. Madaus (1983) has noted that the tendency on the part of teachers to "teach to the test" is encouraged by this type of evaluation since, frequently, the teacher's performance may well depend, at least in part, on the results of students' efforts on the test.
Management-Oriented Evaluation Approach

This is an evaluation approach meant to serve decision makers. It was developed during the mid-1960s by Stufflebeam (1968) and Alkin (1969). Stufflebeam (1973a, p. 129) listed four different kinds of educational decisions. From the four types of decisions, he developed his CIPP model. These are

1. Context evaluation, to serve planning decisions. Determining what needs are to be addressed in an educational program helps in defining objectives for the program.
2. Input evaluation, to serve structuring decisions. Determining what resources are available, what alternative strategies for the program should be considered, and what plan seems to have the best potential for meeting needs facilitates design of program procedures.
3. Process evaluation, to serve implementing decisions. How well is the plan being implemented? What barriers threaten its success? What revisions are needed? Once these questions are answered, procedures can be monitored, controlled, and refined.
4. Product evaluation, to serve recycling decisions. What results were obtained? How well were needs reduced? What should be done with the program after it has run its course? These questions are important in judging program attainments.

House (1980) points out both the strengths and weaknesses of this approach to evaluation:

...the decision making approach provides a valuable insight into evaluation. It stresses the importance of the utility of information. Evaluation information is meant to be used. Connecting evaluation to decision-making underlines the purpose of evaluation. It is also practically useful to shape an evaluation in reference to actual decision-making considerations. Even if one cannot define precisely the decision alternatives, one can eliminate a number of lines of inquiry as being irrelevant. (p. 232)

Why should the decision-maker, who is usually identified as the program administrator, be given so much preference? Does this not put the evaluator at the service of top management and make the evaluator the "hired gun" of the program establishment? Does this not make the evaluation potentially unfair and even undemocratic? The answer demonstrates potential weaknesses of the decision-making approach. (p. 231)

Consumer-Oriented Evaluation Approach

Scriven (1967) contributed to this approach as he made his distinction between formative and summative evaluation. He stated that the summative role of evaluation "...enables administrators to decide whether the entire finished curriculum, refined by use of the evaluation process in its...formative role, represents a sufficiently significant advance on the available alternatives to justify the expense of adoption by a school system" (pp. 41-42). Scriven (1974b) published criteria for evaluating any product. These criteria included

1. Evidence of achievement of important educational objectives.
2. Evidence of achievement of important non-educational objectives (for example, social objectives).
3. Follow-up results.
4. Secondary and unintended effects, such as effects on the teacher, the teacher's colleagues, other students, administrators, parents, the school, the taxpayer, and other incidental positive or negative effects.
5. Range of utility (that is, for whom will it be useful).
6. Moral considerations (unjust uses of punishment or controversial content).
7. Costs.
Other examples of this type of evaluation approach include the Educational Products Information Exchange (EPIE) and the Morrisett and Stevens (1967) Curriculum Materials Analysis System (CMAS). Each of these evaluations judges content, transportability, and effectiveness.

Worthen and Sanders (1987) define the strengths of consumer-oriented evaluation as occurring in two ways: "...(1) they have made available evaluations of educational products as a service to educators who may not have the time or information to do the job thoroughly; and (2) they have advanced the knowledge of educators about the criteria most appropriate to use in selecting educational products" (p. 96). The largest drawback to consumer-oriented evaluation, according to Worthen and Sanders (1987), is that there is an increased cost to the consumer, since the time and money invested in product testing will usually be passed on to the consumer. There is also the threat to local initiative, since educators and curriculum may become overdependent on commercial products and services.

Expertise-Oriented Evaluation Approaches

Expertise-oriented evaluation approaches typically are synonymous with accreditation. This type of evaluation is in dispute within the field of assessment. Many, including Scriven (1984), view this as not being truly evaluative. Others, such as Orlans (1971), see accreditation as highly evaluative. Regardless of the view, most would agree that accreditation has played a major role in changes in education.

Accreditation types of evaluation can be either formal or informal. An example of a formal accreditation would be the North Central Association (NCA) process for accrediting school buildings within K-12 education systems in the United States. Another would be the National Council for Accreditation of Teacher Education (NCATE). These systems include minimum standards which are seen as important for all schools as well as internal self-study components which customize accreditation. The latter facet of accreditation led Kirkwood (1982) to criticize accreditation for lacking "...similarity of aims, uniformity of process, or comparability among institutions" (p. 9). Examples of informal accreditation take the form of ad hoc review panels, blue-ribbon panels, or funding review panels.

Criticisms of expertise-oriented evaluation include the views of Gustafson (1975), who notes the chances of permitting evaluators to make judgements which reflect too much personal bias, and Blanpied and Borg (1979), who suggest that not all those perceived as experts are, in fact, such. Scriven (1984) has also pointed to several problems with accreditation evaluations. These include "(1) no suggested weightings of the variability of standards, (2) fixation on goals that may exclude searching for side effects, (3) managerial bias that influences the composition of review teams, and (4) processes that preclude input from the institution's most severe critics" (pp. 87-88).
Kirkwood (1982) lists accreditation's achievements

(1) in fostering excellence in education through development of criteria and guidelines for assessing institutional effectiveness; (2) in encouraging institutional improvement through continual self-study and evaluation; (3) in assuring the academic community, the general public, the professions, and other agencies that an institution or program has clearly defined and appropriate educational objectives, has established conditions to facilitate their achievement, appears in fact to be achieving them substantially, and is so organized, staffed, and supported that it can be expected to continue doing so; (4) in providing counsel and assistance to established and developing institutions; and (5) in protecting institutions from encroachments that might jeopardize their educational effectiveness or academic freedom. (p. 12)

Adversary-Oriented Evaluation Approaches

Worthen and Sanders (1987) indicate that "...most approaches to educational evaluation rest in part on the assumption that the evaluator should be impartial toward that which is evaluated. Evaluators who hold this view exert considerable effort in trying to prevent their personal biases from influencing their findings and judgments" (p. 113). Even when stringent efforts are made to control and eliminate proclivities from research efforts, bias, however unintentional, creeps in. Efforts to eliminate preconceptions are applied in adversary-oriented evaluation approaches. Levine (1982) describes the adversarial model in this way:

In essence, the adversarial model operates with the assumption that truth emerges from a hard, but fair fight, in which opposing sides, after agreeing upon the issues in contention, present evidence in support of each side. The fight is refereed by a neutral figure, and all the relevant evidence is weighed by a neutral person or body to arrive at a fair result. (p. 270)

Guba (1965), while not the first to propose a method of adversary-oriented evaluation patterned after the legal system, was a strong supporter of such a system, and Wolf (1975) proposed four stages of his judicial evaluation model. Wolf's stages are

1. Issue generation: identification and development of possible issues to be addressed in the hearing.
2. Issue selection: elimination of issues not at dispute and selection and further development of those issues to be argued in the hearing.
3. Preparation of arguments: collection of evidence and synthesis of prior evaluation data to develop arguments for the two opposing cases to be presented.
4. The hearing: including prehearing discovery sessions to review cases and agree on hearing procedures, the actual presentation of cases, evaluation of evidence and arguments, and a panel decision.

Worthen and Sanders (1987) indicate several strengths of the adversary-oriented approach to evaluation. One of these advantages is that by building opposing viewpoints into an evaluation system, both positive and negative aspects of an educational program receive substantial examination. Because this evaluation system examines opposing viewpoints, diverse views are invited and valued rather than excluded. Further, the tendency of an evaluation to invite criticism from those whose viewpoints have not been heard is negated in this process.

Weaknesses of this approach have also been noted. Popham and Carlson (1977, p.
104) have pointed to "...a disparity in proponent prowess" as a deficit in adversary evaluation, much as it can be a deficit in courtroom procedures. Brathwaite and Thompson (1981, p. 16) note that a serious problem is the complexity of the process in human terms: "...a large number of participants, many of whom are in important roles" creates a "heroic model" of evaluation. House (1980) contends that the adversary model may resolve conflicts but that it has limited potential for arriving at the truth.

Naturalistic and Participant-Oriented Evaluation Approaches

Stake (1967) was the first evaluation theorist to promote this evaluation theory in the field of education. His were the initial rules that have guided the development of this evaluation approach. Stake (1975b, p. 19) listed four characteristics of naturalistic evaluation. These are "(a) they depend on inductive reasoning, (b) they use a multiplicity of data, (c) they do not follow a standard plan, and (d) they record multiple rather than single realities." Parlett and Hamilton (1976, p. 12) listed three basic stages for the process of evaluation. These are

1. Observation, to explore and become familiar with the day-to-day reality of the setting being studied.
2. Further inquiry, to focus the study by inquiring further on selected issues.
3. Explanation, to seek to explain observed patterns and cause-effect relationships.

Critics of this approach to evaluation have focused on its built-in subjectivity as the object of their criticism. Proponents of the procedure have also focused on subjectivity, drawing upon the inherent human connection allowed within the framework of this type of evaluation. While the participant-oriented evaluation approach has promise, objectives-oriented evaluation approaches have a fifty-year head start on acceptance, and it may prove difficult for relatively new approaches to gain widespread acceptance.

Evaluation of Effective Instruction

For the purposes of this study, the researcher will rely upon the concepts of constructivism and the engaged learner. Considered, in addition, will be the environment that promotes engaged learning as proposed in studies by Barbara Means, Beau Fly Jones, and others as a basis for defining those components of effective instruction which must be present in an evaluation system. Means (1991) identified eight components of effective instruction. These include (a) a vision of learning, (b) the tasks that define the nature and level of achievement, (c) the assessment of principles and practice, (d) the instructional mode, (e) the characteristic of the learning context, (f) the classroom organization, (g) the learner roles, and (h) the teacher roles.

According to the previously mentioned authors, an engaged learner is one who is responsible for his or her own learning. Engaged learners are taught to be self-regulating. They are able to define goals for themselves and evaluate their achievement. For the engaged learner, learning becomes an "energizing" experience that leads to a desire to solve problems and subsequent adeptness at problem-solving tasks. Engaged learners are actively involved in planning their own learning. They have practice in thinking both linearly and intuitively. They are taught to apply knowledge across applications in order to solve problems. Engaged learners have the ability to work collaboratively in seeking answers and solving problems.
In order to facilitate the training of this type of learner, instruction needs to be designed to promote engaged learning. Learning tasks need to be authentic, challenging, and multidisciplinary, since the requirements in homes and the workplace will put added value on corresponding knowledge and skills. The curriculum in a classroom which produces authentic learning is project based and relies heavily upon problem-solving activities. The learning community consists of both the core community of the classroom and the larger, more loosely defined, community outside the classroom. Students are afforded the opportunity to experience the value of diversity and the strengths and advantages of multiple perspectives. Both interactive and generative instruction are present. By definition, interactive instruction is that which actively engages the learner, while generative instruction encourages the learner to construct new knowledge based on previous learning and manipulate that knowledge in ways meaningful in new situations.

The teacher in a classroom where engaged learning is valued assumes a role which differs substantially from the tradition of being an information giver. The teacher becomes a facilitator, providing an environment and experiences that are requisites of successful collaboration. The teacher is also a guide who models behaviors that promote cooperative problem solving. This may include coaching or showing learners ways of mediation designed to bring about solutions to problems. Many times, the teacher is a learner or a co-investigator with the students.

Jones, Valdez, and others (1995) have begun to question whether effective and meaningful evaluation can take place when the aspect of effective learning is not taken into consideration. They question further the typical practice of determining the effectiveness of a technology program by comparing it to a "regular" program on the basis of students' performance on standardized tests. As presented in a North Central Regional Educational Laboratory (NCREL) survey drafted by Jones, Valdez, Nowakowski, and Rasmussen (1995, p. 6), three overriding thoughts emerged regarding evaluation of technology effectiveness. These included

1. Effectiveness is not a function of the technology, but rather of the learning environment and the capability to do things one could not do otherwise.
2. Technology in support of outmoded educational systems is counterproductive.
3. The reliance on standardized tests is inappropriate. Technology works in a school not because test scores increase, but because technology empowers new solutions.

Jones et al. (1995) posit that effective learning is what should be measured by a sound evaluation system. These authors have used Means's seven variables of effective learning as a basis for formulating their ideas. These variables, when present in any classroom, indicate that effective teaching and learning are taking place. The variables include the following: (a) children are engaged in authentic and multidisciplinary tasks, (b) assessments are based on students' performance of real tasks, (c) students participate in interactive modes of instruction, (d) students work collaboratively, (e) students are grouped heterogeneously, (f) the teacher is a facilitator in learning, and (g) students learn through exploration.
An engaged learner, then, is one who is responsible for his or her own learning, who is a strategic learner, who does not learn primarily because of the possibility of extrinsic reward, and who approaches learning tasks collaboratively.

Distance Learning Technologies

Anglin (1995) states that beginning in the late 1960s and early 1970s, educators, technicians, and other professionals began to consider the efficacy of delivering voice and video using technologies that would permit two-way interaction. Early efforts were as crude in their application of available technology as the technology was unsophisticated. Instructional technologies were developed that relied upon distance learning systems that were usually hybrid. Most often, different technologies were used for delivering voice and video. A typical type of system, one that saw a degree of academic reception in the late 1970s and early 1980s, used microwave signals to transmit one-way video and telephone circuits for two-way audio. In spite of technical difficulties involving line-of-sight transmission and topography, weather interference, and inexperience of instructional providers and users, this type of technology is still in use today.

Early distance learning technologies, and their somewhat limited impact upon students, were the genesis of today's classroom technologies. The types of instructional technologies that may be accessed by teachers, administrators, and students include: compact discs, the Internet and World Wide Web, CD-ROMs, microcomputer laboratories, local area networks (LANs), wide area networks (WANs), hypertext, virtual reality, interactive software, e-mail, voice mail, educational television, cable television, interactive voice/video/data systems, satellite downlinks and uplinks, laptop and desktop computers configured as stand-alones, and other emerging technologies.

Delivery of Distance Learning Technologies

According to Hannafin (1992), the distance learning technologies which are receiving wider use in education today are those which transmit voice and video or voice, video, and data. The delivery of these services differs widely from school district to school district and state to state, representing a multitude of variants of the same basic technology.

The delivery of instruction using video can be by satellite uplink and downlink. It can be transmitted via microwave. It can be sent by Instructional Television Fixed Service (ITFS) or by wire, cable, or fiber. Further, the signal may be either analog or digital. Seldom is video transmitted without voice. In some applications, the voice transmission may be one-way, or it might be two-way through the use of a standard telephone return. In other instances, voice and video are both truly two-way. Some two-way voice and video systems are full range, allowing the person transmitting to see and hear simultaneously all those who are receiving the transmission. Some systems transmit by a method that allows the person transmitting to see only the individual or group speaking. This type of system, called voice activation, switches transmission origination with the location of voice. Still other systems operate by surveying sites, scanning from site to site at predetermined time intervals. Few systems are designed to transmit data with voice and video.
While many data transmission systems are in place which operate as stand-alones, it is becoming evident to system designers that creating bandwidth for the inclusion of data transmission in voice/video systems allows for many advantages. Not the least of these is the more effective use of software through greater efficiency in the distribution of site licenses. Fewer software licenses need to be purchased if an instructional technology system allows time sharing of licenses by different users or users at different geographic locations.

Even when a signal is transmitted by copper wire, the type of medium used represents a diversity of capacity. Those systems which use T1 transmission lines with compressed copper technology may or may not have enough bandwidth available for full-range, real-time video. Operators might also need to invest in expensive equipment to allow them to surmount the point-to-point transmission limitations typical of compressed copper systems. This difficulty is usually overcome by video bridging, which can add large additional expenses to the network. Some systems use coaxial cable for transmission. Usually this is done when the system is a cooperative venture with a local cable television operator. This technology can be better than compressed copper but may not be fully functional if the cable operator has too little excess capacity on the cable system. Other systems use fiber optic cable. Fiber can provide almost unlimited capacity, but that capacity may be unattainable without large capital expenses for owning or leasing the fiber. An added complication is the unwillingness of owners to lease "dark fiber" to users for transmission purposes. Dark fiber is unlighted, unused fiber that represents excess capacity.

Pace of Technological Change

It is sometimes difficult to remember that the development of any substantial change in technology comes slowly. Usually that slow rate of change reaches a point where increasing acceleration becomes the norm. So it is with instructional technologies. The pace of progression is demonstrated when it is considered that, in the chronology of the microcomputer, its genesis can be traced back seventy years to 1926 and Dr. Julius Edgar Lilienfeld's application for a patent on a "Method and Apparatus for Controlling Electric Currents," otherwise known as a transistor/amplifier. Anglin (1995) points out that it took twenty-two years before John Bardeen, Walter Brattain, and William Shockley of Bell Labs perfected the first workable transistor. Four years after the Bell Labs transistor, in 1952, IBM unveiled its first computer, the Defense Calculator, which used a few transistors but relied heavily upon vacuum tubes. It wasn't until 1957 that the first all-transistor calculator was introduced, again by IBM. At this point development began to accelerate with the introduction of the integrated circuit by Texas Instruments and Fairchild Semiconductor in 1959. In 1960 Digital Equipment Corporation sold the first minicomputer for $120,000. In June of 1973 the term "microcomputer" appeared in print in an advertisement for a French computer called the Micral. This developmental timetable spanned 47 years. Over the next 22 years microcomputer development evolved into our present-day machines.
One reason, of course, that the time frame for development of the microcomputer was so elongated was that no one had a clear idea of the potential for these new machines, nor were they cognizant of a method of evaluating what they had in a way which would allow for an evolutionary vision. Software development in the areas of today's most common applications, other than data processing, was not a serious consideration until the late 1960s. The rapid development of improved and more powerful software, however, has caused greater attention to be focused on evaluation of instructional technology. It is the issue of evaluation that raises intensifying concern today.

Development of Instructional Technology Systems

According to Ely (1993), instructional (or educational) technology is primarily a 20th century movement. Most of the major developments of its infancy occurred during and immediately after World War II. Initially the emphasis was on audio-visual communications media, but the field became more focused on the systematic development of teaching and learning procedures. There is not an overwhelming amount of literature that specifically describes the development of instructional systems. However, a search through documents produced over the past 20 years reveals the changes in cognition that have characterized the thinking and creativity involved in planning for instructional technology.

According to Elton (1977), instructional technology has undergone a progression of emphases since World War II. At first, techniques and technologies that supported mass instruction were promoted. This was followed by an emphasis on individualized learning, followed in turn, finally, by the move to cooperative group learning. In each case, three developmental phases were apparent. Initially, the research phase was present. In this stage, basic concepts were identified and developed along with supporting technologies. In the second phase, the development phase, concepts were converted to practice and supporting materials were refined. In the final stage, identified as the use stage, techniques, materials, and technologies began to receive widespread use.

In a recent article, Boysen (1996) noted that "Stone upon stone, chalk upon slate, our teachers and students trudge into the information age. Hypoallergenic chalk and simulated slate pass for technology innovations in schools stalled near the on-ramps to the information highway" (p. 7). While this may be overstating the case, it is evident that many writers have decried a lack of commitment and vision necessary for educational endeavors to truly enter the realm of technological promise.

Technology adoption and use has developed slowly over the past several decades. Romiszowski (1993) has identified four phases of technology development specifically related to distance education that trace a developmental history which began just prior to the outbreak of World War II. The first generation of distance education, one which still is widely utilized, was the print-based model. It was typically asynchronous and one-on-one in nature. The second generation of distance education, which developed through the 1960s and 1970s, was characterized by the addition of radio and television broadcasts to the use of print media. Typically, instruction was broadcast through the electronic media with support and follow-up using the transfer of printed materials. Teleconferencing systems characterize the third generation of distance education systems.
According to Barker (1992), these systems began with audio conferencing but have progressed to the use of supporting visual and text materials. The development and use of video conferencing has begun to overcome the limitations of expense that have been present in the past. It is now becoming economically accessible to educational systems.

Kurshan (1994) takes another approach to defining the chronological context of instructional technology. She identifies four models of conversation that have evolved from a traditional teacher-student representation to one in which technology enhancements have created significant alterations. Kurshan's four conversational models are (a) Direct Instruction, (b) Real-Time Conversation, (c) Time-Delayed Conversation, and (d) Learn by Doing. The traditional approach to direct instruction involved supporting technology, such as overhead projectors and textbooks, designed to support a teacher's conversations in the classroom. Today's technology support for direct instruction reduces the limitations of the traditional approach by engendering more active two-way communication through the use of live or taped video, computer-based instruction, and e-mail. Students receive an enhanced variety of information in much greater volume than was previously possible.

Real-time conversation was traditionally supported by the use of a chalkboard or placing participants around a seminar table. The supporting technology which has evolved has allowed the substitution of telephone conferencing, audio/video conferencing, and two-way video conferencing. Time-delayed conversation was represented in the past by the exchange of paper in the classroom. Today, this conversation is supported by fax machines, file transfer protocols, computer conferencing, and shared text via networks. Word processors, statistical and simulation software, online libraries and databases, and listservs extend students' abilities to learn by doing. These technologies continue to supplant the traditional uses of typewriters, libraries, slide rules, laboratories, and internships.

According to Hawkins and Collins (1992), "...over the past sixteen years, the central issues for research, development, and implementation of technology in education have evolved through different phases. Initial concerns in the early part of the 1980s focused on getting computer-based technologies in place in sufficient numbers in schools, and creating circumstances focused on learning about the technology objects themselves" (p. 63). A second, overlapping phase saw a focus on the creation and implementation of computer-based programs that emphasized learning using technology. Again, according to Hawkins and Collins, "...there is considerable evidence that well-designed technologies can qualitatively change the nature and depth of students' learning/achievement" (p. 63).

Awareness is growing that technology, by itself, has little ability to transform education and educational settings. When changes are made in the way things are done within the educational setting, technology can be a powerful tool in promoting significant change. Olson (1985) indicates that the function of the mind is changed by the computer through altering one's knowledge and by altering the operations that one applies to the knowledge base.

Instructional Technology Evaluation Systems

Moors (1981) notes three elements of program evaluation.
They are (a) analysis or documentation of program components or processes, (b) measurement of variables associated with a program, and (c) recommendations based upon evaluation of the information obtained. While program evaluation may be well defined, the systems in place to perform the activity of evaluation are not.

Harris and Bell (1981) describe methods of evaluation as classified according to four basic categories. The first category of evaluation technique is based on a classical approach rooted in scientific methodology. These methods have as a primary goal the objectivity of results. The evaluator is typically external to the process and disinterested in what is being evaluated. The range of inquiry is narrow and strictly defined, usually fully specified prior to the start of evaluation. As a means of nullifying any possibility of influence by the evaluator's biases, large numbers of subjects are studied. Attention is focused on product rather than process.

The second category of evaluation also utilizes a disinterested evaluator but recognizes that the values of the evaluator may be important to the evaluation process. Boundaries of inquiry are not so tightly defined, and the results of the evaluation are often presented from a multitude of perspectives representative of the variety of inputs processed during the evaluation.

The third category of evaluation differs from categories one and two in three basic ways: (a) the evaluator is not separate from those being evaluated, (b) value neutrality is unlikely to occur, and (c) those taking part in the evaluation process have an interest in the results of the evaluation. The goal of evaluation techniques in this category is to focus on the information relevant to the process of instruction. The results of the evaluation may be seen as judgmental.

Category four techniques focus on how learners learn from, and interact with, their environment. The process of knowledge development and generalization are valued. Self-appraisal and problem solving are aspects that receive a great deal of attention. These are usually small-scale evaluations involving a limited number of participants. Category four inquiry and, to some degree, category three evaluation tend to fall within the parameters of a cognitive approach to evaluation. This type of evaluation stresses three questions. These are (a) what are the stages of the learning process, (b) how can they be evaluated, and (c) what types of changes is evaluation likely to lead to. Typically, investigations are carried out during learning rather than as an assessment of what learning has occurred. This type of evaluation recognizes that meaningful learning involves building up the cognitive structure, incorporating new concepts into that structure, and developing richer patterns of relationships with the environment.

Commonly Used Instructional Technology Evaluation Systems

Today, evaluators need information in several areas when it comes to evaluating instructional technology systems. The information must deal with cost-effectiveness, especially as one surveys the plethora of available transmission systems that may be more or less efficient depending on the goals and design of the network, as well as the physical and geographic location of sites. Evaluators must also evaluate how a system delivers information, since technology changes the ways in which information and information resources reach schools.
The number and variety of information providers available to schools increases as technology becomes more pervasive in the educational setting. Evaluators must also examine instructional technology accessibility, recognizing the importance of the user-friendliness of the technology. The access must also be equitable, so that differences between students in "have not" districts and districts with greater resources are not exacerbated to the extent that quality differentials become glaringly apparent. Instructional technology, used correctly, can have a great impact on delivering learning opportunities to at-risk student populations.

Evaluation systems that examine cost versus benefit, and those which focus on the technology being used, have a substantial history of examination. These evaluation systems are still widely utilized and are necessary, even if the approach of using them as the sole source of network evaluation is questionable. Woodley and Kirkwood (1986) illustrate a traditional approach to instructional technology evaluation. They describe evaluation based on six areas of measurement. The first is a basic measurement of activity. These measures include questions about the number of courses produced, the number of students served, and the number of students turned away when demand becomes greater than the ability to deliver. A second measure is that of efficiency. This measure includes questions about the number of students completing the course and the workload they attempted. Other efficiency measures might deal with cost-effectiveness and making comparisons with students in traditional settings. A third measure is that of outcomes. The authors indicate that measures of learning tend to be covered by formal exams and assessments. These measures can also be designed to be long-term in nature as employment records are checked and employer satisfaction with graduates is evaluated. A fourth evaluation area is that of program aims. This measurement involves evaluation of a network's basic goals in terms of what and whom they intend to teach. A fifth measurement area is policy evaluation. This can take the form of market research, such as surveying students to determine their opinions of various policies and procedures, or monitoring the impact of a practice or program on students. The sixth and final form of measurement is organizational evaluation. This simply involves determining the efficiency and efficacy of the way the network conducts its business.

As has been indicated previously, some evaluation systems tend to center attention on the technology used to teach. By doing so, they tend to further the notion that ever more powerful new technologies will be the driving force that will change how schools transform themselves to deliver knowledge in a rapidly evolving social structure. Means (1994) states

I have come to believe that the causal relationship flows at least equally strongly in the other direction--that is, that education reform makes a school ripe for technology. Teachers who rethink their curricula, replacing short pieces of didactic instruction on separate topics in discrete disciplines with multidisciplinary projects in which students tackle meaningful, complex tasks over extended periods of time, are establishing the prerequisites that will allow them to apply technology meaningfully to support student work. (p. 163)

New information which must be considered in instructional technology evaluation is being proposed continuously.
In a report for the North Central Regional Educational Laboratory, Ramirez and Bell (1994) sum up the types of considerations that must be evaluated. These include:

1. The impact of technology on teaching and learning. Technology reduces the traditional teacher-as-lecturer approach and makes active participants out of students. It makes accessing information much easier and provides for real-life experiences for students as they interact in real-world, real-time activities. Student efforts become more collaborative and cooperative.

2. Use of technology as a tool to help reduce inequities can be achieved if policies ensure that technology is accessible and affordable to all classrooms. Technology has the ability to remove the very real barriers of time and distance which negatively affect rural schools and communities. They are able to have access to the same types of information at the same levels of quality as schools and communities in other, less rural areas.

3. Integrating technology into the fabric of instruction requires changes throughout the school organization. A variety of instructional technologies must be present within any system to make it fully functional and supportive to learning. Changes in the structure of the school day and year may be necessary to create opportunities for integration of technology. Changes in the structure of physical learning environments may also be necessary to make the technology readily available to users.

4. Professional development is crucial to integrating technology successfully into classrooms. Ensuring access and comfort for teachers is an important precursor to promoting extensive use of technology by students. Teachers should also be involved in the decision making.

Burnham (1994) emphasizes that technology-enhanced education, specifically distance education, should be recognized as an endeavor different from traditional face-to-face instruction. Hofmeister, Carnine, and Clark (1993) note that by focusing on the acquisition and power of the hardware that is necessary to support technology, we lose sight of other variables that represent the pedagogy. There is a danger that the values of, and the focus on, knowledge dissemination can be lost. Jones, Valdez, Nowakowski, and Rasmussen (1993) indicate that "when technology effectiveness is conceptualized as an intersection between learning and technology, it is possible to provide specific indicators of engaged learning and high-performance technologies that promote learning" (p. 45).

Writers such as Ray (1991), Resnick and Resnick (1992), Perelman (1992), and Sabelli and Barrett (1994) have begun to promote the idea that traditional models of learning will not be adequate to meet the needs of education in the next century. The old models that stress basic skills and content using the transfer/transmission mode of instruction are in conflict with emerging models. They are giving way to equipping students with the abilities to think strategically as they problem solve, to work productively as they continue to learn within the context of a constantly changing environment, and to work together collaboratively on a personal level as well as on an increasingly global one. These new necessities bring into question the central purpose of evaluation systems that traditionally measured technology and cost/benefit by comparing technology-enhanced programs with traditional instructional delivery models.
Hudson and Boyd (1984) indicate that we will not always be able to anticipate the effect that new communication and information technologies will have on learning and on students. They cite the ability of computers to teach more than content, such as problem-solving skills. Computers are also useful in stimulating intellectual curiosity.

CHAPTER III

METHODS AND PROCEDURES

The research procedures used in conducting the study are described in this chapter, which is divided into the following sections: (a) Research Questions, (b) Research Population, (c) Development of the Questionnaire, (d) Procedures for Data Collection, and (e) Procedures for Data Analysis.

The researcher's major focus in this study was to develop a profile of system evaluations used throughout the State of Michigan. A further focus of the study was to determine the bases of those evaluations. A third area of focus was to determine the extent to which goals developed during the planning stages for system implementation were used as a referent for evaluation. Finally, the study was designed to determine the frequency of instructional technology evaluation across the state.

Research Population

In the State of Michigan, the mechanics for planning, initiating, promoting, and gate-keeping of instructional technology systems have been assumed by the intermediate school districts. These fifty-seven districts, or current subdivisions of them, remain heavily involved in nearly all of the major K-12 instructional technology initiatives active within the state. The population for this study consisted of all fifty-seven intermediate school district superintendents, high school building principals whose buildings participated in an identified synchronous or asynchronous distance learning environment, and technology systems administrators. The list of current (as of July 1, 1996) intermediate school district superintendents was obtained from documents supplied by the Michigan Association of Intermediate School Administrators. The lists of high school principals and systems administrators were supplied by the Michigan Department of Education, Regional Educational Materials Centers, and the Michigan School Directory.

All intermediate school district superintendents were included in this study. No sample was drawn from this group. True samples were drawn from the other participating groups.

Development of the Survey Instrument

Content

The initial phase of the process of questionnaire development began with a review of the related literature and consultation with various practitioners in the field of instructional technology. Many of the individuals who were interviewed, either by phone or in person, were members of the National Rural Education Association, the National School Boards Association, the Global Village Schools Network, or the North Central Regional Lab. Questions were generated from articles in which the authors sought to describe the various types of evaluation systems in place. These systems seemed to fall within three identifiable categories. The first category included evaluations based on relative economics. An example is the comparison of the cost of pedagogy delivered via two-way or modified one-way systems versus instructional delivery in traditional classrooms. Also included were evaluation systems based on the values of the technology itself and the effect of the evaluation on the process of teaching.
A final type of evaluation measured how students learn differently, thereby placing value on the process of learning in a technologically enhanced classroom. The original pool of items was reduced by screening out those deemed irrelevant to the purposes of this study and combining some similar response items.

Construction

Following the definition of terms and a list of instructions for completion, thirty-two questions were presented in the questionnaire. These thirty-two items were related to the professional position and responsibilities of the respondent, the type of instructional network with which the respondent was associated, whether the system was evaluated, and whether the evaluation was completed on a regular basis.

Some of the questions required a "yes" or "no" response, while others were answered by making a choice from several possibilities. Some of the questions required only the best single choice per item, while others allowed respondents to make multiple selections. Additional questions were open-ended. Since the initial group of individuals surveyed included all intermediate school district superintendents, those who were not a part of an instructional technology network were not required to complete all items on the survey. The second group of respondents, the high school principals and technology directors, were asked to complete all items since they were identified as being users of instructional technology systems.

The questionnaire was constructed with the assistance and advice of a consultant, whose experience in the development and evaluation of tests, surveys, and questionnaires was of great benefit, as was his experience in the interpretation of research results.

Validity and Reliability

Two panels were asked to review the questionnaire relative to its content and construction prior to its use. Both panels included superintendents, principals, network administrators, and evaluation practitioners. These individuals were considered to be in a position to judge the merits of the content of the instrument. In response to their input, some questions were changed while the content of others was clarified.

The instrument's reliability was determined using Cronbach's alpha procedure. The reliability coefficient for scales pertaining to perceptions about goals and their effect on evaluation was .67. Similarly, the alpha score for perceptions on evaluation results and their dissemination and use was .89. These reliability scores were considered acceptable.
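For reference, Cronbach's alpha is conventionally defined in terms of the item variances and the variance of the total scale score. The formula below is the standard textbook statement, supplied here as a reader's aid rather than drawn from the study's own materials; k is the number of items in a scale, s_i^2 is the variance of item i, and s_T^2 is the variance of the total scale score:

    \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} s_i^2}{s_T^2} \right)

Alpha rises toward 1.0 as the items in a scale covary strongly. Read against common rules of thumb, the .89 coefficient reflects strong internal consistency, while .67 sits near the lower bound of what is usually treated as acceptable, consistent with the researcher's judgment above.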
Procedures for Data Collection

The questionnaire was first mailed to the fifty-seven intermediate school district superintendents, the high school principals, and the technology coordinators during the week of September 23, 1996 (Appendix B). Each questionnaire was accompanied by a postcard which the respondent was requested to return upon completion of the questionnaire. This was to ensure confidentiality of the responses.

A letter of introduction explaining the purpose of the study was sent to each person receiving the survey (Appendix D). This letter detailed general instructions for completing the survey, mailing instructions, when to return the accompanying postcard, and an expression of gratitude for participating in the study. The respondents were requested to return the completed questionnaire in a self-addressed, stamped envelope and, in addition, to return the postcard after completion of the survey. The postcard provided each respondent with an opportunity to request the results of the survey. Those participants who did not return the postcard indicating that the survey had been completed and returned were mailed a second letter encouraging their participation, along with a questionnaire identical to the first one mailed.

Procedures for Data Analysis

The data were analyzed using the Statistical Package for the Social Sciences (SPSS) program. Descriptive statistical tools were used to analyze the data. The data were processed and examined for each of the research questions. Descriptive statistical analyses, such as percentage, frequency, mean, and standard deviation, were utilized to describe the findings. Cross-tabulation analysis was used to determine the association between selected variables. Data acquired during the course of this study are found in Chapter IV.
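Before turning to those data, a brief illustration of the analysis machinery may be useful. The study's analyses were run in SPSS, and the exact procedures are not reproduced here; the Python/pandas sketch below is offered only as a modern stand-in for the descriptive and cross-tabulation procedures described above. The miniature data set and all column names are hypothetical.

    import pandas as pd

    # Hypothetical respondent-level records; in the actual study each row
    # would correspond to one returned questionnaire.
    responses = pd.DataFrame({
        "role": ["Superintendent", "Principal", "Tech. Coordinator",
                 "Principal", "Other"],
        "responsibility": ["Direct Oversight", "Building Administration",
                           "Instructional Technology",
                           "Building Administration", "Other"],
    })

    # Frequency and percent distribution for a single item
    # (the form of Table 1 below).
    freq = responses["role"].value_counts()
    pct = (freq / freq.sum() * 100).round(1)
    print(pd.DataFrame({"Frequency": freq, "Percent": pct}))

    # Cross-tabulation of primary responsibility by role, with marginal
    # totals and row percentages (the form of Table 5 below).
    counts = pd.crosstab(responses["responsibility"], responses["role"],
                         margins=True)
    row_pct = (pd.crosstab(responses["responsibility"], responses["role"],
                           normalize="index") * 100).round(0)
    print(counts)
    print(row_pct)

The normalize="index" argument yields row percentages of the kind reported in the cross-tabulation tables that follow.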
CHAPTER IV

PRESENTATION OF DATA

Introduction

The researcher's purpose in this study was to examine data that revealed characteristics of the evaluation of instructional technology networks in Michigan. Specifically, the goals were to determine if evaluation was taking place as a part of system operation, how frequently the evaluation procedure was used, who was responsible for performing the tasks associated with evaluation, if the individual responsible for evaluation was being given adequate training, and whether evaluation was connected substantially to the original proposed uses and goals of the instructional technology system. Each of these questions was examined using a survey containing items relating to each area. The data analysis for each of the questions included in the survey instrument is presented in the pages which follow. Of the 230 survey instruments mailed, 134, or 58.3%, were returned.

The first three questions were designed to elicit information about the person filling out the survey instrument. These included queries about the respondent's role within the organization, the respondent's primary responsibilities, and other responsibilities that would be considered part of the respondent's job accountability.

Role of Respondent

Survey item number one asked each respondent to identify their role within the organization. The possible responses were superintendent, technology coordinator, principal, or other. Responses to the survey items are shown in Table 1.

Table 1.--Role of Respondent

Value Label          Frequency    Percent
Superintendent           32         24.6
Tech. Coordinator        25         19.2
Principal                47         36.2
Other                    25         19.2
All of Above              1           .8
                        134        100.0

Table 1 shows the distribution of respondents according to the reported role within their organization. One respondent listed their role as all of the possible responses. That response is shown as "all of above" in the table. Some of the respondents chose "other" as their preferred option for response. The choices made and the frequency of the choices are reported in Table 2.

Table 2.--Other Roles Reported

Reported Role                                            Frequency of Response
                                                               (n = 25)
Computer Teacher                                                  3
Deputy Superintendent                                             1
Director of Curriculum and Instruction                            3
Director of Environmental and Utility Services                    1
Teacher and System Manager                                        4
Instructional Services Director
Director of Technology
Regional Educational Materials Center (REMC) Director             1
Assistant Superintendent
Librarian                                                         3

The most frequently chosen response within the category "other" was the role of teacher. Three respondents indicated that they functioned as computer teachers, while four individuals identified themselves as teachers who were also responsible for managing their building or district technology systems.

Primary Responsibilities of Respondents

Item number two required the respondents to describe their primary responsibility within the organization. The choices included direct oversight, instructional technology network administration, building administration, or other. The frequency of choices and the percentage of each choice is shown in Table 3.

Table 3.--Primary Responsibilities of Respondents

Value Label                  Frequency    Percent
Direct Oversight                 43         33.9
Instructional Technology         11          8.7
Building Administration          48         37.8
Other                            25         19.7
                                134        100.0

Nearly 20% of the total respondents chose "other" as their response to survey item 2. The choices made and the frequency of those choices are shown in Table 4. Those replying indicated responsibilities both within and outside the normal view of network manager.

Table 4.--Other Reported Responsibilities

Primary Responsibility                            Frequency of Choice
                                                       (n = 25)
Indirect Oversight                                        2
Building Network Administration
Teaching Computer Classes
WAN Installation
Teaching
Classroom Computer Labs/Staff Assistance
Integrating Technology Into Curriculum
ISD Oversight

Table 5 shows a comparison of primary responsibility by the role of the respondents. Frequencies and percentages are reported.

Table 5.--Primary Responsibility by Role

                              Role
Primary                Superintendent   Technology    Principal      Other
Responsibility                          Coordinator
                           N (%)          N (%)         N (%)        N (%)
Direct Oversight         18 (44%)       11 (27%)       1 (2%)      11 (27%)
Instructional
  Technology                --           8 (73%)       1 (9%)       2 (18%)
Building
  Administration            --           2 (4%)       44 (92%)      2 (4%)
Other                    11 (44%)        4 (16%)         --        10 (40%)
Column Total             29 (23%)       25 (20%)      46 (37%)     25 (20%)

A total of 125 of the 134 individuals who returned survey forms responded to this question. There is a variable degree of responsibility crossover between categories on either axis. The data represented in this table indicate a substantial amount of confusion regarding role and responsibility. Only building administrators indicated a consistent view of the focus of their primary responsibility.

Other Responsibilities of Respondents

Question number three was open-ended and invited the respondents to list duties beyond those for which they were primarily accountable. The responses to this question are shown in Table 6.

Table 6.--Other Responsibilities Listed by Respondents

Responsibility                    Frequency of Mention
Teaching                                   4
Training/Staff Development                 8
Planning/Evaluation                        7
Public Relations                           1
Purchasing/Procurement                     3
Network Coordination                      12
Program Administration                    14

Because of the variety of responses represented in the returned survey forms, those which were very similar were grouped together. For example, under the category of network coordination, no distinction was reported regarding the type of network being coordinated. Individual responses reflected coordination of telephony networks, computer networks, distance learning networks, and internet access systems. Similarly, in the area of program planning, no distinction was made about the types of programs represented.

Actual Use of Instructional Technology

The information upon which the identification of potential respondents was made did not specify whether instructional technology networks were operational or in the planning stages. For this reason, survey item 4 appears in the questionnaire. Respondents were given the opportunity to indicate if their building or district was part of an instructional technology network delivering educational programming.
Eighty percent of the respondents indicated that their building or district participated in a functioning network, while twenty percent indicated a negative response. Those who replied that they were not part of a functioning network were not asked to answer further survey questions.

System Description

The variety of responses received to question five is reported in Table 7, listed by response and frequency. The data indicate the variety of instructional technology networks functioning throughout the State of Michigan at the present time. Since each network developed as either a local or regional initiative, no uniformity of design was promoted among planners.

Table 7.--Description of Systems

System Description                            Frequency of Choice
Internet                                               7
Satellite Distribution Uplink or Downlink
Live Cast/Community Channel
Interactive TV
Computer Networks
Automated Library
Administrative Data System
Ethernet Network
Agency WAN

Operational Description of System

Two survey items were used to focus the attention of the respondents on the type of technology system being investigated. Four choices were given for survey item 6: (a) two-way video and audio, (b) one-way video and two-way audio, (c) one-way video and one-way audio, and (d) other. Responses are shown in Table 8.

Table 8.--Operational Descriptions of Systems

Description                        Frequency    Percent
Two-Way Video and Audio                63         63.0
One-Way Video and Two-Way Audio        14         14.0
One-Way Video and Audio                 8          8.0
Other                                  15         15.0

Of the 100 respondents to survey item 6, reported in Table 8, 63 indicated that their systems were capable of both two-way video and audio transmission, 14 noted systems capable of two-way audio and one-way video, and 8 individuals chose one-way capacity for both audio and video. The definitions indicated by the remaining 15 respondents most frequently involved internal and external data systems or data/voice networks; the remaining responses included (a) one-way video, (b) TI-IN, and (c) LAN/WAN.

Survey item 7 listed seven choices for respondents to indicate how signal was transmitted in the systems with which they worked. The possible responses included (a) fiber optic cable, (b) coaxial cable, (c) compressed copper wire, (d) microwave/telephone, (e) satellite downlink/uplink, (f) hybrid transmission, and (g) other. Data for survey item 7 are shown in Table 9.

Table 9.--Methods of Signal Transmission

Method of Signal Transmission    Frequency of Response
Fiber Optic Cable                         51
Coaxial Cable                             21
Compressed Copper Wire                     8
Microwave/Telephone                        9
Satellite Downlink/Uplink                 23
Hybrid Transmission                        4
Other                                     11

The most frequently chosen methods of signal transmission were (a) fiber optic cable, (b) coaxial cable, and (c) satellite downlink and uplink. Only 4 respondents indicated that their systems were hybrid, that is, using two or more of the signal transmission methods. Those who chose the answer "other" exhibited a narrow range of responses, which fell into five categories: (a) T1 transmission, (b) twisted pair, (c) microwave/cable, (d) ISDN, and (e) ATM (asynchronous transfer mode). Response (b), twisted pair, is identical to item 8.3 on the survey instrument. It should be noted that T1 is a compressed copper technology and that microwave/cable is a hybrid transmission system.

System Evaluation

This question required only a "yes" or "no" response on the survey instrument.
Most respondents indicated that system evaluation was a part of their technology network and that it was structured and occurred on a regular basis. Results are reported in Table 10.

Table 10.--Structured System Evaluation

Choice    Frequency    Percent
No            38         39.2
Yes           59         60.8

Frequency of System Evaluation

Four responses were possible: annually, semi-annually, quarterly, or other. Table 11 shows the frequency of the responses among the four possible choices.

Table 11.--Frequency of System Evaluation

Increment        Frequency    Percent
Annually             35         58.4
Semi-Annually         5          8.3
Quarterly             6         10.0
Other                14         23.3

Fourteen respondents chose "other" on this survey item. Those responses are shown in Table 12, along with the frequency of each response.

Table 12.--Frequency of Evaluation (Other)

Response                             Frequency
Continuous Process/Ongoing               6
No Evaluation Yet But Planned
As Needed
Informally
Monthly via Committee
Daily
As Convenient

Goal and Change Data

These items address the respondents' views concerning the effect of network goals on the evaluation system and, conversely, the effect of the evaluation system on network goals. Data from these items appear in Table 13. Respondents were asked to react to the statements posed in the survey by indicating whether they (a) strongly agreed, (b) agreed, (c) disagreed, or (d) strongly disagreed. The mean and standard deviation are reported for each statement.

The data in Table 13 indicate that 80% of those responding agreed that goals were established for networks prior to the actual start-up date. When asked whether the governing bodies of the networks were asked to adopt the established goals, 77% agreed that these bodies had been given the opportunity to do so. The same percentage of respondents indicated that efforts were made to familiarize network participants with the network goals. Only 60% agreed that individual users were made a part of the goal development process. In response to whether the evaluation systems have changed since they were first established, 63% of those replying agreed that change had occurred. The final two survey item responses reported in Table 13 dealt with change. Of those responding, 60% agreed that change in technology has come as a result of the collection of evaluation data. Only 55% agreed that change in the instructional process has resulted from system evaluation.

Table 13.--Perceptions Regarding Technology Evaluation

                                  Strongly                Dis-      Strongly    Mean*
Statements                         Agree       Agree      agree     Disagree     (sd)
                                   n (%)       n (%)      n (%)      n (%)

Formal goals were established    21 (22.8)   53 (57.6)  12 (13.1)   6 (6.5)     2.03
prior to network start-up.                                                      (.79)

The governing body of each       18 (19.6)   53 (57.6)  16 (17.4)   5 (5.4)     2.09
participating network adopted                                                   (.76)
the established goals.

Effort was made to familiarize   22 (23.4)   50 (53.2)  16 (17.0)   6 (6.4)     2.06
participants with the network                                                   (.81)
goals.

Individual users were part of    15 (16.0)   41 (43.6)  31 (33.0)   7 (7.4)     2.32
the goal development process.                                                   (.83)

Goals became part of the         16 (19.8)   35 (43.2)  25 (30.9)   5 (6.1)     2.24
original evaluation system.                                                     (.84)

System goals have changed        15 (18.1)   47 (56.6)  19 (22.9)   2 (2.4)     2.10
since the network was                                                           (.71)
initiated.

The evaluation system has         8 (11.3)   37 (52.1)  25 (35.2)   1 (1.4)     2.27
changed since it was first                                                      (.68)
established.

Changes in technology have       13 (16.2)   35 (43.8)  28 (35.0)   4 (5.0)     2.29
resulted in changes in the                                                      (.80)
evaluation system.
Changes in pedagogy have          6 (7.7)    37 (47.4)  32 (41.1)   3 (3.8)     2.41
resulted in changes in the                                                      (.69)
evaluation system.

*Mean was computed on a four-point scale: 1 = strongly agree, 2 = agree, 3 = disagree, and 4 = strongly disagree.

Ways in Which Instructional Technology Network Goals Were Expressed

Respondents were given a choice of four possible alternatives to answer the question. These choices included fiscal outcomes, instructional outcomes (changes in teaching methods/technologies), learner outcomes (changes in the way students learn), and other respondent-specified outcomes. The breakdown of the responses is reported in Table 14.

Table 14.--Selected Technology Goals

Types of Technology Goals        Frequency
As fiscal outcomes
As instructional outcomes
As learner outcomes

Respondents who chose "other" as their response to survey item 19 focused on two areas in their answers: (a) goals expressed as the number of additional classes offered, and (b) goals expressed in terms of curricular extensions and enhancements. It is arguable that both of these choices impact fiscal outcomes; further clarification would be needed to make this determination, however.

Inputs to the Evaluation Process

These items were answered with either a "yes" or a "no." Each of these questions focused attention on the ways in which input is made concerning the evaluation system. Table 15 contains the data from questions 20 through 24.

In nearly one-half of the responses it was noted that the system administrator was not the individual charged with system evaluation. The research data do not indicate what other positions exist which would include evaluation as a specific responsibility. The data in Table 15 also indicate that, in most instances, the system administrator has other duties to perform within the organization. The remaining data in this table show that, while input is sought from system users, no formal means exists for using the data in evaluation; this was the view of nearly one-half of the users responding to the survey.

Table 15.--Evaluation Process Inputs

                                                        Yes           No
Statements                                             n (%)         n (%)
Is the system administrator responsible for          48 (52.7)    43 (47.3)
system evaluation?
Is system administration the sole function of        18 (19.8)    73 (80.2)
this position?
Is input solicited from system users as part         82 (87.2)    12 (12.8)
of the evaluation process?
Is input from local districts/buildings sought       80 (85.1)    14 (14.9)
as part of the evaluation process?
Does a formal process exist for input into the       47 (50.5)    46 (49.5)
evaluation process?

Use of Evaluation Data

These items again required responses on the four-point agreement scale. All of these questions dealt with how the evaluations and their results were used. The results are reported in Table 16.

Two-thirds of those responding agreed that evaluation reports were provided to local administrators. An identical percentage agreed that reports were made available to the boards which governed instructional networks. When respondents were queried regarding the dissemination of evaluation results to system users, only about one-half agreed that this was done. Two-thirds of those replying agreed that a process existed for using the data gained from evaluation to make changes in the instructional technology networks. Only slightly more than 57% of those responding indicated that the system administrator was responsible for processing the evaluation data.
A similar percentage of respondents agreed that local administrators were responsible for implementing system changes. Almost three-fourths of those responding agreed that system administrators were provided with training adequate to allow them to implement system changes indicated as necessary by evaluation data. Only 46% of those replying indicated that sufficient guidelines were provided to network administrators to direct activities in system assessment.

Table 16.--Evaluation Data Use

                                  Strongly                            Strongly    Mean*
Statements                         Agree       Agree      Disagree    Disagree     (sd)
                                   n (%)       n (%)       n (%)       n (%)

Reports are generated for        10 (12.1)   46 (55.4)   21 (25.3)    6 (7.2)     2.22
local administrators.                                                             (.84)

Reports are generated for        12 (15.6)   40 (51.9)   21 (27.3)    4 (5.2)     2.22
the governing board.                                                              (.77)

Evaluation results are            9 (11.8)   35 (46.1)   27 (35.5)    5 (6.6)     2.37
disseminated to system users.                                                     (.78)

A process exists for making       9 (11.5)   42 (53.9)   21 (26.9)    6 (7.7)     2.31
network changes based on                                                          (.78)
evaluation data.

The system administrator          6 (8.0)    37 (49.3)   26 (34.7)    6 (8.0)     2.43
processes evaluation                                                              (.76)
results.

Local administrators have the     9 (11.0)   42 (51.2)   29 (35.4)    2 (2.4)     2.29
primary responsibility for                                                        (.69)
implementing system changes.

Guidelines exist to guide the     7 (9.4)    27 (36.5)   38 (51.4)    2 (2.7)     2.47
system evaluator in                                                               (.71)
assessment.

Training opportunities are       14 (18.2)   41 (53.2)   16 (20.8)    6 (7.8)     2.18
provided the system                                                               (.82)
evaluator.

*Mean was computed on a four-point scale: 1 = strongly agree, 2 = agree, 3 = disagree, and 4 = strongly disagree.

Chapter Summary

Results of Data Analysis for Role and Responsibility

In this section, the role of the respondents according to major organizational accountability was examined, and the primary responsibilities associated with that role were identified. As a third part of the survey, each respondent was asked to identify other responsibilities not necessarily associated with the traditional scope of the position.

System Description

Survey respondents were requested to reply to items designed to provide information about the types of instructional technology networks with which the respondents worked. Survey items 4 through 7 addressed this issue and yielded descriptive information.

Descriptive Information

Items 8 through 17 assessed how goals drove network development and evaluation, who participated in goal development, how familiar users were with system goals, and what effect goals have had on network evolution. Respondents were asked, in addition, whether changes in pedagogical techniques have resulted in changes in the evaluation system.

Goals of Instructional Technology Networks

Survey item 19 addressed the ways in which respondents might express network goals. Choices given fell into the areas of fiscal outcomes, instructional outcomes, and learner outcomes. Respondents were also given an opportunity to choose "other" and specify differing ways to express goals.

System Administrator Role

Items 20 and 21 addressed the role of the system administrator and the responsibilities of that position regarding evaluation. It was left to the respondents to define "system administrator," since the term has nearly as many meanings as there are differing systems.

Description of Evaluation Input Elements

Items 22 through 24 inquired about who provided information to the evaluation process regarding perceived changes in instructional technology networks. Information was also obtained assessing the presence of a formal means of input into evaluation systems.
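As a closing note on the scale data summarized above, the means reported in Tables 13 and 16 can be verified directly from the printed frequency counts. The minimal sketch below assumes only the four-point coding given in the table footnotes (1 = strongly agree through 4 = strongly disagree), under which the printed means and standard deviations reproduce.

```python
# A quick check of the scale statistics in Tables 13 and 16,
# computed from the printed frequency counts under the footnoted
# coding (1 = strongly agree ... 4 = strongly disagree).
def scale_stats(freqs, codes=(1, 2, 3, 4)):
    """Weighted mean and population standard deviation."""
    n = sum(freqs)
    mean = sum(f * c for f, c in zip(freqs, codes)) / n
    var = sum(f * (c - mean) ** 2 for f, c in zip(freqs, codes)) / n
    return round(mean, 2), round(var ** 0.5, 2)

# First row of Table 13 (21, 53, 12, 6) -> (2.03, 0.79), as printed.
print(scale_stats([21, 53, 12, 6]))
```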
Use of Evaluation Reports

This section of the survey instrument, containing items 25 through 32, considered the use of evaluation reports: specifically, whether the reports are used, whether they are disseminated widely, who is responsible for change using information contained in the reports, and whether those charged with evaluation receive information used in the practical application of evaluation results.

In Chapter V, the findings presented in Chapter IV are summarized, conclusions which may be drawn from those findings are illuminated, and recommendations for further study are presented.

CHAPTER V

SUMMARY, FINDINGS, CONCLUSIONS AND DISCUSSION, AND SUGGESTIONS FOR FURTHER RESEARCH

This chapter contains a summary of the study, findings, conclusions and discussion, implications for instruction and learning, and suggestions for further research.

Summary

Evaluation of educational programs and practices has been present for many decades within the American school system. It has long been used to provide a basis for decision making and for the formation of policies and procedures. It has also been used to monitor how effectively and efficiently education funds are spent and to improve educational materials and programs. Some endeavors in the educational arena have a significant body of evaluation research attached to them; others have been subjected to little study.

This research examined the use of educational technology systems in the State of Michigan. Little information exists which provides descriptive data about the evaluation of instructional technology systems. Some data are available from the Michigan Department of Education, and a listing of contact people is provided in this research document (Appendix A). This data collection does much to describe K-12 video, voice, and data systems but makes no attempt to provide any information regarding evaluation of the identified networks.

With the increasingly rapid growth of instructional technology systems within this state, it is important that a groundwork be laid for meaningful and useful evaluation. With that necessity in mind, this study was initiated to seek information regarding evaluation of instructional technology within the State of Michigan. Areas specifically addressed in the study were (a) the types of instructional technology evaluation currently in use, as identified by what is evaluated and for what purpose the evaluation is completed; (b) the role and responsibility of the individual within each organization whose duties encompass the technology system; (c) the relationship between the data used for evaluation and the goals suggested for the system at its inception; (d) how data gathered during the evaluation process are used; (e) whether system changes have occurred as a result of evaluation; and (f) the perceived availability and adequacy of training provided to system evaluators, and whether that training facilitates the transformation of collected data into information of practical value to system users.

Research questions were generated and a survey instrument was designed to determine (a) the types of instructional technology evaluation currently in use, as identified by what is evaluated and for what purpose, (b) who conducts the evaluations and the type of data used, (c) whether data gathered during the evaluation process are used to identify needed changes, (d) what system
changes have occurred as a result of evaluation, (e) the role of the system administrator, (f) descriptions of input elements, (g) how evaluation results are reported and used, and (h) what training characteristics and needs of system evaluators exist.

The survey instrument consisted of thirty-two items relating to instructional technology evaluation. The instrument contained items related to administrator and evaluator roles and responsibilities, outcome expression, use of evaluations, and bases for instructional technology system development.

The population selected for the study included all fifty-seven intermediate school district superintendents in Michigan. Also included was a true random sample of high school principals from across the state whose buildings were part of a network as identified by the Michigan Department of Education. A true random sample of persons identified as system administrators by the same source was selected in addition to the two other groups. A total of 230 surveys were mailed with an accompanying letter (Appendix C) during the week of September 15, 1996. A second mailing to non-respondents was sent during the week of September 30, 1996. Of the 230 survey instruments mailed, 134 were returned, for a participation rate of 58.3%.

The analysis of the data obtained from the returned survey instruments used descriptive statistical tools. The data were processed and analyzed for each of the research questions. Descriptive statistical analyses such as percentage, frequency, mean, and standard deviation were utilized to describe the findings. Cross-tabulation analysis was used to determine the association between selected variables.

Chapter I listed the limitations inherent in this study. They were:

1. Data for this study were collected by mailed survey; therefore, only reported information is included.

2. Data were collected via a mailed survey instrument; therefore, the researcher had to assume that the instrument was answered honestly.

3. The study was descriptive and subject to the weaknesses inherent in descriptive research. For example, the survey instrument was designed to measure respondents' perceptions regarding the uses of instructional technology evaluation; it did not question why the perceptions existed.

Findings

In this section, research findings are presented. Readers are reminded that the findings reported are based only upon the responses of the study participants. These respondents included intermediate school district superintendents, high school principals, and technology systems administrators as identified by the Michigan Association of Intermediate School Administrators and the Michigan Department of Education.

Research Objective 1

The first objective of the researcher was to determine the extent of evaluative effort used within selected instructional technology systems serving K-12 students within the State of Michigan, and to determine the frequency of evaluation within those systems. Two questions were generated by this research objective:

1.1 Does the instructional technology system receive regular and structured evaluation?

1.2 What is the frequency of the network evaluation?

Findings

Sixty-one percent of those individuals who responded to these survey items indicated that the instructional technology systems with which they were associated were evaluated on a regular basis. The patterns of evaluation, that is, the planned frequency of evaluation, varied considerably.
The most common basis for assessment was annual. Other evaluations occurred on a semi-annual or quarterly basis; still others followed a continuous process of evaluation. Some respondents noted that evaluation was an ad hoc process, accomplished as needed or when it was convenient to do an assessment. No attempt was made in this research study to determine what factors constituted need or convenience. A significant number (39%) of respondents indicated only sporadic assessment or no evaluation at all. Only 77% of those who indicated that regular evaluation was taking place reported that the evaluation occurred on at least an annual basis; the remaining respondents noted that they were planning to evaluate but had not yet initiated a process, or that evaluation would be done as convenient.

Evaluation is far from universal. No generally accepted frequency exists for evaluation; some systems are highly structured while others receive no evaluation. It appears that present levels of network evaluation are not adequate for determining whether system improvements need to be made or which specific improvements are called for. Effort needs to be made to make evaluation both consistent and universal.

Research Objective 2

The second objective was to determine the influence of the original goals for the instructional technology network upon current network operations and evaluation. Five questions were drawn from this objective:

2.1 Prior to the establishment of the instructional technology network, were formal goals established?

2.2 Was the governing body of each participating entity asked to adopt the established goals?

2.3 Was information dissemination adequate to familiarize individual participants with the established goals?

2.4 Were individual users, including students, teachers, and administrators, part of the goal development process?

2.5 How were goals expressed?

Findings

Responses to the questions associated with research objective 2 indicated, in general, that goals were established for instructional technology networks prior to implementation and that the oversight bodies charged with network governance were given the opportunity to formally adopt those network goals. A substantial variance in answers was present when respondents were asked whether information about the network goals was given to individual network participants, and there was only weak agreement about whether network users were part of the goal development process.

Two observations can be made based on the data provided by this research. First, while 80% of respondents agreed that goals were established for networks, only 60% agreed that system goals have changed as a result of network evaluation. A smaller percentage (58%) agreed that changes in technology have resulted from system evaluation. Finally, only 55% of those responding agreed that changes in pedagogy have resulted in changes in evaluation. A loosely coupled relationship may be present between goals and evaluation. Uncertainty about how to evaluate, and a lack of appropriate, solid evaluation data, may serve to confuse both users and administrators as attempts are made to initiate system change and development. It should be noted, too, that while 73% of respondents indicated that evaluation outcomes were expressed as learner outcomes or instructional outcomes, only 55% of respondents agreed that pedagogical changes have resulted in evaluation changes.
This suggests either that changes in the classroom have not been matched with changes in the evaluation system that measures such changes or, conversely, that evaluation data have not driven changes in the instructional delivery methods used in the classroom. A further possibility is that changes in the way teaching and learning occur have not been realized with the inclusion of new instructional technologies.

Research Objective 3

The researcher's objective was to determine the impact of the original network goals on the evaluation system. Three statements were generated by this objective:

3.1 Goals became part of the original evaluation system.

3.2 The evaluation system has changed since it was first established.

3.3 System goals have changed since they were first initiated.

Findings

Only tenuous agreement was exhibited by respondents regarding incorporation of initial goals into original evaluation systems. This was also true when respondents were questioned about whether changes had occurred in the evaluation system since network inception. The means for both these responses fell between "agreement" and "disagreement," with a slight tendency toward "agreement." There was somewhat stronger agreement regarding changes in system goals. It appeared that goals did not always remain static within technology networks; the factors that influence these changes were not determined by this research. These data suggest confusion about the role of goals in the process of building and evaluating instructional technology networks. Were the goals initially established for the networks sufficient to drive network development and keep that development consistent with the original focus of the network planning, or were those goals not incorporated firmly enough in the planning and evaluation processes to serve as guideposts in development?

Research Objective 4

The purpose of objective 4 was to determine the position(s) of persons responsible for system evaluation. This objective generated four questions:

4.1 What roles and responsibilities are part of the K-12 instructional technology network?

4.2 How do these roles and responsibilities interact with each other?

4.3 Who are the input providers to the evaluation process?

4.4 How is input solicited?

Findings

As part of the survey process, respondents were asked to identify their role within the organization, their primary organizational responsibility, other duties that may have been assigned to them, and other responsibilities for which they may be accountable. Clear delineation of roles and responsibilities was not present in the data. While role was easily defined for a majority of respondents, the responsibilities assigned to similar roles varied a great deal and were non-standard when applied to instructional technology network involvement. One could not view the role as identified by individual respondents and assume a set of responsibilities; similar network duties transcended role choices.

When the data were assessed to determine who has input to the evaluation system, it was noted that system users, as represented by individuals and by building- and district-level groups, have avenues of input. In many instances, however, the means of input appear to be informal. Only slightly more than one-half of those responding indicated that a formal means of system input was present as a part of network evaluation.
Research Objective 5

Objective 5 was to determine the manner in which the information gleaned from the evaluation process was used, and to what end its use is ascribed. Five questions were generated:

5.1 Once evaluation is completed, are reports generated and disseminated?

5.2 To whom are the reports disseminated?

5.3 Does a formal process exist for operationalizing the information from the reports in network operations?

5.4 Is the network administrator responsible for processing the information from the evaluation?

5.5 Who determines what network changes result from the evaluation information?

Findings

Research data for these questions are reported in Tables 15 and 16 in the previous chapter. When asked to respond with a "yes" or "no" to a question about the responsibilities of the system administrator, about one-half of the respondents identified evaluation as one of the duties of this position. While this would seem to indicate variability in the primary role of the person charged with network evaluation, it is necessary to temper any judgment with the note that not all networks are evaluated. In Michigan, it appears that network administration is the sole duty of the person charged with network responsibilities in only a minority of instances; more than 80% of those responding indicated that system administrators have other responsibilities.

Data reported in Table 16 indicate that survey respondents are divided in their view of the system administrator as the designee for dissemination of evaluation results. When asked a series of questions about dissemination of information resulting from evaluations, the respondents tended to agree that local administrators received information about the results. There was also some agreement that information was generated for local governing boards. Fewer respondents agreed that results were generated for system users, and no clear agreement appeared regarding a formal process for making changes in networks based on the information gained from evaluation.

Research Objective 6

The objective of this portion of the research was to determine the training characteristics and needs of system evaluators. Two questions were generated:

6.1 Does a set of evaluation guidelines exist which serves to assist the system evaluator in the task of assessment?

6.2 Does the system evaluator receive regular opportunities for training in areas which will provide knowledge and information useful in the practical application of data generated through evaluation?

Findings

As many respondents disagreed as agreed with the statement that evaluation guidelines exist for the use of system evaluators. Clearly, the area of guidelines can have a significant impact on the ways in which systems work and on their effectiveness in promoting better means of instruction and learning. There was greater agreement when respondents were queried about training opportunities for evaluators: the responses tended toward agreement that adequate opportunities did exist for training and that the training provided knowledge and information useful in the practical application of the data generated in the evaluation process.

Conclusions and Discussion

Conclusion 1

Results from this study indicate that instructional technology evaluation is occurring in many of the functioning networks within K-12 educational systems.
The results also point to a lack of comprehensiveness and consistency within the evaluation process.

Discussion

A majority of those responding to the survey indicated that evaluation was a formal process and that it occurred regularly. The frequency of evaluation was reported to range from daily or ongoing, to annually, to as convenient. Not all of those replying to these survey items indicated regular evaluation as an aspect of their network; some responses noted that evaluation was completed when it was convenient or when such an activity was needed. This indicates that evaluation varies substantially in its structural formality. Of those responding to the survey, only 61% indicated that evaluation was occurring in a formal, regular manner, and of that number, only 77% indicated that evaluation was taking place on at least an annual basis. Given the range of answers and the indication that no evaluation was occurring in the case of 39% of the respondents, a conclusion can be drawn that current levels of instructional technology network evaluation are not adequate to provide the information necessary to make decisions about system improvements and direction.

A further conclusion may be derived regarding how evaluation is viewed as a component of instructional technology networks. Greater emphasis needs to be placed on the necessity of integrating formal, goal-based evaluation into the process of building, maintaining, and re-directing instructional technology networks. Failure to take this action could doom current instructional technologies to the same fate as previous classroom technologies which, although promising at inception, failed to produce the desired pedagogical changes that improve the way in which learners learn.

Conclusion 2

Network goals were usually developed prior to the start of network operations, but the role of network goals in the evaluation process is not clear to many users.

Discussion

Survey respondents tended to agree that goals have had a role in network evaluation as part of original evaluation plans. Some agreement was also evident that governing boards were asked to adopt these goals once they were established for the network. It would appear that attempts were made to determine the purpose and use of instructional technology networks during their planning process. Confusion exists, however, about the role of goals as a basis for the evaluation process, who had input into the goal development process, and the effect that goals have had on networks. While 80% of respondents agreed that network goals were established, only 63% agreed that the goals became part of the evaluation system. These findings support the idea of a lack of adequate follow-through, planning, and commitment on the part of those who operate instructional technology networks. A well-planned system of establishing need, building a delivery system based on the established need, and assessing progress toward meeting goals designed to meet identified needs is lacking in many instructional technology systems.

The research data showed a high degree of variability in the definition of the role of the system administrator. Most respondents indicated that those charged with system administration were also responsible for other organizational tasks. The research indicated that in about one-half of the cases the system administrator was not responsible for system evaluation.
The loosely coupled relationship between goals and outcomes may well be compounded by the lack of any clear understanding about who is responsible for the evaluation of systems.

Conclusion 3

Respondents indicated that network goals focused primarily on instruction and learning, and secondarily on the fiscal advantages of instructional technology. In practice, however, the distinctions and priorities became less clear.

Discussion

Most of those responding to this survey item indicated that network goals were expressed as either instructional outcomes or learner outcomes. Their choices would lead a casual observer to conclude that a high degree of influence on instruction would result. Examination of the remainder of the data generated in this area, however, shows a high level of uncertainty on the part of respondents regarding such a conclusion. The research indicates a low level of agreement with the idea that changes in technology have resulted in concomitant changes in evaluation. There is no clear consensus that goals influence evaluation or that changes in either technology or pedagogy have resulted in assessment changes.

Conclusion 4

Successful instructional networks can take many different forms yet still have the ability to meet the needs of network users. It is probably more important to spend time planning well than to expend effort attempting to achieve a high technology level without an adequate plan for construction, use, and evaluation.

Discussion

Instructional technology networks take many shapes in Michigan. Those responding to the survey reported a wide variety of network technology types, ranging from two-way interactive sites to one-way satellite receptors. Signal transmission modes varied along with the types of systems. Many relied upon fiber backbones; others used cable to provide broad bandwidth capable of supporting massive data flows. Others were more modest in the scope of technology and transmission employed. Some networks linked communities together while others linked classrooms. As technology networks continue to be planned and implemented, a logical next step develops: the linking of the networks to create larger learning communities. The feasibility of this effort has been demonstrated by the growth of the internet. As long distance providers, local telephone companies, and cable distributors compete for each other's customer pools, the entities able to look further ahead will attempt to control a variety of information access points. When this occurs, the process of linking may well be substantially enhanced. System development can be positive, negative, or both. While system variability provides networks that meet local community and classroom needs, the same variability may impede the development of truly integrated systems by allowing the use of incompatible transfer mechanisms. Care must be taken to make certain that system compatibility is a part of planning. When planning is left as it is now, a serendipitous process, local needs will be planned for, but a larger view of the technology may not be taken. If this occurs, eventual compatibility will be achieved, but at a very high price in both money and time.

Conclusion 5

System users are afforded opportunities to provide input into instructional technology systems evaluation, but in many instances the avenues of input are informal and not well established.

Discussion

A high percentage of survey respondents reported agreement that system
users, whether individuals, school building representatives, or district representatives, have opportunities for input into their systems and the evaluation process. Only one-half of those responding identified a formal means of system input. The lack of formality may have implications for whether such input is used or assessed, and for whether the information thus gained has any discernible influence on instructional technology systems or the evaluation of those systems. As in other observations, a lack of concise direction is evident in the structure of network evaluation. Inconsistency appears to be one of the constants of evaluative practice.

Conclusion 6

Those responsible for system evaluation are afforded opportunities for training in ways that facilitate the use of evaluation results to generate practical applications. It is not clear whether evaluators are given adequate guidelines to assist them in their task of assessment. It may be that system evaluators are left to their own means of determining which issues involving instructional technology are important and should be programmatically included. It would also appear that, in the absence of clear guidelines, the process for using evaluation information is not generally or equally vested throughout organizations.

Discussion

Opportunities to increase evaluation skills and the ability to translate collected data into practical application should continue to be afforded those charged with system evaluation. As systems change and evolve, as technologies become more diverse, and as student needs become increasingly heeded, it will be imperative that evaluators become cognizant of expectations for proper assessments. It seems reasonable to assume that goals will change along with practices, and that the necessities of distance and time will make instructional technology ever more pervasive. Well-trained evaluators will be needed not only to provide assessments but also to train other evaluators. It is important, too, that educational institutions examine their goals and values to determine the proper course for development of instructional technology systems. Guidelines that reflect these needs, and the plans to remediate them, should be available to evaluators so that the assessments performed reflect the developmental needs of the system and its users.

Conclusion 7

A lack of consistent, clear direction exists when evaluative practices are used within instructional technology systems in the State of Michigan.

Discussion

Confusion exists regarding several aspects of instructional technology evaluation, and some of the areas of uncertainty are basic. No framework exists for ensuring consistency in the way that evaluation is carried out. It appears that network goals are not consistently included as a basis for evaluation, thereby undermining the effectiveness of that evaluation for assessing progress against those goals. The roles of those involved in various ways with instructional technology networks are not always clearly defined, nor are the responsibilities associated with those roles consistently assigned. This circumstance may cause a lack of accountability for system effectiveness. System users do not always have formal ways to provide information to the evaluation system; the lack of formality could cause important system data to be disregarded and reduce the positive impact of evaluation.
No discernible pattern of responsibility for system evaluation was apparent, nor was there clear consensus about who should be implementing the changes indicated by evaluation. This situation could result in evaluative data not being used to institute needed change, since it remains uncertain who is responsible for system change.

Implications for Instruction and Learning

As technology becomes ever more a part of the process of educating children, important decisions must be made. The readings of this researcher have revealed two distinct modes of technology use in today's classrooms. Nearly every student in this state has daily access to some type of technology which serves to support and enhance that student's learning. Some technologies are relatively simple, and often taken for granted, while others are highly complex. Whether "high tech" or "low tech," the use of instructional technology seems to fall within two broad categories: (a) technology as an adjunct to teaching and learning, and (b) technology as an agent for pedagogical change.

When used as an adjunct, technology can simplify the life of the teacher. It can also assist in making the teacher's time more student oriented and less task oriented. Use of technology for drill and practice, i.e., computer assisted instruction (CAI), and for records management, i.e., computer managed instruction (CMI), should provide greater opportunities for face-to-face, student-teacher interaction. Use of basic computer applications can assist students in organizing concepts and examining their structure; this is specifically true of word processing and database applications. While of value in the classroom, it can be argued that these types of technology uses do not take full advantage of the capacity and power of available instructional technologies. Even the most advanced distance learning systems are being used based on traditional models of classroom interaction. While there is validity associated with these uses, attention needs to be focused on their potential.

Currently available technologies can be agents for educational change. With the advent of distributed instructional technology systems, such as the Internet and WANs rather than LANs, organizational structures are present which allow for enhanced contribution of information by system users. Any number of users can contribute services and information to a distributed system in which any number of others can share. These types of systems lend themselves to cooperative use and collaborative instruction. By allowing users to take advantage of on-line conferencing and bulletin boards, by providing access to remote products and files, and by allowing users to have real-time communication with other users located anywhere in the world who are able to access the same data, collaboration is promoted and knowledge-building communities with no physical boundaries are established.

Instructional technology can be used to provide challenging tasks, enhanced opportunities, and real-life experiences to learners. Students are less likely to learn in the abstract, detached atmosphere of the traditional classroom and more likely to learn by doing. This type of engaged learning allows guided participation rather than demanding that the teacher be the expert. Socratic questioning, intelligent tutoring, error diagnosis and analysis, and the adaptation of technology to student growth and need become very important.
The differences between adjunctive technology and change-agent technology lie not so much in the technology as in its use. As an example, when one examines integrated learning systems (ILSs), it is possible to see a centralized instructional technology system that many might judge to represent high technology. Often, however, when viewed through the filters of performance and change, little benefit can be seen from its use. Integrated learning systems in many environments are used to support traditional tasks and assessments and traditional roles for both teachers and learners, and are generally aimed at the enhancement of basic skills. These are the aspects of integrated learning systems which serve to limit their usefulness. ILS value can be enhanced if the system is decentralized. LANs can be connected to WANs and the internet to create a distributed system which expands available resources beyond the limits of the LAN-based ILS. Access to third-party software, the capability to use teacher- and student-constructed multimedia presentations, and two-way accessibility all serve to move instruction from the traditional, low-skill focus toward opportunities for the promotion of higher-order skills.

Suggestions for Further Research

The following areas are recommended for further study and could answer questions that were generated but not answered by this study:

1. While this study identified the types of technologies that exist in networks across the state, no attempt was made to match system type to expected learner outcomes.

2. Roles were identified of individuals who impacted their networks in various ways. A further area of research suggested by the evident mixture of roles and responsibilities would be to determine which roles best fit which responsibilities, and in what instances crossover should be encouraged or discouraged.

3. This study suggests that a significant number of instructional technology networks receive no evaluation. The ability of networks which receive evaluation to reach established goals should be compared to that of systems which receive no regular evaluation.

4. An important question in determining the relative success of a system would be to ask how well the linkages work, from assessing need, to developing goals, to designing a program to meet goals and objectives, to providing the training necessary to facilitate goal realization.

5. If evaluation is taking place within instructional technology networks, as is suggested by this study, closer examination should focus on the evaluation being used by schools and on whether the evaluation causes changes that result in new constructs of teaching and learning.

6. This research also suggests that system goals change as networks change. Given this suggestion, research should be conducted that would provide data regarding whether evaluation drives system change or whether technology growth and availability drives evaluation. Further study could be performed to identify the factors which drive evaluation change.

7. There is an indication that as pedagogy changes, so does evaluation. Research could be conducted to determine whether pedagogical changes drive evaluation changes or whether instructional technology evaluation can drive pedagogical change.

8. This research project left unclear the answer to the question of who is, or who should be, responsible for implementing system changes as a result of evaluation. Further research could provide additional insight and perhaps help to clarify the confusion that seems to exist about who does what.
9. It is clear that no systematic approach exists to system administration. Did these positions occur in a somewhat haphazard manner as systems developed?

APPENDICES

APPENDIX A

CONTACT PERSONS FOR K-12 INSTRUCTIONAL TELECOMMUNICATIONS SYSTEMS

Michigan Intermediate School District Primary Network Contacts

Allegan County ISD
Primary contact for telecommunications networks:
Vickie Eggers, Distance Learning Consultant
310 Thomas Street, Allegan, MI 49010
Phone: (616) 673-2161  Fax: (616) 673-2361
Email: veggers@accn.org
General district background:
Number of local school districts: 8
K-12 student population: 16,422

Alpena-Montmorency-Alcona (AMA) ESD
Primary contact for telecommunications networks:
Tom Baker, Assistant Superintendent, Interim Technology Coordinator
2118 US 23 South, Alpena, MI 49707
Phone: (517) 354-3101  Fax: (517) 356-3385
Email: bakert@ns.amaesd.k12.mi.us
General district background:
Number of local school districts: 4
K-12 student population: 8,000

Barry ISD
Primary contact for telecommunications networks:
Tom Mohler, Superintendent
535 Woodlawn Avenue, Hastings, MI 49058
Phone: (616) 945-9545 x 11  Fax: (616) 945-2575
Email: tmohler@remc12.k12.mi.us
Elizabeth Forbes, Office Administrator
Phone: (616) 945-9545 x 18  Fax: (616) 945-2575
Email: eforbes@remc12.k12.mi.us
General district background:
Number of local school districts: 2
K-12 student population: 5,600

Bay-Arenac ISD
Primary contact for telecommunications networks:
Faye DeMarte, Director of Instructional Services
4228 Two Mile Road, Bay City, MI 48706-2397
Phone: (517) 667-3280  Fax: (517) 667-3286
Dale Robbins, Video Production
Phone: (517) 667-3230  Fax: (517) 667-3286
General district background:
Number of local school districts: 8
K-12 student population: 20,912

Berrien ISD
Primary contact for telecommunications networks:
Jim Bembenek, Director, REMC 11
711 St. Joseph Avenue, Berrien Springs, MI 49103
Phone: (616) 471-7725  Fax: (616) 471-1221
Email: jbembene@remc11.k12.mi.us
General district background:
Number of local school districts: 16 public and 22 private/parochial
K-12 student population: 33,110

Branch ISD
Primary contact for telecommunications networks:
Eric Bruner, Technology Coordinator
370 Morse Street, Coldwater, MI 49036
Phone: (517) 278-5521  Fax: (517) 279-5777

Cass (Lewis Cass) ISD
Primary contact for telecommunications networks:
Dawn Atkinson, Instructional Services Coordinator
61682 Dailey, Cassopolis, MI 49031
Phone: (616) 445-6202  Fax: (616) 445-2981
Email: datkinso@remc11.k12.mi.us
General district background:
Number of local school districts: 4
K-12 student population: 7,500

Charlevoix-Emmet ISD
Primary contact for telecommunications networks:
Richard Diebold, Director of General Education
08568 Mercer Boulevard, Charlevoix, MI 49720
Phone: (616) 547-9947  Fax: (616) 547-5621
Email: rdiebo@sunny.ncmc.cc.mi.us
General district background:
Number of local school districts: 11 locals, 3 public school academies
K-12 student population: 11,000

Cheboygan-Otsego-Presque Isle (COP) ISD
Primary contact for telecommunications networks:
Jack A. Keck, Director, PACE Telecommunications Consortium
6065 Learning Lane, Indian River, MI 49749
Phone: (616) 238-9394  Fax: (616) 238-7153
Email: pace@freeway.net
General district background:
Number of local school districts: 22
K-12 student population: 20,000

Clare-Gladwin ISD
Primary contact for telecommunications networks:
G. R. Zubulake, Superintendent
4041 East Mannsiding Road, Clare, MI 48617
Phone: (517) 386-3851  Fax: (517) 386-3238
Email: gzubulak@remcen.ehhs.cmich.edu
Deb Dunbar, REMC 5 Director, Gratiot-Isabella ISD
Phone: (517) 875-5101  Fax: (517) 875-2858
General district background:
Number of local school districts: 5
K-12 student population: 9,700

Clinton County RESA
Primary contact for telecommunications networks:
Rose Dudash, Data Processing
4179 South U.S. 27, Box 438, St. Johns,
MI 48879
Phone: (517) 224-6831  Fax: (517) 224-9574
Email: dudash@scnc.ccresa.k12.mi.us
General district background:
Number of local school districts: 6

Crawford-Ogemaw-Oscoda-Roscommon (COOR) ISD
Primary contact for telecommunications networks:
Lyle Spalding, Superintendent
11051 North Cut Road, Roscommon, MI 48653
Phone: (517) 275-5137, Ext. 220  Fax: (517) 275-5881
Mike Wahl
Phone: (517) 275-5121  Fax: (517) 275-8210
General district background:
K-12 student population: 10,700

Copper Country ISD
Primary contact for telecommunications networks:
Dan Sternhagen, Director, REMC #1
Box 270, 602 Hecla Street, Hancock, MI 49930-0270
Phone: (906) 482-3907  Fax: (906) 482-5031
Email: dsternha@ingham.k12.mi.us
General district background:
Number of local school districts: 13
K-12 student population: 7,786

Delta-Schoolcraft ISD
Primary contact for telecommunications networks:
Diane Maltby, Technology Consultant
2525 Third Avenue South, Escanaba, MI 49829
Phone: (906) 786-9300  Fax: (906) 786-9318
Email: maltby@cedar.cic.net
General district background:
Number of local school districts: 8
K-12 student population: 9,157

Dickinson-Iron ISD
Primary contact for telecommunications networks:
Bruce Steinberg, Technology Coordinator
1074 Pyle Drive, Kingsford, MI 49802
Phone: (906) 774-1827  Fax: (906) 779-2087
Email: bruces@diisd.k12.mi.us
General district background:
Number of local school districts: 6
K-12 student population: 8,000

Eastern Upper Peninsula ISD
Primary contact for telecommunications networks:
Jack Thompson, REMC Director
Box 883, 315 Armory Place, Sault Ste. Marie, MI 49783
Phone: (906) 632-3373  Fax: (906) 632-1125
Email: thompson@eup.k12.mi.us
General district background:
Number of local school districts: 12
K-12 student population: 9,038

Eaton ISD
Primary contact for telecommunications networks:
Ron Faulds, Director of Technology
311 West First Street, Charlotte, MI 48813
Phone: (517) 484-2929, Ext. 20  Fax: (517) 543-8016
Email: rfaulds@eaton.k12.mi.us
General district background:
Number of local school districts: 7 (2 primary)
K-12 student population: 15,000

Genesee ISD
Primary contact for telecommunications networks:
Beverly Knox-Pipes, Director, Technology Support Services
2413 West Maple Avenue, Flint, MI 48507-3493
Phone: (810) 768-4436  Fax: (810) 768-4505
Email: bknoxpip@gisd.gisd.k12.mi.us
Barbara Bartkowiak, Novell Network Supervisor
Phone: (810) 768-4549  Fax: (810) 768-7571
Email: bbartkow@gisd.gisd.k12.mi.us
General district background:
K-12 student population: 83,714

Gogebic-Ontonagon ISD
Primary contact for telecommunications networks:
Graydon Blank, Superintendent
202 Elm Street, Bergland, MI 49910
Phone: (906) 575-3438  Fax: (906) 575-3373
General district background:
Number of local school districts: 8
K-12 student population: 4,660

Gratiot-Isabella RESD
Primary contact for telecommunications networks:
Deborah Dunbar, Associate Superintendent, Technology/Media/Instruction
1131 East Center Street, Box 310, Ithaca, MI 48847
Phone: (517) 875-5101  Fax: (517) 875-2858
Email: ddunbar123@aol.com
Matt McMahon, Coordinator of Technology Resources
Phone: (517) 875-5101  Fax: (517) 875-2858
Email: mmcmahon@remcen.ehhs.cmich.edu
General district background:
Number of local school districts: 9
K-12 student population: 15,157

Hillsdale ISD
Primary contact for telecommunications networks:
John Ciaravino, Technology
3471 Beck Road, Hillsdale, MI 49242
Phone: (517) 439-1515 x 112  Fax: (517) 439-4388
Email: ciara@scnc.hcisd.k12.mi.us

Huron ISD
Primary contact for telecommunications networks:
Randy Maurer, Media/Data Management Specialist
1160 South Van Dyke, Bad Axe, MI 48413
Phone: (517) 269-9284  Fax: (517) 269-2844
Email: 74557.701@compuserve.com
General district background:
Number of local school districts: 16
K-12 student population: 7,000

Ingham ISD
Primary contact for telecommunications networks:
Donna Rehbeck, Director
2630 West Howell Road, Mason,
MI 48854
Phone: (517) 344-1217  Fax: (517) 676-1277
Email: drehbeck@ingham.k12.mi.us
Special contact for video networks:
Frank Bommarito, Media
Fax: (517) 676-9726
Email: ibommari@ingham.k12.mi.us
Special contact for data networks:
Jo Ellen Miskowski, Internet/Info Systems Director
Phone: (517) 244-1278  Fax: (517) 676-1277
Email: jmiskows@ingham.k12.mi.us
General district background:
Number of local school districts: 12
K-12 student population: 55,000

Ionia ISD
Primary contact for telecommunications networks:
Michael Keast, Assistant Superintendent
2191 Harwood Road, Ionia, MI 48846
Phone: (616) 527-4900  Fax: (616) 527-4731
Email: mkeast@remc8.k12.mi.us
General district background:
Number of local school districts: 9
K-12 student population: 13,164

Iosco ISD
Primary contact for telecommunications networks:
Mary Kruger, Director of General Education
5800 Skell Avenue, Oscoda, MI 48750
Phone: (517) 739-0300 x 27  Fax: (517) 739-0061

Jackson County ISD
Primary contact for telecommunications networks:
Robert Hayhurst, Director of Technology
6700 Browns Lake Road, Jackson, MI 49201
Phone: (517) 787-2800  Fax: (517) 787-2026
Email: hayhurst@scnc.jcisd.k12.mi.us
Special contact for data networks:
Richard Otto, Coordinator of Data Processing
Phone: (517) 787-2800  Fax: (517) 787-2026
Email: otto@scnc.jcisd.k12.mi.us
General district background:
Number of local school districts: 12
K-12 student population: 28,699

Kalamazoo Valley ISD
Primary contact for telecommunications networks:
J. Mark Rainey, Director, REMC 12 & Instructional Center
1819 Milham, Kalamazoo, MI 49002
Phone: (616) 385-1582  Fax: (616) 381-3523
Email: jrainey@remc12.k12.mi.us
Gary Hubbard
Phone: (616) 385-1588  Fax: (616) 381-0156
Don Dailey, Technology Services Coordinator
Phone: (616) 385-1559  Fax: (616) 381-0156
Email: ddailey@remc12.k12.mi.us
General district background:
Number of local school districts: 9
K-12 student population: 35,000

Kent ISD
Primary contact for telecommunications networks:
Connie Solis, Assistant Director, Technology/REMC
2930 Knapp NE, Grand Rapids, MI 49505
Phone: (616) 364-1333  Fax: (616) 364-1489
Email: csolis@remc8.k12.mi.us
Greg VerVeer, Technical Specialist
Phone: (616) 364-1333  Fax: (616) 364-1489
Email: gverveer@remc8.k12.mi.us
Beth Joyce, Computer Network Coordinator
Phone: (616) 364-1333  Fax: (616) 364-1489
Email: bjoyce@remc8.k12.mi.us
General district background:
Number of local school districts: 20
K-12 student population: 112,956

Lapeer ISD
Primary contact for telecommunications networks:
Larry Godwin, Director of Career & Technical Education
690 Lake Pleasant Road, Attica, MI 48412
Phone: (810) 664-1124  Fax: (810) 724-7600
Chuck Madden, Computer Maintenance Supervisor
Phone: (810) 664-5917
General district background:
Number of local school districts: 5
K-12 student population: approx. 15,000

Livingston Educational Service Agency
Primary contact for telecommunications networks:
Elizabeth L. Berman, Assistant Superintendent for Instruction
1425 West Grand River, Howell, MI 48843
Phone: (517) 546-5550  Fax: (517) 546-7047
Email: berman@scnc.lesa.k12.mi.us
Ingrid DuLac, Media Specialist
Phone: (517) 546-5550  Fax: (517) 546-7047
Email: dulac@scnc.lesa.k12.mi.us
Larry Straits, Data Processing Director
Phone: (517) 546-5550  Fax: (517) 546-7047
Email: straits@scnc.lesa.k12.mi.us
General district background:
Number of local school districts: 5
K-12 student population: 23,258

Macomb ISD/REMC 18
Primary contact for telecommunications networks:
Thomas R. Juett, Director of Technology Services
44001 Garfield Road, Clinton Township, MI 48038-1100
Phone: (810) 228-3410  Fax: (810) 286-1523
Email: tom.juett@moa.net
Arnie Comer, Instructional Technology Coordinator
Phone: (810) 228-3408  Fax: (810) 286-1523
Email: arnie.comer@moa.net
Bill Thompson,
Phone: (810) 228-3388  Fax: (810) 286-8998
Email: bill.thompson@moa.net
General district background
Number of local school districts: 21
K-12 student population: 117,000

Manistee ISD
Primary contact for telecommunications networks:
Dr. Robert C. Tilmann, Superintendent
Manistee ISD
772 East Parkdale Avenue
Manistee, MI 49660
Phone: (616) 723-1689  Fax: (616) 723-1690
General district background
Number of local school districts: 4 and 1 charter
K-12 student population: 3,990

Marquette-Alger ISD
Primary contact for telecommunications networks:
Dr. June M. Schaefer, Superintendent
Marquette-Alger ISD
427 West College Avenue
Marquette, MI 49855
Phone: (906) 226-5101  Fax: (906) 226-5134
Email: jschaefe@nmu.edu
General district background
Number of local school districts: 12
K-12 student population: 12,757

Mason-Lake ISD
Primary contact for telecommunications networks:
Marsha Barter, Supervisor, General Education, and James Pinkerton, Administrative Assistant, Vocational Education
Mason-Lake ISD
2130 West US 10
Ludington, MI 49431
Phone: (616) 757-3716  Fax: (616) 757-2406, 757-4208
Email: barter@westshore.cc.mi.us
Mary Polcin, Supervisor, Business Services
Phone: (616) 757-3716  Fax: (616) 757-2406
General district background
Number of local school districts: 6 public, 1 parochial
K-12 student population: 6,500+

Mecosta-Osceola ISD
Contents: General background information; Video network information; Data/Internet information
Primary contact for telecommunications networks:
Paul Bigford, Director of General Education
Mecosta-Osceola ISD
Post Office Box 1137
Big Rapids, MI 49307
Phone: (616) 796-3543  Fax: (616) 796-3300
Email: pbigford@edcen.ehhs.cmich.edu
General district background
Number of local school districts: 5
K-12 student population: 10,400

Menominee ISD
Primary contact for telecommunications networks:
Joseph Kukulski, Superintendent
Menominee ISD
952 First Street
Menominee, MI 49858
Phone: (906) 863-6550  Fax: (906) 863-7776

Midland County ISD
Primary contact for telecommunications networks:
John A. Person, Deputy Superintendent
Midland County ISD
3917 Jefferson Avenue
Midland, MI 48640
Phone: (517) 631-5892, Ext. 109  Fax: (517) 631-4361
General district background
K-12 student population: 13,948

Monroe ISD
Primary contact for telecommunications networks:
Peter Finney, Supervisor/Media Center
Monroe ISD
1101 South Raisinville Road
Monroe, MI 48161
Phone: (313) 242-5799, Ext. 3100  Fax: (313) 242-5807
Email: finney@misd.k12.mi.us
Special contact for data networks:
Rick Angelocci, Coordinator, Education Technology
Phone: (313) 242-5799, Ext. 3010  Fax: (313) 242-5807
Email: rick@misd.k12.mi.us
General district background
Number of local school districts: 9
K-12 student population: 25,000

Montcalm Area ISD
Primary contact for telecommunications networks:
Dr. Brian Wood, Director of Instructional Services, and Mr. George Winchell, Technology Coordinator
Montcalm Area ISD
Box 367, 621 New Street
Fax: (517) 831-8727
General district background
Number of local school districts: 7
K-12 student population: 13,800+

Newaygo ISD
Primary contact for telecommunications networks:
Dr. Larry Ivens, District Technology Coordinator
Newaygo ISD
4747 West 48 St.
Fremont, MI 49412
Phone: (616) 924-8838  Fax: (616) 924-8817
Email: dr_ivens@ncats.newaygo.mi.us
General district background
Number of local school districts: 5

Oakland Schools (ISD)
Primary contact for telecommunications networks:
Linda Erkkila, REMC Director
Oakland Schools (ISD)/REMC 17
2100 Pontiac Lake Road
Waterford, MI 48328
Phone: (810) 858-1966  Fax: (810) 858-2164
Email: linda.erkkila@oakland.k12.mi.us
Pam Wilhelme, Oakland Schools Television Network Operator
Phone: (810) 858-2163  Fax: (810) 858-2164
Email: pam.wilhelme@oakland.k12.mi.us
Special contact for data networks:
Jim Graham
Phone: (810) 858-2077  Fax: (810) 858-1903
Email: jimgraham@oakland.k12.mi.us
General district background
Number of local school districts: 28, with four charter schools
K-12 student population: 206,528 approximately

Oceana ISD
Primary contact for telecommunications networks:
Rosemary Cary, Director of Technology
Oceana ISD
630 Harvey Street
Muskegon, MI 49442
Phone: (616) 777-2637  Fax: (616) 773-1028
Email: rcary@remc4.k12.mi.us

Ottawa Area ISD
Primary contact for telecommunications networks:
Greg Shepard, Technology Coordinator
Ottawa Area ISD
13565 Port Sheldon Road
Holland, MI 49424
Phone: (616) 399-6940  Fax: (616) 399-8263
Email: gshepard@remc7.k12.mi.us
Special contact for data networks:
Dennis Drooger, Technology Services Support
General district background
Number of local school districts: 11
K-12 student population: 37,910

Saginaw ISD
Primary contact for telecommunications networks:
John Tanner, Technology Supervisor
Saginaw ISD
6235 Gratiot Road
Saginaw, MI 48603
Phone: (517) 799-4321  Fax: (517) 799-5991
Email: tannerj@isd.saginaw.k12.mi.us
Special contact for data networks:
Dan Finnigan, Supervisor of Technology
Phone: (517) 799-4321  Fax: (517) 799-5991
General district background
Number of local school districts: 13
K-12 student population: 32,000

Sanilac ISD
Primary contact for telecommunications networks:
Jill Western, LAN Technician
Sanilac ISD
175 East Aitken Road
Peck, MI 48466
Phone: (810) 648-4700  Fax: (810) 648-4834
Email: jwestern@scc.sanilac.k12.mi.us
General district background
Number of local school districts: 7

Shiawassee RESD
Primary contact for telecommunications networks:
David George, Assistant Superintendent for School Improvement
Shiawassee RESD
1025 North Shiawassee Street
Corunna, MI 48817-1100
Phone: (517) 743-3471  Fax: (517) 743-6477
George Schultz, Director, Administrative Technology
Phone: (517) 743-3471  Fax: (517) 743-6477
General district background
Number of local school districts: 8
K-12 student population: 14,842

St. Clair ISD
Primary contact for telecommunications networks:
James F. Fraser, Jr., Director of Information Technology
St. Clair ISD
Post Office Box 5001
Port Huron, MI 48061-5001
Phone: (810) 364-8990  Fax: (810) 364-7474
Email: jfraser@stclair-isd.k12.mi.us
Cindy Rourke, Dean of Learning Resources
Phone: (810) 989-5642  Fax: (810) 984-2852
Email: crourke@edcen.ehhs.cmich.edu
Andy Frey, System Network Engineer
Phone: (810) 364-8990  Fax: (810) 364-7474
Email: afrey@skyfry.com
General district background
Number of local school districts: 7
K-12 student population: 30,000

St. Joseph ISD
Primary contact for telecommunications networks:
Joan Hiddema, Director of Technology
St. Joseph ISD
Post Office Box 219
Centreville, MI 49032
Phone: (616) 467-5400  Fax: (616) 467-4309
General district background
Number of local school districts: 9
K-12 student population: 12,560

Traverse Bay Area ISD
Primary contact for telecommunications networks:
Steve Norvilitis, Director, REMC 2
Traverse Bay Area ISD
880 Parsons Road
Traverse City, MI 49686
Phone: (616) 922-6216  Fax: (616) 922-7870
Email: snorvilitis@tbaisd.k12.mi.us
Robert Chauvin, NMC University Center (Phone: (616) 922-1078)
Ronda Edwards, Director, Media Services, NMC (Phone: (616) 922-1076)
Fax: R. Chauvin: (616) 922-1080
Special contact for data networks:
Don Shikowski, NMC Project Interconnect (Phone: (616) 922-1094)
Dave Warner, TBA Computer Center (Phone: (616) 922-6270)
Fax: D. Shikowski: (616) 922-1570

Tuscola ISD
Primary contact for telecommunications networks:
Robert Frost, Information Systems Coordinator
Tuscola ISD
1385 Cleaver Road
Caro, MI 48723
Phone: (517) 673-5300  Fax: (517) 673-4228
Email: rjfrost@edcen.ehhs.cmich.edu
General district background
Number of local school districts: 9
K-12 student population: 12,000

Van Buren ISD
Primary contact for telecommunications networks:
Chris Hill, Systems Manager
Van Buren ISD
701 South Paw Paw Street
Lawrence, MI 49064
Phone: (616) 674-8091  Fax: (616) 674-8726
Email: hillc@edcen.ehhs.cmich.edu
General district background
Number of local school districts: 11
K-12 student population: 18,000

Washtenaw ISD
Primary contact for telecommunications networks:
Vivian Lyte, Director of Instructional Services
Washtenaw ISD
1819 South Wagner Road, POB 1406
Ann Arbor, MI 48106-1406
Phone: (313) 994-8100, ext. 1251  Fax: (313) 994-2203
Email: vlyte@isd.wash.k12.mi.us
Special contact for data networks:
Karen Domino, Data/Internet
Phone: (313) 994-8100, ext. 1281
Fax: (313) 994-2203
Email: kdomino@isd.wash.k12.mi.us
General district background
Number of local school districts: 10
K-12 student population: 44,440

Wayne RESA
Primary contact for telecommunications networks:
Ron Sniderman, REMC 20 Director
Wayne RESA
33500 Van Born Road
Wayne, MI 48184
Phone: (313) 467-1580  Fax: (313) 326-0857
Email: rjs@server.greatlakes.k12.mi.us
Special contact for video networks:
Ken Schramm, Television Consultant
Phone: (313) 467-1305  Fax: (313) 326-0857
Email: schraka@wcresa.k12.mi.us
Deborah Belaire, Director
Phone: (313) 467-1596  Fax: (313) 326-2610
Email: belaird@wcresa.k12.mi.us
General district background
Number of local school districts: 34
K-12 student population: 401,018

Wexford-Missaukee ISD
Primary contact for telecommunications networks:
Michael E. Blanchard, Director, Wexford-Missaukee Area Career Technical Center (WMACTC)
Wexford-Missaukee ISD
9905 East 13th Street
Cadillac, MI 49601
Phone: (616) 779-8500  Fax: (616) 779-0071
Gordon Baldwin, Technology Technician
Phone: (616) 775-2294  Fax: (616) 775-0022
Email: gbaldwin@michweb.net
General district background
Number of local school districts: 7
K-12 student population: 10,136 public and 708 non-public

APPENDIX B

MICHIGAN INTERMEDIATE SCHOOL DISTRICT SUPERINTENDENTS

Appendix B
Michigan Intermediate School District Superintendents

Allegan County ISD: James Pavelka, Superintendent
Alpena-Montmorency-Alcona ESD: Thomas T. Lanway, Superintendent
Barry ISD: Thomas S. Mohler, Superintendent
Bay-Arenac ISD: Jon M. Whan, Superintendent
Berrien ISD: Jerry Reimann, Superintendent
Branch ISD: Robert L. Redmond, Superintendent
Calhoun ISD: Roger T. LaBonte, Superintendent
Lewis Cass ISD: John D. Ward, Superintendent
Charlevoix-Emmet ISD: Mark Eckhardt, Superintendent
Cheboygan-Otsego-Presque Isle ISD: James Mick, Superintendent
Clare-Gladwin ISD: G. R. Zubulake, Superintendent
Clinton County RESA: Larry A. Schwartzkopf, Superintendent
COOR ISD: Lyle Spalding, Superintendent
Copper Country ISD: Paul G. Ollila, Superintendent
Delta-Schoolcraft ISD: Dennis J. Stanek, Superintendent
Dickinson-Iron ISD: Mary L. Brien, Superintendent
Eastern UP ISD: Jerry L. Gallagher, Superintendent
Eaton ISD: Jon Tomlanovich, Superintendent
Genesee ISD: David E. Spathelf, Superintendent
Gogebic-Ontonagon ISD: Graydon E. Blank, Superintendent
Gratiot-Isabella RESD: Douglas W. Sasse, Superintendent
Hillsdale ISD: Gary Moore, Superintendent
Huron ISD: William H. Mayes, Superintendent
Ingham ISD: Jann Jencka, Superintendent
Ionia ISD: George Hubbard, Superintendent
Iosco ISD: Jerome Allore, Superintendent
Jackson ISD: Gerald B. Kratz, Superintendent
Kalamazoo Valley ISD: Larry E. Wile, Superintendent
Kent ISD: George J. Woons, Superintendent
Lapeer ISD: Peter M. Holley, Superintendent
Lenawee ISD: William J. Ross, Superintendent
Livingston ESA: Charles L. Johnson, Superintendent
Macomb ISD: Michael R. DeVault, Superintendent
Manistee ISD: Robert C. Tilmann, Superintendent
Marquette-Alger ISD: June M. Schaefer, Superintendent
Mason-Lake ISD: Scott J. Russell, Superintendent
Mecosta-Osceola ISD: Roger D. Dixon, Superintendent
Menominee ISD: Joseph Kukulski, Superintendent
Midland ISD: James A. McKimmy, Superintendent
Monroe ISD: Gerald R. Wing, Superintendent
Montcalm Area ISD: Bradley J. Hansen, Superintendent
Muskegon Area ISD: Michael H. Bozym, Superintendent
Newaygo ISD: Roland D. Marmion, Superintendent
Oakland Schools: James Redmond, Superintendent
Oceana ISD: Thomas J. Pelon, Superintendent
Ottawa Area ISD: J. Randall Bergers, Superintendent
Saginaw ISD: Larry Engel, Superintendent
St. Clair ISD: Joseph Cairni, Superintendent
St. Joseph ISD: Larry Campbell, Superintendent
Sanilac ISD: Frederick M. Cady, Superintendent
Shiawassee RESD: Patrick C. Gilbert, Superintendent
Traverse Bay Area ISD: Michael D. McIntyre, Superintendent
Tuscola ISD: John T. Moore, Superintendent
Van Buren ISD: James D. Mapes, Superintendent
Washtenaw ISD: Michael O. Emlaw, Superintendent
Wayne RESA: Michael Flanagan, Superintendent
Wexford-Missaukee ISD: William Penny, Superintendent

APPENDIX C

SURVEY INSTRUMENT

APPENDIX C

INSTRUCTIONAL TECHNOLOGY EVALUATION SURVEY

JAMES D. MAPES
Van Buren Intermediate School District
701 South Paw Paw Street
Lawrence, MI 49064

This questionnaire focuses on the issues surrounding the evaluation of instructional technology programs active within school districts and consortia of districts in the State of Michigan. Some definitions are provided below so that you know how certain terms are used in the context of this survey. Please use these definitions to help you respond to the questions throughout the survey.

Instructional Technology: Technology which supports, enhances, or supplants traditional technique. Examples include interactive two-way telecommunications, use of the Internet, e-mail, voice mail, and other types of distance learning.

System Evaluation: Any type of formal evaluation of educational technology network effectiveness, including cost-based, use-based, or learner outcome-based evaluation. The evaluation should be directed at establishing the effectiveness of current practice and determining indicators of needed change. System evaluation should be completed periodically.

System Administrator: Any individual whose primary responsibility is to ensure that the instructional technology system is functioning properly and that the needs of users are being met.

INSTRUCTIONS

Please fill out your responses as completely as possible. You may make any comments or explanations on the survey form which you think may clarify a point.

Mail your completed form in the enclosed stamped, pre-addressed envelope.

Mail the pre-addressed postcard indicating that you have completed the survey form. (This ensures confidentiality for respondents.)

Participation in this survey is completely voluntary. You may elect not to participate or not to respond to individual questions.

Please note that by completing and returning the survey instrument and contributing to the data pool, you are giving your consent to use the data provided.

Please check the most appropriate response.

1. Your Role
( ) Superintendent
( ) Technology Coordinator
( ) Principal
( ) Other (please specify)

2. Your Primary Responsibility
( ) Direct Oversight
( ) Instructional Technology Network Administration
( ) Building Administration
( ) Other (please specify)

3. Please list any other responsibilities:

4. Does your district/building make use of an instructional technology network for delivery of educational programming?
( ) Yes  ( ) No

5. If yes, please briefly describe your system.

If your district/building is not currently part of an instructional technology system, please go no further. Thank you for taking time to complete the survey and placing it in the mail in the enclosed pre-addressed, postage-paid envelope. Please remember to mail the enclosed postcard indicating that you have completed the survey. Thank you!

If you answered YES to question #4, please complete the remaining survey questions.
Please check the most appropriate response.

6. How would you describe your system operationally?
6.1 ( ) Two-way video and audio
6.2 ( ) One-way video and two-way audio
6.3 ( ) One-way video and one-way audio
6.4 ( ) Other (please specify)

7. How is signal transmitted?
7.1 ( ) Fiber optic cable
7.2 ( ) Coaxial cable
7.3 ( ) Compressed copper wire
7.4 ( ) Microwave/telephone
7.5 ( ) Satellite downlink/uplink
7.6 ( ) Hybrid transmission (please specify which modes of transmission)
7.7 ( ) Other (please specify)

8. Does your instructional technology system receive regular structured evaluation?
( ) Yes  ( ) No

9. If the answer to #8 is yes, is the evaluation conducted?
9.1 ( ) Annually
9.2 ( ) Semi-annually
9.3 ( ) Quarterly
9.4 ( ) Other (please specify)

Please answer the following questions by circling the response that best describes your feelings and knowledge. Use the following guidelines:
a. Strongly Agree
b. Agree
c. Disagree
d. Strongly Disagree
e. Unsure

10. Prior to the establishment of the instructional technology network, formal goals were established.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

11. The governing body of each participating network entity adopted the established goals.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

12. Effort was made to familiarize individual participants with the established goals.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

13. Individual users, i.e., students, teachers, administrators, were part of the goal development process.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

14. The established goals were made a part of the original evaluation system.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

15. System goals have changed since the network was established.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

16. The evaluation system has changed since it was first established.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

17. Changes in available technology have resulted in changes in the evaluation system.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

18. Changes in pedagogical techniques have resulted in changes in the evaluation system.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

Please check all that apply.

19. Indicate how instructional technology network goals are expressed.
19.1 ( ) As fiscal outcomes
19.2 ( ) As instructional outcomes (changes in teaching methods/technologies)
19.3 ( ) As learner outcomes (changes in the ways students learn)
19.4 ( ) Other (please specify)

Please indicate yes or no by placing a check mark next to your response.

20. Is the system administrator responsible for system evaluation?
( ) Yes  ( ) No

21. Is system administration the sole function of this position?
( ) Yes  ( ) No

22. Is input from system users solicited as part of the evaluation process?
( ) Yes  ( ) No

23. Is input from local districts/buildings sought as part of the evaluation process?
( ) Yes  ( ) No

24. Does a formal process exist for input into the evaluation process?
( ) Yes  ( ) No

Please circle the response that you feel is most appropriate.

25. Following evaluation, reports are generated for local administrators.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

26. Reports are generated for members of the governing board.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

27. Evaluation results are disseminated to system users.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

28. A formal process exists for making changes in the network which reflects information that appears as part of a system evaluation.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

29. The system administrator is given the responsibility for processing evaluation results.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

30. Local administrators have the primary responsibility for implementing system changes as a result of evaluation.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

31. Evaluation guidelines exist to guide the system evaluator in his/her task of assessment.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

32. Training opportunities are provided to the system evaluator which yield information useful in the practical application of data generated through evaluation.
Strongly Agree 1   Agree 2   Disagree 3   Strongly Disagree 4   Unsure 5

Thank you for taking time to complete the survey and placing it in the mail in the enclosed pre-addressed, postage-paid envelope. Please remember to mail the enclosed postcard indicating that you have completed the survey. Thank you!

APPENDIX D

COVER LETTER

APPENDIX D

JAMES D. MAPES
34433 38TH AVENUE
PAW PAW, MICHIGAN 49079
616/557-1873
EMAIL: JMAPES@ACCN.ORG

September 15, 1996

Dear Colleague:

Please allow me to introduce myself. My name is James Mapes and I am a doctoral student at Michigan State University. The area of research in which I have an interest is technology, specifically the way in which instructional technology delivery systems are evaluated for effectiveness.

As a means of pursuing this research, I am seeking information from you and others in similar positions. I am enclosing a 32-item survey which I trust you will consent to complete and return. The survey instrument will not take a great deal of time to finish, and anonymity is guaranteed. To assure a blind process, the packet which you have received includes a postcard. After you have completed and mailed the questionnaire, I would request that you mail the postcard. This will indicate that the questionnaire has been completed. Please be certain to mail the postcard separately from the questionnaire and on a date other than the one on which the questionnaire is mailed. Please note, too, that by returning the survey and contributing to the data pool, you are giving your consent to use the aggregated data provided.

The questionnaire should be returned in the stamped, pre-addressed envelope by Friday, October 11, 1996.

Thank you for consenting to assist me with my research and for taking the time to fill out the survey instrument. If you would like a synopsis of the results of this study, please indicate so on the enclosed postcard. The synopsis will be mailed upon completion of the doctoral program.

Sincerely,

James D. Mapes

APPENDIX E

LETTER OF PERMISSION TO CONDUCT STUDY

MICHIGAN STATE UNIVERSITY

July 23, 1996

TO: James D. Mapes
34433 38th Ave.
Paw Paw, MI 49079

RE: IRB#: 96-313
TITLE: INSTRUCTIONAL TECHNOLOGY EVALUATION IN K-12 EDUCATION SYSTEMS IN THE STATE OF MICHIGAN: A STUDY OF EVALUATION PROCEDURES AND RESULTS TO DETERMINE THE EFFECTIVENESS OF TECHNOLOGY SYSTEMS APPLIED TO INSTRUCTION
REVISION REQUESTED:
CATEGORY:
APPROVAL DATE: 07/23/96

The University Committee on Research Involving Human Subjects' (UCRIHS) review of this project is complete.
I am pleased to advise that the rights and welfare of the human subjects appear to be adequately protected and methods to obtain informed consent are appropriate. Therefore, the UCRIHS approved this project and any revisions listed above.

RENEWAL: UCRIHS approval is valid for one calendar year, beginning with the approval date shown above. Investigators planning to continue a project beyond one year must use the green renewal form (enclosed with the original approval letter or when a project is renewed) to seek updated certification. There is a maximum of four such expedited renewals possible. Investigators wishing to continue a project beyond that time need to submit it again for complete review.

REVISIONS: UCRIHS must review any changes in procedures involving human subjects, prior to initiation of the change. If this is done at the time of renewal, please use the green renewal form. To revise an approved protocol at any other time during the year, send your written request to the UCRIHS Chair, requesting revised approval and referencing the project's IRB # and title. Include in your request a description of the change and any revised instruments, consent forms, or advertisements that are applicable.

PROBLEMS/CHANGES: Should either of the following arise during the course of the work, investigators must notify UCRIHS promptly: (1) problems (unexpected side effects, complaints, etc.) involving human subjects, or (2) changes in the research environment or new information indicating greater risk to the human subjects than existed when the protocol was previously reviewed and approved.

If we can be of any future help, please do not hesitate to contact us at (517) 355-2130 or FAX (517) 432-1171.