{XI/1" .. I u",,"l';‘_" ('1‘! IIIIII A' "Int-1?;gfifr’n’r'fl' "n/ VIZ???" ' ' r v [> ' :u’: » {'ffi . 41"- If? 2' III” P -'I"Iu urn} [[Irin‘vikp; h , .a ' .r {[':"";-(’v a"; "'l"l:" L . {‘ . 1', Ir gain“. I-I ' IPW fir"! .I-I‘m 1.. ki’pl Ln” I“: ‘r v [VI/”2|. r Vinny-"run, 2 I "J 2'43. “~f' 2, ,» {2,7 ' WI ' . ~\ '1'" . ' ., , A 1-" n ”(if "i7... '.‘,.’.’ I u I‘ [7,: a? .:, {5; ,f; 1""; , ' ' r, ‘r- .u .19' r , .44}, . .‘IL‘ ‘ :5 ., i) ff", 'h‘Jr 22:3 0.3:} r , ”til” 1‘! '- l ”3 I 1:], > n» A” vm,’p._’»": v ,' “at" ".1?" i."’ I: 37‘ as .m . h.‘ h’:>;-"{ 43;}: 2 ...? 1,; ' l'ffl', '36.? lnwf: [Ignc’S w; J"? 393 " "I" , 111.51wi a £594» ‘-‘ ,1 5:11:43; V- 1 . V9,; w::.’.>’.‘r'~>,s “1924’“ .5- . . ’ixfi'uy‘n J‘nyi‘ élfiswcvv’) 5’? 5-". 7 wan-nu. v ’il‘.‘ - 1 a _ ’9 I . n) \x 3 11% 52% 5"1" «XVIL’L .273 x." 1,4,, V 4.99»! [34.7- ' “f- ' + ' {4: x ‘- 1 2+ 3134+. _ » , r. ’9»? ’ "'t"‘,'"l‘ ' " I "j?" We 9': ~ I"); “my???” ., . 1,1". 4.“ av . at? w? ‘5’ r :39. 6910.4 iiiA»; If v. Wig "ft. 9:32, ,9- } ,~ 2:2. 9. 4-, <5 :rzgi‘ . 9» 3:1.“ 35 ‘ 511$ \ { " 05‘}??? rah; » 7}" «uh L {3'14 , 5 _ . {C5 98’4“? :1 Wm‘ 1:: +1 , " "7' >55 , . 3+4», .9.) 4.. -,‘>' ’ 4 b- .3 .,,. M~ ”it, "N5 ’7: - hit“; ':' "2’ , f» ’49: 1 r r.- , 24:»- ~« :2? < 4.2 :2 >2 ~3+4+923991M +5.4” 299:: 2:" 13:9; $924235," 4 3* : . .ng . - o , ,4»). ”‘ ~ 285'} $22. «3%: 6,9124»: '4 7 h “ . , '2 . L n L 1;?" in? ”3355“.“ ~_~« ’1':J5;:.:'§.' '1' ' 'L -. ”-54:9554 :-— fins : ,» + ' ' ‘:.+ $331+” $~zfi=wt+n++xz—"»' 2’5 « . ' “1‘!“ A . .4 - , , .5?’ Hykr+ . I ,9»; I «I }rf,a-,. F ,, (J ‘ ”if? "A i 2 - ‘ 3 "FE, {#9 . ‘ylry‘méffi, ”fig-fl: 51:?ng 9,4315}??? ‘-' gm 1 1;: , 392%? 9,51% 4;". :3 5.2% ~ L+*:»tr'.=1.~:-'-:>'2¢w, ”ma 1+ ”Mi-”95¢ "'L-v' v.4 “IQ [a]; {[er [lg-,1“ -[[);9" 3.9: 5.. 7: ’J‘ [2: 353,53?” [’n‘: .1 at}; a?” N 152» 11-2.: « {+2 a u w». 22 w ~ - a ,. , my f. . 2 f” .r r “9., {955.9(1) fin» ~ [[Jhg‘fy... A I" yr- "4 2:) my mr L [543;125 .. . g ,2"! . .- -...,..m.-/ [3’ if? "q?“ I’- ‘7 3’ s ,5 ,. . 
h- I - M- . ’Q'c ,2 ’4 #239? {3‘3 13.71% ,. f“... . 92:: [ 4" .wggffirfi' P"'% 73,937,?!“ J ‘ -% ”I [[I‘ ”iii-ii- [géy ” )9 .‘i’eflx’ ,. "*’ ' ' 95%;}.31'j It??? fivul‘é.. f {rwfi‘ ' ’5’” ’5' L“ $5 4 ”FM" . L», >- a ' [is ad}?! ." ,{vé’ . M2”, r ii: . If. 9“" . [ [ ' 3. 3X1. :9) 5:ny .2:’:‘:}{v:jrrr~"':j ' . ' r r iii 1 . “31,": [[Lflzt, 7,, :91,- 3'; :q' 3;”; '5' ‘; 31: x. fry jaw-:25! 4’5 r '-' ” f a \brn a: .1 i‘ (’39: r‘- [ 120’» XL/r/ .‘f":“ 19.}, #1:» '7'!) gégfigw‘? £17" I} 3' 45’9” 22" "5' :4 "+4442?” 9"?ij " 2%,); .. . "2 [,rnyfiwfifl' ”9“- ”? 1].”? M}? ”21:71. . é’ .rLfL}, {$9,}3izgiéfifijgrfiw[99L 277’ ’ ’ f 9’4, $97.13 f“"’}”/))>’N :44,;»;§’r'2§;’:}"v’2,{/ 4;. ~," 2%!“ -,, . '94 3544444474232. 1:2” I”??? '2 ’1’" -4 If "$2323, fifhgkdg'rflg 4337'" 4’41"” 24/? ”13’ v’fu’ LM'KW’F' ”V '65} #224 film fl“ a 4. ‘32:; ~;’ , +4“; 1? fig}: 3%? $33} fian‘ 3-. N :3’: if}, 97”.; [‘ #yéfi? I; ' fl ('1’, [I»".."‘,.’.!‘ ,lijl 1',va ’ a. “I”?! v (I; I./;’L-’I' iii/n). up”. pig/j]; ”Nani," '] ? [isn't-l)?! {J}. ,1." L ‘ 7 )5” if?!» m N "’ ,r, ":QLQ? W"; {42" ,. ' 2191?” H ' If: 1'9)” x’fi’ifif/rpégfigéé ,lfffl/fffl/IVA’IML if"? kg; {1.25;}! iii/fa n [ 03%}:9’9‘5? {Pg-gfl’é/ 9.724.:{34 :_[éi[-€‘r‘vudrfil [1,“? 2' ~ ' a» 2+" 4w W 2259,4995; 24.44.99,.9999, car- [:3 ”'2’; AI '2 4 9 L , , r. n . ’2" fl 2m 5 , ‘ . ' “’ ”+37% nan-14444 3"?” " W;_22:~2H~;§9’1-‘74;99f/42M?L 3/ . . ,- L' ' {($12’I'igfyrf'fifi-J'NL,’ #2" 33929;?) 'firfifii {f’l’""}r {/‘jyffmfi'jmw '4 {F4922 'fikw’fi’w " . 'L. >2, 2“ MIL/5}?" Mailflfvmfifyié a. .1' +4: D . .1 :2, 9622:! , .944;,:44?§Cg449§2{4;4¢;¢f’ 94,239., i; I 2 3h 1% [L 94‘ r N'xnl; {VIC ”I! !' 9{'>"H[9[, ’kxfl;[. 7M9}? 1 I I. l‘ ' “’7':n‘ {If-(r I 5 >1 WP ( 71"]:er ' LI ‘ ) "(I [J r’r/L [2:53; [95 ’." [6‘9” "ta/VP“ ‘9.” ’1 [J' I:r:{d’.'[:14'r'[.r, _ '15.” ff t’9_,l,i'("[‘ '9‘ f), vi/f.’ #95:? y-’9-’1.9’9.).,’ :- :I'I‘flf Irf't’f' ’r X ”.3 {fl/Ir‘iy . ”(lid ’9' IfiJ12/9’9‘/'a'.4g"~’,‘ 3;},(1 “in?!” H"! 
LIBRARY
Michigan State University

This is to certify that the dissertation entitled

A Study of Spreadsheet Problem Solving and Testing for Problem Solving Ability

presented by

Edgar Rommel Leon-Ayala

has been accepted towards fulfillment of the requirements for the Ph.D. degree in Educational Administration.

Major professor
A STUDY OF SPREADSHEET PROBLEM SOLVING AND TESTING FOR PROBLEM SOLVING ABILITY

BY

EDGAR ROMMEL LEON-AYALA

A DISSERTATION

Submitted to MICHIGAN STATE UNIVERSITY in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY

College of Education

1988

ABSTRACT

A STUDY OF SPREADSHEET PROBLEM SOLVING AND TESTING FOR PROBLEM SOLVING ABILITY

BY

Edgar Rommel Leon-Ayala

The purpose of this study was three-fold: to develop a definition of dependency table problem solving; to state a theory about how individuals solve problems using the dependency table approach; and to develop a standardized test to measure an individual's ability related to spreadsheet problem solving. One hundred test items were created and then reviewed by ten experts in the field of computer science. These experts were selected on the basis of reputation, experience, and prestige in the field of spreadsheet and general problem solving. After the test item review, the resulting twenty-two test items were administered to another ten experts and twenty novices in order to obtain scores which would differentiate between the two groups. A selection and validation of the test followed this process. Finally, the test was given to a sample group of college students and experts to develop the norms. Appropriate statistics were used to analyze the data. These included measures of central tendency, variation, correlation, discrimination, and reliability, as well as test norm statistics. Various inferential statistics were used to test differences among groups and conditions. It was discovered that there were two independent variables: abstractness and information processing. Spreadsheet problem solving ability may be related to the amount of abstractness and information processing a problem contains. Relationships between variables were presented by means of statistical analysis and using a standardized test developed by the researcher.
Spreadsheet problem solving can be measured with a multiple choice test. Although distributions for this test were negatively skewed, satisfactory reliability and validity coefficients based upon expert versus novice performance were achieved. There is evidence for content independent versus content dependent factors in this spreadsheet test: content independent items measure knowledge of the spreadsheet itself, while content dependent items measure knowledge of the particular algorithm necessary. Complexity of problem solving tasks, in terms of the method presented and the type of response required, was directly related to the difficulty of the solution.

Copyright by Edgar Rommel Leon-Ayala 1988

Dedicated to my wife Herlinda

ACKNOWLEDGEMENTS

This dissertation is the product of many experiences, some of which leave unforgettable impressions. Of all the multiple interactions I have experienced in the recent past, some stand out. Those contacts with Dr. John Vinsonhaler, my thesis director, who made this study worthwhile and helped me during the long hours of research frustrations; Dr. Eldon Nonnamaker, my committee chairman, for his help and support; and the members of my committee, Dr. Henry Kennedy and Dr. Louis Hekhuis, are particularly indelible. To them I owe much. To my beloved wife, Herlinda, my debt is infinite. A listing of the contributions she has made would fill many pages. To my parents, who have always fostered a love for learning and have given me support during this time. To Dr. Robert Sage and the Sage Foundation for their financial support. To Mr. Adolfo Navarro from the Office of Research and Development of Michigan State University and Dr. Norman Bell for their valuable contributions to this research. Finally, to all the people who had confidence in me and offered to help in the study, I express my gratitude.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1  INTRODUCTION
    Overview of the Topic
        Revolution in Business
        Brief History of Spreadsheets to the Present
    Statement of the Research Problem
        Problem 1
        Problem 2 (Part One)
            Content Theory of Problem Solving
        Problem 2 (Part Two)
            The Complexity Theory of Problem Solving
            Independent Variable and Types of Problem Statements
            Independent Variable -- Level of Abstraction
        Problem 3
    Research Questions
    Hypotheses of the Study
    Definition of Terms
    Significance of the Study
    Assumptions
    Limitations

CHAPTER 2  REVIEW OF LITERATURE
    Introduction and Overview
    Definition of General Problem Solving
    Relevance of Existing Theories to Problem Solving
        Simon's Theory of Problem Solving
        Simon's Theory of Information Processing
        Primitive Symbols and Information Processing
        Simon's Theory of Abstraction
        Summary of Simon's Theories
    General Computer Problem Solving Applications in Higher Education
        Higher Education Administrator Application
        The Use of Computers to Examine Impacts of Academic Program Plans on Faculty Staffing Levels
        The Impact of Computer Training of College Administrators
        Computer Training of Administrators
    General Spreadsheet Applications to Higher Education
        Spreadsheet Applications in College Settings
        Grade Tally
        Production Decisions
        Computer Spreadsheet Application for the Administrator in Higher Education
        Use of Microcomputers in Business Education
        Personal Computers in the College Business Office
        Applications of Computers Aptitude Test
    Summary

CHAPTER 3  METHODOLOGY
    Introduction
    Population and Sample
    Instrumentation and Test Development
        Research Problem 1
        Research Problem 2 (Part One)
        Research Problem 2 (Part Two): Determine if Complexity is Related to Difficulty
        Research Problem 3
    Pre-testing of the Items
    Data Collection
    Method of Analysis
        Question #1
        Question #2 (Part 1)
        Question #2 (Part 2)
        Question #3

CHAPTER IV  FINDINGS OF THE STUDY
    Introduction
    Problem 1: Create an Operational Definition of Spreadsheet Problem Solving
        Results
    Problem 2 (Part One): Determine Evidence for Content Independent and Dependent Components
    Problem 2 (Part Two): Determine if Basic Information Processing Relationship of Complexity and Difficulty Holds for Spreadsheet Problem Solving
        Summary
    Problem 3: Determine if a Reliable and Valid Test is Possible

CHAPTER V  SUMMARY, CONCLUSIONS AND RECOMMENDATIONS
    Introduction
    Population and Sample
    Instrumentation and Test Development
        Research Problem 1
        Research Problem 2 (Part One)
        Research Problem 2 (Part Two)
        Research Problem 3
    Data Collection
    Method of Analysis
        Research Problem 1
        Research Problem 2 (Part One)
        Research Problem 2 (Part Two)
        Research Problem 3
    Recommendations for Further Research
    Observations

APPENDIX A  DEPENDENCY TABLE PROBLEM-SOLVING TEST
APPENDIX B  THESIS DATA
BIBLIOGRAPHY
LIST OF TABLES

    Frequency distribution of total scores for all groups (n=74)
    Frequency distribution for dependency table problem solving test scores, all subjects (n=74)
    Frequency distribution for spreadsheet subtest for all groups (n=74)
    Complexity levels of the test
    Frequency distribution of the table value subtest scores, all groups (n=74)
    Frequency distribution of the narrative value subtest scores, all groups (n=74)
    Frequency distribution of the table formula subtest scores, all groups (n=74)
    Frequency distribution of the narrative formula subtest scores, all groups (n=74)
    ANOVA summary table for the complexity conditions: table value, table formula, narrative value and narrative formula
    Validity analysis for total scores
    Validity analysis for spreadsheet subtest scores
    Validity analysis for dependency table subtest scores
    Frequency distributions of the total test scores for the expert group (n=10)
    Frequency distributions of the total test scores for the novice group (n=64)
    Complexity levels of the test

LIST OF FIGURES

    Problem Statement
    Dissertation Procedures
    Short Problem Statement and Multiple Choice Response that Represents an Operational Definition of Spreadsheet Problem Solving
    Overview of the Methods Used for Each Research Problem
    Methods Used for Research Problem 2 (Part One)
    Methods Used for Research Problem 2 (Part Two)
    Methods Used for Research Problem 3
    Total Test Score Frequency Distribution
    Subtest--Dependency Table Problem Solving
    Subtest--Spreadsheet Language
    Frequency Distribution of Each of the Subtests TV, TF, NV, NF
    Total Test Scores Distribution Novices

CHAPTER 1

INTRODUCTION

Overview of the Topic

Since Bricklin (1978) invented them, electronic spreadsheets have been important tools for helping business people organize and manipulate data to solve problems. As a result, spreadsheets have changed the way business is done throughout the world. Millions of people in business and education are now using spreadsheets to make important decisions that involve analysis of large amounts of data. Administrators at public as well as private educational institutions, hospitals, businesses and many other fields use spreadsheets to make important management decisions. The following section is an attempt to introduce the nature and impact of spreadsheets.

Revolution in Business

A variety of computer programs for analyzing data already existed before 1978, but these programs required a skilled programmer to work out all the data entry and computational steps. This was a very difficult and time consuming activity. It could take days or weeks to get results with these programs. With the invention of the Visicalc electronic spreadsheet program, the same calculations could be done almost instantly. The Visicalc program was the first to put the entire analytical process back in the hands of the decision makers. This way, the business administrator was able to control and perform all the analyses without having to rely on an intermediary from the data processing division to create the model and run the job on a mainframe computer (Williams, 1984). The invention of the Lotus 1-2-3 electronic spreadsheet software made decisions even easier and faster (Lotus Corp., 1983). Lotus 1-2-3 offers user-friendly, menu-based spreadsheet software.
A menu-based program enables the user to communicate with the computer by choosing one of the clearly described options presented on the computer menu screen. Just by choosing one option and pressing one key of the computer keyboard, the person can do difficult operations. This feature eliminates the need for a time consuming programming language specialist. Now the content specialist does his or her own programming with the help of the Lotus 1-2-3 electronic spreadsheet software package. The Lotus 1-2-3 spreadsheet program does not require the user to be a technical expert. In applying a spreadsheet, the user types in the data as a table, and after all the data are typed in, the user can manipulate the tables, recalculate, and have immediate results for business decision making (Harris, 1983). By having access to mathematical and statistical analyses of data, the administrator has a better understanding of his/her working environment (Thomas, 1984). An electronic spreadsheet can be used to store and manipulate any type of numerical data. In business, electronic spreadsheets are being used increasingly by managers as an aid in decision making. By constructing a simulated model of a business activity where formula inputs and outputs are identified, the manager can use the spreadsheet to understand the present situation and to predict the outcome of particular actions. No matter how complex, every worksheet is built from the same three basic components. The basic components of a spreadsheet are words, numbers and formulas. Words, numbers and formulas can be inserted in cells referenced by rows and columns. Given this information, a spreadsheet system performs calculations on rows and columns, stores the results, saves, retrieves and prints out worksheets or representative graphics of the results. Electronic spreadsheets have offered many benefits to business, education and other fields. Berst (1983) gives three benefits of spreadsheets:

1.
Spreadsheet programs replace the laborious, time consuming pencil and calculator method of computing, since a spreadsheet can almost instantly recalculate the entire worksheet every time a new or updated entry is made.

2. Spreadsheets eliminate arithmetical errors that would otherwise force people to spend extra hours proofing, correcting, reproofing, and so on.

3. Decisions will benefit from the improved speed and accuracy described above. With spreadsheets, different "what-ifs" (i.e. projections) can be examined before the decisions are made.

Spreadsheet uses mentioned by Berst (1983) are: case history studies for classroom, research and private consulting projects, profit projections for new software products, financial forecasting, consulting, negotiating, interpolating bank exchange rates for foreign currency, cash reports, telecommunications with mainframe computers and tracking corporate accounts.

Brief History of Spreadsheets to the Present

A major event in the history of computers was the creation of the electronic spreadsheet, the software that almost single-handedly took the micro-computer out of the hobbyist's garage and placed it in the center of the executive's desk. The electronic spreadsheet was invented by a student at Harvard Business School named Dan Bricklin, who watched his professor write a spreadsheet on a blackboard one day. The example involved numerous "what-if" exercises. The process of setting up a hypothetical company (using figures from a real one) and asking what would happen if the company sells this division, markets that product, or makes the labor settlement is valuable for decision making. Bricklin had an idea for an electronic spreadsheet and he took it to Bob Frankston, a computer sciences student at M.I.T. In October 1978, the two men coded the first version of what was to become "Visicalc" (for visible calculation). Their professors and other businessmen were not impressed with the idea, but an M.B.A.
from Harvard, Dan Fylstra, was impressed. Together, they created a company called Visicorp, which subsequently became one of the world's largest software suppliers. This electronic spreadsheet changed the way business is done throughout the world.

Based on the Visicalc idea another program was invented. An article about the Lotus 1-2-3 program, published in the Wall Street Journal late in 1982, was headed "New Program for the Personal Computers Termed Most Significant Since Visicalc." Lotus 1-2-3 started a revolution in microcomputer software. It became the best selling program of any type for the IBM PC. Both InfoWorld, a microcomputer trade publication, and Fortune, the widely read business magazine, named Lotus 1-2-3 Software Product of the Year for 1983.

Lotus 1-2-3 does two things to deserve this praise. The first is to successfully integrate an electronic spreadsheet, database functions, and graphics in one package. Integrated spreadsheets allow the user to move back and forth between these modes -- to analyze information both numerically and graphically, and to use the database functions to sort the rows of a worksheet into a new order or to search for entries that meet a particular criterion. There are other integrated products on the market, but they have not met with Lotus 1-2-3's success.

There are three major areas in which Lotus surpasses the competition. First, Lotus introduced an "expanding cursor." Many spreadsheet functions -- copying, moving, printing, etc. -- require the user to designate a range of cells for action. Lotus displays selected cells in reverse video (dark characters on a light background). The expanding cursor makes the task of selecting ranges particularly easy (Williams, 1984). Next, Lotus makes extensive use of full English words, common names, and long, informative prompt messages. By reading the screen, you always know what choices are available and what response Lotus 1-2-3 expects.
Finally, 1-2-3 introduced a tremendous library of help screens. These screens display help that is keyed to the user's particular needs at a particular point in the program.

The best method of clarifying present uses of spreadsheet problem solving is with a case study. The following is an example of how spreadsheets were used by a college business professor. Eric Rosenfeld (1986) was a professor of finance at Harvard School of Business. The type of spreadsheet used by Rosenfeld was Lotus 1-2-3 with an IBM personal computer and a dot matrix printer. He used spreadsheets for his students to study case histories of corporations to learn to use financial ratios and other numerical tools for determining how well a firm is doing, projecting where it is headed, and choosing its best investment options. The questions Rosenfeld presented to his students are the following: would a bank loan money to this company? How severe are its problems with respect to inflation? What does its cash flow projection look like? To answer questions like these and gain an understanding of a case study, Rosenfeld used Lotus 1-2-3. He plugged the key numbers into templates he had previously prepared. Templates are pre-recorded worksheets. One such template gave the opportunity to take a quick look at key financial ratios by filling in the blanks with the new numbers. Likewise he had a template for a simple balance sheet, which he modified slightly to fit each new case and then used to make projections.

Rosenfeld also used Lotus 1-2-3 for problem solving. He had several ongoing projects, including one on corporate risk management (how large companies can minimize financial risks). To do this research Rosenfeld downloaded data from large computers into his IBM personal computer by using a modem and a telephone. From Harvard's mainframe computer and from outside data banks he got information on securities, interest rates, stock prices, and so on.
He stored data as a file on the personal computer disk drive. The information was then read into Lotus 1-2-3 for manipulation. Rosenfeld views the value of spreadsheet software to the businessman as the following: "Using a spreadsheet program, you can do in seconds what would take many minutes by hand. Computers also decrease arithmetical errors and increase the time available for analyzing issues" (Berst, 1983). Other uses of spreadsheets are profit projections for new software products, financial forecasting, consulting, negotiations, interpolating bank exchange rates from foreign currency, cash reports, telecommunications with mainframe computers and tracking corporate accounts.

Statement of the Research Problem

Much research has been done using spreadsheets to train or problem solve in content areas. For example: using spreadsheets to monitor a portfolio (Meiyers, 1986), or using personal computer spreadsheets to improve personal productivity (Ludlow, 1987). Spreadsheets have also been used to train people to solve content problems such as real estate problems and stock market problems. This dissertation research is different from the above. This study relates spreadsheet problem solving to psychological research in three ways. The first way was to relate spreadsheet problem solving to the psychology of problem solving by providing an operational definition of spreadsheet problem solving. The second way was to relate spreadsheet problem solving to cognitive theory. The third was to relate spreadsheet problem solving to traditional educational measurement methodology. Figure 1.1 illustrates the research problem. Figure 1.2 explains the procedure used in the dissertation. Moving from the top to the bottom of the figure are procedures associated with problems one, two and three. At the top center, procedures are shown for Problem 1: providing the operational definition of spreadsheet problem solving.
Next, at the center, is the procedure for studying Problem 2: the content and complexity problem. At the left center of the figure, procedures are shown for the creation of the content dependent subtest. At the right center of the figure are shown the procedures for the content independent subtest. At the bottom center of the figure is shown the procedure for Problem 3: standardized testing of Spreadsheet Problem Solving.

The problem is to investigate Spreadsheet Problem Solving from three points of view: (1) as a problem solving paradigm, (2) as a new area to apply cognitive psychology theory, and (3) as a new ability on which to apply education measurement methods.

Problem 1: Create an Operational Definition of Spreadsheet Problem Solving.

Problem 2: Part 1. Determine evidence for content dependent and content independent components of Spreadsheet Problem Solving.

Problem 2: Part 2. Determine if the basic Information Processing relationship of complexity and difficulty holds for Spreadsheet Problem Solving.

Problem 3: Determine if a reliable and valid test of Spreadsheet Problem Solving is possible.

Figure 1.1 - Problem Statement

[Figure 1.2 - Dissertation Procedures: a flowchart, unreadable in this copy, tracing the procedures from the operational definition of Spreadsheet Problem Solving (Problem 1), through generation of the content dependent (Dependency Table Problem Solving) and content independent (Spreadsheet Software Language) subtests with simple and complex problem statements and response types (Problem 2), to their combination into the Spreadsheet Problem Solving Test (Problem 3) and the conclusions and recommendations.]

A.
PROBLEM 1: An operational definition for spreadsheet problem solving

An operational definition is an observable set of steps one takes to explain a given procedure or subject area (Alexander, 1987). The operational definition of spreadsheet problem solving consisted of three parts. The first part is a set of problem statements. The second part is a set of multiple choice responses for each problem. The third part is the selection of the correct solution. In this study, spreadsheet problem solving is represented by a score on a set of problems like the one presented in Figure 1.3.

        1        2       3       4
    1   NAME     PAY1    PAY2    TOTAL PAY
    2   SMITH    10.00   10.00   20.00          Problem Statement
    3   TAYLOR   12.00   12.00   ?              (represented as a table)

    What is the total pay for Mr. Taylor?
    a) 20.00                                    Multiple Choice Response
    b) 24.00                                    (represented as values)
    c) 25.00
    d) none

    As can be seen, the correct answer is b.

Figure 1.3. Short Problem Statement and Multiple Choice Response that Represents an Operational Definition of Spreadsheet Problem Solving.

Note that the critical characteristic is that the entries in some of these cells depend upon the entries in other cells. A cell is the specific row and column location of a quantity in the table problem or spreadsheet; e.g., for Figure 1.3, Cell 4-3 = Cell 2-3 + Cell 3-3. There are many independent and dependent variables suggested by the paradigm. For example, one variable that could be used is the problem statement (e.g. Figure 1.3). The problem statement could be similar to the one presented in Figure 1.3 or it could consist of a narrative or word problem. Another independent variable which could be used is the type of response for each problem statement. The response set could be represented as a selected set of values as in Figure 1.3 or as a set of formulas. There are many other potential independent and dependent variables for the operational definition of spreadsheet problem solving.
These potential variables include (a) the number of operations needed in solving the problem itself -- in other words, how many rows and columns of information are needed in the spreadsheet to get the correct solution -- or (b) what kinds of mathematical operators (multiplication, division, subtraction, or any combination of these) are being used.

B. Problem 2 (Part One): Generating Content Dependent and Independent Subtests

As shown in Figure 1.2 (Box 2), part of the second research problem was the examination of content dependent and independent components of Spreadsheet Problem Solving.

Content Theory of Problem Solving

Cognitive Theory suggests that there are two basic types of knowledge one needs to solve problems: content dependent knowledge and content independent knowledge. Content dependent knowledge is knowledge which helps only with problems from a particular field of study. Thus, in business management, preparing budgets requires the knowledge of certain algorithms or procedures. These are different from the algorithms needed in other fields. Content independent knowledge is knowledge which helps solve problems in many fields. In this study, content independent knowledge includes knowledge about the use of the Lotus 1-2-3 spreadsheet software. The administrator will have to know the specific computer language, procedures and commands for the Lotus 1-2-3 spreadsheet program. This means the administrator will need computer skills and software knowledge in order to solve any particular problem with a microcomputer. Content independent knowledge does not require the administrator to be an expert in any specific subject area such as accounting, finance or education. Many psychologists argue that complex problem solving really involves only content dependent knowledge. The computer industry by contrast regularly assumes that content independent computer knowledge is also necessary.
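The dependency relation at the heart of this paradigm, e.g. Cell 4-3 = Cell 2-3 + Cell 3-3 in Figure 1.3, can be sketched in code. The following is a minimal illustrative sketch in Python, not code from the study's instrument or from Lotus 1-2-3; the (column, row) addressing, the `cells` and `formulas` tables, and the `evaluate` helper are all assumptions made for illustration:

```python
# Minimal sketch of a dependency table (illustrative only).
# Cells are addressed (column, row), matching the "Cell 4-3" notation,
# so cell (4, 3) is Taylor's TOTAL PAY in Figure 1.3.

# Literal entries typed into the worksheet.
cells = {
    (1, 2): "SMITH",  (2, 2): 10.00, (3, 2): 10.00, (4, 2): 20.00,
    (1, 3): "TAYLOR", (2, 3): 12.00, (3, 3): 12.00,
}

# A dependent cell holds a formula: a function of the other cells.
formulas = {
    (4, 3): lambda c: c[(2, 3)] + c[(3, 3)],  # TOTAL PAY = PAY1 + PAY2
}

def evaluate(address):
    """Return a cell's value, computing it from its formula if needed."""
    if address in formulas:
        return formulas[address](cells)
    return cells[address]

print(evaluate((4, 3)))  # 24.0, answer (b) in Figure 1.3
```

The critical characteristic noted above is visible here: the value of cell (4, 3) is stored nowhere; it depends on the entries in cells (2, 3) and (3, 3).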
The procedures for developing a content independent subtest are shown at the right side of Figure 1.2 (Boxes 9, 10, 11). They are procedures that are explained in Chapter III. The procedures for developing the content dependent subtest are shown in Figure 1.2 (Boxes 4, 5, 6, 7). Since this procedure involves a separate theoretical issue, it is considered in the next section.

Problem 2 (Part Two): Generating the Content Dependent Subtests by Varying Problem Complexity

The content dependent subtest was created by generating dependency table problem solving items using the format described previously. The item content was common business and administrative problems (e.g. budgets, payroll, etc.). The complexity was varied with problem statement and type of response as shown in Figure 1.2 (Boxes 3, 4, 5, and 6).

The Complexity Theory of Problem Solving

Herbert Simon (1959) is considered one of the outstanding scholars on information processing theory. In his studies, he views a human as a processor of information. Thus, problem solving requires information processing in terms of creating solutions to the problem (e.g. by modifying a known solution to fit the new situation or problem presented). Simon's theory involves objects like the problem, a problem space or cognitive structure, a problem solution selection, and its application. Problem complexity is an important aspect of the theory. For example, let us suppose that in one problem the solution requires one step and in another problem the solution requires the performance of two steps. Simon's theory suggests that the second problem is more complex than the first problem because it requires more information processing. Information processing theory assumes that the more complex the problem, the more difficult it will be to solve.
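The one-step versus two-step comparison above can be made concrete with a small sketch. Modeling a problem as a list of elementary operations, with complexity taken as the operation count, is this editor's illustration of the step-count idea, not Simon's own notation.

```python
# Complexity as step count: a problem is modeled as a list of
# elementary operations, and more operations imply more processing.
table_problem = [
    ("add", 12.00, 12.00),                 # compute directly from the table
]
narrative_problem = [
    ("translate", "story into a table"),   # extra comprehension step
    ("add", 12.00, 12.00),                 # then compute as before
]

def complexity(problem):
    """Count the elementary operations a solution requires."""
    return len(problem)

# Under this measure the narrative version is the more complex problem,
# and the theory predicts it should therefore be the more difficult one.
print(complexity(table_problem), complexity(narrative_problem))
```
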
That is, implicit in Simon's information processing theory is the notion that the more complex the problem, the more information processing is required and therefore the more difficult the problem. One method of varying complexity is to add a "translation" or "comprehension" step to the problem solution. A good example is word problems in arithmetic. Thus a problem stated as a paragraph (e.g. "Tom has 50 marbles and Sue has 60. How many more marbles does Sue have than Tom?") is more difficult than solving the same problem stated as an arithmetic table. For example:

  60
- 50
-----
   ?

The reason is that the "word problem" requires the additional step of comprehending the "meaning" of the problem. In this study, complexity in terms of multiple tasks was introduced by varying the type of problem statement as follows.

Independent Variable -- Types of Problem Statements

Problem statements were represented first with a table format and second with a narrative format. The table format simulates an accountant's spreadsheet where it includes rows and columns of a given length. The narrative format required any given problem to be read as a story. This means that the problem solver was required to read a paragraph or two in order to understand a given problem. Thus, in theory at least, in narrative statement form the subject was faced with two tasks: (a) reading the story and creating the table, and (b) using the table to solve the problem. In the table-statement form the subject was faced only with one problem. There are many other possible problem statement formats that could be used to vary complexity, but this researcher identified these two as the most relevant to the area of Higher Education Administration.

Independent Variable -- Level of Abstraction

Another aspect of complexity is response abstraction. Some psychologists argue that complexity is also related to the concreteness or abstractness of responses required by a problem.
Thus, the processing of simple numerical values (e.g. 100+200=300) is less complex than the processing of formulas (e.g. A1=100, A2=200, A1+A2=300). In the present study, problem abstraction was represented by varying the type of response. In the concrete condition the subject had to recognize a value as the correct answer to a given problem. In the abstract condition, the subject had to recognize a formula as the correct answer to a given problem. In the present experiment, it is assumed that recognizing or writing a value requires less information processing than recognizing or writing a formula.

Problem 3: Determine if a Reliable and Valid Test is Possible

The methodology of standardized testing requires standardized instructions, objective methods of scoring, and known reliability and validity (Mehrens, 1987). The question is whether such methodology could be used with spreadsheet problem solving. The procedure used in studying this problem is summarized in Figure 1.2 (Boxes 12, 13, 14, and 15). As may be seen, the two subtests already discussed were combined into a single test with standardized instructions and scoring keys. The test was administered to expert and novice subjects.

Research Questions

Question #1: Can a useful operational definition of spreadsheet problem solving be developed using low cost multiple choice problem formats?

Question #2 (Part One): Does the content dependent (dependency table problem solving) subtest measure different abilities from the content independent (spreadsheet language) subtest?

Question #2 (Part Two): Are the more complex problems (using narrative presentation with alternative formulas) more difficult than the simpler problems (using table presentation and numerical value alternatives)?

Question #3: Will total test scores show evidence of reliability and validity?
Hypotheses of the Study

Hypothesis #1: Scores on the content independent subtest are not highly correlated (the correlation is less than the reliability of either of the subtests) with content dependent subtest scores.

Hypothesis #2: As complexity increases, difficulty increases.

Hypothesis #3: Spreadsheet problem solving test scores will show evidence of reliability and validity.

Definition of Terms

In order to understand some of the problems and explanations that are included in this dissertation, a collection of definitions is presented.

Boot -- To start a computer by loading in part of the operating system.

Cell -- In spreadsheet processing, the intersection of a row and a column on a worksheet. Cells are identified by their row and column coordinates.

File -- A collection of related records.

Hardware -- The physical equipment in a computer system.

Lotus 1-2-3 -- Spreadsheet software for the IBM microcomputer systems.

Problem Solving -- Demonstrated ability to analyze problems and to select and/or develop appropriate procedures for discovering the correct solution.

Software -- The generic term for any program or programs that control the operation of the computer.

Spreadsheet -- A business application program designed for budgeting or other problems involving computations using rows and columns.

Mode -- Method used to answer a question. It could be either manual (paper and pencil) or computer mode (using a computer keyboard).

Subtest -- Part of a test, scored separately.

Spreadsheet Problem Solving Test (SSPST) -- A test composed of the Spreadsheet Language subtest and the Dependency Table Problem Solving subtest.

Problem Formats -- Structure of a problem as presented to a subject. In the study, format includes a problem statement and a set of multiple choice responses.

Significance of the Study

The researcher was unable to find in the literature any similar studies of spreadsheet problem solving.
This is probably the first study of its kind in which an attempt is made (a) to relate basic psychological theory (Simon, 1957) to spreadsheet problem solving and (b) to study the application of standardized measurement to spreadsheet problem solving.

Assumptions

The assumptions of this study were the following:

1. The capacity of information processing for each individual determines his or her problem solving ability.
2. Narrative problems require a greater amount of information processing than tabular problems.
3. Abstract symbols require a greater amount of information processing than concrete referents.

Limitations

Some of the limitations of this study were the following:

1. The range of difficulty of the test items was limited.
2. The data respondents were college students and faculty or workers in private industry. There was no attempt to include other user groups.
3. The data was limited to that supplied by the examinee and the questionnaire given to each subject.
4. Only the Lotus 1-2-3 computer spreadsheet language was used in one of the subtests because it is the most used in the field of Higher Education.

CHAPTER II

REVIEW OF LITERATURE

Introduction and Overview

The purpose of this chapter was (1) to examine the theories of problem solving, information processing, abstraction and the application of spreadsheets in higher education, and (2) to present an analysis of previous research findings in these areas in general and specific applications. Finally, applications of computer aptitude tests were presented to rationalize the need for the test proposed for this research.

Definition of General Problem Solving

Problem solving, a category labeled "higher-order rules" in Gagne's (1962) writings, refers to the "thinking out" of a solution to a problem by combining old rules in order to form new ones. It is the main reason for learning rules in the first place. Numerous examples of problem solving may be drawn from the daily activities of ordinary people.
Whenever no previously learned rule is appropriate for the solution of a problem, problem solving may be said to take place (providing, of course, that the problem is solved) (Lefrancois, 1982). A condition clearly necessary for problem solving is the presence of the appropriate rules in the learner's repertoire. Gagne (1970) also lists three external conditions that appear to be necessary for problem solving to take place:

1. The rules required for the solution of the problem must be active at the same time or in close succession.
2. Verbal instructions or questions may be used to elicit the recall of relevant rules.
3. The direction of thought processes may also be determined by verbal instructions.

Relevance of Existing Theories to Problem Solving

Simon's Theory of Problem Solving

Simon (1972) describes problem solving within a framework that views the process as selecting from a large set of possibilities an element having certain properties, or as traversing a large space to find one of a rare set of paths with preferred properties. He concludes that the problem solver is given a description of the solution in a language called the state language, and is required to find an alternative process description that will specify how to generate it. Problem solving difficulty usually arises from the size of the set of possibilities, which are many even in relatively simple problem situations. He illustrated some ways of solving problems. First, to generate only elements of the set that are already assured to possess at least some of the properties that define a solution. Second, to make use of information obtained sequentially in the course of generating possible solutions in order to guide the continuing search. Third, to abstract from the detail of the problem expressions and to work in terms of the simpler abstractions.
He finally concludes that, as problem formulation and method are specified, the notion of the demands of the task is transformed into a description of the problem space that will be explored by generators or search operators, and into estimates of the size of this space. In this way we arrive at techniques for predicting problem difficulty and, more specifically, for making many predictions about specific paths that problem solution will take. One of the purposes of this research was to discover any relationship between problem solving and the use of electronic spreadsheets. A description of the instrument that was developed for this purpose is presented in the following chapter.

Simon's Theory of Information Processing

The present theory views a human as a processor of information. Both these notions of information and processing are long established, highly general concepts. Thus, the label could be thought vacuous unless the phrase "information processing" took on additional technical meaning. One may try to provide this meaning by saying that a computer is an information processor. This would suggest that the phrase is a metaphor: that man is to be modeled as a digital computer. Metaphors have their own good place in science, though there is neither a terminology nor a meta-theory of science to explicate the roles of metaphors, analogs, models, theories and descriptions, or the passage from one category to another (Simon and Newell, 1956). Something ceases to be metaphor when detailed calculations can be made from it; it remains metaphor when it is rich in features in its own right, whose relevance to the object of comparison is problematic. Thus, if a computer can indeed be a metaphor for man, it becomes relevant to discover whether man is all bits on the inside. An information processing theory is not restricted to stating generalities about man.
With a model of an information processing system, it becomes meaningful to ask whether specific aspects of man's problem solving behavior can be calculated. This model of symbol manipulation remains very much an approximation, of course, hypothesizing in extreme form the neatness of discrete symbols and a small set of elementary processes, each with precisely defined and limited behavior.

1. Symbols that designate specific events or structures in the external environment of the information processing system. Such symbols may be evoked within the information processing system by the recurrence of events or structures externally (the result of a receptor process), or they may cause the information processing system to create such events or structures externally (the input to an effector process).

2. Symbols that designate elementary information processes, so that these can be executed by means of these symbols.

What collection of symbols is primitive for a specific information processing system will vary with the particular application. For example, to describe sensory and perceptual processes in visual pattern recognition, the primitive symbols might be set up to correspond, more or less approximately, to the elementary discriminations of which the retina is capable. Thus, in information processing theories of visual pattern recognition it is usual to describe the sensory input as a two-dimensional array of zeros and ones. Similarly, an information processing theory of speech recognition might take as primitive symbols the elementary characteristics (features) into which phonemes (a member of the set of smallest units of speech sounds that serve to distinguish ways of speaking) are assumed to be analyzed by the auditory perceptual apparatus.

Simon's Theory of Abstraction

When a subject is confronted with a problem solving task, these possibilities are narrowed down to a particular problem space of abstraction and a particular problem solving program. Associated
with the question of what mechanisms are used to determine actual problem spaces and programs is the question of individual differences. Yet we know that large individual differences exist, which must be reflected in some way in the actual problem space and program adopted by an individual problem solver on a given occasion. Simon (1967) states that this part of the theory is both tentative and incomplete. Most of the behavior that would be relevant to verifying, amending, and completing it takes place long before the subject arrives at the test or the problem solving situation. Much of the remembering occurs, especially if the task is unfamiliar to him, during the first minutes, or even seconds, when we are reading the task instructions to him, on the basis of his experience in problem solving. Having examined the problem, we consider the thesis that the structure of the space is largely determined by the structure of the task environment; more precisely, that the task environment delimits the set of possible structures of the problem space. For nothing forces a problem space to incorporate all aspects of the structure of the task environment, and all problem spaces are abstractions from the environment.

Summary of Simon's Theories

The past sections were prepared for the purpose of relating Simon's theories to this research. It is necessary to mention that there are other authors, like Alexander (1987) and Sloan (1986), who have done research in problem solving areas, but this researcher chose Simon as the author who covers in most depth the areas discussed in this dissertation.

General Computer Problem Solving Applications in Higher Education

Many agencies and organizations face budget cuts which foster a greater demand to account for the effective use of resources and allocations. These demands have led to increased interest in quantitative research and evaluation.
Also, in recent years, declining costs and technical advancements have made microcomputers accessible to most organizations. In addition, sophisticated software packages which address a variety of consumer needs are becoming increasingly available to the public. One type of packaged program is the electronic spreadsheet. It is the electronic spreadsheet which has been most helpful in evaluation research as a tool to accomplish a variety of tasks, and to cut costs in evaluation (Young, 1984). Like any aspect of the Computer Age, there is a proliferation of electronic spreadsheets on the market. This study used the Lotus 1-2-3 spreadsheet program, one of many available spreadsheet packages. It is extremely versatile and is currently the best-known, best-selling electronic spreadsheet on the market. It is available for use on most major brands of microcomputers, and costs approximately $500. This does not mean that Lotus is necessarily the best package on the market, but it is one of the most versatile (Young and Steele, 1984).

Higher Education Administrator Application (Decision Support Systems)

Cortland and Madison (1984) reported that an information decision support center has been designed and implemented at Western Michigan University. The center's design was in response to the outcomes of a long range planning process, which helped identify user needs. Three types of support systems were found to be required: education and consulting, decision support, and microcomputing support. Specific goals of the center included the following: reduce the number of user training hours by coordinating and optimizing the educational offerings and techniques; reduce the backlog of user report-writer requests by helping users to be more self-sufficient; educate users in the use of appropriate software tools to augment their decision making activities; and provide a comprehensive resource for microcomputing support.
Major outcomes in implementing the center include the following: seminars are being offered throughout the year, including areas such as data processing services; a video library of educational materials has been developed; staff members are instructed on how to use microcomputers so that they can provide user demonstrations; and diskette-based self-instructional products for spreadsheet, word processing, and data management are available. The center's experience to date suggests that one item be added to the list of things to be planned for when an organization is developing a center. No matter how much effort goes into planning for workload requirements and staff resources, it would appear that workload growth for the center is relatively unpredictable. A partial solution to this problem is to return to the technique of managing user expectations; the best time to begin this process is during the planning period. Several problems existed. (a) Assumptions that were made relative to user needs remained confined to the center's needs list, or the products endorsed by the center were incorrect. (b) Control of and response to divergent needs has been and will continue to be a problem. The idea behind having an expert in report writing was that this individual would provide decision support services by actually preparing a limited number of reports and spend the remainder of the time educating the users to prepare their own reports. But this has not happened. (c) A final problem, similar to that faced by the user services group in any academic computing center, is that of managing the situation where some users require a great deal of computer support while others are so advanced that they desire more access to hardware and to software databases than the computing center is prepared to provide. Staffing is and will remain a problem. Alternate solutions such as leveraging user groups and/or the skills of other computer center employees will have to be found.
The Use of Computers to Examine Impacts of Academic Program Plans on Faculty Staffing Levels

In addition to considering market demands for current and proposed programs, decision makers need to consider how program development, modification, and elimination affect the total college faculty resource base. The application of computer technology, specifically spreadsheet analysis, is demonstrated as a means of simulating the outcomes of a variety of academic program decisions. This type of application is important because it provides immediate answers to complex programmatic interactions that are otherwise not recognized. The end result is a significant decrease in the amount of calculation required in analysis, and a major increase in the quality of information available for informed decision making. The three components of the computerized faculty staffing model are as follows: the number of majors registered within each of the undergraduate and graduate programs at the college; an impact matrix of the average student credit hours taken by each major within each of the academic disciplines by course level; and the translation of full-time-equivalent (FTE) students into FTE faculty required within each discipline. The model is related to the formula budgeting approach, known as the 40 Cell Matrix, that is applied at the State University of New York (Spiro and Campbell, 1984).

The Impact of Computer Training on College Administrators

Comeford (1985) indicates that a one-day workshop on using a spreadsheet program could meet the perceived needs of participants and would be sufficient to get them started in using a spreadsheet for a predetermined purpose. Workshop success is dependent upon the availability of at least one computer per two participants, a small teacher-student ratio, and a perceived need of the participants to use the software being taught.
Equally important is designing the workshop instruction in such a fashion that participants learn to use the software by working on a relevant task. The data collected suggest, however, that long term success in using a spreadsheet program for alternative tasks is dependent upon the availability of further instruction through either an individual or a follow-up workshop. Comeford also reported some positive and negative aspects of using computer spreadsheets. These are the following:

Positive Aspects

1. Savings in time and money. Recalculation of tables and formulas is done automatically by the microcomputer without having to repeat the process for every change made in the spreadsheet.
2. Improved accuracy in working with numbers. It is very difficult to lose a mathematical procedure once it is specified to the computer.
3. Microcomputers and software may be more readily available to agencies/organizations with limited resources.
4. Electronic spreadsheets open the door to more sophisticated research/evaluation methodologies.
5. Microcomputers and spreadsheets can be used to communicate with mainframes by using a modem, which greatly expands research/evaluation capabilities.

Negative Aspects

1. A fairly substantial investment is necessary, especially if the initial cost of the computer is included.
2. Considerable training is necessary to use the spreadsheet to its full potential.
3. Electronic spreadsheet applications may not be frequently used. Use depends upon the necessity to provide data and the availability of both computer and software.
4. Spreadsheets do not allow for sophisticated statistical analyses.
5. Graphic quality is currently not very good in most spreadsheets.
6. The need for expanded memory capacity of the microcomputer may be an added expense.
7. As one becomes more familiar with the capabilities of spreadsheets, more and more time may be taken up in order to learn its capabilities.
There are yet other options to investigate, and perhaps many more still to cover.

General Spreadsheet Applications to Higher Education

Spreadsheet Applications in College Settings

This section is an overview of a specific spreadsheet, Microsoft's Multiplan for the Apple Macintosh microcomputer, emphasizing specific features that are important to the academic community, including the mathematical functions of algebra, trigonometry, and statistical analysis.

Grade Tally

The grade tally sheet is a straightforward sheet for computing grades. Student names are followed by their grades on tests. The average is computed using the AVERAGE function, which takes the average of a set of numbers. The letter grade is computed by a function called LOOKUP in the column labeled "GRADE". This function will look up a number in the first column of a table in the spreadsheet and return whatever the value is in the corresponding last column of the same table.

Production Decisions

The production decisions sheet presents a decision analysis tool in which the costs of four different methods of making the same quantity of product are compared to determine which method will cost the least. All four methods require tools, men, material, and machines, but they require different amounts of each. The production methods section of the sheet lays out the four methods and the component requirements of each. This sheet is mathematically simple. The major calculations are multiplication and addition. Yet it provides a powerful analytic tool because as the cost per unit or the number of components needed for any method is changed, the decision on which production method to use is also changed.

Computer Spreadsheet Application for the Administrator in Higher Education

Many spreadsheet programs such as Lotus 1-2-3 allow the user to change, insert or delete titles, numbers and formulas.
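The AVERAGE and LOOKUP behavior described in the grade tally section above might be imitated as follows. The cutoff scores in the table are invented for illustration, and the function bodies are a guess at the semantics the text describes, not Multiplan's actual implementation.

```python
def average(numbers):
    """AVERAGE: the arithmetic mean of a set of numbers."""
    return sum(numbers) / len(numbers)

def lookup(number, table):
    """LOOKUP: scan the first column of the table for the largest
    entry not exceeding `number`, and return the last-column value
    of that row (rows sorted ascending by their first column)."""
    result = None
    for cutoff, grade in table:
        if cutoff <= number:
            result = grade
    return result

# Hypothetical grade table: first column is the minimum score,
# last column is the letter grade that LOOKUP returns.
grade_table = [(0, "F"), (60, "D"), (70, "C"), (80, "B"), (90, "A")]

scores = [88, 94, 71]
print(lookup(average(scores), grade_table))  # mean ~84.3 -> "B"
```

As in the grade tally sheet, changing any test score recomputes the average and, through the lookup table, the letter grade.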
The existing Lotus 1-2-3 table is instantly restructured with all of the columns, rows and other formulas edited to reflect the user's changes. If the user has entered a formula at one position, the Lotus 1-2-3 program allows the person to replicate it at other locations. The program will also add, average and manipulate rows, columns and other ranges of numbers. Once the format for a particular application has been established, the user enters or changes numbers at will. The electronic worksheet can be stored on a diskette and can also be printed out in part or in its entirety on a printer. There are many uses for the electronic spreadsheet; some of these uses are as follows:

1. Justify collective bargaining claims
2. Budgeting
3. Finance
4. Record keeping
5. Future projections
6. Reports

The microcomputer software can be used to build models to support administrative decision-making in the areas of finance, budget and physical plant maintenance. IBM-PC and Apple microcomputers are the most used in Higher Education and there is ample software for these microcomputers that could be used for this purpose. The survey by Schneider (1984) of the administrative applications of microcomputers identifies word processing, database management, spreadsheet functions, and graphics as four areas in which microcomputer use will reduce repetition, improve cost efficiency, minimize paperwork, enhance filing and retrieval systems, and save time. This will allow administrators and teachers to more effectively channel their energies toward the improvement of curricular programs and instructional strategies. Trends affecting information systems and decisions of college administrators are traced, and specific types of technologies currently available have been reviewed by Bailey (1982). Information systems support routine operations decisions as well as planning and policy decisions.
The primary advantage of computerized information systems is rapid access to data and rapid manipulation and comparison of these data. Computerized systems can perform many different applications and functions, such as automated spreadsheet programs, financial modeling and planning programs. The three sizes of computers commonly used in academic administration consist of: large-scale or mainframe computers, minicomputers, and small business or microcomputers. Another important aspect of microcomputer usage is the utility of word processing, Xerox copy machines, mimeographics, and the three major forms of equipment communication (electronic mail, telecommunications, and networks). Cost benefit factors of computing equipment include the application of computer logic to the gathering of primary information and the usefulness of computers to the professional staff. Computerized systems are especially efficient for registration and transcript operations.

Use of Microcomputers in Business Education

Ruby (1983) indicated to some extent that high school business education is the answer to the computer advocate's needs. Because business education on the secondary level deals in part with numbers, measurement, word processing, and transfer of data by electronic means, it could utilize computer-aided instruction (CAI) and computer-managed instruction (CMI). The first requirement for the introduction of CAI is to develop faculty understanding of the computer. Inservice courses and training programs are needed in keyboarding, word processing, spreadsheets, and database management. It is often asked, "What is business education doing regarding the use of microcomputers?" It seems as though business education is not emphasizing the need for good keyboarding competence; it has often neglected English language instruction and it often uses software designed for business as part of curricula.
Business education needs more development of computer-assisted diagnostic (CAD) and prescriptive instruction (PI) and courseware that contains teacher management programs. The leaders in business education need to design a new curriculum using electronic spreadsheets to reflect the infusion of the microcomputer. New course titles and outlines are needed. Courseware needs to be developed to incorporate CAD, PI, and teacher management programs, and teachers need to be retrained in the use of the microcomputer and its appropriate courseware (Ruby, 1983).

Personal Computers in the College Business Office

Uses of personal computers, or microcomputers, by college business offices are discussed by Keith (1985), with attention to safeguards. An advantage of using microcomputers is that business officers can develop their own applications with the purchase of the right software and need not depend on the data processing staff. The micros, which could be located in the business office and are under its control, can be used for word processing or to develop and maintain work schedules, critical path analyses, and personal appointment calendars. Microcomputers can also be used by administrators for budget modeling to create budget guidelines, and for special reports, including gift campaign summaries, unrestricted gifts and pledges, construction projects, debt service requirements, and present-value calculations for capital budgeting. Although a college should maintain most of its database on its mainframe computer or in a distributed data system network, administrators can use personal computers for selected data files. Certain safeguards are necessary with personal computer use. Accountants should use predetermined control totals and balancing procedures as they do in all their work. Backup diskette copies are also necessary. Users should store data on diskette every half hour or less so they do not lose several hours of work through error or machine failure.
Hardware should be purchased from only one or two manufacturers, both to get better prices and to facilitate repairs and shared usage when breakdowns occur (Vinsonhaler, 1987).

Applications of Computer Aptitude Tests

Many tests have been used to measure general problem solving skills, including the ability to break down a problem into component tasks, to develop systematic procedures to address these tasks, and to apply tools such as word processing and database management to solve problems (Gabriel, 1985). Other tests have been used to measure CAI effectiveness in raising student achievement scores, but different classes of students and different forms of CAI produce different results (Niemiec and Walberg, 1985). The Computer Aptitude, Literacy, and Interest Profile (CALIP) (Poplin, Drew and Gable, 1984) is a standardized test battery with six subtests that is designed to assess the computer-related abilities of individuals between 12 and 60 years of age. Its authors noted that computer aptitude tests generally focus on an assessment of the potential skills of adult programmers, and that the appropriateness of these existing instruments in predicting success for younger students, or for college students in computer classes, is rather doubtful. Consequently, the CALIP was developed to assess the computer-related aptitudes of younger or inexperienced populations, including talented minority persons and individuals with reading disabilities, and to assess the potential to function in "less traditional computer-related careers". Although the CALIP apparently is distinctive because it can be used to assess aptitudes for a wide range of computer-related careers, a review of the CALIP manual suggests some inconsistency on this point.
Despite such an inference, the authors state that their main focus in developing the CALIP was "to identify individuals with high potential for programming" and that the principal normed vocational group of the CALIP is designed to assess aptitudes for such job categories as computer service technician, computer systems analyst, and computer salesperson. The authors also infer that use of the test with handicapped individuals, especially those with reading disabilities or learning disabilities, is appropriate. Little clarification is provided as to the types of handicapping conditions or learning disabilities that can be accommodated by the test. The CALIP is an instrument designed to do what no instrument currently on the market is capable of doing: provide measures of interest, aptitude, and literacy in a wide variety of computer-related occupations for a broad population. The author reports that this is an extremely formidable task and one that the CALIP, in its present form, seems unable to accomplish. Questions exist regarding the actual constructs measured by the instrument, the reliability of some subtests and CAQs at various age levels, and the ability of the battery to differentiate between high and low aptitude individuals of varying ages in a variety of computer-related occupations. Although Computer Literacy curricula are being implemented in schools and districts across the country, Gabriel (1985) suggests there is much work to be done in the assessment of those curricular goals. The test development and validation activities reported represent one start in that effort. The relative infancy of this content area is abundantly evidenced in the data presented. Average test performance was lower than in most other achievement areas: 50 percent correct at the total test level, and substantially less on some of the subtests. Implementation of curriculum activities was also low, particularly at the elementary and junior high school levels.
Gabriel asks, "If students are not being exposed to the curriculum in a uniform, articulated way, how can they be expected to acquire the skills and concepts tested?" The validity of the instruments in this study rested on student experience, both hands-on and class discussion, rather than on the strength of the curriculum in the school. Future efforts must carefully chronicle the implementation of instructional activities to determine their fidelity to the curriculum on which the assessment is based. A final recommendation from this research deals with the assessment approach undertaken. Variability in Computer Literacy curricula must be matched by flexibility in assessment tools. The current approach utilizes an item bank of nearly 500 items. Specific tests may be constructed through a selection of items matching the curriculum objectives in place in a given classroom, grade level, or school. This type of tailored test construction will ensure the content validity of the assessment. Structured observations of specified tasks on microcomputers can effectively supplement more traditional testing methods.

Summary

This chapter has presented an analysis of concepts associated with the major variables and topics used in this study. That was followed by an examination of previous research dealing with those variables individually and as they interact with each other. Finally, the theory of spreadsheet (dependency table) application, as an explanation of the many research findings regarding computer spreadsheets, was discussed. In Chapter III an account of the procedures used to conduct the study will be presented.

CHAPTER III

METHODOLOGY

Introduction

As described in Chapter I, the first purpose of this study was to provide an operational definition for spreadsheet problem solving. This was accomplished by using paper and pencil test items developed by the researcher.
The second purpose was to investigate two problems in spreadsheet problem solving: (a) evidence for content independent and content dependent components and (b) the relationship between complexity and difficulty. For this purpose, two subtests were created: the Dependency Table Problem Solving Subtest and the Spreadsheet Language Subtest. For the creation of the dependency table problem solving subtest, items were varied on complexity. The complexity was varied by varying the method of stating the test problem and the type of response required. The test problem statements had a narrative format or a table format. The type of response was represented by values or formulas. The Dependency Table Problem Solving Subtest included a set of three items for each of the four different test item categories mentioned above. There was a total of 12 test items selected for this subtest. The Spreadsheet Language Subtest was created by collecting items from Lotus 1-2-3 software manuals, from reviewing the literature, and from other spreadsheet software manuals. Items were pretested and evaluated by experts. There was a total of 10 items selected for this subtest. The third purpose of the study was to apply measurement techniques to spreadsheet problem solving. This was accomplished by creating a Spreadsheet Problem Solving Test containing the two subtests. Traditional methods of measuring reliability and validity were applied to the total test. This chapter presents in detail the methodology employed in this study. Included are a description of the research population and sample, the test development process, data collection techniques, and the method of analysis. The major research questions considered were as follows:

Question 1. Can a useful operational definition of spreadsheet problem solving be developed using inexpensive multiple choice formats?

Question 2 (Part One).
Does the content dependent (dependency table problem solving) subtest measure different abilities from the content independent (spreadsheet language) subtest?

Question 2 (Part Two). Are the more complex problems (narrative presentation with alternative formulas) more difficult than the simpler problems (using table presentation and numerical value alternatives)?

Question 3. Will total test scores show evidence of reliability and validity?

Population and Sample

The population for the study included experts (college instructors who teach computer application courses) and novices (college students who were registered in the CEP 434 introductory computing courses of the College of Education, Michigan State University). The sample included sixty-four college students and ten college instructors who were considered experts by their peers. All of the participants were volunteers; they were not required to participate and received no extra grade points for doing so. Permission to conduct this study with these subjects was granted by the University Committee on Research Involving Human Subjects, Room 238 Administration Building, Michigan State University.

Instrumentation and Test Development

The methods used in the study are summarized in Figure 3.1. Methods will be discussed in the order given in that figure.

Figure 3.1. Overview: Methods Used for Each of the Research Problems. [Flowchart summarizing: Problem 1, create an operational definition of spreadsheet problem solving, using a paper and pencil test format; Problem 2 (Part 1), determine evidence for content independent and content dependent components, by creating a subtest for dependency tables and a subtest for spreadsheet language; Problem 2 (Part 2), determine if the basic information processing relationship of complexity and difficulty holds for problem solving; Problem 3, determine if a reliable and valid test of spreadsheet problem solving is possible, by combining the subtests and administering the test to expert and novice subjects.]

These topics are treated in this section since they
all concern the instrumentation and test development activities associated with the three research problems summarized previously.

Research Problem 1: Creating an Operational Definition of Spreadsheet Problem Solving

The test item format selected was a multiple choice paper and pencil version of typical spreadsheet problems. Since this approach has proved so efficient and effective in measuring diverse psychological traits, it was the obvious choice. For purposes of this study, problems or test items were formulated from two sources: (a) ten experts in the field of computer science and spreadsheeting and (b) spreadsheet software packages and books available to the college administrator, such as Lotus 1-2-3, by Lotus Development Corp. (1987). The development of the test items is discussed below. Examples of the paradigm have already been discussed and will not be repeated here.

Research Problem 2 (Part One): Determine Evidence for Content Independent and Dependent Components

Figure 3.2 summarizes the methods used to investigate the problem. As shown, two subtests were created: one for content area independent problem solving abilities and one for content dependent problem solving.

Figure 3.2. Methods Used for Research Problem 2 (Part One). [Flowchart summarizing: for the content dependent (dependency table problem solving) subtest, test problem items of high and low complexity were generated, pre-tested, and selected, and a scoring key was prepared; for the content independent (spreadsheet software language) subtest, items were selected from textbooks and manuals for spreadsheet software, pre-tested, and selected, and a scoring key was prepared.]

By content area is meant the field of application, e.g., engineering, business,
education, etc. We will mainly consider the methods used in creating the content independent subtest in this section and go on to the other subtest in the next section.

Description of the Creation of the Spreadsheet Software Language Subtest: Item Generation, Number of Items, General Context

The spreadsheet software subtest is composed of ten multiple choice questions. The content used in these items was taken from spreadsheet language (Lotus 1-2-3) books and manuals. The following are two test item examples used in the spreadsheet subtest.

Spreadsheet Language Sample Test Item #1:

1. How do you move from cell to cell?
   A. Pressing the arrow keys.
   B. Pressing the escape key.
   C. Pressing the control key.
   D. Pressing the space bar.

The correct answer for this question is the letter "A".

Spreadsheet Language Sample Test Item #2:

1. To add the first three cells in a row of a spreadsheet you type:
   A. B2 + J5 + C1
   B. C3 + C5 - D7
   C. A1 + B1 + C1
   D. B1 (@B3)

The correct answer for this question is the letter "C".

The specific procedures used in pretesting, selecting items, and preparing the scoring key are discussed for both subtests in section IV, below. Test booklets and scoring keys are given in Appendix A.

Problem 2 (Part Two): Determine if Complexity is Related to Difficulty

Figure 3.3 summarizes the methods used to investigate this problem. As shown, complexity was varied by using two types of problem statement (table versus narrative) and two types of problem response (value versus formula). The following is a presentation of one example for each type of dependency table problem item used in this study. All other items, test booklets of Versions 1 and 2, answer sheets, and instructions are in Appendix A. The dependency table problem solving subtest is composed of four subtests of three questions each. The four subtests are: table value, table formula, narrative value, and narrative formula.
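Before the sample items are presented, a brief computational aside may be useful: the cell-reference formulas that appear in such items can be evaluated mechanically against a small table of cells. The sketch below is illustrative only; the cell names and values are hypothetical and are not taken from the actual test booklet.

```python
# Illustrative sketch only: evaluating the kind of cell-reference formula
# used in dependency table items against a table of cells.
# The cell names and values here are hypothetical.
cells = {"B1": 82.0, "C1": 85.0, "D1": 95.0}

def evaluate(formula: str, table: dict) -> float:
    """Replace each cell reference with its value, then evaluate the
    resulting arithmetic expression (only +, -, *, / and parentheses)."""
    # Longest names first, so e.g. "B10" would never be clobbered by "B1".
    for name in sorted(table, key=len, reverse=True):
        formula = formula.replace(name, str(table[name]))
    return eval(formula)  # acceptable for this small, controlled illustration

average = round(evaluate("(B1 + C1 + D1) / 3", cells), 2)  # 87.33
```

The same substitute-then-evaluate idea underlies all four item types: a respondent who has the correct dependency model picks the formula whose evaluation yields the required value.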
The entire subtest was developed and used as the primary instrument for collecting data necessary for this aspect of the study. An explanation of each subtest is as follows:

1. Table value subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring small amounts of information processing and low levels of abstraction.

Figure 3.3. Methods Used for Research Problem 2, Part Two. [Flowchart summarizing: problems for the Dependency Table Problem Solving Subtest were created by varying the complexity of items; each test item problem was stated either as a table or as a narrative, and required either a numerical value response or a numerical formula response, yielding 3 items of low complexity (table value), 3 items of medium complexity (table formula), 3 items of medium complexity (narrative value), and 3 items of high complexity (narrative formula), for a 12-item subtest.]

Table Value Subtest Sample Problem (Low Complexity):

Given the following grade record table, what is the average exam score for Jim?

   NAME    EXAM 1   EXAM 2   EXAM 3   EXAM 4   AVERAGE
1. Jim       97       73       81       91        ?
2. Bob       83       87       89       82      85.25
3. Mary     100       96       78       89      90.75

A. (97 + 73 + 81 + 91) / 4
B. (97 + 83 + 100 + 73) / 4
C. (97 + 87 + 81 + 91) / 4
D. (91 + 73 + 81) / 3

The correct answer for this question is the letter "A".

2. Table formula subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring small amounts of information processing and high levels of abstraction.

Table Formula Subtest Sample Problem (Medium Complexity):

Given the following payroll, calculate the total pay for Levine including the percentage increase.

       A         B         C          D
1.  FACULTY   SALARY   INCREASE   TOTAL PAY
2.  Levine    21,875     10%          ?
3.  Stevens   23,971      5%          -
4.  Kernel    19,782     18%          -

A. (B2 * C2) + B2
B. (B3 * C2) - B2
C. (B2 - C2) * B3
D.
(B4 + B3) - B2

The correct answer for this question is the letter "A".

3. Narrative value subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring large amounts of information processing and low levels of abstraction.

Narrative Value Subtest Sample Problem (Medium Complexity):

Edward Thompson worked for 2 hours on Monday, 7 hours on Thursday, and 20 hours on Friday. Mrs. Peters worked for 10 hours on Tuesday and 20 hours on Thursday. They both have been paid for 10 hours. How would the manager compute the correct number of hours owed to Edward Thompson?

A. 7 + 20 + 20 + 10
B. 7 + 2 + 20 + 10
C. 2 + 7 + 20 - 10
D. 2 - 7 - 20 + 10

The correct answer for this problem is the letter "C".

4. Narrative formula subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring large amounts of information processing and high levels of abstraction.

Narrative Formula Subtest Sample Problem (High Complexity):

A college professor has a computer table report with some students' test scores. The CELL A1 contains the name Tracy Willis. The CELL B1 contains 82. The CELL C1 contains 85. The CELL D1 contains 95. State the formula that correctly computes the average test score for Tracy Willis.

A. (C1 + B1 + D1 - A1) / 4
B. (C1 + B1 + A1 + D1) / 3
C. (B1 + A1 + D1 + C1) / 3
D. (B1 + C1 + D1) / 3

The correct answer for this problem is the letter "D".

Research Problem 3: Determine if a Reliable and Valid Test of Spreadsheet Problem Solving is Possible

Figure 3.4 summarizes the methods used to investigate this problem. As shown, a test of spreadsheet problem solving was developed by combining the two (dependency table problem solving and spreadsheet language) subtests.
Figure 3.4. Methods Used for Research Problem 3. [Flowchart summarizing: a test of spreadsheet problem solving was developed by combining the two subtests; instructions and test booklets were developed, with items randomly ordered in the booklets; the test was administered to novices and experts in spreadsheet problem solving; and the data were analyzed to evaluate reliability and validity.]

The score on each subtest was obtained by assigning a score of 0 = incorrect and 1 = correct for each problem using the scoring key. The subtest score was the sum of the item scores (i.e., the number of correct responses). The total score was the sum of the two subtest scores.

Pre-testing of the Items

Pilot testing offers the opportunity to try the instrument with the kind of respondents anticipated in the main test (Moser, 1958). In the first pilot testing, sixty college students and two experts in spreadsheeting and computer science were used. The panel of experts, given their professional experience, was deemed qualified to judge the face validity of the "Spreadsheet Problem Solving Achievement Test". These reviewers were Dr. Norman Bell and Dr. Henry Kennedy, Professors of Educational Systems Development and instructors of the course CEP-434 (Computer Applications in Education), both of Michigan State University. The sixty college students included both the computer naive and persons with some computer experience. The task of both groups was to take the test and then to examine the test critically and make comments and suggestions to improve both the content and clarity of the instrument. The results of the pilot testing were incorporated into the version given to the expert reviewers. The final version of the test contained the collective suggestions of both groups. Procedures and results were as follows. The first pilot test indicated that the average length of time required to complete the total test was sixty minutes. In order to provide sufficient time for instructions, questions, and completion of the test, a reduction in the number of items was necessary. This reduction proved to be helpful.
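As an aside, the 0/1 scoring rule described earlier in this section (1 point per correct item, subtest score = sum of item scores, total = sum of the two subtest scores) amounts to the following sketch. The answer keys and responses shown are hypothetical fragments, not the actual test keys.

```python
# Sketch of the scoring procedure: 1 point per item answered correctly,
# subtest score = sum of item scores, total = sum of subtest scores.
# The keys and responses below are hypothetical.
def score_subtest(responses, key):
    return sum(1 for r, k in zip(responses, key) if r == k)

language_key = ["A", "C", "B", "D"]   # hypothetical 4-item fragment
table_key = ["A", "A", "C", "D"]      # hypothetical 4-item fragment

language_score = score_subtest(["A", "C", "D", "D"], language_key)  # 3 correct
table_score = score_subtest(["A", "B", "C", "D"], table_key)        # 3 correct
total_score = language_score + table_score                          # 6
```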
In a second pilot testing, a revised test of fifty questions was administered to sixty college students and ten experts. Feedback from the respondents suggested the need to reduce the test's length and difficulty and to eliminate ambiguous items. From the results of this second pilot test, the investigator saw the need to decrease the number of test items from fifty to twenty-two. These twenty-two items were reviewed by a panel of ten reputed experts in the field of computer science and spreadsheeting. In several instances, items were changed to eliminate logistical problems (such as poor English construction and excessively technical vocabulary) which caused attitudinal problems among the subjects. The content of the final items was as follows. Ten of the twenty-two items were related to spreadsheet language and twelve were related to dependency table problem solving. The scoring key was developed using the alternatives selected by the panel of experts.

Test Administration

The final 22-item test was then administered to sixty-four college students considered novices in the field of computer science and spreadsheeting and ten other experts in the field. During the Winter Quarter of 1987, the test was administered to the students in a classroom of the College of Education. The test for experts was given personally at their offices. All tests were answered within the allowed one-hour time limit. Two versions of the test were used: version #1 or version #2. The test versions differed only in the random order of questions presented. Versions were assigned to subjects in a random order. The sixty-four students and ten expert subjects received a packet (see Appendix A) which contained:

1. A cover letter from the researcher which explained the study and contained an assurance of confidentiality;
2. A copy of the "Spreadsheet Problem Solving Achievement Test";
3. A written permission release form required by the Office of Human Subjects, Michigan State University;
4.
An answer sheet, instructions, and test booklet; and
5. A number two pencil.

The instructions were presented orally and in written form before the students began the test. There was time for questions before and during the test. The time limit was sixty minutes. The tests were given at random, with alternating versions for sequential subjects. The return rate was 100 percent from the students. Data analyses were performed for each of the three research questions.

Method of Analysis

The Statistical Package for the Social Sciences (SPSS) (Nie et al., 1975) was used for all statistical analysis.

Question #1. Can a useful operational definition of spreadsheet problem solving be developed using low cost multiple choice formats?

There was no specific statistical analysis used to evaluate the operational definition beyond the pre-testing and elimination of items showing artifactual irregularities (e.g., no clear correct answer, confusing problem statement, multiple correct answers). Evidence of the value of this operational definition was indirectly provided by the analysis of the test score distribution, test score reliability, and test score validity.

Question #2 (Part One). Does the content dependent (dependency table problem solving) subtest measure different abilities from the content independent (spreadsheet language) subtest?

The means, frequency distributions, and standard deviations were calculated for both subtests and compared. To find out if the two subtests measured somewhat different abilities, a Pearson product moment correlation was calculated between the dependency table problem solving subtest and the spreadsheet language subtest.
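The Pearson product moment correlation mentioned above can be computed directly from paired subtest scores. A minimal sketch follows; the score lists are hypothetical, not the study's data.

```python
import math

# Pearson product moment correlation between two sets of paired subtest
# scores.  The score lists below are hypothetical, not the study's data.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

dtps = [12, 11, 10, 6, 9, 11, 4, 12]   # dependency table subtest scores
ssl = [9, 7, 5, 1, 4, 8, 0, 6]         # spreadsheet language subtest scores
r = pearson_r(dtps, ssl)               # a value in [-1, 1]
```

In SPSS this corresponds to requesting a Pearson correlation between the two subtest score variables; the hand computation above is shown only to make the statistic concrete.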
Means, standard deviations, and an analysis of variance test of significance for each of the four types of items were calculated to find any existing differences.

ANALYSES OF SUBSCORES ON EACH THREE-ITEM GROUP

Complexity:       Low           Medium          Medium            High
Group of Items:   Table Value   Table Formula   Narrative Value   Narrative Formula
Statistics Used:  Means &       Means &         Means &           Means &
                  Standard      Standard        Standard          Standard
                  Deviations    Deviations      Deviations        Deviations

The test of differences among groups was done with the ANOVA statistical test, using the number of correct responses for the four groups of items.

Question #3. Will total test scores show evidence of validity and reliability?

Reliability was calculated for spreadsheet problem solving total test scores. The Corrected Split Half Technique (Kuder and Richardson, 1976) was used. Validity was measured by correlating the amount of experience (expert vs. novice groups) with scores on the spreadsheet problem solving total test. All findings are reported in tabular form. Data from the test answer sheets were transferred to coding sheets and keyed into a computer. Raw data were converted into numeric codes for each response to each item. Other statistical analyses were done which were not directly related to the three major research questions. These analyses were used as an exploration of possible variables for further research. In Chapter IV, which follows, the researcher describes in detail these secondary analyses of data.

CHAPTER IV

FINDINGS OF THE STUDY

Introduction

The reader will recall that the dissertation involved three basic problems. The first was to create an operational definition of spreadsheet problem solving. The second problem was divided into two parts. Problem two, part one, was to determine evidence for content independent and dependent components.
Problem two, part two, was to determine if the basic information processing relationship of complexity and difficulty holds for spreadsheet problem solving. The third problem was to determine if a reliable and valid test of spreadsheet problem solving is possible. In the following sections, the results and statistical analyses associated with each of the problems are presented.

Problem 1: Create an Operational Definition of Spreadsheet Problem Solving

Research Question 1: Can a useful operational definition of spreadsheet problem solving be developed using low cost multiple choice problem formats?

The establishment of the validity of an operational definition admits to no specific statistical test. Rather, validity is established by: (a) the similarity of the operational definition to the situation or phenomena in question and (b) the research products yielded using the definition, i.e., whether or not the results are useful and valid. Assuredly, the paper and pencil spreadsheet problem does show face validity in terms of its similarity to the actual task of creating a spreadsheet. However, in the test, the response is one of recognition rather than actual production of the correct answer. This might yield a somewhat more easily solved problem and some degree of correct response by simple guessing. With regard to results, there are two sources of data bearing on the operational definition: (a) the evidence of validity in terms of scores by experts and novices and (b) the similarity of the distribution and statistics for the test scores to those of proven operational definitions, e.g., a normal distribution with no obvious artifacts or irregularities. The validity statistics are given in a subsequent section. Here, only the distribution will be examined.

Results

The results for the distribution of total scores are given in Table 4.1.

Table 4.1. Frequency distributions of total scores for all groups (n=74).

Test Score    Relative Frequency
    3                4.1
    4                2.7
    5                1.4
    6                1.4
    7                6.8
    8                2.7
   10                4.1
   11                2.7
   12                6.8
   13                9.5
   14                4.1
   15                5.4
   16                2.7
   17               12.2
   18                6.8
   19               13.5
   20               13.5

Mean Score = 14.17
Standard Deviation = 5.10
Maximum Possible Score = 22

The left column shows the test score and the right column shows the relative frequency percentage. As may be seen, 13.5% had a perfect score on the test. Table 4.1 shows that almost 42% of the subjects received high scores of 17 or more. The same data are shown in Figure 4.1. The frequency distribution is clearly negatively skewed (i.e., there are far more high scores than low scores).

Figure 4.1. Total Test Score Frequency Distribution (percentage of respondents by test score).

While the distribution shows no apparent irregularities (such as extreme scores), it does show that many of the items are too easy for the group tested. As will be seen in a later section, part of the skewness is due to the presence of spreadsheet experts. But the distribution remains skewed even when the experts are removed. In short, the spreadsheet problem operational definition fails to yield a normal distribution. However, increasing the difficulty of items might well yield a normal distribution. The next section presents problem 2, part one. This section considers evidence on the question of the content independent and dependent components.

Problem 2 (Part One): Determine Evidence for Content Independent and Dependent Components

Question 2 (Part One): Does the content dependent (dependency table problem solving) subtest measure different abilities from the content independent (spreadsheet language) subtest?

The evidence for content dependent or content independent components was obtained by calculating (a) means, frequency distributions, and standard deviations for each of the subtests (i.e., the
Dependency Table Problem Solving Subtest and the Spreadsheet Language Subtest) and (b) by calculating a Pearson product moment correlation between the two subtests. If both subtests are measuring the same abilities, with no special differences due to content independent versus content dependent components, then no differences would be expected between the means, standard deviations, and distributions of subtest scores. Further, the correlation between the two subtests would be nearly as high as the reliabilities of either of the subtests. Table 4.2 shows the means, standard deviations, and frequency distributions for the dependency table problem solving subtest for all groups (n=74).

Table 4.2. Frequency distribution for dependency table problem solving test scores, all subjects (n=74).

Test Score    Frequency Percentage
    2                2.7
    3                2.7
    4                4.1
    5                1.4
    6                5.4
    7                4.1
    8                5.4
    9                4.1
   10               13.5
   11               24.3
   12               32.9

Mean Score = 9.6
Standard Deviation = 2.8

The left column shows the test score. The right column represents the frequency percentage. As can be seen, 60% of the respondents received a score of 10 or more. Table 4.3 examines the frequency distributions of the spreadsheet subtest scores for all subjects (n=74).

Table 4.3. Frequency distributions for spreadsheet subtest scores for all groups (n=74).

Test Score    Relative Frequency
    0               16.2
    1                9.5
    2                9.5
    3                4.1
    4                6.8
    5               14.9
    6                8.1
    7               10.8
    8                8.1
    9               12.2

Mean Score = 4
Standard Deviation = 3.1

The left column represents the test score. The right column shows the relative frequency. As may be seen, 46% received scores of 4 or less and 54% received scores of 5 or higher. To test the significance of the difference between the mean scores, a t-test for correlated (non-independent) group means was calculated as follows (Christensen & Stoup, 1986, page 293):

    t = (x̄1 - x̄2) / s(x̄1 - x̄2)

where x̄1 (DTPS) = 9.6, x̄2 (SSL) = 4.0, s1 (DTPS) = 2.8, s2 (SSL) = 3.1, and r = .50. The standard error was calculated with the formula from Christensen and Stoup (1986, p.
293):

    s(x̄1 - x̄2) = sqrt(s1^2 + s2^2 - 2 r s1 s2)
               = sqrt((2.8)^2 + (3.1)^2 - 2(.50)(2.8)(3.1))
               = sqrt(17.45 - 8.68)
               = sqrt(8.77)
               = 2.96

The value of the t-statistic was as follows:

    t = (9.6 - 4.0) / 2.96 = 1.89

The difference was significant at the .05 level of confidence (P = .03) (Christensen and Stoup, 1986, page 293). Thus, the mean scores of the two subtests are significantly different. Figures 4.2 and 4.3 show the frequency distributions for the dependency table problem solving subtest and the spreadsheet language subtest. As may be seen from the graphs, the dependency table problem solving subtest frequency distribution is negatively skewed, while the frequency distribution for the spreadsheet subtest is rectangular. In short, (a) neither distribution is normal and (b) the distributions are different from each other.

Figure 4.2. Subtest: Dependency Table Problem Solving (percentage of respondents by test score).

Figure 4.3. Subtest: Spreadsheet Language (percentage of respondents by test score).

The correlation between the spreadsheet subtest and the dependency table problem solving subtest for all groups (n=74) was .50. The reliabilities for the subtests were as follows:

Dependency Table Problem Solving Subtest ..... .87
Spreadsheet Language Subtest ................. .72

Thus, the correlation between the subtests was lower than both of the reliabilities. Fisher's Z transformation was used to test the significance of the differences of the intercorrelation from the reliabilities. The formula for the statistic (Weiss and Hassett, 1982, p.
576) is the following:

    z = (Zr1 − Zr2) / (1/√(n − 3))

where Zr = the Fisher Z transformation of the correlation coefficient (Edwards, 1956, p. 425), r1 = the subtest reliability, and r2 = the intercorrelation. The result for the difference between the reliability of the Dependency Table Problem Solving Subtest (.87) and the intercorrelation of the subtests (.50) is as follows:

    z = (Z.87 − Z.50) / (1/√71) = (1.33 − .549) / .1186 = .781 / .1186 = 6.58

The difference was significant at the .01 level of confidence (p < .01). The result for the difference between the reliability of the Spreadsheet Language Subtest (.72) and the intercorrelation is as follows:

    z = (Z.72 − Z.50) / (1/√71) = (.908 − .549) / .1186 = .359 / .1186 = 3.03

The difference was significant at the .01 level of confidence (p < .01) (Edwards, 1956, table, p. 501). These results show that the intercorrelation of the subtests is significantly lower than the reliabilities of both subtests. Hence the hypothesis of content independent and content dependent components of spreadsheet problem solving seems confirmed. The two subtests show different means, different distributions (one skewed, one rectangular), and an intercorrelation significantly lower than the reliabilities. The definitional question remains, of course, of whether the theoretical term "content independent" is properly applied to the spreadsheet language component. This theoretical issue will be dealt with in the next chapter.

The next section presents problem 2, part two. This part concerns whether or not the basic information processing relationship of complexity and difficulty holds for spreadsheet problem solving.
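Both significance tests in this section can be reproduced from the reported summary statistics alone. The sketch below is illustrative (the function names are mine, not the cited texts'); the small differences from the hand-computed 6.58 and 3.03 reflect the rounding of the Z values in the calculations above.

```python
import math

def correlated_means_t(m1, m2, s1, s2, r):
    """t for correlated (non-independent) means, using the standard
    error sqrt(s1^2 + s2^2 - 2*r*s1*s2) as computed in the text."""
    se = math.sqrt(s1 ** 2 + s2 ** 2 - 2 * r * s1 * s2)
    return (m1 - m2) / se

def fisher_z_diff(r1, r2, n):
    """z for the difference of two correlations via Fisher's Z
    transformation, with the text's standard error 1/sqrt(n - 3)."""
    return (math.atanh(r1) - math.atanh(r2)) * math.sqrt(n - 3)

t = correlated_means_t(9.6, 4.0, 2.8, 3.1, 0.50)
z1 = fisher_z_diff(0.87, 0.50, 74)   # DTPS reliability vs. intercorrelation
z2 = fisher_z_diff(0.72, 0.50, 74)   # SSL reliability vs. intercorrelation

print(round(t, 2))                   # 1.89
print(round(z1, 2), round(z2, 2))    # ~6.6 and ~3.02 (6.58 and 3.03 by hand)
```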
Problem 2 (Part 2): Determine if the basic Information Processing relationship of complexity and difficulty holds for Spreadsheet Problem Solving

Research Question 2 (Part Two): Are the more complex problems (narrative presentation with alternative formulas) more difficult than simpler problems (using table presentation and numerical value alternatives)?

If complexity is related to difficulty, then less complex problems should show more correct answers. Thus, according to the hypothesis, the table-value condition should be the least difficult (i.e., show the highest mean number correct). The table-formula and narrative-value conditions should be of medium difficulty, and the narrative-formula condition should be the most difficult (i.e., show the lowest mean number correct). The following is a description of the mean scores and standard deviations for the Low, Medium and High complexity levels of the test. As may be seen, the data verify the prediction.

Table 4.4. Complexity Levels of the Test

    Condition                 Mean Number    Standard Deviation of
                              Correct        the Mean Number Correct
    Low Complexity
      (Table-Value)           2.60           .948
    Medium Complexity
      (Narrative-Value)       2.55           .685
      (Table-Formula)         2.39           .962
    High Complexity
      (Narrative-Formula)     2.09           1.11

The following section shows the frequency distributions, standard deviations and other results for problem 2, part 2. Table 4.5 shows the frequency distribution of the table value subtest for all groups (n=74).

Table 4.5. Frequency distribution of the table value subtest scores, all groups (n=74)

    Test Score    Relative Frequency
         0               9.5
         1               4.1
         2               2.7
         3              83.8

    Mean Score = 2.60. Standard Deviation = .948.

The left column represents the test scores. The right column represents the relative frequency. As can be seen, 83.8 percent had a perfect score.

Table 4.6 shows the frequency distribution of the narrative value subtest scores for all groups (n=74).

Table 4.6. Frequency distribution of the narrative value subtest scores of all groups (n=74).
    Test Score    Relative Frequency
         0               9.5
         1               5.4
         2              20.3
         3              64.9

    Mean Score = 2.55. Standard Deviation = .68.

The left column represents the test score. The right column shows the relative frequency. As can be seen, 64.9 percent had a perfect score on the subtest.

The next table shows the frequency distribution of the table formula subtest for all groups (n=74).

Table 4.6. Frequency distribution of the table formula subtest scores, all groups (n=74)

    Test Score    Relative Frequency
         0               1.5
         1               6.8
         2              25.7
         3              66.2

    Mean Score = 2.39. Standard Deviation = .96.

The left column represents the test score. The right column represents the relative frequency. As can be seen, 66.2 percent had a perfect score on the subtest.

Table 4.7 shows the frequency distribution of the narrative formula subtest for all groups (n=74).

Table 4.7. Frequency distribution of the narrative formula subtest scores, all groups (n=74)

    Test Score    Relative Frequency
         0               --
         1              18.9
         2              14.9
         3              54.1

    Mean Score = 2.09. Standard Deviation = 1.11.

The left column shows the test score. The right column shows the relative frequency. As can be seen, 54.1 percent had a perfect score.

The following is a detailed description of the One-way Repeated-Measures Analysis of Variance (ANOVA) test which was used to determine if there was a significant difference among the means of the four complexity conditions (Table Value, Table Formula, Narrative Value, and Narrative Formula). This ANOVA analysis follows Christensen and Stoup (1986, pp. 416-429). The ANOVA summary table for the subtest complexity conditions Table Value, Table Formula, Narrative Value, and Narrative Formula is shown in Table 4.8. The summary table presents the relationships among the various sums of squares, degrees of freedom, mean squares and the F-ratio.

Table 4.8. ANOVA Summary Table for the complexity conditions Table Value, Table Formula, Narrative Value and Narrative Formula.
    Source of Variation      SS        df     MS      F
    Subjects               159.75      73     -       -
    Conditions              12.00       3     4.00    8.88
    Residual                99.00     219      .45    -
    Total                  270.75     295     -       -

The one-way repeated measures ANOVA indicated that there was a significant difference among the means of the four subtest complexity conditions at the .01 level of confidence, F(3,219) = 8.88, p < .01. Thus we may conclude that the ability to solve the four subtests varied as a function of the complexity of the subtest conditions.

Figure 4.4 shows a graphic comparison of the score frequency distributions of the four subtests. As may be seen, the effects of complexity are apparent in the frequency distributions.

Summary

The hypothesis that complexity is related to difficulty is confirmed by the results, although the size of the effect is not large. The frequency distributions indicate that the effect may be attenuated by the low ceiling of the subtests, i.e., they do not include sufficiently difficult items. Hence the effect of complexity may be greater with a better test.

[Figure 4.4. Frequency Distribution of Each of the Subtests TV, TF, NV, NF (Dependency Table Problem Solving Subtest).]

The next section shows the results of problem 3 in detail, including the reliability and validity of the test.

Problem 3: Determine if a Reliable and Valid Test is possible

Research Question 3: Will total test scores show evidence of reliability and validity?

The expected characteristics of the tests are: (a) a normal distribution of total scores for novices, (b) reasonably high reliability for total and subtest scores, and (c) an acceptably high validity coefficient (i.e., the correlation between the level of expertise and the total scores). The analyses needed were (a) test distributions, (b) reliabilities of each subtest and the total test, and (c) evidence of validity of the test.
The reliabilities of the total scores, the dependency table problem solving subtest scores (DTPS) and the spreadsheet subtest scores (SS) are as follows:

    Test            Split-Half    Number of Items
    DTPS               .87              12
    SS                 .72              10
    Total Scores       .86              22

As may be seen, the DTPS subtest had an internal consistency reliability of .87 and the SS subtest had an internal consistency reliability of .72. The results indicate that there was a sufficiently high reliability for the test to be useful.

The purpose of Table 4.9A was to examine the total score validity data in terms of the correlation between level (experts versus novices) and total scores using all cases (n=74).

Table 4.9A. Validity Analysis for Total Scores

                          Total Scores
    Expert vs. Novice     .4389 (74)
                          P = .001

The results indicate that there was a significant correlation (.4389) between expert versus novice status and total scores.

The purpose of Table 4.9B was to examine the spreadsheet subtest validity data in terms of the correlation between level (experts versus novices) and spreadsheet subtest scores using all cases (n=74).

Table 4.9B. Validity Analysis for Spreadsheet Subtest Scores

                          Spreadsheet Subtest Scores
    Expert vs. Novice     .5641 (74)
                          P = .001

The results indicate that there was a significant positive correlation (.5641) between expert versus novice status and spreadsheet subtest scores.

The purpose of Table 4.9C was to examine the dependency table subtest validity data in terms of the correlation between level (experts versus novices) and the dependency table problem subtest scores using all cases (n=74).

Table 4.9C. Validity Analysis for Dependency Table Subtest Scores

                          Dependency Table Subtest Scores
    Expert vs. Novice     .1857 (74)
                          P > .05

The results indicate that there was no significant correlation (.1857) between expert versus novice status and dependency table subtest scores.
The failure to obtain a significant validity coefficient for the subtest is probably due to the lack of a sufficient number of difficult items in this subtest. The distribution of scores on the subtest (see Appendix D) shows a very high degree of negative skewness.

Table 4.10A shows the distributions for the expert group. As expected, the distribution is negatively skewed.

Table 4.10A. Frequency distributions of the total test scores for the expert group (n=10)

    Test Score    Frequency    Percentage
        19            2            20
        20            8            80
    Total            10           100

As may be seen, 20% of the expert group had a score of 19 while the other 80% had a score of 20. Table 4.10A showed that the expert group received higher total test scores. This could have meant that the test material was easy for the expert group.

Table 4.10B shows the frequency distributions of the total test scores for the novice group (n=64). The left column represents the test score. The preferred shape of the distribution is normal. The center column of Table 4.10B represents the score frequency. The right column represents the percentage of respondents. As may be seen, almost 50% of the respondents had a score of 15 or higher on the test. This could have meant that some of the items were too easy even for the novice group.

Table 4.10B. Frequency distributions of the total test scores for the novice group (n=64)

    Test Score    Frequency    Percentage
        20            3           4.69
        19            8          12.50
        18            5           7.81
        17            9          14.06
        16            2           3.13
        15            4           6.25
        14            3           4.69
        13            7          10.94
        12            4           6.25
        11            2           3.13
        10            3           4.69
         8            2           3.13
         7            5           7.81
         6            1           1.56
         5            1           1.56
         4            2           3.13
         3            3           4.69
    Total            64         100.02

Figure 4.5 shows the skewed distribution of scores for the novice group. The distribution shows a reasonable range of scores, but the skewness still remains. Probably, the test item difficulty could be increased to get a normal distribution of scores. In general, the test shows reasonable reliability and validity.
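The skewness claim can be checked by rebuilding the 64 novice scores from the frequency column of Table 4.10B. A minimal sketch (the moment-based skewness formula is my choice; the dissertation does not report a skewness coefficient):

```python
# Rebuild the novice total scores from Table 4.10B and check skewness.
freqs = {20: 3, 19: 8, 18: 5, 17: 9, 16: 2, 15: 4, 14: 3, 13: 7,
         12: 4, 11: 2, 10: 3, 8: 2, 7: 5, 6: 1, 5: 1, 4: 2, 3: 3}
scores = [s for s, f in freqs.items() for _ in range(f)]

n = len(scores)                                   # 64 novices
mean = sum(scores) / n
m2 = sum((s - mean) ** 2 for s in scores) / n     # variance
m3 = sum((s - mean) ** 3 for s in scores) / n     # third central moment
skewness = m3 / m2 ** 1.5

print(n, round(mean, 2), round(skewness, 2))      # skewness < 0, as stated
```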
But the distribution is too highly skewed to provide the basis for a standardized test for any purpose beyond a rough screening device. One of the subtests (Dependency Table Problem Solving) is probably the major source of skewness and has a non-significant validity coefficient. This subtest is the principal weakness in the study. Hence, further research will be necessary to make the level of difficulty of the test higher than in this version.

[Figure 4.5. Total Test Scores Distribution, Novices: Potential Norms for a Target Population. Bar chart: percent of respondents by test score.]

We shall now turn to the conclusions, implications and recommendations based upon these results.

CHAPTER V

SUMMARY, CONCLUSIONS AND RECOMMENDATIONS

Introduction

As described in Chapter 1, the first purpose of this study was to provide an operational definition for spreadsheet problem solving. This was accomplished by using paper and pencil test items developed by the researcher. The second purpose was to investigate two problems in spreadsheet problem solving: (a) evidence for content independent and content dependent components and (b) the relationship between complexity and difficulty. For this purpose, two subtests were created: the Dependency Table Problem Solving Subtest and the Spreadsheet Language Subtest.

For the creation of the dependency table problem solving subtest, items were arranged by complexity. The complexity was increased by varying the method of stating the test problem and the type of response required. The test problem statements had a narrative format or a table format. The type of response was represented by values or formulas. The Dependency Table Problem Solving Subtest included a set of three items for each of the four different test item categories mentioned above.
There was a total of 12 test items selected for this subtest. The Spreadsheet Language Subtest was created by collecting items from LOTUS 1-2-3 software manuals, from reviewing the literature, and from other spreadsheet software manuals. Items were pretested and evaluated by experts. There was a total of 10 items selected for this subtest.

The third purpose of the study was to apply measurement techniques to Spreadsheet Problem Solving. This was accomplished by creating a Spreadsheet Problem Solving Test containing the two subtests. Traditional methods of measuring reliability and validity were applied to the total test. The major research questions considered were as follows:

Question 1. Can a useful operational definition of spreadsheet problem solving be developed using inexpensive multiple choice formats?

Question 2 (Part One). Does the content dependent (dependency table problem solving) subtest measure different abilities from the content independent (spreadsheet language) subtest?

Question 2 (Part Two). Are the more complex problems (narrative presentation with alternative formulas) more difficult than the simpler problems (using table presentation and numerical value alternatives)?

Question 3. Will total test scores show evidence of reliability and validity?

Population and Sample

The population for the study included experts (college instructors who teach computer application courses) and novices (college students who were registered in the CEP 434 introductory computing courses of the College of Education, Michigan State University). The sample included sixty-four college students and ten college instructors who were considered experts by their peers. All of the participants were volunteers and were not required to participate in this experiment for extra grade points.
Permission to conduct this study with these subjects was granted by the University Committee on Research Involving Human Subjects, Room 238 Administration Building, Michigan State University.

Instrumentation and Test Development

The methods used in the study are summarized in Figure 3.1 (page 51). Methods will be discussed in the order given in that figure. These topics are treated in this section since they all concern the instrumentation and test development activities associated with the three research problems summarized previously.

Research Problem 1: Create an Operational Definition of Spreadsheet Problem Solving.

The test item format selected was a multiple choice paper and pencil version of typical spreadsheet problems. Since this approach has proved so efficient and effective in measuring diverse psychological traits, it was the obvious choice. For purposes of this study, problems or test items were formulated from two sources: (a) ten experts in the field of computer science and spreadsheeting and (b) spreadsheet software packages and books available to the college administrator, such as Lotus 1-2-3, by Lotus Development Corp. (1987). The development of the test items is discussed below. Examples of the paradigm have already been discussed and will not be repeated here.

Research Problem 2 (Part One): Determine Evidence For Content Independent And Dependent Components

As shown, two subtests were created: one for content area independent problem solving abilities and one for content dependent problem solving. By content area is meant the field of application, e.g., engineering, business, education, etc. We will mainly consider the methods used in creating the content independent subtest in this section and go on to the other subtest in the next section.

Description of the Creation of the Spreadsheet Software Language Subtest: Item Generation, Number of Items, General Context.

The spreadsheet software subtest is composed of ten multiple choice questions. The content used in these items
The content used in these items 94 was taken from spreadsheet language (Lotus 1-2-3) books and manuals. Beeearch Problem (Part 2): Determine if Complexity is Related to Difficulty As shown, complexity was varied by using two types of problem statement (table versus narrative) and two types of problem response (value versus formula). The following is a presentation of one definition for each type of dependency table problem item used in this study. All other items, test booklets of Version 1 an 2, answer sheets and instructions are in appendix A. The dependency table problem solving subtest is composed of four subtests of three questions each. The four subtests are: table value, table formula, narrative value, and narrative formula. The entire subtest was developed and used as the primary instrument for collecting data necessary for this aspect of the study. An explanation of each subtest is as follows: 1. Teple value subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring small amounts of information processing and low levels of abstraction. 2. Table formula subtest. - This subtest. of ‘three questions was designed to measure achievement on 95 spreadsheet problem solving requiring small amounts of information processing and high levels of abstraction. 3. Narrative value subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring large amounts of information processing and low levels of abstraction. 4. Narrative formula subtest - This subtest of three questions was designed to measure achievement on spreadsheet problem solving requiring large amounts of information processing and high levels of abstraction. Research Problem 3: Determine if a Reliable and Valid Test of Spreadsheet Problem Solving is Possible. A test of spreadsheet problem solving was developed by combining the two (dependency table jproblem. 
solving and spreadsheet language) subtests. The score on each subtest was obtained by assigning a score of 0 = incorrect and 1 = correct for each problem using the scoring key. The subtest score was the sum of the item scores (i.e., the number of correct responses). The total score was the sum of the two subtest scores.

Pre-testing of the Items

Pilot testing offers the opportunity to try the instrument with the kind of respondents anticipated in the main test (Moser, 1958). In the first pilot testing, sixty college students and two experts in spreadsheeting and computer science were used. The panel of experts, given their professional experience, were deemed qualified to judge the face validity of the "Spreadsheet Problem Solving Achievement Test". These reviewers were Dr. Norman Bell and Dr. Henry Kennedy, Professors of Educational Systems Development and Instructors of the course CEP-434 (Computer Applications in Education), both of Michigan State University. The sixty college students included both the computer naive and persons with some computer experience. The task of both groups was to take the test and then to examine the test critically and make comments and suggestions to improve both the content and clarity of the instrument. The results of the pilot testing were incorporated into the version given to the expert reviewers. The final version of the test contained the collective suggestions of both groups. Procedures and results were as follows:

The first pilot test indicated that the average length of time required to complete the total test was sixty minutes. In order to provide sufficient time for instructions, questions, and completion of the test, a reduction in the number of items was necessary. This reduction proved to be helpful. In a second pilot testing, a revised test of fifty questions was administered to sixty college students and ten experts.
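The 0/1 scoring rule described at the start of this section is simple to make concrete. In the sketch below the key, item numbers, and responses are all invented for illustration; the actual key and items are in Appendix A.

```python
# Hypothetical scoring of one answer sheet against a 0/1 key.
key = {1: "b", 2: "d", 3: "a", 4: "c", 5: "b"}   # item -> correct choice
dtps_items = [1, 2, 3]                           # dependency table items
ss_items = [4, 5]                                # spreadsheet language items

def score(responses, items):
    """Subtest score = number of items answered correctly (0/1 per item)."""
    return sum(1 for i in items if responses.get(i) == key[i])

responses = {1: "b", 2: "a", 3: "a", 4: "c", 5: "d"}
dtps = score(responses, dtps_items)
ss = score(responses, ss_items)
print(dtps, ss, dtps + ss)   # 2 1 3: subtest sums and their total
```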
Feedback from the respondents suggested the need to reduce its length and difficulty and to eliminate ambiguous items. From the results of this second pilot test, the investigator saw the need to decrease the number of test items from fifty to twenty-two. These twenty-two items were reviewed by a panel of ten reputed experts in the field of computer science and spreadsheeting. In several instances, items were changed to eliminate logistical problems (such as poor English construction and excessively technical vocabulary) which caused attitudinal problems among the subjects. The content of the final items was as follows: ten of the twenty-two items were related to spreadsheet language and twelve were related to dependency table problem solving. The scoring key was developed using the alternatives selected by the panel of experts.

Data Collection

The final 22-item test was then administered to sixty-four college students considered novices in the field of computer science and spreadsheeting and ten other experts in the field. During the Winter Quarter of 1987, the test was administered to the students in a classroom of the College of Education. The test for experts was given personally at their offices. All tests were answered within the allowed one-hour time limit. Two versions of the test were used: version #1 or version #2. The test versions differed only in the random order of questions presented. Versions were assigned to subjects in a random order. The sixty-four students and ten expert subjects received a packet (see Appendix A) which contained:

1. A cover letter from the researcher which explained the study and contained an assurance of confidentiality;
2. A copy of the "Spreadsheet Problem Solving Achievement Test";
3. A written permission release form required by the office of Human Subjects, Michigan State University;
4. An answer sheet, instructions, and test booklet; and
5. A number two pencil.
The instructions were presented orally and in written form before the students began the test. There was time for questions before and during the test. The time limit was sixty minutes. The tests were given at random with alternating versions to sequential subjects. The return rate was 100 percent from the students. Data analyses were performed for each of the three research questions.

Method of Analysis

The Statistical Package for the Social Sciences (SPSS; Nie et al., 1975) was used for all statistical analysis.

Research Problem 1: Creating an Operational Definition of Spreadsheet Problem Solving.

Question 1. Can a useful operational definition of spreadsheet problem solving be developed using low cost multiple choice formats?

Hypothesis 1: Scores on the content independent subtest are not highly correlated (the correlation is less than the reliability of either of the subtests) with content dependent subtest scores.

There was no specific statistical analysis used to evaluate the operational definition beyond the pre-testing and elimination of items showing artifactual irregularities (e.g., no clear correct answer, confusing problem statement, multiple correct answers). Evidence of the value of this operational definition was indirectly provided by the analysis of test score distribution, test score reliability and test score validity.

With regard to results, there are two sources of data bearing on the operational definition: (a) the evidence of validity in terms of scores by experts and novices and (b) the similarity of the distribution and statistics for the test scores to those of proven operational definitions, e.g., a
Thus, while the distribution shows no apparent irregularities (such as extreme scores), it does show that many of the items are too easy for the group tested. As will be seen in a later section, part of the skewenwess is due to the presence of spreadsheet experts. But the distribution remains skewed even when the experts are removed. In short, the spreadsheet problem operational definition fails to yield a normal distribution. However, increasing the difficulty of items might well yield a normal distribution. Conclusion There is an observable difference between the expert subject scores and the novice subject scores on the spreadsheet language subtest. Problem 2: (Part One): Determine Evidence For Content Independent And Dependent Components. Question 2 (Part One): Does the content dependent (dependency table problem solving) subtest measure different abilities from the content independent (spreadsheet language) subtest? 101 Hypothesis 2: As complexity of the dependency table problem solving subtest increases, difficulty increases. The means, frequency distributions and standard Ideviations were calculated for both subtests and compared. To find out if the two subtests measured somewhat different abilities a Pearson product moment correlation was calcu- lated between the spreadsheet problem solving subtest and the spreadsheet language subtest. The evidence for content dependent or content independent components was obtained by calculating (a) means, frequency distributions and standard deviations for each of the subtests. (e.g. Dependency Table Problem Solving Subtest and the Spreadsheet Language Subtest) and (b) by calculating a Pearson Product Moment Correlation between the two subtests. If both subtests are measuring the same abilities with no special differences due to content independent ‘versus content dependent components, then no differences would be expected between the means, standard deviations and distributions of subtests scores. 
Further, the correlations between the two subtests would be nearly as high as the reliabilities of either of the subtests. The results show that 60% of the respondents received a score of 10 or more on the Dependency Table Problem Solving subtest. i _— 102 In addition, 46% received scores of 4 or less and 54% received scores of 5 or higher on the Spreadsheet Language subtest. To test the significance of the difference between the mean scores, a T-test for correlated non-independent group means was calculated (Christensen & Stoup 1986, p. 293). Test results show that the differences were significant at the .05 level of confidence (P=.03) (Christensen & Stoup, 1986, p. 293). As may be seen from the graphs in Chapter 4, the dependency table problem solving subtest frequency distribution is negatively skewed, while the the frequency distribution for the spredsheet subtest is rectangular. In short, (a) neither distribution is normal and (b) the distributions are different from each other. The correlation between spreadsheet subtest and dependency table problem solving subtest for all groups (n=74) was .50. The Reliability for the subtests were as follows: Dependency Table Problem Solving Subtest —--.87 Spreadsheet Language Subtests -------------- .72 Thus, the correlation between the subtests was lower than both the reliabilities. Fisher's - 2 Transformation statistical analysis was used to test the significance of the differences of the intercorrelation from the 103 reliabilities. The results for the difference between the reliability of the Dependency Table Problem Solving Subtest (.87) and the intercorrelation for the subtests (.50) show that the difference was significant at the .01 level of confidence (p < .01). The results for the difference between the reliability of the Spreadsheet Language Subtest (.72) and the intercorrelation show that the difference was significant at the .01 level of confidence (p < .01 ). (Edwards 1956, t— table p. 501). 
These results show that the intercorrelation of the subtests is significantly lower than the reliabilities of both subtests. Hence the hypothesis of content independent and content dependent components of spreadsheet problem solving seems confirmed. The two subtests show different means, different distributions (one skewed, one rectangular), and an intercorrelation significantly lower than the reliabilities. The definitional question remains, of course, of whether the theoretical term "content independent" is properly applied to the spreadsheet language component.

The next section presents problem 2, part two. This part concerns whether or not the basic information processing relationship of complexity and difficulty holds for spreadsheet problem solving.

Problem 2 (Part 2): Determine if the basic Information Processing relationship of complexity and difficulty holds for Spreadsheet Problem Solving.

Question 2 (Part Two): Are the more complex problems (narrative presentation with alternative formulas) more difficult than simpler problems (using table presentation and numerical value alternatives)?

Hypothesis 2 (Part 2): As complexity of the dependency table problem solving subtest increases, difficulty increases.

If complexity is related to difficulty, then less complex problems should show more correct answers. Thus, according to the hypothesis, the table-value condition should be the least difficult (i.e., show the highest mean number correct). The table-formula and narrative-value conditions should be of medium difficulty, and the narrative-formula condition should be the most difficult (i.e., show the lowest mean number correct). The following is a description of the mean scores and standard deviations for the Low, Medium and High complexity levels of the test. As may be seen, the data verify the prediction.
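The F-test of this prediction, reported in Table 4.8 of Chapter 4, can be reconstructed from the table's SS and df columns alone. A quick sketch (variable names are mine); the hand-reported 8.88 differs slightly because the mean squares were rounded before dividing:

```python
# Reconstruct the F-ratio from the SS and df columns of Table 4.8.
ss_conditions, df_conditions = 12.0, 3
ss_residual, df_residual = 99.0, 219

ms_conditions = ss_conditions / df_conditions   # 4.00
ms_residual = ss_residual / df_residual         # ~0.452 (shown as .45)
f_ratio = ms_conditions / ms_residual

print(round(f_ratio, 2))  # ~8.85, close to the reported F(3,219) = 8.88
```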
The One-way Repeated-Measures Analysis of Variance (ANOVA) test was used to determine if there was a significant difference among the means of the four complexity conditions (Table Value, Table Formula, Narrative Value, and Narrative Formula). This ANOVA analysis follows Christensen and Stoup (1986, pp. 416-429). The one-way repeated measures ANOVA indicated that there was a significant difference among the means of the four subtest complexity conditions at the .01 level of confidence, F(3,219) = 8.88, p < .01.

Table 5.1. Complexity Levels of the Test

    Condition                 Mean Number    Standard Deviation of
                              Correct        the Mean Number Correct
    Low Complexity
      (Table-Value)           2.60           .948
    Medium Complexity
      (Narrative-Value)       2.55           .685
      (Table-Formula)         2.39           .962
    High Complexity
      (Narrative-Formula)     2.09           1.11

Summary

The hypothesis that complexity is related to difficulty is confirmed by the results, although the size of the effect is not large. The frequency distributions indicate that the effect may be attenuated by the low ceiling of the subtests, i.e., they do not include sufficiently difficult items. Hence the effect of complexity may be greater with a better test.

Conclusions

1. More complex problems (narrative presentation with alternative formulas) are more difficult than simpler problems (using table presentation and numerical value alternatives).

2. The ability to solve the four subtests varied as a function of the complexity of the subtest conditions.

The next section shows the results of problem 3 in detail, including the reliability and validity of the test.

Problem 3: Determine if a Reliable and Valid Test is possible.

Question 3: Will total test scores show evidence of reliability and validity?

Hypothesis 3: Spreadsheet problem solving test scores will show evidence of reliability and validity.

Reliability was calculated for spreadsheet problem solving total test scores. The Corrected Split-Half Technique (Kuder & Richardson, 1976) was used.
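The corrected split-half computation can be sketched as follows. The 0/1 item-response matrix here is invented for illustration (the study's raw data are not reproduced); the procedure is the standard one: correlate odd- and even-item half scores, then apply the Spearman-Brown correction.

```python
def pearson(x, y):
    """Pearson product moment correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(items):
    """Corrected split-half: correlate odd- and even-item half scores,
    then apply the Spearman-Brown correction 2r / (1 + r)."""
    odd = [sum(row[0::2]) for row in items]
    even = [sum(row[1::2]) for row in items]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

# Hypothetical responses: rows are subjects, columns are 0/1 item scores.
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
]
print(round(split_half_reliability(responses), 2))  # 0.79 for this matrix
```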
Validity was measured by correlating the amount of experience (expert vs. novice groups) with scores on the spreadsheet problem-solving total test. All findings are reported in tabular form. Data from the test answer sheets were transferred to coding sheets and keyed into a computer. Raw data were converted into numeric codes for each response to each item. Other statistical analyses were done which were not directly related to the three major research questions. These analyses were used as an exploration of possible variables for further research.

The expected characteristics of the tests were: (a) a normal distribution of total scores for novices, (b) reasonably high reliability for total and subtest scores, and (c) an acceptably high validity coefficient (i.e., the correlation between the level of expertise and the total scores). The analyses needed were: (a) test distributions, (b) reliabilities of each subtest and the total test, and (c) evidence of validity of the test.

The results indicate that the DTPS subtest had an internal-consistency reliability of .87 and the SS subtest had an internal-consistency reliability of .72. The results also showed that the reliability of the test was sufficiently high for the test to be useful. There was a significant correlation (.4389) between the expert-versus-novice factor and total scores, and a significant positive correlation (.5641) between the expert-versus-novice factor and spreadsheet subtest scores. In addition, there was no significant correlation (.1857) between the expert-versus-novice factor and dependency table subtest scores. The failure to obtain a significant validity coefficient for this subtest is probably due to the lack of a sufficient number of difficult items in it. The distribution of scores on the subtest (see Appendix D) shows a very high degree of negative skewness.
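The reliability and validity coefficients reported above can be sketched as follows. This is a hedged illustration on simulated data (the item responses, group sizes, and ability values are invented): reliability is computed as an odd-even split with the Spearman-Brown correction, and the validity coefficient as a point-biserial correlation between group membership (expert = 1, novice = 0) and scores.

```python
import numpy as np

def corrected_split_half(items):
    """Odd-even split-half reliability with the Spearman-Brown correction.
    items: (n_subjects, n_items) matrix of 0/1 item scores."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)

def validity_coefficient(is_expert, totals):
    """Point-biserial correlation of expert/novice status with scores."""
    return np.corrcoef(is_expert.astype(float), totals)[0, 1]

# Simulated responses: 60 subjects x 20 items; each subject has a
# per-item pass probability ("ability"), so halves correlate
rng = np.random.default_rng(2)
ability = rng.uniform(0.2, 0.9, size=60)
items = (rng.random((60, 20)) < ability[:, None]).astype(int)
is_expert = (ability > 0.6).astype(int)  # invented grouping for illustration

rel = corrected_split_half(items)
val = validity_coefficient(is_expert, items.sum(axis=1))
print(round(rel, 2), round(val, 2))
```

With real data, `items` would hold the keyed responses from the answer sheets and `is_expert` the experience grouping described above.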
The distributions for the expert group were as expected: the distribution is negatively skewed. The frequency distribution indicated that 20% of the expert group had a score of 19 while the other 80% had a score of 20. Table I-A showed that the expert group received higher total test scores, which could mean that the test material was easy for the expert group. Test results show that almost 50% of the respondents had a score of 15 or higher on the test. This could mean that some of the items were too easy even for the novice group. The test item difficulty could probably be increased to obtain a normal distribution of scores.

In general, the test shows reasonable reliability and validity, but the distribution is too highly skewed to provide the basis for a standardized test for any purpose beyond a rough screening device. One of the subtests (Dependency Table Problem Solving) is probably the major source of skewness and has a non-significant validity coefficient. This subtest is the principal weakness in the study. Hence, further research will be necessary to make the level of difficulty of the test higher than in this version.

Conclusion

There is a possibility of creating a reliable and valid test of spreadsheet problem solving.

Summary

This chapter presented a brief description of this study, its design, data analysis method, and the conclusions and recommendations for further research. The assumption underlying this study was that spreadsheet problem solving could be defined in terms of a spreadsheet problem-solving test. To test this assumption, four hypotheses were developed. Frequency distributions, mean scores, standard deviations, split-half analysis and one-way repeated-measures analysis of variance (ANOVA) were used to test the hypotheses. The majority of the results were significant. Further research is needed to determine if a stronger test of dependency table problem solving could be developed.

Recommendations for Further Research

1.
A study could be constructed to identify other possible variables that influence spreadsheet problem-solving ability. Examples could be: the number of questions, the type of spreadsheet software, and the type of microcomputer used.

2. A study could be constructed to describe an operational definition of other problem-solving content areas used in higher education.

3. An instrument could be developed to measure spreadsheet problem solving that includes more difficult items in all four complexity conditions used in this study.

4. A similar study should be conducted using a larger sample.

5. A study could be constructed to measure problem-solving ability with multiple types of software and hardware used in higher education.

Observations

It was observed that problem-solving ability seems to be a very important part of spreadsheet problem solving. Even though some of the subjects knew how to use a microcomputer spreadsheet package, they still needed some specific content-dependent problem-solving ability in order to solve a specific problem. In addition, this research shows that a test for spreadsheet problem solving does not require a microcomputer.

APPENDIX A

DEPENDENCY TABLE PROBLEM-SOLVING TEST

CONSENT FORM

Purpose: The purpose of this study is to develop a standardized test to measure achievement related to dependency table problem solving (tabular problems). You will need only 30 minutes for the research and 5 minutes to complete the consent statement.

1. I am willing to participate in this research effort, which has been explained to me and summarized in the paragraph above.
2. I understand that I do not have to participate in this research if I do not want to.
3. I understand that I am free to discontinue participation at any time within the 30 minutes.
4. I understand that my anonymity will be maintained by means of randomly assigned subject numbers, which appear instead of names.
5. I understand that my test results will be made available to me if I wish.
*Data obtained from this test will be kept strictly confidential.

Name ____________________  Date ____________

TESTING DEPENDENCY PROBLEM SOLVING FOR ADMINISTRATORS IN HIGHER EDUCATION, BUSINESS AND INDUSTRY

SPREADSHEET SUBTEST:

1. How do you move from cell to cell?
A. Pressing the arrow keys.
B. Pressing the escape keys.
C. Pressing the control key.
D. Pressing the space bar.

2. What are the different types of cell entries?
A. Cells, Rows and Numbers
B. Numbers, Labels and Formulas
C. Formulas Only
D. Cells Only

3. Which of these formulas is correct?
A. C2 + 38 + A2
B. (C3 + D3) * A2
C. A2 - A2 + 38
D. A2 + B2 @B4

4. To add the first 3 cells in a row of a spreadsheet you type:
A. B2 + J5 + C1
B. C3 + C5 - D7
C. A1 + B1 + C1
D. B1 (@B3)

5. To add a column of numbers in a spreadsheet you type:
A. @SUM (A1,A4)
B. @SUM (C4)
C. @AVG (A1,D4)
D. NONE OF THE ABOVE

6. Which three lines make up the spreadsheet's control screen?
A. Entry Line, Prompt Line, Status Line
B. Status Line, Command Line, Title Line
C. Prompt Line, Macro Line, Title Line
D. Shift Line, Micro Line, Prompt Line

7. To repeat the above value in a set of cells of a spreadsheet, which command is used?
A. Search
B. Replicate
C. Move
D. Insert

8. What symbol do you type to get a menu from LOTUS?
A. The / key
B. The esc key
C. The arrow key
D. The \ key

9. Will LOTUS 1-2-3 provide graphic output?
A. YES
B. NO
C. Depends on the version of LOTUS
D. Only if you hit the \ key

10. Will LOTUS permit you to combine two LOTUS spreadsheets in a file?
A. No
B. Yes
C. Only if you press the ESC key
D. Depends on the version of LOTUS

NARRATIVE FORMULA SUBTEST

11. A college professor has a computer table report with some students' test scores. The CELL A1 contains the name Tracy Willis. The CELL B1 contains 82. The CELL C1 contains 85. The CELL D1 contains 95. State the formula that correctly computes the average test score for Tracy Willis.
A. (C1 + B1 + D1 - A1)/4
B. (C1 + B1 + A1 + D1)/3
C. (B1 + A1 + D1 + C1)/3
D. (B1 + C1 + D1)/3

12. A department chairperson has received a computer spreadsheet to make salary adjustments. The CELL A1 contains the label FACULTY. The CELL B1 contains the label SALARY. The CELL C1 contains the label PERCENT. The CELL D1 contains the label TOTAL. The CELL A2 contains the name JONES. The CELL B2 contains the quantity 21,875. The CELL C2 contains the quantity .10. The CELL A3 contains the name STEVENS. The CELL B3 contains the quantity 20,752. The CELL C3 contains .05.
A. (C1 + B1 + C1)
B. (A3 * B3 + C3)
C. (B3 * C3) + B3
D. (B3 * A3) + D3

13. The administrator has been given a computer table to make projections of the operating expenses for the school. The CELL A3 contains the label "GAS". The CELL B3 has 150.00. The CELL C3 has 160.82. The CELL D3 has 190.83. The CELL A4 has the label "OIL". The CELL B4 has 14.27. The CELL C4 has 18.26. The CELL D4 has 22.87. Select the correct formula to calculate the average cost of OIL.
A. (B4 + C4 + D4)/3
B. (A4 + B4 + C4)/3
C. (B3 + B4 + D3)/3
D. (B3 + C3 + D1)/3

NARRATIVE VALUE SUBTEST:

14. Edgar Thompson worked for 2 hours on Monday, 7 hours on Thursday and 20 hours on Friday. Mrs. Peters worked for 10 hours on Tuesday and 20 hours on Thursday. They both have been paid for 10 hours. How would the manager compute the correct number of hours owed to Edgar Thompson?
A. 7 + 20 + 20 + 10
B. 7 + 2 + 20 + 10
C. 2 + 7 + 20 - 10
D. 2 - 7 - 20 + 10

15. Select the formula that calculates Mr. William's average car maintenance expenses. He spent $15.00 in the month of January, $17.29 in the month of February, $18.95 in the month of March and $12.73 in the month of April.
A. (12.73 - 15.00 + 12.73 + 18.95)/4
B. (17.29 - 15.00 + 12.73 + 18.95)/4
C. (15.00 + 17.29 + 18.95 + 12.73)/4
D. (15.00 + 17.29 + 18.95 + 12.73)/3

16. The dean of a college has monthly supplies and service expenses of $12,571 for the first month, $17,521 for the second month and $18,253 for the third month. The fourth month is $20,000.
For the next 8 months, it will be $20,000 plus 10 percent. How will the dean compute the budget needed for the next 8 months?
A. (($20,000 * .10) + ($20,000)) * 8
B. ($20,000 + .10 * $20,000) + 8
C. ($20,000 + .10 * $10,000) * .10
D. ($12,571 + $17,521 * $18,253 + $20,000)

TABLE FORMULA SUBTEST

NOTE: THE QUESTION MARK SHOWS THE POSITION OF THE CURSOR.

17. An administrator assumes that all employees will work 40 hours per week. The actual hours worked are shown in the table. How could the administrator calculate the DIFFERENCE from his projected value for JUDES?

        A         B          C             D
1.  STUDENT   PROJECTED  HOURS WORKED  DIFFERENCE
2.  JUDES     40         30            ?
3.  SMITH     40         35            +5
4.  BATES     40         40            0

A. B2 + C2
B. @SUM (B2,C2)
C. @AVG (B2,C2)
D. B2 - C2

18. Given the following payroll, calculate the total pay for Levine including the percentage increase.

        A        B        C         D
1.  FACULTY  SALARY   INCREASE  TOTAL PAY
2.  LEVINE   21,875   10%       ?
3.  STEVENS  23,971   5%        -
4.  KERNEL   19,782   18%       -

A. (B2 * C2) + B2
B. (B3 * C2) - B2
C. (B2 - C2) * B3
D. (B4 + B3) - B2

19. Consider the following annual budget for the period given. Select the correct formula to calculate the average for maintenance spent in the month of FEBRUARY.

        A            B        C         D
1.  MAINTENANCE  JANUARY  FEBRUARY  MARCH
2.  TIRES        15.77    17.29     18.57
3.  OIL          16.82    19.38     17.45
4.  BRAKES       13.80    12.51     19.87

A. @AVG (B2, B3, B4)
B. (C2 + C3 + C4)/4
C. @AVG (C2, C3, C4)
D. @SUM (C2, C3, C4)

TABLE VALUE SUBTEST

NOTE: THE QUESTION MARK SHOWS THE POSITION OF THE CURSOR.

20. Given the following grade record table, what is the average exam score for JIM RICE?

    NAME  EXAM 1  EXAM 2  EXAM 3  EXAM 4  AVERAGE
1.  JIM   97      73      81      91      ?
2.  BOB   83      87      89      82      85.25
3.  MARY  100     96      78      89      90.75

A. (97 + 73 + 81 + 91)/4
B. (97 + 83 + 100 + 73)/4
C. (97 + 87 + 81 + 91)/4
D. (91 + 73 + 81)/3

21. Mrs. Jones has received a pay increase of 3%. What is her new hourly rate of pay?

    EMPLOYEES  HOURLY RATE  PERCENT  NEW HOURLY RATE
1.  MRS. RAY   15.25        7%       16.31
2.  MRS.
CAYSE  17.14  5%  17.99
3.  MRS. JONES 19.89  3%  ?

A. (17.14 + .05) + 17.14
B. (19.89 * .03) + 19.89
C. (15.25 * .07) + 15.25
D. (16.31 * .03) + 16.31

APPENDIX B

DATA TABULATED FROM THE EXPERIMENT

THESIS DATA (File: DATATESIS)

[Raw data table: one row per subject (1-74) with columns Subject, Version, DTPS, TF, NV, NF, SS, TV and Level, followed by total test scores; the individual values are not recoverable from the scanned copy.]

BIBLIOGRAPHY

Adams, D.R., et al. Computer Information Systems: An Introduction. Palo Alto, CA: South-Western Publishing Co., 1983.

Bailey, Robert L. Information Systems and Technical Decisions: A Guide for Nontechnical Administrators. Washington, D.C.: American Association for Higher Education, 1982.

Belluci, N., Liberatore, L., Sobel, G. and Gannon, T. Spreadsheet Applications Using Visicalc and Lotus 123 Programs. Cortland Madison BOCES, N.Y., 1984.

Bork, Alfred. Personal Computers for Education. New York: Harper and Row Publishers, 1985.

Brown, K.C. The Administrator's Use of Microcomputer Systems. Washington, D.C.: American Association of University Administrators and ERIC Clearinghouse on Higher Education, 1983, 8 pp. (ED 234 729).

Christensen, Larry B. and Charles M. Stoup. Introduction to Statistics for the Social and Behavioral Sciences. Monterey, CA: Brooks/Cole Publishing Company, 1986, pp. 416-439.

Comerford, J.P. and M. Carlson. "A Methodology for Training Administrators to Use Microcomputers in Education Administration." Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL, 1985.

Edwards, Allen.
Statistical Methods for the Behavioral Sciences. New York: Rinehart & Company, Inc., 1956, pp. 304-307, Table 7, pp. 507-508.

Degautels, Edward J. Lotus 123 for the IBM Personal Computer and XT. Dubuque, Iowa: W.C. Brown Publishers, 1984, pp. 1-250.

Dellow, Donald A. and Laurence H. Poole. "Microcomputer Applications in Administration and Instruction." Jossey-Bass Inc., Publishers, no. 47, September 1984.

Gabriel, R.M. "Computer Literacy Assessment and Validation: Empirical Relationships at Both Student and School Levels." Journal of Educational Computing Research, Vol. 1(4), 1985.

Gagne, R.M. The Conditions of Learning, Second Edition. New York: Holt, Rinehart and Winston, 1970.

Gilbert, Steven W. and Kenneth C. Green. "New Computing in Higher Education." CHANGE, May/June 1986.

Gillespie, Robert G. "Evaluating Campus Computing Services: Taming Technology." New Directions for Institutional Research, Jon F. Weigin & Larry Braskanys, editors, no. 56, Winter 1987.

Glass, G.V. and J.C. Stanley. Statistical Methods in Education and Psychology. New Jersey: Prentice Hall, 1970.

Jones, Dennis P. Data and Information for Executive Decisions in Higher Education. Colorado: NCHEMS, 1982.

Lefrancois, Guy R. Psychology for Teaching, Fourth Edition. Belmont, CA: Wadsworth Publishing Company, 1982.

Levinson, E.M. "A Review of the Computer Aptitude, Literacy, and Interest Profile (CALIP)." Journal of Counseling and Development, Vol. 64, 1986.

Mandel, Steven L. Computers and Data Processing Today with BASIC, Second Edition. St. Paul, MN: West Publishing Company, 1986.

Mathews, K. "Personal Computers in the Business Office." Business Officer, 1985, pp. 35-38.

Moskovis, L. and others. "The Information/Decision Support Center." CAUSE National Conference, San Francisco, CA, Dec. 11-14, 1983.

Mosmann, C.
Academic Computers in Service: Effective Uses for Higher Education. San Francisco: Jossey-Bass, 1973.

Niemiec, R.P. and H.J. Walberg. "Computers and Achievement in the Elementary Schools." Journal of Educational Computing Research, Vol. 1(4), 1985.

Robbins, Martin D., S. William Dorn and John E. Skelton. Who Runs the Computer? Strategies for the Management of Computers in Higher Education. Boulder, Colorado: Westview Press, 1975.

Ross, Steven C. Understanding and Using Lotus 123. St. Paul, MN: West Publishing Company, 1986, pp. 3-196.

Ruby, R. "Increasing Use of Microcomputers in Business Education." Paper presented at the American Vocational Association Convention, Anaheim, CA, 1983.

Sarapin, M.I. Computer Literacy for Teachers. Columbus, OH: American Industrial Arts Association, 1984.

Schneider, G.T. "Making Work Easy: Administrative Applications of Microcomputers." Paper presented at the Annual Meeting of the National Council of States on Inservice Education, Florida, 1984.

Sibalwa, D.M. "A Descriptive Study to Determine the Effect That Training, Experience and Availability Have on Use of Instructional Media in the Classroom by Preservice Teachers." Ph.D. dissertation, Michigan State University, Michigan, 1983. Dissertation Abstracts International.

Simon, Herbert and Allen Newell. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall, Inc., 1972, pp. 137-140.

Spiro, L. and F.L. Campbell. "Utilizing Technology to Examine the Impacts of Academic Program Plans on Faculty Staffing Levels." Paper presented at the Annual Forum of the Association for Institutional Research, TX, 1984.

Sullivan, David R., T.G. Lewis and C.R. Cook. Computing Today: Microcomputer Concepts and Applications. Boston, MA: Houghton Mifflin Company, 1985.

Staman, E. Michael, ed. "Examining New Trends in Administrative Computing." New Directions for Institutional Research, no. 22, Vol. VI. San Francisco: Jossey-Bass, 1987.

Staman, E. Michael, ed.
"Managing Information in Higher Education." New Directions for Institutional Research, no. 55, Vol. XIV. San Francisco: Jossey-Bass, 1987.

Vernot, D. "Spreadsheets: How to Calculate Almost Anything." Instructor, March 1986.

Vinsonhaler, John. People and Computers. East Lansing, MI: Michigan State University, 1987.

Welzenbach, Lanora F., ed. College and University Business Administration. Washington, D.C.: NACUBO, 1982.

Williams, Andrew T. Lotus 123 from A to Z. New York: Wiley Press Corporation, 1985, pp. 2-318.

Wozny, L. "The Spreadsheet in an Educational Setting." Microcomputing Working Paper Series. Philadelphia, PA: Drexel University, 1984.

Young, R. and S. Steele. "Using an Electronic Spreadsheet to Cut Costs in Evaluation." Paper presented at the annual meeting of the Evaluation Research Society, San Francisco, CA, October 10-13, 1984.