This is to certify that the dissertation entitled

DYNAMIC USER EXPERIENCE OF INFORMATION TECHNOLOGY INNOVATIONS: A SELF-REGULATORY PERSPECTIVE

presented by

Chi Cong Dang

has been accepted towards fulfillment of the requirements for the doctoral degree in Psychology.

Major Professor's Signature

11/18/2009
Date

DYNAMIC USER EXPERIENCE OF INFORMATION TECHNOLOGY INNOVATIONS: A SELF-REGULATORY PERSPECTIVE

By

Chi Cong Dang

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Psychology

2009

ABSTRACT

DYNAMIC USER EXPERIENCE OF INFORMATION TECHNOLOGY INNOVATIONS: A SELF-REGULATORY PERSPECTIVE

By

Chi Cong Dang

This dissertation presents a model of the within-person, dynamic experience of information technology (IT) innovations built on social cognitive theory. Data from an eight-week study that sampled 58 users' daily experience of Palm Pilot software as an IT innovation were analyzed with HLM and regression techniques. Results showed that innovation self-efficacy and outcome expectation affected, and were affected by, IT usage dynamically within persons. The effects of innovation self-efficacy and outcome expectation on usage operated through behavioral intention and implementation intention, whereas the influence of usage frequency on innovation self-efficacy and outcome expectation was partially mediated through efficacy disconfirmation and outcome disconfirmation, respectively. Significant individual differences were observed in the within-person processes. Research and practical implications are discussed.

ACKNOWLEDGEMENTS

This dissertation was completed with the caring support and assistance of many individuals. I am very thankful for their contributions and encouragement over the years.

My deep gratitude goes to Dr. Daniel Ilgen, my main advisor on this dissertation and throughout my graduate school years, for his guidance, assistance, and encouragement. I greatly appreciate his caring advice, hard work, and financial support, without which this research would not have been possible. Dan has been a great model on both professional and personal levels. His teaching and caring will be with me for many years to come, and I sincerely hope to use many learned lessons for greater purposes.

I would like to thank Drs. Richard DeShon, Kevin Ford, Remus Ilies, and Linda Jackson for their valuable comments. Their feedback has been very valuable and helpful in placing this work on firmer theoretical and analytical grounds.
Many thanks go to Jay Park, Gordon Schmidt, and the numerous undergraduate students who assisted in the data collection and coding process, as well as Marcy Schafer and Mark Yampanis, who helped format and edit this dissertation.

I would like to express my gratitude to Michigan State University for its financial support and for the wonderful Main Library, where I spent countless hours and which I considered a second home. I also want to thank all the faculty in the Organizational Psychology Program at Michigan State for the support, lessons, and gentleness they shared during my graduate school years.

On a more personal level, my heartfelt gratitude goes to my parents and siblings for their encouragement and steadfast support. They have given me roots and encouraged me to fly far and high with the knowledge that they are always there in times of need. I am especially grateful to my wife, Maria Daane, for her unwavering support and unconditional love. She encouraged me to follow my calling and embarked on this journey with me at great personal sacrifice. Her love and support have been strong and steadfast through all the ups and downs, as we went through graduate school together, got married, and had our two young children, Julia and Jonathan. In this long, exciting journey, I find it very comforting and reassuring to have such a wonderful companion. Now, as this journey is drawing to a close, I sincerely hope that it will open up other journeys that we will enjoy together with our two children.

TABLE OF CONTENTS

LIST OF TABLES .................................................................. vi
LIST OF FIGURES ................................................................. vii
INTRODUCTION .................................................................... 1
    Review of Extant IT Innovation Adoption Literature ...................... 4
        Theory of Reasoned Action and Theory of Planned Behavior ............ 4
        Technology Adoption Model ........................................... 11
        Social Cognitive Theory ............................................. 19
        Strengths of Extant Research ........................................ 23
        Limitations of Extant Research ...................................... 24
    Self-Regulated User Experience of IT Innovations: A Dynamic Perspective . 29
        Behavioral Intention ................................................ 33
        Implementation Intention ............................................ 34
        Self-Efficacy and Efficacy Disconfirmation .......................... 38
        Outcome Expectation and Outcome Disconfirmation ..................... 47
METHODS ......................................................................... 53
    Overview of the Research ................................................ 53
    Sample ................................................................... 53
    IT Innovation ............................................................ 54
    Training ................................................................. 55
    Procedure ................................................................ 58
    Measures ................................................................. 60
        Time ................................................................. 60
        Outcome Expectation .................................................. 60
        Innovation Self-Efficacy ............................................ 60
        Behavioral Intention ................................................ 61
        Implementation Intention ............................................ 61
        Outcome Disconfirmation ............................................. 61
        Efficacy Disconfirmation ............................................ 61
        IT Usage ............................................................ 62
DATA ANALYSIS ................................................................... 63
RESULTS ......................................................................... 65
DISCUSSION ...................................................................... 90
    Summary of Findings ..................................................... 90
    Research Contributions and Implications ................................. 93
    Practical Implications .................................................. 98
    Boundary Conditions and Future Research ................................ 101
    Conclusions ............................................................ 103
APPENDICES .................................................................... 105
BIBLIOGRAPHY .................................................................. 155

LIST OF TABLES

Table 1. Descriptive Statistics of Studied Variables ........................... 66

Table 2. HLM Analyses of Effects of Behavioral Intention and Implementation Intention on Usage Frequency ................................................... 70

Table 3. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Behavioral Intention ............................................... 73

Table 4. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Implementation Intention ........................................... 75

Table 5. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Usage Frequency .................................................... 77

Table 6. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Usage Frequency over Time ......................................... 79

Table 7. HLM Analyses of Innovation Self-Efficacy Growth over Time ............. 81

Table 8. HLM Analyses of Effects of Usage Frequency and Efficacy Disconfirmation on Innovation Self-Efficacy ................................................... 83

Table 9. HLM Analyses of Effects of Usage Frequency on Efficacy Disconfirmation . 85

Table 10. HLM Analyses of Effects of Usage Frequency and Outcome Disconfirmation on Outcome Expectation ........................................................ 87

Table 11. HLM Analyses of Effects of Usage Frequency on Outcome Disconfirmation . 89

LIST OF FIGURES

Figure 1. Theory of Planned Behavior ............................................ 7

Figure 2. Technology Adoption Model ............................................ 12

Figure 3. Traditional Model of IT Innovation Adoption Research ................. 25

Figure 4. Testable Self-Regulatory Model of Dynamic User Experience of IT Innovations .................................................................... 31

Figure 5. Hypothesized Self-Regulatory Model of Dynamic User Experience of IT Innovations .................................................................... 67
Figure 6. Revised Self-Regulatory Model of Dynamic User Experience of IT Innovations .................................................................... 91

INTRODUCTION

Organizations have increasingly invested in information technology (IT) over the past decade. American businesses are estimated to spend over 50% of their capital budgets on IT investments (Rockart, Earl, & Ross, 1996). Organizations may invest in IT to improve workers' productivity (Dewett & Jones, 2001), enhance the quality of work life (Curley & Pyburn, 1982), improve decision making (Dewett & Jones), or deliver superior products and customer service (Ives & Learmonth, 1984).

Increasing investment in and implementation of IT innovations, however, often does not yield the degree of benefit for which it is intended (Stratopoulos & Dehning, 2000). This failure has become known as the "productivity paradox" (Landauer, 1995; Sichel, 1997; Stratopoulos & Dehning, 2000), a phenomenon in which increased implementation of IT innovations in the workplace is not associated with increased organizational performance. For example, it is estimated that $300 billion was invested in implementing enterprise resource planning (ERP) systems worldwide in the 1990s (James & Wood, 2000). However, only about 50% of ERP implementations met organizations' expectations (Adam & O'Doherty, 2003).

A number of scholars (e.g., de Vries, Midden, & Bouwhuis, 2003; Muir, 1994; Will, 1991) have suggested that this disappointing performance is not due to mechanical malfunctions or poor design. Often, it is users' underutilization of innovations that is responsible for the lack of improvement (Johansen & Swigart, 1996; Moore, 1991; Norman, 1993; Weiner, 1993). Most users employ only a small number of the many features available in IT innovations and use them infrequently (Jasperson, Carter, & Zmud, 2005; Ross & Weill, 2002). Because the benefits of IT innovations are contingent on the frequency and duration of their usage, it is important to understand what influences the frequency and duration with which individuals use IT innovations.

Given the importance of IT in organizations, a significant amount of research effort has been devoted to understanding the psychological processes that explain IT innovation usage (Legris, Ingham, & Collerette, 2003). Most of the existing studies have relied on the Theory of Planned Behavior (TPB; Ajzen, 1991; Ajzen & Fishbein, 1977), the Technology Acceptance Model (TAM; Davis, 1989), and Social Cognitive Theory (SCT; Bandura, 1997), with their belief and expectation constructs, to understand users' behaviors. Existing research has made significant contributions in explaining intentions to use IT innovations. For example, Venkatesh, Morris, Davis, and Davis (2003) reported that a TAM-based model accounted for 70% of the variance in usage intention. Yet researchers have predicted usage behavior less successfully, explaining around 30% of the variance in self-reported usage.

Despite its contributions, existing IT adoption research has a number of important limitations. First, most studies have used weak criteria, examining general behavioral intention to use IT innovations and/or self-reported usage frequency and duration. In a review, Legris et al. (2003) reported that only one out of 22 studies measured usage frequency objectively.
The second limitation is that most studies hold the view that behavioral intention is the only proximal determinant of IT usage. This assumption is incompatible with our current knowledge. Although behavioral intention is an important precursor to behavior, it is often not automatically translated into action. The path from intention to behavior is often long. People need implementation plans and cognitive resources to enact their behavioral intentions (Gollwitzer, 1999). Additionally, they often face many distractions and competing thoughts. They need to shield themselves from distracting alternatives and focus attention on intended activities. It is thus not surprising that behavioral intention predicts self-reported IT usage only modestly.

The third limitation, and perhaps the most important, is that past research has largely ignored the temporal dimension of IT usage. Beliefs and expectations are not fixed; they change over time. Beliefs and expectations regarding IT innovation usage may be even more malleable (Melone, 1990), as they are attached to IT innovations with which users are unfamiliar and about which they may not have prior expectations. Additionally, usage behaviors change over time (Rogers, 1995). They wax and wane. Yet existing research has not addressed the dynamic change in IT expectations and usage despite their reality in user experience.

In the present study, I drew on self-regulation theory from the social cognitive perspective (Bandura, 1997; Zimmerman, 2000) to build a model that explains the dynamic usage of IT innovations and to evaluate its validity. The model examines how beliefs influence and are influenced by usage behavior iteratively over time. Additionally, the model incorporates implementation intention as a second proximal determinant of usage behaviors, alongside behavioral intention. The present work focused on predicting actual usage behaviors by using objective measures of IT usage frequency and duration. To examine the dynamic nature of IT usage, this research employed a longitudinal design in which data were collected at multiple points in time.

Palm Pilots were used as the IT innovation as well as the measuring device through which longitudinal research data were collected. Palm Pilots were chosen for a number of reasons. First, because Palm Pilots are relatively small and are increasingly becoming integrated devices that can perform many functions (e-mail, phone, voice-commanded computing, etc.), they represent a type of computing device that will be used more often in the future. Second, Palm Pilots have a log application that can record usage behavior objectively over time. Finally, Palm Pilots can administer and collect process measures such as intentions and beliefs on a daily basis.

In the remainder of the presentation, the IT innovation adoption literature is reviewed. A model of dynamic user experience of IT innovations based on the self-regulation literature (Bandura, 1997; Zimmerman, 2000) is then presented. The methods for conducting an empirical test of the model are discussed next. Finally, the research results are presented and discussed.

Review of Extant IT Innovation Adoption Literature

Research has largely relied on the Theory of Reasoned Action (TRA; Ajzen & Fishbein, 1977), the Theory of Planned Behavior (TPB; Ajzen, 1991), the Technology Adoption Model (TAM; Davis, 1989), and Social Cognitive Theory (SCT; Bandura, 1997) to investigate the psychological processes that explain IT innovation adoption. The following sections review this body of research.
I briefly describe each theory and highlight some key studies from each perspective. I then evaluate the strengths and limitations of the existing research.

Theory of Reasoned Action and Theory of Planned Behavior

The Theory of Reasoned Action (Ajzen, 1991; Ajzen & Fishbein, 1977) is a well-validated theory of human behavior in social psychology. Its central tenet is that behaviors are governed by the actor's conscious intentions and beliefs about the behaviors' consequences. Two sets of beliefs, attitude toward a behavior and subjective norm regarding it, are assumed to be the key forces that drive the behavior in question.

Attitude toward a behavior is defined as a favorable or unfavorable evaluation of the behavior, reflecting the actor's subjective perception of the behavior's consequences. The theory assumes that humans are motivated to seek pleasure and avoid pain. It thus predicts that a positive attitude toward a behavior leads to a higher likelihood of exhibiting that behavior, whereas a negative attitude is associated with a lower likelihood. In turn, attitude toward a behavior is determined by the actor's salient beliefs about the probability that the behavior is associated with specific consequences and the subjective evaluations of those consequences.

Subjective norm regarding a behavior reflects the social nature of beliefs. It refers to perceived social pressure to perform the behavior and thus reflects the actor's subjective beliefs about the behavioral consequences imposed by the social environment. An approving norm is predicted to lead to intentions to perform the behavior, whereas a disapproving norm tends to inhibit them. The theory also assumes that subjective norm is influenced by the actor's salient normative beliefs and the motivation to comply with those beliefs.

Of all key expectations, the TRA assumes that the actor's intention toward a behavior, behavioral intention, is the most proximal determinant of the behavior of interest. Behavioral intention is viewed as the actor's motivation to exhibit the behavior. It influences the behavioral choice, the level of effort, and the persistence that an actor expresses. A strong intention toward displaying a behavior is predicted to lead to a higher likelihood of engaging in the behavior. Behavioral intention is assumed to be jointly determined by attitude toward and subjective norm regarding the behavior. It is also expected to fully mediate the effects of these two variables on actual behavior.

The TRA works well to explain behaviors when the behaviors in question are under the actor's volitional control, the condition in which the actor is reasonably able to execute his or her intentions (Ajzen, 1991). However, when behaviors are beyond the actor's volitional control, the relationship between behavioral intention and behavior is weakened because the actor's ability to execute the intended behaviors is constrained. To extend the TRA to explain behaviors in situations in which volitional control is limited, Ajzen (1991) added a new construct, perceived behavioral control, to the model and changed the name of the theory to the Theory of Planned Behavior (see Figure 1). Perceived behavioral control (PBC) closely resembles Bandura's (1986) self-efficacy construct (Ajzen, 1991). In fact, Ajzen suggested that much of what we know about perceived behavioral control comes from Bandura's research on self-efficacy.
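To summarize the structure just described in compact form, the TRA/TPB relations are often written in expectancy-value notation. The following is only an illustrative sketch: the weights (w) and the indices over salient beliefs are generic placeholders rather than quantities estimated in any of the studies reviewed here.

    A_B = \sum_i b_i e_i                      (attitude from salient consequence beliefs b_i and their evaluations e_i)
    SN  = \sum_j n_j m_j                      (subjective norm from normative beliefs n_j and motivation to comply m_j)
    BI  = w_1 A_B + w_2 SN                    (TRA)
    BI  = w_1 A_B + w_2 SN + w_3 PBC          (TPB)
    B   \approx f(BI)  or  f(BI, PBC)         (behavior; the direct PBC path is the TPB extension discussed next)

Here A_B is attitude toward the behavior, SN is subjective norm, PBC is perceived behavioral control, BI is behavioral intention, and B is the behavior of interest.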
According to the TPB, perceived behavioral control (defined as the perceived ease or difficulty of performing a given behavior) influences the behavior directly and through behavioral intention. Ajzen offered two rationales for the direct effect of perceived behavioral control on actual behavior. First, perceived behavioral control often reflects actual behavioral control and thus will explain actual behavior over and above behavioral intention. Second, if intention or motivation to perform a behavior is held constant, greater confidence in one's ability, or the perception of a high level of behavioral control, is more likely to bring the effort of performing the behavior to a successful end. The theory further assumes that perceived behavioral control is determined by the actor's salient control beliefs and the perceived likelihood that those control factors facilitate or constrain the behavior.

Figure 1. Theory of Planned Behavior

A number of studies have used the TRA and the TPB framework to study IT innovation adoption. Davis, Bagozzi, and Warshaw (1989) were some of the first researchers to apply the TRA in IT innovation adoption research. In their study, the authors examined usage of a word processor among 107 MBA students. At the beginning of a semester, students were given one hour of training on how to use the word processor. They were then given a questionnaire measuring the TRA variables. The same questionnaire, with the addition of a two-item self-reported measure of the frequency of their usage during the semester, was administered at the end of the semester (14 weeks later). Behavioral intention, attitude, and subjective norm regarding the use of the word processor were operationalized in accordance with Ajzen and Fishbein's (1977) recommendations. This entailed probing users of the word processor to provide salient beliefs regarding the benefits and norms associated with its use. Scale items were then framed with reference to a specific target (the word processor), action (usage), and context (the MBA program). Notably, subjective norm was measured with only one item. Results showed that attitude toward using the word processor was independently related to behavioral intention at both times (β = .55 at Time 1 and .48 at Time 2), but subjective norm was not (β = .07 at Time 1 and .10 at Time 2). Behavioral intention at Time 1 was moderately related to self-reported usage frequency at Time 2 (r = .35), and behavioral intention measured at Time 2 was strongly related to usage frequency at Time 2 (r = .63).

In another study that relied on the TRA/TPB framework, Taylor and Todd (1995) assessed computer usage in a university computer resource center among 786 potential student users. They followed Ajzen and Fishbein's (1977) guidelines in designing scales and measured attitude, subjective norm, perceived behavioral control, and behavioral intention regarding the use of the center a month after the semester started. The center attendants tracked and recorded the frequency and duration of usage; students also reported the number of projects they worked on at each visit to the computer resource center. The authors used structural equation modeling to analyze the data.
Frequency, duration, and number of projects worked on were used to estimate usage as a latent variable, although no rationale was offered to explain why the number of projects was viewed as an indicator of computer usage (my guess is that they needed a third indicator in order to estimate a latent factor). The rest of the constructs were modeled as manifest variables (no reason was offered to explain why they were not modeled as latent variables as well). Results showed that attitude, subjective norm, and perceived behavioral control had independent positive effects on behavioral intention, explaining 57% of its variance. Behavioral intention and perceived behavioral control independently affected usage behavior, accounting for 36% of the variance in usage.

In another study, Harrison, Mykytyn, and Riemenschneider (1997) studied how well the TPB predicted behavioral intention to adopt new IT innovations among 97 managers who played substantial roles in making adoption decisions for their own firms. IT innovations in this study were new IT systems that the firms were considering adopting to help them compete in the market. Following recommendations by Ajzen and Fishbein (1977), the authors developed scale items and measured attitude, subjective norm, and perceived behavioral control in an elaborate process. As in other studies, results showed that attitude, subjective norm, and perceived behavioral control had independent effects on behavioral intention to adopt the IT innovation. Attitude accounted for an increment of 27% of the variance; subjective norm, 9%; and perceived behavioral control, 4%.

In a field study, Chau and Hu (2001) examined the effects of the TPB constructs on behavioral intention to use telemedicine technology in a sample of 408 physicians in Hong Kong. Telemedicine was defined as the use of information and telecommunication technology to deliver healthcare services, expertise, and information among geographically dispersed populations of patients and physicians. The researchers used the measures developed by Taylor and Todd (1995) to assess the TPB constructs. They found that attitude (β = .63) and perceived behavioral control (β = .22) had independent positive relationships with behavioral intention, whereas subjective norm did not. Together, the TPB constructs accounted for 32% of the variance in behavioral intention.

Taken together, evidence from the studies discussed above suggests that the constructs in the TRA and the TPB explain IT adoption intentions and self-reported usage reasonably well. The independent effects of attitude and perceived behavioral control are consistent across studies. The independent effect of subjective norm, however, was less consistent. Perhaps the nature of the social setting explains the presence or absence of an independent, direct effect of subjective norm. In some technology adoption situations, workers are required to adopt new innovations. They may use innovations in order to comply with mandates from their supervisors, rather than because of their own feelings and beliefs. In those settings, subjective norm is likely to have a direct effect on innovation usage, independent of the effect of attitude. In other situations, users adopt IT innovations on a voluntary basis, as they can choose whether or not to adopt the innovations (Davis et al., 1989). In those settings, the effect of subjective norm is more likely to be fully channeled through attitude, as attitude represents the set of beliefs and feelings users have about the innovations.
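Throughout this review, an "independent effect" refers to a construct's unique contribution when the other TRA/TPB constructs are statistically controlled, that is, its partial coefficient in a multiple regression or structural model. Purely as an illustration of that kind of analysis, and assuming hypothetical data and variable names that are not taken from any of the studies reviewed here, such a test might look like the following sketch in Python:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey data: one row per respondent, with scale scores for
    # attitude, subjective norm, perceived behavioral control (pbc),
    # behavioral intention, and self-reported usage frequency.
    df = pd.read_csv("tpb_survey.csv")

    # Behavioral intention regressed on the three TPB antecedents; each
    # coefficient reflects that construct's unique ("independent") effect.
    intention_model = smf.ols("intention ~ attitude + subjective_norm + pbc", data=df).fit()
    print(intention_model.summary())   # coefficients and R-squared (variance explained)

    # Usage regressed on intention and pbc, mirroring the TPB's two direct paths
    # (cf. the roughly 36% of usage variance reported by Taylor and Todd, 1995).
    usage_model = smf.ols("usage ~ intention + pbc", data=df).fit()
    print(usage_model.rsquared)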
Technology Adoption Model

Widely used in technology adoption research, the TAM (Davis, 1989) is derived from the TRA and is considered an application of the TRA to explaining adoption behavior (see Figure 2). The TRA explains behaviors reasonably well, but it requires researchers to construct customized scales to measure salient beliefs that are specific to the IT innovations in question. To get around this problem, Davis developed the TAM with the goal of providing a small set of internal beliefs, attitudes, and intentions, and a set of common generic measures that can be used across technologies and settings to explain adoption behaviors. Currently, the TAM is the dominant and most popular theory in IT innovation research, as the majority of existing studies have relied on the TAM framework (Legris et al., 2003; Venkatesh, 2000; Venkatesh, Davis, & Morris, 2007).

Figure 2. Technology Adoption Model

According to the TAM, perceived usefulness and ease of use are the two key beliefs that influence adoption behaviors. Perceived usefulness refers to the degree to which using an IT innovation would enhance users' outcomes (Davis, 1989; Davis et al., 1989). It is predicted to influence adoption behavior directly and through attitude toward the adoption behavior. According to Davis et al. (1989), perceived usefulness has a positive influence on attitude toward adoption behaviors because valued outcomes (perceived usefulness), through associative learning and cognitive-affective consistency mechanisms, become associated with positive affect toward the means that users utilize to obtain those outcomes (attitude toward adoption behaviors).

In addition to its effect on attitude, perceived usefulness also affects adoption behaviors directly. Davis et al. (1989) suggest that the direct effect occurs because the cognitive decision to enhance performance and outcomes is the key force that drives users' intentions to engage in certain behaviors. This influence sometimes occurs without fully activating the affect associated with the behavior-reward association (Davis et al., 1989). Specifically, in the context of technology adoption, people make decisions to carry out usage behaviors by recognizing the association between behaviors and outcomes. They do not always evaluate how these outcomes meet their hierarchy of goals and needs and thus do not fully activate the subjective affect associated with such outcomes every time. In other words, people form intentions toward adoption behaviors they believe will increase their performance, and the effect of such beliefs operates over and above users' feelings or attitudes associated with the behaviors (Davis et al., 1989).

Ease of use is defined as the degree to which users perceive a particular IT innovation to be free of effort (Davis et al., 1989). By definition, it is similar to perceived behavioral control in the TPB. Yet, at the time of its conception, Davis et al. were not aware of the relationship, because Fishbein and Ajzen had not yet introduced the concept of perceived behavioral control. Davis et al. assumed that ease of use influences intention and behavior through two mechanisms. First, ease of use has an instrumental effect. When a product is easy to use, the effort users save may be redeployed, enabling them to accomplish more work. Ease of use thus contributes directly to perceived usefulness (Davis et al., 1989).
Second, ease of use influences the sense of self-efficacy regarding a user's ability to carry out the behaviors needed to operate the innovation in question (Davis et al., 1989). A high level of ease of use tends to lead to a heightened sense of self-efficacy. Because self-efficacy is thought to operate autonomously from instrumental determinants of behavior, owing to inborn drives for competence and self-determination (Bandura, 1986), ease of use is posited to influence behavioral intention directly, over and above perceived usefulness.

Unlike the TRA, the TAM does not include subjective norm. Subjective norm is thought to influence behaviors directly through compliance processes or indirectly, via attitudes, through internalization and identification processes. In the context of IT adoption, the decision by managers and professionals to adopt and use IT innovations is voluntary in many cases. In those cases, subjective norm is thought to influence adoption behaviors through the processes of internalization and identification. Only in a small number of cases do workers use IT innovations in order to comply with mandates from their supervisors; there, the compliance process is thought to be the main mechanism driving subjective norm's effect. Yet standard measures of subjective norm do not differentiate compliance from internalization and identification (Davis et al., 1989). It is thus difficult to disentangle the direct effects of subjective norm on behavioral intention from its indirect effects via attitude. Because of subjective norm's uncertain theoretical and psychometric status, it is excluded from the TAM (Davis et al., 1989).

In the initial formulation of the TAM, attitude was thought to mediate the effect of perceived usefulness partially and the effect of ease of use fully. However, it was later dropped from the model for a number of reasons (Davis et al., 1989; Venkatesh & Davis, 2000). First, excluding attitude improved the parsimony of the model. Second, attitude only partially mediated the effect of perceived usefulness on behavioral intention. Third, the independent relationship between attitude and behavioral intention was not consistent when perceived usefulness was included in the model (Davis et al., 1989; Taylor & Todd, 1995; Venkatesh & Davis, 2000).
In addition, the availability of the scales may also explain why the TAM has become increasingly popular among researchers. The majority of existing studies have used the TAM to examine IT innovation usage and adoption. Most evaluated the effects of ease of use and usefulness on behavioral intention. For example, Venkatesh (2000) examined the validity of the TAM in four samples of 246 workers from three different organizations in which new IT innovations were recently introduced (58 sales clerks using online help desk software in a retail electronic store, 41 workers and 104 workers of two different branches in the same 15 real estate company using new property management software, 43 employees switching to the Windows Operating System in a financial service firm). All research participants were trained to use the software after the innovations were adopted by the organizations. Perceived usefulness, ease of use, and behavioral intention regarding the use of the new software were assessed shortly after the training (time 1), one month (time 2), and three months after the training (time 3). In his analysis, Venkatesh pooled data across samples and found that perceived usefulness (13 = .49, .52, .54, at time 1, 2, and 3, respectively) and ease ofuse (B = .29, .20, .17, respectively) independently predicted behavioral intention at each of the three measurements points. Together, ease of use and perceived usefulness accounted for about 35% of variance in behavioral intention. The relationships of ease of use and perceived usefulness with behavioral intention were also found to be consistent across many other studies. In a review, Legris et al. ( 2003) examined 22 studies that yielded 28 possible relations between each of the TAM constructs and behavioral intention. Out of 19 reported relations, perceived usefulness had a positive relation to behavioral intention 16 times and a non-significant relation three times. Ease of use was not related to behavioral intention 3 times and was significantly related 10 times out of the 13 reported times. It should be noted these relations refer to partial correlations that studies reported in the results of their multiple regression and structural equation modeling analyses. Meta-analysis for estimating overall effect sizes of the relations between the TAM constructs and behavioral intention could not be conducted because most studies in IT adoption research did not provide the zero-order correlation matrix. Out of 22 studies reviewed, correlation matrix were provided in only three studies (Legris et al., 2003). 16 In studies where IT usage was investigated (e. g., Taylor & Todd, 1995; Venkatesh & Davis, 2000), the vast majority of the studies assessed usage through self-report measures with one to three items asking users for the frequency and/or duration of their IT usage. Generally, the effect of perceived usefulness on self-reported usage was found to be more consistent, whereas the influence of ease of use was not. For example, in Davis et al. (1989)’s study discussed earlier, where MBA students’ use of word processing was studied, results showed that self-reported usage at the end of the semester was related to perceived usefulness (r = .65) and ease of use (r = .27) measured at the beginning of the semester. Perceived usefulness and ease of use also independently predicted self-reported usage frequency and together accounted for 45% of its variance. 
However, when measured together to predict self-reported usage at the end of the semester, perceived usefulness was correlated with self-reported usage frequency and independently accounted for variance in it, whereas ease of use was neither related to usage nor accounted for unique variance.

In another study, Davis (1989) surveyed 112 Canadian IBM workers to assess their beliefs and self-reported past usage of an e-mail program and a text editor with which they had six months of usage experience. One hundred nine workers were users of the e-mail program, and 75 were users of the text editor. Results showed that perceived usefulness and ease of use were correlated with self-reported usage in both samples. However, only perceived usefulness accounted for unique variance in self-reported past usage. Together they accounted for 31% of the variance in e-mail usage and 46% of the variance in text editor usage. In another study, Davis (1989) found similar results. In this research, 40 evening MBA students were trained for an hour to use two different graphics applications, Chart-Master and Pendraw. After the training, the students answered questions regarding their perceptions of the usefulness and ease of use of the applications. They were then asked to predict their future use of the two software packages. Results showed that both perceived usefulness and ease of use were correlated with predicted usage of either application, but only perceived usefulness accounted for unique variance.

In a review, Legris et al. (2003) reported that, out of 28 possible relations, the relation between perceived usefulness and self-reported usage was reported 13 times; perceived usefulness had a positive relation with self-reported usage eight times and a non-significant relation five times. Out of nine reported relations, ease of use was not related to self-reported usage five times and was positively related four times. The overall effect sizes could not be estimated, as most studies in the IT adoption literature did not provide correlation matrices.

Attempting to extend the TAM, a number of studies have examined contingency factors that moderate the effects of the TAM constructs. Most reported moderating effects of age, gender, and experience on the relationships between the TAM constructs and behavioral intention (e.g., Venkatesh, Morris, Davis, & Davis, 2003; Morris, Venkatesh, & Ackerman, 2005). The effect of perceived usefulness on behavioral intention was found to be stronger among men and younger workers, whereas the effect of ease of use was stronger among women, older workers, and participants with little experience (e.g., Venkatesh, Morris, Davis, & Davis, 2003; Morris, Venkatesh, & Ackerman, 2005). Most studies found non-significant moderating effects when IT usage was used as the criterion (e.g., Venkatesh, Morris, Davis, & Davis, 2003; Morris, Venkatesh, & Ackerman, 2005).
The effect of perceived usefulness on self-reported usage was consistent, whereas the influence of ease of use was not (Legris et al., 2003). As most studies have examined IT usage using subjective measures, the validity of TAM in explaining actual usage is largely unknown. Social Cognitive Theory At the present time, the SCT (Bandura, 1986) is a popular model that is used to explain individual behavior. It is a broad theory that includes many factors to explain the dynamic interactions between individuals and their environment. The theory holds that individuals’ internal processes such as cognition and affect interact reciprocally and dynamically with overt behaviors and the external environment. Individuals choose the environment in which to reside and are influenced by their environment. Behaviors in a given situation are affected by situational characteristics, which are, in turn, influenced by behavior. Behaviors are also influenced by cognition and affect, which, in turn, are also affected by them. The detailed processes of the dynamic, reciprocal interactions between individuals and their environment are complex. Instead of articulating and applying the SCT in its entirety, researchers in IT adoption literature (e.g., Compeau & Higgins, 1995; Compeau, Higgins, & Huff, 1999; Hill et al., 1987) took a narrow l9 approach by focusing on key cognitive constructs in the SCT to explain adoption behaxiors. According to the SCT, two key beliefs drive human behaviors. The first, outcome expectation, is expectations regarding outcomes associated with behaviors. Individuals are more likely to undertake behaviors they believe will result in valued or favorable outcomes and refrain from those that they think will lead to unfavorable consequences. Outcome expectation is similar to the construct of pereived usefulness in the TAM (Davis, 1989; Davis et al., 1989) and is highly related to the concept of attitude in the TPB framework (Taylor & Todd, 1995). The second belief, self-efficacy, refers to peoples’ judgment of their capabilities to organize and execute courses of action required to attain designated types of performance (Bandura, 1986). It is domain specific and strongly tied to the behavior of interest. Self- ef/icaev is concerned not with the skills one has but also with the confidence with which one uses the skills. Self-efficacy is thought to influence behavioral choices, effort and persistence, and task strategy that people employ. As a construct, self-efficacy closely resembles other constructs encountered in the TPB and the TAM. It is similar to perceived behavioral control in the TPB (Ajzen, 1991), as Aj zen incorporated perceived behavioral control into the TPB on the basis of Bandura’s work on self-efficacy. Self-efficacy is also highly related to ease of use in the TAM (Davis et al., 1989). Davis et al.(1989) assumed that case of use has a direct effect on attitude toward behaviors on the rationale that it influences self-efficacy. Although self-efficacy is related to ease of use, Compeau and colleagues, (Compeau & Higgins, 1995; Compeau et al.,1999) argued that conceptually it is a distinct 20 construct. Self-efficacy places a stronger focus on individuals as users of technology. It is thought to be largely driven by individual differences and their experience of technology usage. In contrast, ease of use is more technology focused and largely determined by objective technology characteristics. 
Ease of use represents a psychological cost of using new technology (Compeau & Higgins; Compeau et al.), and thus it could be considered as a subset of outcomes associated with technology use. In addition to the distinctive nature of self-efficacy, Compeau and Higgins (1995) argued for its role in explaining IT innovation usage on theoretical grounds. If one doubts one’s capability to successfully use IT innovations, expectations about positive outcomes of IT usage will be meaningless. The inclusion of self-efficacy should, therefore, explain additional variance in usage behavior. This theoretical position has important practical implications. It suggests that IT innovation adoption is not just about convincing people of the benefits to be harnessed from a technology or improving its quality to make it very easy to use. It is important to coach, train, and motivate users so that they have requisite skills and confidence in their ability to use IT innovations successfully. In IT adoption research, only a few studies have used the SCT belief constructs to explain 1T innovation usage. Hill et al. (1987) studied four samples of students totaling 157 males and 147 females who learned to use computers. They assessed self-eflicacy regarding learning to use a computer by a four-item measure and outcome expectation by another four-item measure. They then asked students to indicate their intentions to use a computer in the future by answering two questions. In the following semester, 12 weeks after the initial survey, the students were contacted and asked to report whether they actually enrolled in computer or computer-related courses. In all four samples, structural 21 equation modeling analyses showed that behavioral intention was independently related to outcome expectation and self-efficacy. In turn, behavioral intention predicted participants’ actual enrollment in computer or computer-related courses. In a cross-sectional survey of 1020 Canadian workers, Compeau and Higgins (1995) developed and validated a 10-item measure of self-efficacy. They found that self- reported frequency of computer usage was related to self-efficacy (r = .45) and outcome expectation (r = .41). Path analysis indicated that outcome expectation influenced self— reported usage directly and indirectly through satisfaction with computer use, whereas self-efficaev affected usage directly and indirectly through computer satisfaction and anxiety. Because the survey was cross sectional and thus does not allow inference about causality, Compeau et a1. (1999) re-measured self-reported usage, computer satisfaction and anxiety among people in the same sample a year later in a follow-up study. They then used self-efficacy and outcome expectation assessed the preceding year as predictors and found a similar set of relationships between self-efficacy, outcome expectation, satisfaction, anxiety, and self-reported usage in a sample of 394 respondents. The results are consistent with a causal interpretation but in a necessary, not sufficient sense. In sum, although a smaller number of studies have relied self-efficacy and outcome expectation constructs in social cognitive theory to explain IT innovation adoption, results appear promising. Results showed outcome expectation and self-efficacy were independently related to behavioral intention and self-reported usage. Yet, these few studies have exclusively relied on self reports of IT usage. 
Given the small number of studies and the common method variance due to self-report measures, more research with more rigorous measurement and research designs is needed to ascertain the validity of self-efficacy and outcome expectation in predicting IT usage behaviors.

Strengths of Extant Research

The existing research has relied on well-supported theories of human behavior, namely the theory of planned behavior and social cognitive theory, to study IT innovation adoption. Studies based on the TPB and its derivative, the TAM, have shown that perceived usefulness is consistently related to behavioral intention to use IT innovations and to self-reported usage frequency (e.g., Davis, 1989; Davis et al., 1989; Taylor & Todd, 1995). Similarly, a smaller number of studies that have relied on the SCT have found that outcome expectation, the same construct as perceived usefulness but with a different label, was related to behavioral intention (Hill et al., 1987) and self-reported IT usage (Compeau & Higgins, 1995; Compeau et al., 1999). Thus, research findings have converged on the idea that instrumental beliefs, namely perceived usefulness or outcome expectation, matter in people's adoption and use of IT innovations.

Researchers also agree that control-related beliefs such as self-efficacy, perceived behavioral control, or ease of use play an important role. Studies that relied on the SCT have found that self-efficacy had a positive relation with behavioral intention (Hill et al., 1987) and self-reported IT usage (Compeau & Higgins, 1995; Compeau et al., 1999). Research that relied on the TPB framework used a similar construct, labeled perceived behavioral control, and similarly reported that it had positive effects on behavioral intention and self-reported usage behavior (Taylor & Todd, 1995). Work that followed the TAM showed that ease of use, a related construct, may have an independent effect on behavioral intention and IT usage. However, its effect has been rather inconsistent (Legris et al., 2003). Conceptually, self-efficacy and ease of use, although related, are distinct constructs (Compeau & Higgins; Compeau et al.; Davis et al., 1989). Self-efficacy is more rooted in individual differences and user experience with IT innovation usage, whereas ease of use is more driven by the characteristics of IT innovations (Compeau & Higgins; Compeau et al.).

Limitations of Extant Research

Although existing studies of IT adoption place emphasis on different sets of beliefs that are responsible for IT usage, they generally follow the model outlined in Figure 3. Users' beliefs regarding usage of IT innovations are key psychological factors that explain adoption and usage of IT innovations. They are viewed as the bases on which users form their behavioral intention, which, in turn, determines the frequency and duration with which they use IT innovations.

With such a model, extant research reveals a number of limitations. The first regards the weak criteria that have been investigated. Many studies (e.g., Harrison et al., 1997; Venkatesh, 2000) in the literature have used behavioral intention as a key criterion to evaluate their research models (Legris et al., 2003), on the grounds that behavioral intention is a strong correlate of usage behaviors. Yet existing evidence shows that behavioral intention typically explains only around 30% of the variance in self-reported usage behaviors.
Much of the variance in behaviors remains unexplained, making behavioral intention a poor substitute for IT usage. Of those studies that directly examined usage behaviors, almost all used self-report measures to assess IT usage (Legris et al.). The self-report measures employed typically contained one to four items asking users to indicate how frequently and for how long they used IT innovations. They asked research participants to think retrospectively about how much time they had spent using IT innovations (e.g., Compeau et al., 1999; Davis, 1989; Davis et al., 1989) and were often administered at the same time as the measures of beliefs (e.g., Compeau & Higgins, 1995; Hu et al., 1995).

Figure 3. Traditional Model of IT Innovation Adoption Research

This practice of using self-report measures has a number of limitations. Self-report measures often do not capture actual behaviors accurately (Legris et al.; Straub, Limayem, & Karahanna-Evaristo, 1995). When measured together with beliefs, they produce only correlational data and tend to inflate the apparent strength of relations due to close temporal proximity and common method variance.

The second limitation of existing research regards the assumption that behavioral intention is the only proximal predictor of IT usage. In the TRA, the TAM, and the studies built on them, behavioral intention was treated as the only proximal determinant of IT usage. The effects of antecedents of IT usage were viewed solely through their influences on behavioral intention (e.g., Davis et al., 1989; Venkatesh, 2000). However, the assumption that behavioral intention is the sole proximal determinant of behaviors misses the fact that strong intentions are not always expressed in behaviors. Research has shown that the road between intention and performance is rather long (Gollwitzer, 1999). Typically, behavioral intention is not automatically translated into action. People need implementation plans to enact their intentions. They often face distractions and competing thoughts and at times cannot focus their attention on carrying out their intentions. It is thus not surprising that the explained variance was modest when behavioral intention was used to predict self-reported usage, despite the inflated nature of their relations due to measurement problems.

Third, and perhaps most importantly, extant research has largely ignored the temporal dimension of IT usage. Although researchers have acknowledged that beliefs, intentions, and usage behaviors change over time, most studies (e.g., Compeau & Higgins, 1995; Harrison, Mykytyn, & Riemenschneider, 1997) have used cross-sectional designs to study the relationships among those variables. Although a number of studies (e.g., Davis et al., 1989; Venkatesh & Davis, 2000) have attempted to use longitudinal designs, they too had limitations and have not addressed the dynamic usage of IT innovations. For example, Davis et al. (1989) and Venkatesh and Davis (2000) measured beliefs and behaviors at several points in time. Yet their analyses focused on how beliefs and behaviors were related at each time point. Such analyses served to validate and replicate the hypothesized relationships. However, they did not address how beliefs and IT usage change and influence one another dynamically over time.

Beliefs and expectations are not fixed constructs. They change over time (Bandura, 1997).
Beliefs and expectations regarding the use of IT innovations are even more susceptible to change (Melone, 1990), as they are attached to the usage of IT innovations that are new to users and for which users may not have prior expectations or experience. IT usage behaviors also change over time. Users test out new IT innovations, use them to carry out tasks, routinize their behaviors, and at some point discontinue using them (Rogers, 1995). In other words, IT usage behaviors wax and wane. Despite the reality of dynamic change in beliefs and usage behaviors, existing research has not addressed this important phenomenon. The extant research has largely relied on a feed-forward framework, as seen in the model in Figure 3, and most studies sampled data at one or two points in time, making inferences about dynamic change and reciprocal relationships between beliefs and usage behaviors impossible.

To address the dynamic change of IT usage and its underlying beliefs and expectations, we need to conceptualize and investigate change as occurring within persons over time. The questions we should ask are: (a) how a person's key beliefs regarding IT innovations (self-efficacy and outcome expectation) change over time, (b) what happens to a person's usage behavior as those key beliefs change, (c) what accounts for the changes in the key beliefs, and (d) whether these change processes are uniform across persons. To answer those questions empirically, a longitudinal study that samples key beliefs and usage behaviors within persons frequently over a long period of time is required. Data analyses should focus on both the within- and between-person levels so that we can understand how the dynamic process operates within each person as well as how it varies between people.

In order to capture users' dynamic experience of IT innovations, the present study attempted to move beyond current models by considering the effects of time and the dynamic interplay between beliefs and behaviors. It presents a dynamic model in which innovation self-efficacy and outcome expectation influence and are influenced by IT usage dynamically over time. The study sampled users' subjective beliefs and objective behaviors frequently over an extended duration. Data were analyzed within persons as well as between persons. The model of dynamic user experience of IT innovations is elaborated next.

Self-Regulated User Experience of IT Innovations: A Dynamic Perspective

To address the dynamic changes of self-efficacy and outcome expectation and their influence, I drew on self-regulation theory from a social cognitive perspective (Bandura, 1997; Zimmerman, 2000). Self-regulation theory (Zimmerman) can be viewed as a subset of the broader SCT. It focuses on the relationships between covert internal processes of cognition and affect and overt behaviors. The central assumption is that behaviors affect and are affected by cognition and affect in a cyclical and reciprocal manner. According to the theory, individuals regulate behaviors in a cycle of three phases: forethought, performance control, and self-reflection. During forethought, individuals engage in cognitive processes that precede efforts to act and set the stage for action, in our case IT usage. They form deliberate intentions and set strategic plans to achieve intended goals.
Intention formation and strategic planning are influenced by beliefs regarding one's confidence in his or her ability to execute the behaviors, which Bandura labeled self-efficacy, as well as by the expected outcomes of doing so. In performance control, individuals execute the strategic plans to enact their behavioral intention. They attempt to control their motoric efforts by focusing attention and engaging in different task strategies. The effectiveness of their actions is influenced by the ability to control attention and the use of verbal self-instruction, imagery, and task strategies. In self-reflection, individuals react to executed behaviors. They evaluate their actions, make causal attributions, experience satisfaction, and adapt their behaviors in a systematic fashion. These self-reflections, in turn, influence forethoughts regarding subsequent motoric efforts, thus completing a self-regulatory cycle.

With the self-regulation framework, I have developed a model, presented in Figure 5, to explain users' dynamic experience of IT innovations. Consistent with the self-regulation framework and previous IT innovation research, users' behavioral intention (Ajzen, 1991; Davis et al., 1989; Taylor & Todd, 1995) and implementation intention (Gollwitzer, 1999) are proposed to be the most proximal determinants of IT usage behaviors. Yet these conscious thoughts reflect more distal beliefs regarding the IT innovation, namely innovation self-efficacy (e.g., Compeau et al., 1999; Hill et al., 1987) and outcome expectation (Compeau et al.; Davis, 1989; Davis et al.; Hill et al.; Venkatesh & Davis, 2000). Under the influence of behavioral intention and implementation plans, individuals carry out actions in interacting with IT innovations. Their behaviors are manifested in the frequency and duration with which they use the innovations (Brancheau & Brown, 1993; Igbaria, Pavri, & Huff, 1989; Lee, 1986).

According to the model, once users carry out their intentions and go through (or do not go through) their interactions with IT innovations, they reflect on their experience. They evaluate their experience of the IT innovations to determine whether it meets their previous beliefs. Specifically, evaluations that one's ability to use the innovations surpasses his or her self-efficacy (positive efficacy disconfirmation) encourage the upward revision of self-efficacy (Bandura, 1997). In contrast, judgments that one's ability to use the innovations falls short of his or her self-efficacy (negative efficacy disconfirmation) lead to a downward revision of self-efficacy (Bandura). Similarly, evaluations that one's experience of usage outcomes exceeds his or her outcome expectation (positive outcome disconfirmation) encourage the upward revision of outcome expectation, whereas judgments that actual usage outcomes fall below his or her outcome expectation (negative outcome disconfirmation) drive future outcome expectation downward (Bhattacherjee & Premkumar, 2004).

[Figure 5. Self-Regulatory Model of Dynamic User Experience of IT Innovations, linking innovation self-efficacy, outcome expectation, behavioral intention, implementation intention, IT innovation usage, efficacy disconfirmation, and outcome disconfirmation.]

The model described in Figure 5 is consistent with the growth notion that Bandura (1997) argued for in addressing the dynamic interaction between self-efficacy and performance. The idea is that self-efficacy and performance feed one another in a positive feedback loop. When self-efficacy is high, it is likely to lead to higher performance.
Successful performance (or the discrepancy between perceived performance and prior performance expectation) tends to raise subsequent self-efficacy, which in turn tends to enhance future performance. Alternatively, when self-efficacy is low, it is likely to induce lower performance. Subpar performance is likely to decrease self-efficacy, which in turn tends to lead to lower future performance. This type of model assumes that one has ample room to grow and change until one reaches his or her upper limit, which may happen when one hits his or her physical limits, has to deal with conflicting goals of equal or higher priority, or faces other constraining factors. In the context of technology adoption, it is assumed that early in the adoption process, when one is unfamiliar with the innovations and still has much to learn, one has ample room to change and grow until his or her behaviors and beliefs stabilize. With this assumption, the model in Figure 5 does not include constraining factors that could impose ceilings on behavioral growth in the later stages of the adoption process. The early stage of the technology adoption process is thus viewed as a boundary condition for this model.

The nature of the constructs and their relationships in the model are elaborated further below, with consideration of how the relationships unfold over time.

Behavioral Intention

According to the TPB (Ajzen, 1991) and the TAM (Davis et al., 1989), behavioral intention is a central factor that drives human behaviors. It represents the internal desire to reach certain end points such as performing valued behaviors or obtaining desirable outcomes. It influences how much effort people exert and focuses their attention on the behaviors to which the effort will be directed. Strong behavioral intention toward displaying a behavior is predicted to lead to a higher likelihood of engaging in the behavior (Ajzen; Davis et al.).

Research has clearly shown that behavioral intention has a positive effect on self-reported IT usage at the between-person level of analysis (Legris et al., 2003). For example, Davis et al. (1989) found that behavioral intention explained 12% and 40% of the variance in self-reported usage of a word processor at the two measurement points in their study. Venkatesh and Davis (2000) reported that the correlations between behavioral intention and self-reported usage ranged from .44 to .57 in four samples of workers using new software applications. In a review, Legris et al. found that out of 10 relationships between behavioral intention and self-reported IT usage, nine were significant. Although no studies have examined the relationship between behavioral intention and usage behavior at the within-person level, I expect to find similar relationships within person. An individual is more likely to engage in a behavior when he or she has a strong behavioral intention for that behavior than when the same person has a weaker behavioral intention.

Hypothesis 1: Stronger behavioral intention will lead to a higher level of IT usage.

Implementation Intention

Forming a strong behavioral intention is understood as committing oneself to reaching desired outcomes or to performing desired behaviors. However, the time between forming a behavioral intention and behavioral enactment may be long, even with strong intentions. Behavioral intention is frequently not enacted. The relationship between behavioral intention and action is not robust.
Empirical evidence shows that at the between-person level of analysis behavioral intention typically accounts for about 25% of behavior variance (Ajzen, 1991) and about 30% of variance in self—reported usage frequency (Venkatesh, 2000). The modest link between behavior and behavioral intention is, in part, the result of people failing to act on their otherwise good intentions (Gollwitzer, 1999). People may simply forget their previous intentions. They may fail to recognize relevant situational cues that trigger thoughts and actions. They may not be able to recall an action plan. They may not have the physical, cognitive, and/or other abilities required to carry out actions. Or they fail to allocate sufficient cognitive resources because they are distracted by other competing thoughts and actions. Conscious implementation planning, expressed in implementation intention, may play a central role in the enactment of behavioral intention and explain frequent failure to carry out desirable behaviors. Implementation intention refers to pre-action decision of when, where, and how behavioral intention or goal is implemented (self-set goal and 34 behavioral intention are similar constructs and are used interchangeably by Gollwitzer, 1999). It is conceptually distinct from behavioral intention in a number of aspects. behavioral intention refers to a desire to reach certain end points such as perfomring a particular behavior or obtaining a desirable outcome. It has the structure of“I intend to reach A” where A is either a behavior or an outcome (Gollwitzer). In expressing their behavioral intention, people reveal their desire to carry out the behavior of interest. The consequence of having a strong behavioral intention is a sense of commitment that drives individuals to carry out the desirable behaviors. Whereas behavioral intention refers the commitment or desire to carry out a behavior, implementation intention concerns the way the intended behavior is enacted. It is subordinate to behavioral intention and instrumental in the execution of the behavior. Imp/ementation intention specifies when, where, and how intended behavior is enacted. It has the structure of “When situation X arises, I will respond Y” (Gollwitzer, 1999) and serves to link anticipated opportunities with appropriate responses that lead to desirable behaviors or outcomes. With implementation intention, the person commits himself or herself to respond to a certain situation in a specific manner in order to carry the desirable behavior. Thus, it serves the purpose of promoting the enactment of the behavior expressed in behavioral intention and thus is likely to explain behavioral variance independent of behavioral intention. Oollwitzer (1999) suggests implementation intention helps enactment of behavior because they make individuals more sensitive to situational cues. They automatically activate goal-directed behaviors, when individuals encounter critical situational cues. As a result, individuals with strong implementation intention are quick to select the 35 appropriate situations and seize critical opportunities that are short-lived to enact critical responses that lead to desirable behaviors. Their actions become swift, require little cognitive resources, and are protected against competing goals, distractions, and habitual responses. Gollwitzer (1999) further suggests that implementation intention is instrumental in the enactment of behaviors for two main reasons. 
First, implementation intention causes the representation of anticipated situations to become highly activated and thus easily accessible. Second, it helps automatize the situation-behavior response so that action initiation becomes immediate and efficient, and does not require conscious intent. With these functions, implementation intention allows individuals to quickly identify relevant situations and carry out appropriate responses automatically that allow behavioral intention to be enacted. Research suggests that at the between-person level of analysis, implementation intention helps individuals carry out their intended behaviors (Gollwitzer, 1999). In a series of studies, Gollwitzer and Brandstatter (1997) showed that behavioral intention that is coupled with implementation intention is more likely to be enacted than mere behavioral intention alone. In one study, student participants were asked prior to Christmas break to name a difficult-to-implement project they intended to achieve during the upcoming vacation. Participants indicated intended behaviors such as writing a seminar paper, settling an ongoing family conflict, or engaging in sports activities. When asked, two-thirds of the participants indicated that they had formed intentions on when and where to get started (i.e., implementation intention) for the behaviors. After Christmas, the participants were asked whether they executed their intended behaviors. 36 Two-thirds of the participants with implementation intention had carried them out, whereas only one-fourth of the participants without implementation intention had done so. In another study (Gollwitzer & Brandstatter), research participants were asked within two days after Christmas to write a report on how they spent Christmas Eve and send it to the experimenters. Half of the participants were told to form implementation intention by indicating on a paper exactly when and where they intended to write the report during the two days after Christmas. When the experimenters received the reports, three—fourths of the participants with implementation intention had written the reports in the requested time period, whereas only a third of the participants without implementation intention had done so. In other studies, Gollwitzer and colleagues have also found similar effects of implementation intention on other types of behaviors. Implementation intention has a positive effect on the execution of unpleasant behavior such as self-examination for evidence of breast cancer (Gollwitzer & Oettingen, 1998; Sheeran & Orbell, 2000) and taking vitamin supplements (Sheeran & Orbell, 1999). It helps people who have problems with action control such as addicts and schizophrenic patients (Gollwitzer, 1999) ward off unwanted habitual responses and enact desired behaviors (Gollwitzer). The work by Gollwitzer and colleagues suggests that implementation intention plays an important role in bringing about desirable actions. In light of the evidence, Ajzen (2001) acknowledged that it may influence behavioral expression above and beyond behavioral intention and recommended more research to study its effects. To my knowledge, no study has attempted to examine the role of implementation intention on the adoption and usage of IT innovations. Although no studies to date have examined the 37 effect of implenrentation intention at the within-person level of analysis, I expect that implententation intention will influence IT usage over-and-above the effect of behavioral intention. 
Hypothesis 2: Stronger implementation intention will lead to higher levels of IT usage over and above the effect of behavioral intention.

Self-Efficacy and Efficacy Disconfirmation

While behavioral intention and implementation intention represent the conscious thoughts of behavioral end points and the means to obtain them, they also reflect other types of beliefs that ultimately drive behaviors. A key belief is self-efficacy, which Bandura (1997) defines as beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments. Self-efficacy is critical to motivation because people have little incentive to act unless they believe they can produce desired outcomes by their actions (Bandura). It is domain specific and strongly tied to the behavior in question. In the context of IT innovation adoption, innovation self-efficacy (e.g., computer self-efficacy, Palm Pilot self-efficacy) is defined as beliefs in one's capability to use a particular IT innovation (e.g., Compeau & Higgins, 1995; Compeau et al., 1999). It is similar to the notion of perceived behavioral control in the TPB (Ajzen, 1991), for it refers to one's perception of one's ability to perform a given behavior. Self-efficacy is also related to the construct of ease of use in the TAM (Davis et al., 1989). Yet, conceptually it is a different construct (Compeau & Higgins; Compeau et al.; Davis et al.). Innovation self-efficacy is more rooted in individual differences and experience with IT innovation usage, whereas ease of use is more driven by the characteristics of IT innovations (Compeau & Higgins; Compeau et al.; Davis et al.). Given this distinction and the findings that ease of use was inconsistently related to self-reported usage (Legris et al., 2003) and that its effect on IT usage was fully mediated by outcome expectation (Davis et al.), innovation self-efficacy, instead of ease of use, is included in the model to represent controlling beliefs.

Bandura (1997) asserts that self-efficacy affects behavioral intention and actual behavior because people are motivated to carry out behaviors that they believe they can successfully enact in order to yield favorable outcomes. Specifically, self-efficacy influences (a) the courses of action that people choose, (b) the level of effort and persistence they put forward in a task, and (c) the task strategy they follow. The notion that self-efficacy influences behavioral choice or intention has received considerable empirical support at the between-person level of analysis. For example, research has shown that self-efficacy was positively related to behavioral intention to enroll in a computer course (Hill et al., 1987) and to the difficulty level of goals people adopt (Bandura, 1997; Locke & Latham, 1990). Although no studies have examined the effect of self-efficacy on innovation behavioral intention within person, I expect that:

Hypothesis 3: All else equal, higher innovation self-efficacy will lead to stronger behavioral intention to use IT innovations.

Besides affecting behavioral intention, self-efficacy is expected to influence implementation intention. Research has shown this linkage directly and indirectly at the between-person level of analysis. For example, previous studies have found that higher self-efficacy is associated with more effective task strategy (Bandura, 1997; Seijts & Latham, 2001), a construct similar to implementation intention.
In studies where self- efficaev and implementation intention were not examined together, self-efficacy (Ajzen, 1991; Bandura) and implementation intention (Gollwitzer, 1999; Gollwitzer & Brandstatter, 1997) were found to have positive effects on behaviors, over and above the influence of belu'tvioral intention. This suggests that self-efficacy and implementation intention may be related. Although no study has examined the relationship between self- ef/icacv and implementation intention at the within-personal level of analysis, I expected the same relationship within persons. IIrpothesis 4 .' Higher innovation self-efficacy will be associated with stronger implementation intention. Because self-eflicacy is likely to influence behavioral intention and implementation intention, it is expected to affect IT usage behaviors. A number of empirical studies supported this linkage at the between-person level. Compeau and Higgins (1995) reported that computer self-efficacy was positively related to self-reported frequency of computer usage in a sample of 1020 Canadian workers in a cross-sectional survey study (r = .45). In a follow-up survey administered a year later, Compeau et al. (1999) found a similar relationship between self-reported computer usage and computer self-ejffl’caev measured the preceding year (r = .43). The positive relationship between self-efficacy and IT usage was also found in a number of other studies using student 40 samples (e.g., Hill et al., 1987). In addition, research that relied on the TPB framework similarly reported that perceived behavioral control, a construct similar to self-efficacy, was positively related to behavioral intention and self-reported IT usage at the between- person level (e.g., Taylor & Todd, 1995). Although no study has looked at the effect of self-efficacy on IT usage at the within-person level, research in Industrial-Organizational Psychology has examined a similar relationship—the influence of self-efficacy on task performance‘and reported some interesting results. Generally, researchers (e. g., Bandura, 1997; Vancouver et al., 2001, 2002; Vancouver & Kendal, 2006) consider two key effects of self-efficacy on performance. The first regards the influence of self-efficacy on performance through its effect on goal level. More efficacious individuals tend to set higher goals, which in turn lead to more task effort and ultimately higher performance (Locke & Latham, 1990). The second effect is a direct one on behavior. When goal level is controlled or held constant, control theorists argued (e.g., Power, 1973, 1978) that self-efficacy has a negative relationship with performance because individuals are more motivated to reduce the discrepancy between their perceived capability and their desired level of performance. Social cognitive theorists (Banduara, 1997; Bandura & Locke, 2003) argued that self- efficaev effect is largely a positive one because individuals are unlikely to act unless they believe their action will produce desirable outcomes. Researchers (e. g., Bandura, 1997; Bandura & Locke, 2002; Vancouver et al., 2006) tend to agree on the first effect of self-efiicacy. The second effect, the impact of self-e/j'icaev on performance when goal level is controlled, is still an open question. Bandura argued that this effect is positive and reported numerous studies in many 41 different contexts to support his views. However, these studies were conducted to look at between-person level. 
Vancouver and colleagues (2001, 2002, 2006) relied on control theory to argue that self-efficacy would have a negative relationship with performance in limited context such as a Ieaming, preparatory stage. In a series ofthree studies in classroom Ieaming settings, they reported that self-efficacy had a weak negative effect on performance (2001, 2002, 2006) at the within-person level of analysis, but the effect became insignificant when goal level was not controlled (Vancouver et al., 2006). These results suggest that in the Ieaming context self-efficacy may have a negative effect on performance when goal level is controlled. When goal level is free to vary, the overall effect of selflefficaev may not be significant from zero, as in Vancouver et al.’s (2006) study where there was not much variance in goal levels among student participants, or be a positive one, as Bandura argued. The domain of IT innovation usage over an extended period of time is different. It is largely a performing context because an IT innovation serves as one means among several to help users accomplish one or more tasks. Within this context, consistent with Bandura (1997), innovation self-efficacy is expected to have a positive effect on IT usage. The reason is that a user is likely to use an IT innovation when he/she believes that he/she could operate it successfully. In addition, innovation self-efficacy reflects users’ actual ability to use an IT innovation. When a user has high innovation self-efficacy, he/she is likely to use it more efficiently and thus could use the innovation more frequently within the same time duration. Thus, I expect that: Ilvpothesis 5 .' Higher innovation self—efficacy will lead higher level of I T usage. 42 An important question is whether the effect of innovation self-efficacy on usage behavior is uniform over time within person. According to resource allocation theory (Kahneman, 1973; Kanfer & Ackerman, 1989) people have limited cognitive resources and have to allocate them to many different tasks in order to achieve desired goal states. When an individual has a higher level of cognitive resources, he/she is likely to be able to accomplish more tasks or reach higher goal levels. Yet the effect of cognitive resources on a given task tends to depend on the individual’s skill level for the task and thus change over time. Typically, individuals develop their skills sequentially through three stages: sequential, associative, and autonomous stages (Kanfer & Ackerman). In the sequential stage, individuals rely heavily on their cognitive resources to learn the correct sequence of steps in order to perform the task. Their behavior is deliberate, sequential, and still awkward. Over time, individuals learn the correct behavioral sequence and their actions require less effort. When individuals reach the autonomous stage, their performance becomes more or less automatic and is often triggered by external cues instead of deliberate internal thoughts. It becomes more fluent and requires much less cognitive resource. In other words, over time, individual performance of a given task depends less and less on effort and cognitive resources, and the locus of control of their actions moves from internal, deliberate thoughts to external, situational cues. Given self-efficacy is a subjective estimate of one’s cognitive resources, it is likely that self-efficacy’s influence on performance is moderated by time such that self—efficacy strongly influences perfomrance initially, but the effect weakens over time. 
43 Although the idea that time moderates the relationship between self-efficacy and IT usage is a reasonable and important one, little research has attempted to examine the issue. To my knowledge, only three studies ofinnovation use (e.g., Davis, 1989; Venkatesh & Davis, 2000, Venkatesh et al. , 2003) have studied the effect of ease of use, a construct similar to self-efficacy, over time at the between-person level of analysis. These studies collected usage data at two time points (Davis et al., 1989; Venkatesh et al., 2003) and three time points (Venkatesh & Davis, 2000). They found that early on (time I), ease ofuse had an independent effect on IT usage, over-and-above the effect of outcome expectation. However, the independent effect became insignificant when users gained more experience (time 2 or 3). Although no research has examined the role of time as a moderator at the within-person level of analysis, consistent with resource allocation theory, I predict that: llvpothesis 6: The effect of innovation self-efficacy IT usage will weaken over time. According to Bandura (1997), self-efficacy for a given task is a malleable construct. It changes over time and is influenced by a number of factors. Initially, individuals develop a sense of self-efficacy regarding a new task by relating it to their relevant experience, observing others, and/or being verbally persuaded. Yet initial perception of self-efficacy is often inaccurate as the individuals may not know the task well, may not anticipate how their ability and constraints interact with the demands of the tasks, and how their skills develop. Over time, they have opportunities to perform the 44 task, gain first-hand experience, and develop their performance skills. Such first-hand experience should become the most important determinant of one’s self-efficacy. Experience of success or high levels of performance raises one’s self—efficacy, whereas failure or low levels of performance tends to lower it. The updated self—efficacy then affects future experience of the task, thus completing the cyclical process. Numerous studies (e.g., Gist & Mitchell, 1992; Lindsley, Brass, & Thomas, 1995; Bandura & Locke, 2003; Latham & Locke, 1991; Mitchell, Hopper, Daniels, George- F alvy, & James, 1994) have examined the effect of past performance on self-efficacy. They generally found that past performance of a task is a key antecedent of self-efficacy regarding the task. Successful performance of a task heightens self-efficacy and raises one’s confidence that, in turn, facilitates further successful performance of the behavior. Conversely, failing to perfonn a behavior decreases self-efficacy and inhibits future performance of the behavior, leading to a downward spiral. Because performance level tends to grow over time as individuals develop skills and because it is strongly determined by the amount of effort individuals exert, I expect that: I I vpothesis 7: Innovation self-efficacy will increase over time. IItpothesis 8: IT usage will have a positive effect on subsequent innovation self- efficacy. Bandura (1997) suggests that self-efficacy changes as a result of a cognitive evaluation mechanism. Once individuals carry out their intended behaviors, they reflect on their experience and evaluate whether their experience matches their prior 45 expectations. Evaluations that one’s current performance of a task such as using an IT innovation surpasses his or her self—efficacy encourage the upward revision of self- eff'icacv (Bandura, 1997). 
In contrast, judgments that one’s performance falls short of his or her self—efficacy motivate downward revision of self-efficacy (Bandura). It is the discrepancy between prior self-efficacy and currently perceived performance that compels users to adjust their future self-efficacy. Cognitive discrepancy is a strong motivating force because individuals are compelled to avoid cognitive dissonance and keep their beliefs and behaviors in alignment. In the model in Figure 5, the discrepancy between self-efficacv and perceived performance reflects in the construct of efficacy disconfirmation, an internal measure of personal performance. Positive efficacy disconfirmation occurs when users’ prior self—efficacy falls short their perceived perfomrance and is likely to lead to upward revision of future efficacy expectation. Conversely, negative efficacy disconfirmation happens when prior efficacy expectation exceeds experienced performance and is likely to result in downward adjustment of subsequent efficacy expectation. The process of evaluating one’s performance and adjusting self-eflicacy expectation repeats until users reach a state of equilibrium over time when self-eflicacy and perceived performance are in alignment. Although the idea that efficacy disconfirmation mediates the effect of behaviors on selflefficacj' is reasonable, to my knowledge no study has investigated the issue. Consistent with Bandura (1997), I expect that: Hypothesis 9: Efficacy disconfirmation will mediate the effect of I T usage on subsequent innovation self-efficacy. 46 Outcome Expectation and Outcome Disconfirmation In addition to self-efficacy, outcome expectation is another key belief that underlies behavioral intention and propels individuals to action (Bandura, 1997; Compeau et al.,l999; Davis, 1989; Davis et al., 1989; Zimmerman, 2000). Outcome expectation regarding an IT innovation refers to the anticipated outcomes that are believed to be the results of using the innovation (Davis, 1989; Davis et al., 1989; C ornpeau & Higgins, 1995; Compeau et al., 1999). It is related but conceptually distinct from selflefficaev. Outcome expectation concerns beliefs about the ultimate ends of using IT innovations, whereas self-efficacy refers to personal beliefs about having the means to use them. People expect many psychological and practical outcomes associated with using IT innovations. These outcomes include the joy of using IT innovations, development of IT skills, heightened social status, improved performance, and extrinsic rewards such as promotions or pay raises. Outcome expectation is also similar to perceived usefulness in the TAM, as they refer to the subjective outcomes of using IT innovafions. Outcome expectation influences behaviors because behaviors are largely regulated by the actors‘ subjective experience of their consequences through associative Ieaming and operant conditioning mechanisms. People are more likely to engage in behaviors they expect will be rewarded, but show little interest in activities that they do not value (Ajzen, 1991; Bandura, 1997; Davis, 1989; Davis et al., 1989; Vroom, 1964; Zimmerman, 2000). Individuals engage in activities and extend effort when they 47 anticipate that their behaviors will bring desirable outcomes (Ajzen, 1991; Bandura, 1997; Stajkovic & Luthans, 1997; Vroom, 1964). In IT adoption research, research has investigated the effect of outcome expectation using the same construct with a different label, perceived usefulness. 
Perceived usefulness was found to have a positive influence on behavioral intention to use IT innovations at the between-person level of analysis (Davis et al., 1989; Legris et al., 2003; Taylor & Todd, 1995; Venkatesh & Davis, 2000). Its effect was stronger than ease ofuse in many studies (Legris et al.; Vanketesh et al., 2000). In addition, the influence of outcome expectation was found to be consistent across different studies. For example, Legris et al. examined 22 studies that yielded 28 possible relations between each of the TAM constructs and behavioral intention. Out of 19 reported relations, perceived usefulness had a positive relation on behavioral intention 16 times and a non- significant relation three times. Although no studies have examined the effect of outcome expectation on behavioral intention at the within-person level of analysis, I expected the similar relationship because an individual is likely to have stronger behavioral intention to use IT innovations when he/she has more positive outcome expectation. I-Irpothesis l 0: All else equal, higher outcome expectation will be associated with stronger behavioral intention. IT adoption research also shows that higher outcome expectation is associated with higher usage of IT innovations at the between-person level of analysis. Compeau and Higgins (1995) reported that the correlation between outcome expectation and self- 48 reported time usage ofa computer was .41 in a survey of 1020 workers. When self-report usage was measured again in a follow-up survey a year later (Compeau et al., 1999), it was predicted outcome expectation was measured the year before (r = .40). The positive relationship between outcome expectation and IT usage was also found in other studies (e.g., Hill et al., 1987). In addition, existing studies have also demonstrated that perceived usefulness, the same construct but with a different label, predicts behavioral intention and self-reported usage (e.g., Davis, 1989; Davis et al., 1989; Jeyaraj, Rottman, & Lacity, 2006; Legris et al., 2003; Taylor & Todd, 1995). In their review, Legris et al. reported that perceived usefulness is the most consistent predictor of self-report usage among all the TAM and the TPB constructs. Although no research has studied the influence of outcome expectation at the within-person level of analysis, I predicted outcome expectation will have a positive influence on IT usage within person because an individual is likely to use IT innovations when heishe see more positive outcomes resulted from using them. Hrpothesis II .' Higher outcome expectation will lead to higher levels of I T usage. An important question is whether the effect of outcome expectation on usage behavior is consistent over time within persons. As argued previously from a resource allocation perspective, over time individual performance of a given task is likely to depend less and less on effort and cognitive resource, and the controls of their actions are moved from internal thoughts to situational cues. Consequently, the effect of one’s motivational force on behavior tends to weaken over time. Because outcome expectation 49 is a key construct that captures the motivational force, it is likely that outcome expectation’s influence on performance is moderated by time such that outcome expectation affects IT usage strongly initially, but influence becomes weaker over time. 
Although the idea that time moderates the relationship between self-efficacy and IT usage is a reasonable and important one, little research has attempted to examine the issue. Consistent with the resource allocation theory, I predicted that: Htpothesis 12: The effect of outcome expectation on IT usage will weaken over time. Similar to self-efficacy, outcome expectation is expected to have bidirectional relationships with IT usage. That is, outcome expectation influence IT usage, which in tum shapes and adjusts future outcome expectation. This process is driven by users’ desire to avoid cognitive dissonance and keep their beliefs and behaviors in alignment. In IT usage contexts, users initially form their outcome expectation by evaluating second- hand information, such as peers’ testimonials, vendors’ claims, or experts’ opinions. Such communicated information serves as the baseline from which users initially judge the merits of IT innovation. As users interact with IT innovations, they gain first-hand experience, which allows them to evaluate whether their initial judgment matches with their actual experience. Because people are motivated to keep their beliefs and behaviors in alignment, users’ evaluation of discrepancy between prior outcome expectation and actual experience often lead to revisions of outcome expectation. The updated outcome expectation then shapes future IT usage. The process reiterates until behavior and belief 50 reach a steady state of equilibrium where they are in alignment. Thus, this process suggests that the level of positive outcomes experienced by users has a positive influence on their subsequent outcome expectation. Because IT innovation usage typically is a strong correlate of actual usage-related outcomes, on the assumption that the innovation represents true improvement over past technologies, higher IT usage is likely to lead to perceptions ofrnore positive outcomes and thus leads to higher levels of outcome expectation. Although it is important to understand the impact of IT usage on outcome expectation, research has not investigated the issue because researchers generally follow traditional experimental and correlational designs where data were collected at one or few points in time. Yet, two survey studies by Compeau & Higgins (1995) and Compeau et al. (1999) found that self-report IT usage and outcome expectation are positively related. These results provide limited empirical support that IT usage may influence outcome expectation at the between-person level of analysis. Although no research has examined the relationship at the within-person level of analysis, consistent with the rationale described earlier, I expect that: Hrpothesis I 3: IT usage will have a positive effect on outcome expectation. The precise mechanism for change in outcome expectation is thought to occur through a cognitive evaluation process. Once users go through the experience of using IT innovations, they are likely to reflect on their experience and evaluate whether their outcome expectation match their perceived outcomes. The discrepancy between prior 51 outcome expectation and perceived outcomes often leads users to adjust their outcome expectation. The discrepancy of outcome expectation and perceived outcomes reflects in the construct of outcome disconfirmation, an internal measure of product performance, in the model in Figure 5. 
Positive outcome disconfirmation occurs when users’ prior outcome expectation falls short their actual experience of positive outcomes and is likely to lead to upward revision of future outcome expectation. Conversely, negative outcome tlisconfirnuition happens when prior outcome expectation exceeds of experienced outcomes and is likely to result in downward adjustment of subsequent outcome expectation. The process of evaluating one’s experience and adjusting outcome expectation repeats itself until users reach a state of equilibrium when outcome expectation and perceived experience of actual outcome are in alignment. Although no prior research has attempted to investigate the mechanism through which IT usage updates one’s outcome expectation, consistent with cognitive dissonance principal I expect that outcome discom’irmation is likely to mediate the influence of IT usage on outcome expectation within person. Hypothesis I4: Outcome disconfirmation will mediate the effect of IT usage on outcome expectation. 52 METHODS Overview ofthe Research In this study, college students, who volunteered for the research, were given Palm Pilots loaded with several software programs, and their behavioral and attitudinal data were recorded over an eight-week time period between October and December of 2006. None ofthe participants had previous experience with Palm Pilots or with the software programs that they learned to use. The study began with training the participants on each of three software programs, which were selected because they are seen as useful and interesting to college students. The Palm Pilots were set up to record the amount of time that the students used the Palm Pilot software packages. They were also programmed to sample attitudinal and behavioral data daily over the time period. This research method enabled the collection of objective adoption data in terms of the amount of time the participants spent using the IT innovation device as well as attitudinal and perceptual data over time that were related to adoption behaviors. S‘_dnlpl_€ Participants were 63 undergraduate students enrolled in Psychology 101 at Michigan State University. Undergraduate students in Psychology 101 were required to participate in psychology experiments as part of their class activities, and their participation was facilitated by the subject pool set up by the psychology department. The student participants signed up for participation on a first—come first-serve basis with the constraints that half were female. They were selected for the study if they met two conditions. First, they had not used any PDA device prior to the experiment. Second, they had not used software applications similar to the ones they were to use in study on their 53 personal computers. These conditions were set up to ensure that the Palm Pilot was perceived as new to the users and thus could be considered as an IT innovation. To screen participants, only students who indicated they had not used PDA before were invited to participate in the study. At the beginning of the study, selected participants were asked whether they have any experience to use PDAs, cell phone, or computers to schedule appointments, organize ideas, or track to-do lists (see Appendix 1). Two study participants indicated that they had some experience, and their data were not included in the analyses. Additionally, students were asked whether Palm Pilots were something new to them personally (see Appendix 2). 
Results showed that Palm Pilots were viewed as new by all participants in the final sample. During the course of the study, data from three participants were lost and could not be recovered due to technical reasons. The final sample contained 58 participants, 31 females and 27 males, between the ages of 18 and 27 years (M = 19.7, SD = 1.7). The ethnic makeup of the sample was 76% Caucasian American, 10% African American, 5% Hispanic American, 2% Indian American, and 7% Asian American.

IT Innovation

In this study, Palm Pilots were chosen as the IT innovation to investigate for a number of reasons. Because Palm Pilots were a relatively new product at the time of the study (2006), and especially to the research participants (which is no longer the case at the time of the publication of this research), who were psychology students at Michigan State University, the device was perceived as a new technology, as suggested by participants' answers to the screening questions discussed earlier. It was at the time, and has increasingly become, an integrated device that can perform many functions (email, phone, voice-commanded computing, etc.) and thus could be adopted by the masses in the longer term. Additionally, Palm Pilots could be programmed to record user behavior objectively, and software applications for collecting the cognitive and affective measures used in this study already existed from previous behavioral sampling research.

The Palm Pilots in this study were preinstalled with the following applications/features:

1. Calendar, with which users can record and keep track of their daily/weekly/monthly schedules and appointments.
2. Contacts, which allows users to store and organize their contacts.
3. Tasks, which enables users to record things to do and keep track of them.
4. Note Pad, with which users can draw sketches and write their notes.
5. An outliner application, which lets users outline their thoughts and rearrange them.
6. Hi Money, which enables users to record and track incomes, expenses, and financial transactions, and report the overall balance.
7. Diet Helper, with which users record and track their daily diet.

Given these features, the Palm Pilot used in the present study can be viewed as an IT innovation that performs seven different functions implemented by the seven applications.

Training

The training was about one and a half hours long and was conducted at the first meeting with the participants. The goal was to provide the participants with a basic understanding of Palm Pilot features: navigation, data input, and synchronization processes. Furthermore, the training was intended to show, in detail, how participants could use the basic features of three main applications in the Palm Pilot, namely Calendar, Contact, and Task. In the training, participants were each given an instructional booklet (see Appendix 3). The instructor then discussed the contents of each application via a PowerPoint presentation. Participants were asked to carry out hands-on exercises frequently at the end of each topic so that they learned to operate Palm Pilots through hands-on practice. The instructions were adapted from the Getting Started Guide that was provided with the purchase of the Palm Pilot Zire 31 model used in the experiment. As can be seen in Appendix 3, the instructions explained the operations of the main icons, the controlling buttons (e.g., Home, Keyboards), and the workspace in the Palm Pilot (e.g., Graffiti writing area, keyboards, and main screen) where data were entered and responses were viewed.
The trainer then briefly described the main purpose of all seven preinstalled software applications. This was meant to inform users of the main functions of all preinstalled applications and to explain to the users what the Palm Pilot was capable of accomplishing. Participants were then shown how to open and close all the preinstalled applications using the main icons and controlling buttons, but they would be trained only on the detailed features of the three applications previously mentioned. This pattern of training is typical in technology training where new users are taught the basic features of the technology. It often leads to two ways in which innovation adaption could be manifested. First, users spend time working on the software they are trained on and practice basic operations. Second, users spend time to learn new applications on their own, for which they are informed of the applications’ general purpose, if they want to use them. 56 Instructions on how to access and operate Calendar, Contact, and Task applications were then explained. These instructions focused on the detailed operations of the basic features of these applications without discussing the more advanced features. For example, participants were showed how to schedule an appointment, edit, and review them by opening different views of the calendar (e.g., daily, weekly, and monthly views). However, more advanced operations such as how to set up alarms for appointments, how to repeat the same event over time, how to assign priority to event, how to color code events for visual effects, and how to attach notes to events were not discussed. The instructional strategy was intended to teach participants on the fundamentals and help them get started without being overwhelmed, an approach similar to the actual training process that typically takes place after the introduction of IT innovations in organizations. It should be noted that this type of training approach left the burden of Ieaming more advanced features and mastering IT innovations on the users. During the training, participants were frequently given time to carry out the discussed procedures during and at the end of the discussion of each application. The instructor would stop frequently to ask participants whether they had questions and had completed the hands-on exercises in order to make sure that they were on the same page and could operate the Palm Pilots. The instructions ended with a discussion on how Palm Pilots could be synchronized with computers so that the data could be updated and backed up. At the end of the presentation, participants were asked to explore Palm Pilot on their own for 20 minutes. They were instructed to again practice the steps presented in the presentation by using different buttons and icons to access different applications as well as performing detailed operations in Calendar, Contact, and Task that had been 57 discussed in the presentation. The investigator also showed the participants where they could access additional how-to instructions and obtain user manuals for all preinstalled applications. The additional instructions and manuals were posted on a web page so that participants could download them ifthey desired. Procedure After signing voluntary participation agreements to take part in an eight-week study using Palm Pilots (see Appendix 4), participants completed a paper-and-pencil information sheet asking for their gender, age, and ethnicity. 
They then filled out a survey measuring individual difference dimensions (see Appendix 5), which would be used for additional analyses that are not part of the present study. After the survey, each of the participants was given a Palm Pilot. The Palm Pilots were all permanently tagged with unique numbers to identify the users without revealing their personal infomration. Participants learned how to operate the Palm Pilots with the aid of the trainer as described in the preceding section. Participants were then shown how they could respond to a survey administered and prompted by Palm Pilots. The survey contained questions asking participants about their expectations, beliefs, intentions, and cognitive evaluations as described in the “Measure” section. PMAT, a Palm survey application designed by Weiss, Beal, Lucy, and McDermid (2004), was used to administer the survey and collect participants’ daily self-reported responses. Once the instructions on how to respond to the PMAT survey were completed, participants practiced answering a five-question survey. They were then asked to respond to the first full survey, which took less than five minutes to complete. The same survey was administered automatically by PMAT about a day later at the time 58 chosen by participants and was repeated at the same chosen time daily for eight weeks. In all cases, the administration of the survey was prompted by beeping signals. Survey items were presented in a random order at each time in order to minimize order effects. A log system in the Palm Pilot automatically recorded participants’ usage data (namely, the usage frequency and duration) for the whole bundle as well as each of the applications on a daily basis. After participants completed the training survey, they were asked to schedule a follow-up 30-minute meeting and four subsequent two—minute meetings. The 30-minute meeting occurred in the same location one week later to help users troubleshoot problems that arose during their first week using Palm Pilots. The short two-minute meetings were scheduled every two weeks after the training. In these meetings, the experimenter met the participants a few minutes before the beginning of their Psychology 101 lecture or other mutually-agreed times and locations and downloaded their survey responses onto his laptop and usage data so that the data would be secured. After scheduling the follow-up meetings, participants were reminded that each would have an opportunity to earn $70. Specifically, a participant could earn $15 during the first two weeks if he/she (a) met the experimenter to have his/her data downloaded AND (b) both answered at least 90% of survey items and used the Palm Pilot at least once a day (with usage duration at his/her discretion) for 12 days out of 14 days. Similarly, each participant could earn $15 during the second two weeks, $15 during the third two weeks, and $25 during the final two weeks, contingent on BOTH conditions (a) and (b) described above. Payment was given by check once the whole study was complete. Participants were then thanked for coming to the first meeting and dismissed. 59 At the follow-up meeting a week later, the experimenter asked the participants whether they ran into any problems using the Palm Pilots. A few, non-significant issues surfaced and were quickly addressed. At subsequent meetings, participants’ survey responses and usage data were downloaded to the investigator’s computer. The downloading process took about 30 seconds each time. 
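The incentive criteria described above reduce to a simple check against each participant's daily records for a payment period. The sketch below is only an illustration of that rule, not code used in the study; the data frame, the column names, and the reading of the 90% item threshold as an average over the period are assumptions.

```python
import pandas as pd

# Hypothetical daily records for one participant over one 14-day payment period.
period = pd.DataFrame({
    "day": range(1, 15),
    "items_answered_pct": [0.95] * 12 + [0.80, 1.00],   # share of survey items answered each day
    "usage_frequency": [3, 1, 0, 2, 5, 1, 1, 4, 2, 1, 0, 3, 1, 2],
})

def eligible_for_payment(period: pd.DataFrame,
                         met_experimenter: bool,
                         min_item_rate: float = 0.90,
                         min_usage_days: int = 12) -> bool:
    """Check the two payment conditions from the Procedure section:
    (a) the participant met the experimenter to have data downloaded, AND
    (b) at least 90% of survey items were answered (interpreted here as an
        average over the period) and the Palm Pilot was used at least once a
        day on at least 12 of the 14 days."""
    answered_enough = period["items_answered_pct"].mean() >= min_item_rate
    used_enough_days = (period["usage_frequency"] >= 1).sum() >= min_usage_days
    return met_experimenter and answered_enough and used_enough_days

print(eligible_for_payment(period, met_experimenter=True))   # True for this example
```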
At the final meeting at the end of the study, participants returned the Palm Pilot, were paid for their participation, and were given a short debriefing (see Appendix 6) regarding the purpose of the study.

Measures

The following measures were used in the study. (Personality, computer self-efficacy, and other individual difference factors were also measured for a separate study, and their scales are described in Appendix 5. These factors are not needed as control variables in a within-person design such as the present one, as they are constant within each person.)

Time. Time is measured in cumulative days since the start of the study. Time is 1 for the first day, 2 for the second, and so forth.

Outcome Expectation. Four items were adapted from Venkatesh et al. (2003) to assess outcome expectation (see Appendix 7). Each item was rated using a five-point scale that ranged from 1 (extremely unlikely) to 5 (extremely likely). A sample item was, "If I use Palm Pilot, I would find Palm Pilot useful in my work." The alpha coefficient for the scale was .92.

Innovation Self-Efficacy. The seven-item measure of self-efficacy (see Appendix 8) followed the procedure recommended by Bandura (1997) and used by Zimmerman and Bandura (1994). The scale assessed users' perceived capability to use various features of Palm Pilots. Users were asked to rate their perceived capability to use each of the features of Palm Pilots, for example, "How well can you use the Calendar Application?" Each item was rated using a five-point scale that ranged from 1 (not well at all) to 5 (very well). The alpha coefficient for the scale was .89.

Behavioral Intention. Three items were adapted from Venkatesh et al. (2003) to assess behavioral intention to use Palm Pilots (see Appendix 9). Each item was rated using a five-point scale that ranged from 1 (extremely unlikely) to 5 (extremely likely). A sample item was, "I intend to use Palm Pilot during the next 24 hours." The alpha coefficient for the scale was .91.

Implementation Intention. Five items assessed implementation intention (see Appendix 10). They were adapted from Miller's (2003) study. Each item was rated using a five-point scale that ranged from 1 (strongly disagree) to 5 (strongly agree). A sample item was, "I have planned how I will use Palm Pilot when I see relevant opportunities." The alpha coefficient was .94.

Outcome Disconfirmation. Four items were adapted from Bhattacherjee and Premkumar (2004) to assess users' outcome disconfirmation (see Appendix 11). Each item was rated using a five-point scale that ranged from 1 (much worse than expected) to 5 (much better than expected). A sample item was, "Compared to my expectation yesterday, the ability of Palm Pilot to improve my performance was (much worse than expected ... much better than expected)." The alpha coefficient for the scale was .91.

Efficacy Disconfirmation. Four items were adapted from Bhattacherjee and Premkumar (2004) to assess users' efficacy disconfirmation (see Appendix 12). Each item was rated using a five-point scale that ranged from 1 (much worse than expected) to 5 (much better than expected). A sample item was, "Compared to my expectation yesterday, my ability to use Palm Pilot was (much worse than expected ... much better than expected)." The alpha coefficient for the scale was .92 (Bhattacherjee, 2004).

IT Usage. Objective usage of the Palm Pilot was automatically recorded by a log system on a daily basis.
The log system monitored and recorded how often and how long each application in the Palm Pilots was used daily. I used two different measures of usage in the present study: usage frequency and usage duration. Usage frequency refers to the number of times participants used the Palm Pilots each day, regardless of the nature of the applications they used. It is not cumulative. For example, if a participant used the Palm Pilot three times the first day and five times the second day, usage frequency was recorded as three and five, respectively. Usage duration is the number of minutes participants spent using the Palm Pilots each day. It is also not cumulative. For example, if a participant spent 20 minutes the first day and 15 minutes the second day, usage duration was recorded as 20 and 15 minutes, respectively.

DATA ANALYSIS

As the purpose of the present study was to examine the dynamic relationships of key beliefs and usage behaviors, not to differentiate user experience of multiple innovations in a comparative sense, I combined user experience data across the seven applications in analyzing the data. In this way, the data may be viewed as user experience of the Palm Pilot with its preloaded set of applications or features. The combined data at the Palm Pilot level should provide more robust measures of user experience, as they are measures of the means, minimizing measurement errors and other biases associated with each individual application. (For readers interested in users' behaviors at the application level, Appendix 13 presents average usage frequency over time broken out by Palm Pilot application.)

To analyze repeated measures data, a number of analytic techniques exist (e.g., WABA, time-series regression, hierarchical linear modeling). Many of these techniques capture the relationship patterns among the data across persons. Yet, in doing so, the analyses fail to model relationships within each individual and, thus, may miss important individual differences. To provide a more comprehensive picture, the present study used both hierarchical linear modeling (HLM) and multiple regression to model the data.

HLM (Bryk & Raudenbush, 1997; Raudenbush & Bryk, 2002) was used to analyze data from all individuals in the sample. It works in two steps. First, it models data for each individual separately in order to estimate within-person effects. Then, it aggregates estimates of within-person effects across individuals. The resulting output provides estimates of within-person effects for an average individual as well as the variance of those effects between individuals.

In this study, HLM was considered the primary method to evaluate hypotheses, as it allows us to evaluate whether the hypotheses hold for an average person. Yet this method has its limitations, as it does not directly reveal whether hypotheses are supported for specific individuals. To work around this limitation, when HLM results showed significant effect variance between individuals, multiple regressions were run to analyze data for each individual separately. These supplemental analyses help determine whether hypotheses hold for specific individuals and estimate the proportion of sample participants for whom hypotheses are supported.

In all analyses, outcome expectation and self-efficacy at time t were used to predict behavioral intention and implementation intention at time t and IT usage at time t + 1. Behavioral intention and implementation intention at time t were entered as predictors of IT usage at time t + 1. Usage at time t was used to predict outcome disconfirmation, efficacy disconfirmation, outcome expectation, and innovation self-efficacy at time t + 1. Finally, outcome disconfirmation and efficacy disconfirmation at time t were entered as predictors of outcome expectation and innovation self-efficacy at time t + 1, respectively.
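As a concrete illustration of the lagged structure just described, the following sketch shows how day-(t + 1) criteria could be aligned with day-t predictors within each person before model fitting. It is a minimal sketch, not code from the study; the file name and column names are hypothetical.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant per day.
df = pd.read_csv("palm_daily.csv")   # assumed columns: id, day, usage_frequency,
                                     # behavioral_intention, implementation_intention,
                                     # self_efficacy, outcome_expectation,
                                     # efficacy_disconfirmation, outcome_disconfirmation
df = df.sort_values(["id", "day"])

# Day-(t + 1) criteria, created within each person so that one participant's last
# day is never paired with the next participant's first day.
g = df.groupby("id")
df["usage_frequency_next"] = g["usage_frequency"].shift(-1)
df["self_efficacy_next"] = g["self_efficacy"].shift(-1)
df["outcome_expectation_next"] = g["outcome_expectation"].shift(-1)

# Rows usable for the lagged models: day-t predictors paired with day-(t + 1) usage.
lagged = df.dropna(subset=["usage_frequency_next"])
print(lagged[["id", "day", "behavioral_intention", "implementation_intention",
              "usage_frequency_next"]].head())
```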
Usage at time t was used to predict outcome disconfirmation, efficacy disconfirmation, outcome expectation, and innovation self-efficacy at time t + 1. Finally, outcome disconfirmation and efficacy disconfirmation at time t were entered as predictors of outcome expectation and innovation self-efficacy at time t + 1, respectively.

RESULTS

In Table 1, descriptive statistics of the studied variables are presented. With the exception of the means and SDs, the numbers below the diagonal represent zero-order correlations that were calculated using data at all time points from all sample participants. These correlations were calculated with data at the most granular level (person by time) and thus ignored the dependency of data within persons. It is noted that the zero-order correlation between time and usage duration was significantly negative. However, the correlation was very small (r = -.05, p < .01) and could be significant due to the large number of data points and their dependency, which tends to create deflated standard errors. An HLM analysis of this relationship, which addresses data dependency and biased standard errors, showed that usage duration did not significantly decrease over time (γ = -0.018, p > .10).

Aside from the negative correlation between time and usage duration, the other zero-order correlations at the most granular level generally supported the hypothesized relationships summarized in Figure 5. Both behavioral intention and implementation intention were positively related to usage frequency and usage duration. Innovation self-efficacy and outcome expectation had positive correlations with behavioral intention, usage frequency, and usage duration. Usage frequency and usage duration were positively correlated with outcome disconfirmation and efficacy disconfirmation. Outcome disconfirmation and efficacy disconfirmation, in turn, had positive correlations with outcome expectation and innovation self-efficacy, respectively.

The numbers above the diagonal in Table 1 represent person-level zero-order correlations. Variables for each participant were averaged across time.

[Table 1. Descriptive Statistics of Studied Variables]

[Figure 5. Hypothesized self-regulation model of dynamic user experience of IT innovations]

Correlations were then calculated using the resulting averages across persons. The pattern of correlational relationships is similar to those of the other types of correlations described earlier.
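Both kinds of correlations reported in Table 1 can be computed directly from the daily records. The sketch below is a minimal illustration, not the study's actual code; the data file and column names (pid for the participant identifier plus one column per studied variable) are hypothetical. It computes the pooled person-by-day correlations that appear below the diagonal of Table 1 and the person-level correlations of time-averaged scores that appear above it.

```python
# Minimal sketch: pooled (person-by-day) vs. person-level correlations.
# Assumes a long-format file "daily.csv" with hypothetical column names.
import pandas as pd

daily = pd.read_csv("daily.csv")
variables = ["time", "outcome_expectation", "innovation_self_efficacy",
             "behavioral_intention", "implementation_intention",
             "efficacy_disconfirmation", "outcome_disconfirmation",
             "usage_frequency", "usage_duration"]

# Pooled zero-order correlations: each daily record is an observation,
# so within-person dependency is ignored (as noted in the text).
pooled_corr = daily[variables].corr()

# Person-level correlations: average each variable across time within person,
# then correlate the resulting person means.
person_means = daily.groupby("pid")[variables].mean()
person_corr = person_means.corr()

print(pooled_corr.round(2))
print(person_corr.round(2))
```

The pooled matrix uses every person-day record and therefore rests on many more observations than the person-level matrix, which is based only on the 58 participant means; this is why correlations of similar magnitude can be significant in one matrix and not the other.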
It should be noted that the magnitudes of the correlations of outcome disconfirmation with usage frequency and usage duration are relatively high (.20 and .21) but are not significant due to the small sample size of 58 persons.

From a theoretical standpoint, behavioral intention and implementation intention are two related but distinct constructs. Behavioral intention refers to the commitment or desire to carry out a behavior, whereas implementation intention concerns the way the intended behavior is enacted; it specifies when, where, and how the intended behavior is enacted. To evaluate their relationship from a measurement standpoint, a principal factor analysis with Promax (oblique) rotation was performed. Results of the rotated factor pattern matrix (see Appendix 14) showed that two factors were extracted from the common variance of the eight scale items that were used to measure these two constructs. The eigenvalue for the third factor was 0.36. The two extracted factors explained 83% of the common variance, and their correlation was .36. The five items with implementation intention content had factor loadings of more than .78 on the first factor and less than .13 on the second factor. The three items with behavioral intention content had factor loadings of more than .87 on the second factor and less than .01 on the first factor. Given these values, it was concluded that behavioral intention and implementation intention are two related but distinct constructs as measured by the scale items used in this study.

To further analyze the data, HLM and multiple regression analyses were conducted to evaluate each hypothesis. It should be noted that two measures of IT usage, usage frequency and usage duration, were assessed in this study. Results showed a similar pattern of relationships when either of these two measures was used as the criterion, which was not surprising given their intercorrelation of .77 (see Table 1). For simplicity, only results for usage frequency are presented.

To test Hypotheses 1 (Stronger behavioral intention will lead to a higher level of IT usage) and 2 (Stronger implementation intention will lead to a higher level of IT usage over and above the effect of behavioral intention), I ran three different models in HLM. The results are presented in Table 2. In Model 1, usage frequency was entered without any predictor so that the portions of between- and within-person variance in usage frequency could be determined. Results showed that the model deviance was 12068.390. ICC was .44. Between-person variance was 1.658 (44% of total variance), and within-person variance was 2.082 (56% of total variance).

To evaluate Hypothesis 1, behavioral intention was then entered as a level-1 predictor of usage frequency with random slopes. As can be seen in Table 2, the model deviance was 11275.272, or 793.118 lower than Model 1's, indicating that Model 2 fit the data better than the null model. The slopes of the behavioral intention effect were positive on average (γ = 1.047**, p < .01), but they varied significantly from person to person (σ = 0.048**, p < .01). The addition of behavioral intention reduced within-person variance to 1.635, explaining 22% of total within-person variance. It lowered between-person variance to 1.192, accounting for 28% of total between-person variance.

[Table 2. HLM Analyses of Effects of Behavioral Intention and Implementation Intention on Usage Frequency]
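The model sequence reported in Table 2 (an unconditional model, then behavioral intention, then implementation intention as level-1 predictors with random slopes) can be approximated with a linear mixed model. The sketch below is a hedged illustration using Python's statsmodels rather than the HLM software used in the study; the data frame and column names (pid, time, usage_frequency, behavioral_intention, implementation_intention) and the lag construction are assumptions, and the estimates will not match the reported values exactly because of software and estimation differences.

```python
# Minimal sketch of the Table 2 model sequence with statsmodels MixedLM.
# Column names and the lag construction are assumptions, not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

daily = pd.read_csv("daily.csv").sort_values(["pid", "time"])
# Predictors at time t, usage frequency at time t + 1 (per the analysis plan).
daily["usage_freq_next"] = daily.groupby("pid")["usage_frequency"].shift(-1)
daily = daily.dropna(subset=["usage_freq_next"])

# Model 1: unconditional model; ICC = between-person / (between + within) variance.
m1 = smf.mixedlm("usage_freq_next ~ 1", daily, groups=daily["pid"]).fit(reml=False)
icc = m1.cov_re.iloc[0, 0] / (m1.cov_re.iloc[0, 0] + m1.scale)

# Model 2: behavioral intention as a level-1 predictor with a random slope.
m2 = smf.mixedlm("usage_freq_next ~ behavioral_intention", daily,
                 groups=daily["pid"],
                 re_formula="~behavioral_intention").fit(reml=False)

# Model 3: implementation intention added over and above behavioral intention.
m3 = smf.mixedlm("usage_freq_next ~ behavioral_intention + implementation_intention",
                 daily, groups=daily["pid"],
                 re_formula="~behavioral_intention + implementation_intention").fit(reml=False)

print("ICC =", round(icc, 2))
for label, m in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3)]:
    # Deviance (-2 log-likelihood); drops in deviance parallel the reported contrasts.
    print(label, "deviance =", round(-2 * m.llf, 3))
```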
Regression analyses of Hypothesis 1 for each individual showed that behavioral intention had a significant positive effect on usage frequency for 96% of participants (see Appendix 15). As behavioral intention had a positive effect on usage frequency on average, Hypothesis 1 was supported. Stronger behavioral intention to use IT innovations led to a higher level of usage frequency within individuals. Furthermore, this effect was found in almost all individuals.

To test Hypothesis 2, implementation intention was added in the HLM analyses as a level-1 predictor of usage frequency with random slopes in Model 3. As shown in Table 2, the model deviance was 10870.641, or 404.632 lower than Model 2's, indicating that Model 3 described the data better. The slopes of the implementation intention effect were positive on average (γ = 0.808**, p < .01), and they varied marginally from person to person (σ = 0.010, p = .053). The addition of implementation intention reduced within-person variance to 1.443, explaining an additional 10% of total within-person variance. It also lowered between-person variance to 1.078, accounting for an additional 7% of total between-person variance.

Regression analyses of Hypothesis 2 for each individual showed that implementation intention had an incremental positive effect on usage frequency over and above the effect of behavioral intention for 82% of participants (see Appendix 15). Because implementation intention had an independent positive effect on usage frequency on average, Hypothesis 2 was supported. Stronger implementation intention resulted in a higher level of usage frequency within persons. This effect was present in a large majority of the sample participants.

To evaluate Hypotheses 3 (Higher innovation self-efficacy will be related to stronger behavioral intention to use IT innovations) and 10 (Higher outcome expectation will be associated with stronger behavioral intention to use IT innovations), two models were run in HLM, and the results are presented in Table 3. In Model 1, behavioral intention to use the IT innovation was entered without any predictor so that the portions of between- and within-person variance could be determined. The resulting deviance for the model was 6613.779. Between-person variance was 0.247 (38% of total variance), and within-person variance was 0.404 (62% of total variance).

In Model 2, innovation self-efficacy and outcome expectation were added as level-1 predictors of behavioral intention. As shown in Table 3, the model deviance was 5509.942, or 1103.837 lower than Model 1's, indicating that Model 2 fit the data better. The average slopes of innovation self-efficacy (γ = 0.362**, p < .01) and outcome expectation (γ = 0.912**, p < .01) were both significantly positive, but they varied significantly from person to person (σ = 0.057**, p < .01 and σ = 0.079**, p < .01, respectively). Regression analyses of Hypothesis 3 for each individual showed that outcome expectation had a significant independent effect on behavioral intention for 93% of participants, whereas the innovation self-efficacy effect was significant for 53% of participants (see Appendix 17). Given that innovation self-efficacy and outcome expectation had independent positive effects on behavioral intention, Hypotheses 3 and 10 were supported.
[Table 3. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Behavioral Intention]

Higher levels of innovation self-efficacy and outcome expectation led to a stronger intention to use IT innovations within individuals. The positive effect of innovation self-efficacy was observed in about half of the sample, whereas the impact of outcome expectation was found in almost all of the sample participants.

To examine Hypothesis 4 (Higher innovation self-efficacy will be associated with stronger implementation intention), two models were analyzed in HLM, and the results are presented in Table 4. In Model 1, implementation intention was analyzed as a criterion without any predictor. The model deviance was 7670.179. Between-person variance was 0.224 (29% of total variance), whereas within-person variance was 0.559 (71% of total variance).

In Model 2, innovation self-efficacy and outcome expectation were entered as level-1 predictors of implementation intention with random slopes. As shown in Table 4, the model's deviance was 7091.909, or 578.270 lower than Model 1's, indicating that Model 2 fit the data better. The average slopes of innovation self-efficacy (γ = 0.619**, p < .01) and outcome expectation (γ = 0.388**, p < .01) were both significantly positive, and only the innovation self-efficacy slopes varied significantly from person to person (σ = 0.029**, p < .01). Regression analyses of Hypothesis 4 showed that innovation self-efficacy had a significant independent effect on implementation intention for 91% of participants (see Appendix 18). As innovation self-efficacy had an independent positive effect on implementation intention on average, Hypothesis 4 was supported. Higher levels of innovation self-efficacy resulted in stronger implementation intention within persons.

[Table 4. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Implementation Intention]

The effect was observed in almost all of the sample participants. It is also noted that outcome expectation had a significant independent effect on implementation intention, although this was not predicted, and its effect did not vary between persons.

Two models were run in HLM to evaluate Hypotheses 5 (Higher innovation self-efficacy will lead to a higher level of IT usage) and 11 (Higher outcome expectation will lead to a higher level of IT usage), and the results are presented in Table 5. In Model 1, usage frequency was entered as a criterion without any predictor. The model deviance was 12068.390. ICC was .44. Between-person variance was 1.658 (44% of total variance in usage frequency), whereas within-person variance was 2.082 (56% of total variance).

In Model 2, innovation self-efficacy and outcome expectation were added as level-1 predictors of usage frequency with random slopes. The results of the analysis for this model are presented in Table 5. The model's deviance was 11415.858, or 652.532 lower than Model 1's, indicating that Model 2 fit the data better.
The average slopes of innovation self-efficacy (γ = 0.607**, p < .01) and outcome expectation (γ = 1.675**, p < .01) were both significantly positive, and only the innovation self-efficacy slopes varied significantly between persons (σ = 0.167**, p < .01). Regression analyses of Hypothesis 5 showed that the independent effect of innovation self-efficacy on usage frequency varied: it was positive for 54%, nonsignificant for 40%, and negative for 6% of participants (see Appendix 19). Because innovation self-efficacy and outcome expectation had independent positive effects on usage frequency on average, Hypotheses 5 and 11 were supported.

[Table 5. HLM Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Usage Frequency]

Higher levels of innovation self-efficacy resulted in higher levels of usage frequency within individuals. The positive effect of innovation self-efficacy was found for about half of the sample, whereas the outcome expectation effect was consistent from individual to individual.

To evaluate Hypotheses 6 (The effect of innovation self-efficacy on IT usage will weaken over time) and 12 (The effect of outcome expectation on IT usage will weaken over time), three models were run in HLM. The results are presented in Table 6. In Model 1, outcome expectation, innovation self-efficacy, and time (number of days) were entered as predictors of usage frequency so that it could be used as a base model to evaluate interaction effects. The resulting deviance for Model 1 was 11356.372.

In Model 2, the interaction term for outcome expectation and time was added. As shown in Table 6, the model deviance was 11361.621, or 5.249 higher than Model 1's, indicating that Model 2 did not fit the data better than Model 1. The average slope of the interaction term (γ = 0.003, p > .05) was not significantly different from zero. These results suggested that the interaction effect of outcome expectation by time was not present. Hypothesis 12 was thus not supported.

In Model 3, the interaction term for innovation self-efficacy and time was added to Model 1. As shown in Table 6, the model's deviance was 11358.463, or 7.909 lower than Model 1's, suggesting that Model 3 fit the data better. The average slopes of the interaction term (γ = -0.008**, p < .01) were significantly negative and varied from individual to individual (σ = 0.163**, p < .01). Regression analyses showed that the innovation self-efficacy × time interaction term was significantly negative for only 12% of participants (see Appendix 20).

[Table 6. HLM Analyses of Effects of Outcome Expectation, Innovation Self-Efficacy, and Time on Usage Frequency]
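The per-participant regression analyses cited throughout this section (for example, the 12% figure just reported) follow a simple recipe: fit the same regression separately to each participant's daily records and count how many of the individual slopes are significantly positive or negative. The sketch below illustrates the idea for the behavioral intention effect on next-day usage frequency; column names are hypothetical, and the same loop can be applied to any other predictor or interaction term reported above.

```python
# Minimal sketch: per-person regressions and the proportion of participants
# showing a significant effect (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

daily = pd.read_csv("daily.csv").sort_values(["pid", "time"])
daily["usage_freq_next"] = daily.groupby("pid")["usage_frequency"].shift(-1)
daily = daily.dropna(subset=["usage_freq_next"])

rows = []
for pid, person_data in daily.groupby("pid"):
    # One ordinary regression per participant, using only that person's days.
    fit = smf.ols("usage_freq_next ~ behavioral_intention", data=person_data).fit()
    rows.append({"pid": pid,
                 "slope": fit.params["behavioral_intention"],
                 "p": fit.pvalues["behavioral_intention"]})
results = pd.DataFrame(rows)

sig_pos = ((results["p"] < .05) & (results["slope"] > 0)).mean()
sig_neg = ((results["p"] < .05) & (results["slope"] < 0)).mean()
print(f"Significantly positive: {sig_pos:.0%}; significantly negative: {sig_neg:.0%}")
```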
Given that the interaction term of innovation self-efficacy by time was significantly negative on average, Hypothesis 6 was supported. This suggests that the innovation self-efficacy effect on usage frequency weakened over time within persons. However, this effect was not frequently observed, as it was found in a very small percentage of sample participants.

Two models were run in HLM to evaluate Hypothesis 7 (Innovation self-efficacy will grow over time), and the results are presented in Table 7. In Model 1, innovation self-efficacy was used as a criterion without any predictor. The resulting deviance for the model was 3862.666. ICC was .48. Between-person variance was 0.160 (48% of total variance in innovation self-efficacy), whereas within-person variance was 0.175 (52% of total variance).

In Model 2, time (number of days) was added as a level-1 predictor of innovation self-efficacy with random slopes. As shown in Table 7, the model deviance was 2580.782, or 1281.884 lower than Model 1's, indicating that Model 2 fit the data better. The slopes were positive on average (γ = 0.012**, p < .01), and they varied significantly from person to person (σ = 0.00003**, p < .01). Regression analyses showed that time had a significant positive effect on innovation self-efficacy for all participants (see Appendix 21). As time had a positive effect on innovation self-efficacy on average, Hypothesis 7 was supported. Innovation self-efficacy increased within individuals over time. This effect was observed in all sample participants.

[Table 7. HLM Analyses of the Growth of Innovation Self-Efficacy over Time]

To test Hypotheses 8 (IT usage will have a positive effect on innovation self-efficacy) and 9 (Efficacy disconfirmation will mediate the effect of IT usage on innovation self-efficacy), three models were run in HLM, and the results are shown in Table 8. In Model 1, innovation self-efficacy was used as a criterion without any predictor. The resulting deviance for the model was 3862.666. ICC was .48. Between-person variance was 0.160 (48% of total variance in innovation self-efficacy), whereas within-person variance was 0.175 (52% of total variance).

In Model 2, usage frequency was added as a level-1 predictor of innovation self-efficacy with random slopes. As shown in Table 8, the model deviance was 3682.997, or 179.669 lower than Model 1's, indicating that Model 2 fit the data better. The slopes of the usage frequency effect were positive on average (γ = 0.047**, p < .01), and they varied significantly from person to person (σ = 0.006**, p < .01). Regression analyses showed that usage frequency had a positive effect on innovation self-efficacy for 30% of participants, a negative effect for 6% of the sample, and a nonsignificant effect for the rest (see Appendix 22).

In Model 3, efficacy disconfirmation was added as a level-1 predictor with random slopes in the HLM analyses. As shown in Table 8, the resulting deviance for the model was 3302.435, or 380.562 lower than Model 2's, indicating that Model 3 fit the data better. The slopes of the efficacy disconfirmation and usage frequency effects were positive on average (γ = 0.285**, p < .01, and γ = 0.056**, p < .01, respectively), and only the usage frequency slopes varied significantly from person to person (σ = 0.006**, p < .01).
This suggests that efficacy disconfirmation and usage frequency had independent positive effects on innovation self-efficacy.

[Table 8. HLM Analyses of Effects of Usage Frequency and Efficacy Disconfirmation on Innovation Self-Efficacy]

To further evaluate Hypothesis 9, two other models were run in HLM to evaluate the influence of IT usage on efficacy disconfirmation. Results are shown in Table 9. In Model 1, efficacy disconfirmation was used as a criterion without any predictor. The resulting deviance for the model was 4520.854. ICC was close to zero. Between-person variance was close to .001 (0% of total variance in efficacy disconfirmation), whereas within-person variance was 0.228 (100% of total variance).

In Model 2, usage frequency was added as a level-1 predictor of efficacy disconfirmation with random slopes. As shown in Table 9, the model deviance was 4510.345, or 9.491 lower than Model 1's, indicating that Model 2 fit the data better. The slopes of the usage frequency effect were positive on average (γ = 0.018**, p < .01), and they did not vary significantly from person to person (σ = 0.004, p > .10). This shows that usage frequency had a positive effect on efficacy disconfirmation.

Given that usage frequency had a positive effect on innovation self-efficacy on average, Hypothesis 8 was supported. Higher levels of usage frequency led to higher levels of innovation self-efficacy within persons. However, this effect was observed in a small portion of sample participants (30%). Because (a) usage frequency and efficacy disconfirmation had independent positive effects on innovation self-efficacy on average and (b) usage frequency had a positive influence on efficacy disconfirmation on average, Hypothesis 9 was supported. These outcomes suggest that efficacy disconfirmation partially mediated the effect of usage frequency on innovation self-efficacy.

[Table 9. HLM Analyses of Effects of Usage Frequency on Efficacy Disconfirmation]

To evaluate Hypotheses 13 (IT usage will have a positive effect on outcome expectation) and 14 (Outcome disconfirmation will mediate the effect of IT usage on outcome expectation), three models were run in HLM, and the results are presented in Table 10. In Model 1, outcome expectation was entered as a criterion without any predictor. The resulting deviance for the model was 1685.8. ICC was .40. Between-person variance was 0.134 (60% of total variance), whereas within-person variance was 0.090 (40% of total variance).

In Model 2, usage frequency was added as a level-1 predictor of outcome expectation with random slopes. As shown in Table 10, the model's deviance was 1463.44, or 222.36 lower than Model 1's, indicating that Model 2 fit the data better. The slopes of the usage frequency effect were positive on average (γ = 0.055**, p < .01), and they varied from person to person (σ = 0.00067**, p < .01). Regression analyses indicated that usage frequency had a significant positive effect on outcome expectation for 72% of participants (see Appendix 23).
In Model 3, outcome disconfirmation was added as a level-1 predictor with random slopes in the HLM analyses. As shown in Table 10, the model's deviance was 822.84, or 640.6 lower than Model 2's, indicating that Model 3 fit the data better. The slopes of the outcome disconfirmation and usage frequency effects were positive on average (γ = 0.212**, p < .01, and γ = 0.065**, p < .01), and only the usage frequency slopes varied significantly from person to person (σ = 0.00065**, p < .01).

[Table 10. HLM Analyses of Effects of Usage Frequency and Outcome Disconfirmation on Outcome Expectation]

To further evaluate Hypothesis 14, two additional models were run in HLM to analyze the influence of usage frequency on outcome disconfirmation. Results are shown in Table 11. In Model 1, outcome disconfirmation was used as a criterion without any predictor. The resulting deviance for the model was 5646.069. ICC was close to zero. Between-person variance was .001 (0% of total variance in outcome disconfirmation), whereas within-person variance was 0.321 (100% of total variance).

In Model 2, usage frequency was added as a level-1 predictor of outcome disconfirmation with random slopes. As shown in Table 11, the model deviance was 5631.389, or 14.680 lower than Model 1's, indicating that Model 2 fit the data better. The slopes of the usage frequency effect were positive on average (γ = 0.024**, p < .01), and they did not vary significantly from person to person (σ = 0.000, p > .10).

Given that usage frequency had a positive effect on outcome expectation on average, Hypothesis 13 was supported. Higher levels of usage frequency led to higher levels of outcome expectation within persons. This effect was observed in a majority of sample participants (72%). Because (a) usage frequency and outcome disconfirmation had independent positive effects on outcome expectation on average and (b) usage frequency had a positive influence on outcome disconfirmation on average, Hypothesis 14 was supported. These results suggest that outcome disconfirmation partially mediated the effect of usage frequency on outcome expectation.

[Table 11. HLM Analyses of Effects of Usage Frequency on Outcome Disconfirmation]

DISCUSSION

Summary of Findings

The objective of the present study was to evaluate a dynamic model in which innovation self-efficacy and outcome expectation change, influence, and are influenced by IT usage over time within persons. Results, with a few omissions noted below, are summarized in Figure 6. The discussion below focuses on usage frequency as the measure of usage; it should be noted that a similar pattern of relationships was found when usage duration was used.

At a broad level, results showed that innovation self-efficacy (which increased over time) and outcome expectation affected and were affected by IT usage dynamically. Specifically, both innovation self-efficacy and outcome expectation had independent positive effects on usage frequency.
The effect of innovation self-efficacy varied from person to person and was significantly positive for 54% of sample participants, whereas the effect of outcome expectation was consistent and did not change between persons. The effect of innovation self-efficacy on usage frequency weakened over time (not noted in Figure 6), but this effect pattern was observed in only 12% of the sample. The relationship of innovation self-efficacy and outcome expectation with usage frequency was also evidenced in the reverse direction. Usage frequency had positive effects on innovation self-efficacy and outcome expectation. The effect on innovation self-efficacy varied between persons and was positive for 30% of sample participants, whereas the effect on outcome expectation also changed from person to person and was positive for 72% of the sample.

[Figure 6. Supported self-regulation model of dynamic user experience of IT innovations]

At a more detailed level, the present study found that the effects of innovation self-efficacy and outcome expectation on usage frequency operated through behavioral intention and implementation intention. Innovation self-efficacy and outcome expectation were found to have independent positive effects on behavioral intention. The effect of innovation self-efficacy varied between persons and was positive for 71% of the participants, whereas the effect of outcome expectation also varied between persons and was positive for 93% of the sample. Innovation self-efficacy and outcome expectation were also found to have independent influences on implementation intention. The effect of innovation self-efficacy on implementation intention varied between persons and was positive for 91% of the sample, whereas the effect of outcome expectation did not change from person to person. Behavioral intention and implementation intention in turn had positive independent effects on usage frequency. The effect of behavioral intention varied between persons and was positive for 96% of participants, whereas the effect of implementation intention also varied between persons and was positive for 82% of the sample.

Results also showed that the effects of usage frequency on innovation self-efficacy and outcome expectation were partially mediated by efficacy disconfirmation and outcome disconfirmation, respectively. Usage frequency had a positive effect on efficacy disconfirmation, and its effect did not vary between persons. Efficacy disconfirmation in turn had a positive influence on innovation self-efficacy, and its effect also did not vary between persons. Similarly, usage frequency had a positive effect on outcome disconfirmation, and its effect did not vary between persons. Outcome disconfirmation in turn had a positive influence on outcome expectation, and its effect did not vary between persons.

In summary, the present study found that innovation self-efficacy and outcome expectation affected and were affected by IT usage dynamically. The effects of innovation self-efficacy and outcome expectation on usage frequency operated through behavioral intention and implementation intention, whereas the influence of usage frequency on innovation self-efficacy and outcome expectation was partially channeled through efficacy disconfirmation and outcome disconfirmation, respectively.
Yet, there were significant individual differences in most of these relationships. The relationships between outcome expectation and usage frequency were more consistently or frequently observed across individuals than the relationships between innovation self-efficacy and usage frequency.

Research Contribution and Implications

The present study is the first to evaluate a within-person process model of how key beliefs influence and are influenced by IT innovation usage behaviors. It contributes to the current knowledge base in four key areas.

First, this study supported existing research showing that innovation self-efficacy, outcome expectation, and behavioral intention are key beliefs that drive IT usage behaviors. More importantly, it showed that these key beliefs change and explain varying IT usage within persons over time and thus accounted for behavioral changes within persons. This idea is not new and is assumed in dominant models such as the TAM (e.g., Davis et al., 1989) and the TPB (Ajzen, 1991). Yet existing studies, relying on between-person analyses of cross-sectional data or datasets collected at a few time points, have not been able to provide evidence for such intra-person processes. Rather, they showed that a person with high outcome expectation (or innovation self-efficacy) is likely to have a stronger intention to use an IT innovation, which leads him to more IT innovation usage than another person who has low outcome expectation (or innovation self-efficacy) does. Such a between-person analysis does not reveal the dynamic processes that occur within persons. By focusing on intra-person processes, the present study showed that when a person has higher outcome expectation (or innovation self-efficacy), he tends to have a stronger intention to use the IT innovation (or higher levels of IT usage) than he does when he has lower outcome expectation (or innovation self-efficacy). The difference, although it may seem minor on the surface, is important. Knowledge of within-person dynamic processes that account for behavioral change over time is the foundation of intervention efforts to train and/or persuade people to use IT innovations.

Second, the present study showed that implementation intention is a proximal determinant of IT usage in addition to behavioral intention. This finding expands the current view of the key psychological factors that are important in predicting and explaining IT usage. Current IT adoption models such as the TAM (e.g., Davis et al., 1989) and the TPB (Ajzen, 1991) assume that behavioral intention is the only proximal determinant of IT usage. To predict IT adoption, researchers typically backtrack and try to identify antecedents of behavioral intention. This approach is most useful if behavioral intention is the only proximal determinant of IT usage. The present research does not dispute the important role of behavioral intention. In fact, it found that behavioral intention explained 22% of within-person variance in IT usage and 28% of between-person variance. Yet it argued that behavioral intention is not the only proximal determinant. Implementation intention was found to have an incremental effect on IT usage, explaining an additional 10% of within-person variance and 7% of between-person variance above and beyond behavioral intention. As a result, implementation intention is viewed as another proximal determinant of IT innovation usage.
The finding of an implementation intention effect is consistent with Gollwitzer's (1999) work showing that implementation intention increases the probability that behaviors will be enacted when individuals have a general behavioral intention to carry out those behaviors. Its effect stems from the fact that the path from behavioral intention to behavior is not automatic. People need cognitive resources to execute their behavioral intention, but they often face many distractions and competing thoughts. Implementation intention helps people focus, connect situational cues to behavioral responses, and shield themselves from distracting alternatives (Gollwitzer, 1999). It thus gives people a better chance to carry out their intended behaviors. Given the role of implementation intention, future studies perhaps should incorporate this construct into their models and evaluate its effect. Additionally, efforts to examine potential antecedents of implementation intention (other than innovation self-efficacy and outcome expectation) could be useful, as they could lead to additional psychological factors responsible for IT usage.

The third contribution of the current research regards the findings that IT usage influenced innovation self-efficacy and outcome expectation within persons and the processes through which these effects occur. These findings extend our knowledge base in a significant way. Current models of IT innovation adoption such as the TAM (e.g., Davis et al., 1989) and the TPB (Ajzen, 1991) propose that equivalents of innovation self-efficacy and outcome expectation (ease of use and perceived usefulness) influence IT usage. Although they assume that these two psychological drivers change as a result of IT experience, they have not specified the linkages, and no studies, to my best knowledge, have examined this issue. The present study showed that IT usage had positive effects on innovation self-efficacy and outcome expectation. Furthermore, efficacy disconfirmation (the discrepancy between prior innovation self-efficacy and perceived ability, an internal measure of personal performance) partially mediated the effect of IT usage on innovation self-efficacy within persons, whereas outcome disconfirmation (the discrepancy between prior outcome expectation and perceived outcomes, an internal measure of product performance) partially mediated the effect of IT usage on outcome expectation. In other words, IT usage, directly and indirectly through efficacy disconfirmation and outcome disconfirmation, affects innovation self-efficacy and outcome expectation within persons. With these findings, user experience is viewed as a dynamic process in which key beliefs, innovation self-efficacy and outcome expectation, determine and are determined by IT usage within persons. It is a cyclical process in which beliefs and behaviors mutually reinforce each other.

Fourth, though neither last nor least, the present study analyzed whether individuals differ in their intra-person processes and quantified the proportion of individuals for whom a specific process applies. (Proportions of the sample for which specific processes apply are reported in this study. However, for simplicity I did not infer confidence intervals for those proportions, which could be easily estimated.) This type of analysis is tedious and time consuming. It may also seem to clutter the study's results and obfuscate key insights. Yet it provides more precise knowledge of intra-person processes among individuals.
Researchers often ask whether a process exists (e.g., Does innovation self-efficacy influence IT usage within persons?) and typically determine the answer at an aggregate, "average" level (e.g., Innovation self-efficacy has a positive influence on IT usage for an average person). What often goes unasked is: if the process exists at an aggregate level, what is the proportion of people for whom the process is at work? The latter question is as important to answer, if not more so, from research and practical standpoints. For example, this study found that outcome expectation influenced IT usage within persons for the average person, and this effect did not significantly vary between persons. Similarly, innovation self-efficacy was also found to have a positive impact on IT usage on average, but its effect changed between persons and was significantly positive for 52% of the sample. From a research standpoint, these findings raise the question of why the effect of innovation self-efficacy was present in half of the sample but not the other. Perhaps individual factors could be at work, an issue that future research could examine. From a practical standpoint, if practitioners have to choose whether to alter innovation self-efficacy or outcome expectation in order to encourage Palm Pilot usage, these findings suggest that the better bet is outcome expectation, as its effect is consistent across individuals. In another example, the current research found that the effect of innovation self-efficacy on usage frequency weakens over time for an average person. However, this effect varied from person to person and was significant for only 12% of the sample. These findings raise the question of why the weakening effect exists in some individuals but not others, an issue that future studies could address. They also suggest that the weakening effect of innovation self-efficacy may not be of practical importance and perhaps could be ignored, as it is evidenced in a very small group of individuals. These examples illustrate the significance of knowing the proportions of individuals for whom a specific process or relationship is relevant. They present a new approach to looking at data, one that the present study, to my best knowledge, is the first to take.

Practical Implications

As this research is based on observational data, causal inferences may not be warranted. If the findings in this study are replicated in controlled experiments where drivers of the change processes are manipulated and systematically altered, a number of implications may be relevant to practitioners.

The first set of implications concerns the issue of product design. Because outcome expectation may be a consistent predictor of IT usage over time, it may be an important design factor. Products that generate high outcome expectation among users may be more likely to be adopted. Product designers may benefit significantly by viewing the product from users' perspective and providing product features that users think are useful. Useful features and high levels of benefits may enhance users' outcome expectation and thus likely encourage adoption and usage. Besides outcome expectation, users' perception of their ability to use the innovation (innovation self-efficacy) may be another important factor to consider. Perceived ease of use may enhance adoption rates because it may influence IT innovation usage above and beyond outcome expectation.
When design engineers focus on products' functionalities while neglecting users' abilities to use them, they may end up with great features that users have trouble learning and adopting. It may be greatly beneficial to users when products are built with user-friendly, easy-to-use features from their perspective. Products that are useful and easy to use from users' standpoint may be more likely to be adopted and used, as indicated by the positive independent effects of outcome expectation and innovation self-efficacy on usage behaviors in this study.

The second set of implications regards how IT innovations, once designed, should be introduced to users. Because outcome expectation may be an important driver of usage behavior, innovations' benefits may need to be communicated to users early, in order to help shape their opinions. Communication attempts may address what benefits the innovations provide and how they might fulfill users' needs. In addition to "selling" innovative products, the findings in this study suggest that training and coaching may be crucial, especially during the initial phase of product adoption. Training and coaching may help the adoption process as they may enable users to use the products well and boost their self-efficacy. Beyond "selling" and training, developing an implementation mindset may also facilitate users' adoption. As this research and other studies suggest, having an implementation mindset may help users carry out intended behaviors. It enables users to develop strong associations of when and where they should use the innovations and to follow through with their intention to use the innovations.

The third set of implications concerns what may be done to enhance the quality of user experience. The present study found that when users' expectations were unrealistically high, they were likely to adjust their expectations downward after going through actual experience. Failing to meet expectations was assumed to generate negative affect and lead to a higher likelihood of downward adjustment of expectations. Conversely, when users' expectations were unrealistically low, they were likely to adjust their expectations upward. Exceeding expectations was assumed to create positive feelings and a higher likelihood of upward revision of expectations.³

These findings thus suggest that practitioners may need to avoid setting unrealistically high expectations or overselling the products. Doing so may create a negative user experience, which may discourage future usage. Instead, communicating the benefits of IT innovations in an incremental way and then providing training to help users meet or exceed their expectations at every turn may be more beneficial. This process could be done in iterations. Initially, realistic benefits of IT innovations that are easier to obtain may be explained to users. Users may need to be convinced that the innovations are useful to them and that they can successfully use the innovations. Expectations that are high enough to convince users of the benefits of the innovations may be more helpful, but they should not be set so high that they create a disconfirming experience. Training and coaching that enable users to realize their expectations may provide additional benefits. The process of communicating realistic benefits, then training and coaching users to realize their expectations, may be repeated until users reach advanced skills that allow them to fully exploit the potential benefits of the innovations.
Although this continuous "realistic-promise and over-deliver" strategy may require frequent assessments of users' expectations and skills, and interventions along the way, it has the potential to provide a positive user experience and foster commitment to long-term usage of new innovations.

³ Although one may attribute these adjustments to regression to the mean and the resulting fluctuations to artifacts, that is unlikely to be the case. Results showed that IT innovation self-efficacy increased with time, which suggests that changes in innovation self-efficacy were real and meaningful. Although outcome expectation did not significantly increase or decrease over time, its variance was predicted by outcome disconfirmation, a construct that captures the discrepancy between experience and expectation. This suggests that random regression to the mean was not at play, as the fluctuations in outcome expectation were explained and thus not random.

Boundary Conditions and Future Research

The results of the present study have a number of boundary conditions. First, because the sample contained only undergraduate students who were young and had limited work experience, the findings are applicable mainly to this population. Questions remain as to whether the results would generalize to adults in work settings. In some work settings, the effect of conforming to social norms and authority may be stronger. In addition, workers may be obligated to carry out certain behaviors regardless of their perceptions of benefits other than fulfilling one's obligations. The effects of social influence and social contract may operate at a preconscious level but nevertheless impact behaviors. As a result, much research is needed to replicate the findings in work settings with samples of workers.

The second boundary condition regards the present study's focus on the early stage of IT adoption. This focus allows us to capture users' experience at a critical stage when their beliefs and behaviors form and change. Examinations of user experience at the early stage of adoption are critical because users make their decision to continue usage of the innovations at this stage. However, the findings at the early stage may not be applicable to user experience late in the adoption process. In later stages, users' beliefs and behaviors may stabilize and may not exhibit much variance. The intra-person model in the present study may not be applicable. Clearly, research is needed to examine user experience at a later stage in the adoption process.

Another boundary condition regards the nature of the IT innovation. In this study, we examined the adoption of Palm Pilots with a specific set of applications. This type of technology can be operated independently by individuals; usage by one individual does not affect others. Although this type of technology reflects the nature of many innovations, it differs from innovations that require collaborative usage by multiple individuals (e.g., an Enterprise Resource Planning system). Thus, future research could evaluate whether the findings in the present study generalize to user-interdependent IT innovations.

Besides testing additional boundary conditions, future research could benefit from extending the current model by identifying and testing exogenous factors that influence outcome expectation and innovation self-efficacy, the key drivers of IT innovation usage.
In past between-person studies, numerous factors have been found to influence self-efficacy and outcome expectation. For example, persuasion and behavioral modeling alter self-efficacy and outcome expectation (Bandura, 1997). IT innovations' qualities such as job relevance, output quality, and result demonstrability have significant impacts on outcome expectation (Venkatesh & Davis, 2000), whereas qualities such as complexity, reliability, flexibility, integration, accessibility, and timeliness influence innovation self-efficacy (Wixom & Todd, 2005). Future research could examine the effects of those factors with respect to time using a within-person design. Such research would be useful in determining what factors to change and when to implement the changes in order to encourage IT innovation usage.

As significant between-person variances were found for many relationships described in this research, a fruitful area would be to investigate and identify individual difference factors that may be responsible for the observed variances across people. As technology innovations become more integrated and sophisticated, the ability to cope with cognitive overload can be a factor. As a result, cognitive ability may be an important factor affecting adoption behaviors. Individuals with high levels of cognitive ability may be more likely to adopt and adapt to new technology innovations, as they have the capacity to cope with the cognitive overload associated with new innovations. Personality factors such as conscientiousness and openness to experience may also be significant. Research in the personality domain has shown that individuals with high levels of conscientiousness tend to be organized, disciplined, and efficient. They may be drawn to certain types of innovations, such as integrated devices that help them organize tasks and activities and keep things in orderly, efficient ways. Individuals who are more open to new experiences may tend to adopt new innovations. Yet they may be willing to abandon older innovations when newer ones are available and thus may adopt more innovations over time.

Conclusions

The present study is the first to evaluate a within-person process model of how key beliefs influence and are influenced by IT innovation usage behaviors. Results showed that innovation self-efficacy and outcome expectation affected and were affected by IT usage dynamically within persons. The effects of innovation self-efficacy and outcome expectation on usage frequency operated through behavioral intention and implementation intention, whereas the influence of usage frequency on innovation self-efficacy and outcome expectation was partially channeled by efficacy disconfirmation and outcome disconfirmation, respectively. Significant individual differences were observed in many of these relationships.

With these findings, the present study contributes to existing knowledge in four main ways. First, it showed that innovation self-efficacy, outcome expectation, and behavioral intention changed and accounted for varying IT usage within persons over time and thus explained behavioral changes within persons. Second, the present study argued and provided evidence that implementation intention is a proximal predictor of IT usage in addition to behavioral intention, expanding our knowledge of the key proximal factors that are important in understanding users' adoption behaviors.
Third, this study showed that IT usage influenced innovation self-efficacy and outcome expectation within persons and identified the processes through which these effects occur. With these findings, user experience is viewed as a dynamic process in which key beliefs (innovation self-efficacy and outcome expectation) determine and are determined by IT usage within persons. Fourth, the present study is the first to quantify the proportion of individuals for whom specific within-person processes apply. It thus provides more precise knowledge of intra-person processes among individuals.

Future research should examine which individual factors account for variance in within-person processes among individuals. It is also important to extend the current work by identifying and testing external factors that influence outcome expectation and innovation self-efficacy with respect to time using a within-person design. Such research would be useful in determining what factors should be changed and when changes should be implemented in order to encourage IT innovation usage for different groups of individuals.

APPENDICES

Appendix 1: Previous PDA Experience

How familiar are you with using PDAs (e.g., Palm Pilots or Blackberry) to track your daily and weekly appointments?
How familiar are you with using PDAs (e.g., Palm Pilots or Blackberry) to organize your to-do lists?
How familiar are you with using PDAs (e.g., Palm Pilots or Blackberry) to systematically outline and organize your ideas?
How familiar are you with using cell phones to track your daily and weekly appointments?
How familiar are you with using cell phones to organize your to-do lists?
How familiar are you with using cell phones to organize and outline your ideas?
How familiar are you with using other computer devices (e.g., personal computers) to track your daily and weekly appointments?
How familiar are you with using other computer devices (e.g., personal computers) to organize your to-do lists?
How familiar are you with using other computer devices (e.g., personal computers) to systematically organize and outline your ideas?

1. None at all
2. Little
3. Some
4. Average
5. Above average
6. Way above average

Appendix 2: Innovativeness of Palm Pilots

Is using a Palm Pilot to track your daily and weekly appointments something new to you personally?
Is using a Palm Pilot to organize your to-do lists something new to you personally?
Is using a Palm Pilot to outline and organize your ideas something new to you personally?

1. Totally new
2. Somewhat new
3. Average
4. Somewhat familiar
5. Very familiar

Appendix 3: Palm Pilot Training Booklet

Purpose

Course Description: This training is designed for the new owner of a Palm Pilot. The goal is to provide an overview of the Palm Pilot and its variety of uses and applications.

Training Objectives: Upon completion of the course, you will:
- Have a basic understanding of Palm features, navigation, data input, and synchronization.
- Develop the ability to use the main applications: calendar, contacts, and tasks.

Why use a Palm Pilot?

Advantages over a cell phone
- Handwritten notes
- Longer battery life
- Bigger screen

Advantages over a PC
- Portable
- Accessibility
- Cheaper

Locating controls on your Palm Pilot

Screen: Displays the applications and information on your Palm Pilot. The screen is touch sensitive.

Clock icon: Displays the current time and date.

Display icon: Lets you adjust the brightness and contrast of your Palm Pilot's display.
Input area Lets you enter info with Graffiti® writing or open the onscreen keyboard. Locating the controls on your Palm Pilot (Cont.) Power button Turns your Palm Pilot on or off Application Opens the Calendar and Contacts applications. 5-way navigator Helps you move around and select info to display on the screen. What’s on my Palm Pilot? Your Palm Pilot comes with many applications preinstalled and ready to use. Open these applications by selecting the icons on the Home screen. Calendar Manage your schedule, from lunch with a friend, to weekly meetings, to annual events like holidays, to extended events like conferences and vacations. Even color—code your schedule by category. Contacts Store names and addresses, phone numbers, e-mail and web site addresses— cven photos and birthdays. Organize your contacts into categories. What’s on my Palm Pilot? (Cont) Tasks Stay on top of your to—do list. Enter things you need to do, prioritize them, set alarms, and then monitor your deadlines. Note Pad Write on the screen in your own handwriting or draw a quick sketch. Progect. Let you outline your thoughts on the move, rearrange them, and export the outlines to computers. Diet Helpers: Track your daily diet and exercise to help you stay in shape. 109 ° Hi Money Track your personal finance and provide reports after you synchronize with your computer. Using the 5-way navigator - The navigator lets you access your information quickly with one hand and without the stylus. The navigator does various things based on which type of screen you’re on. To use the navigator, press Up, Down, Right, or Left; or press Select in the center. Moving around the Home screen - In the Home screen, use the navigator to open an application. - Right or Left Scrolls to the next or previous application category. - Select Inserts the selection highlight. When the selection highlight is present: 0 Up, Down, Right, or Left scrolls to the next icon in the corresponding direction. 0 Select Opens the selected application. Using Menus Menus allow you to Access hidden options or preferences and locate of many key features in programs. ° Open an application- 0 Tap Menu to open the menus. - Tap a menu, and then tap a menu item. Calendar ° Staying on top of your schedule is an important part of being productive both at work and at home. Calendar can help you remember important events (appointments, birthdays, homework deadline, school holidays) and spot schedule conflicts. 110 Calendar (Cont.) ° You can view your calendar by day, week, or month, or as an agenda list that combines your list of tasks with your appointments. Scheduling an appointment ° Press the Calendar application button. - Tap the Day View icon ° Select the date ofthe appointment: . Tap Go To. - Tap the arrows to select the year - Tap the month. . Tap the date. Tap the line next to the time the appointment begins and enter a description. Data inputs Use Graffiti or pop-up keyboard! Graffiti gives you the ability to use a form of shorthand to add text to various Palm applications. Yet it is quite slow and prone to errors. Keyboards are recommended. Tests have shown entering information through keyboard is quicker and more accurate. ~Prcss abc for a letter keyboard Press 123 the number keyboard 111 Checking your schedule Sometimes you want to look at your schedule for a particular date, while other times you want to see an overview of a week or month. ' Press the Calendar application button. 
° Tap the icons in the lower-left comer to see four different Calendar views: Contacts Say good-bye to a paper address book that you need to update manually every time I someone moves, changes their e-mail address, or gets a new work extension. With l Contacts, not only is it easy to enter information such as names, addresses, and phone numbers, but it is just as quick to view, update, and organize contact information. You can back up contact information to your computer, easily share info with other Adding a contact Press the Contacts application button. Add your contact infomiation: Select New. Tap each field where you want to enter information, and enter it. Tap the scroll arrows to move to the next page. Locating a contact on your list Press the Contacts application button. Search for the contact: Tap the Look Up line at the bottom of the screen and enter the first letter of the name you want to find. 112 Enter the second letter of the name, and so on, until you can easily scroll to the contact you want. Select the contact to open it. Tasks Some of the most successful people in the world are also the busiest. When asked how they manage to do it all, busy people usually say, “I make lists.” The Tasks application on your Palm Pilot is the perfect place to make a list of the things you need to do and create subtasks. You can set priority, alarms, and attach other information about the tasks. Tasks allows you set priority, track deadlines, and stay focused. Create a task Select Tasks from Home screen Tap New. Enter a description ofthe task. Organizing your tasks Sometimes you want to look at all the things you need to do, while other times you want to see only certain types oftasks. Go to the Home screen and select Tasks . In the Tasks list, select one of these options: All Displays all your tasks. Date Displays tasks that are due in a specific time frame. Tap the pick list in the upper-right to select Due Today, Last 7 Days, Next 7 Days, or Past Due. 113 Marking a task complete You can check off a task to indicate that you’ve completed it. Go to the Home screen and select Tasks. Select the check box on the left side ofthe task. Synchronizing Your Palm Pilot with Your Computer Maybe you only think to use your Palm Pilot on its own to look up phone numbers, enter appointments, and so on. But you can do much more with your Palm Pilot if you synchronize it with your computer. Synchronizing simply means that information that has been entered or updated in one place (your Palm Pilot or your computer) is automatically updated in the other. No need to enter information twice. Synchronizing Information Prepare your Palm Pilot: Connect the HotSync® cable to the USB port on your computer, and then insert the other end into the mini-USB connector on your Palm Pilot. Make sure your Palm Pilot is on. Synchronize your Palm Pilot with your computer: Tap HotSync . When synchronization is complete, a message appears at the top of your Palm. Pilot screen, and you can disconnect your Palm Pilot from the cable. Be patient; synchronization may take some time. 114 Practice on your own - In the next 15 minutes, perform the following operations on your own. ° Tap the Home Icon or use the 5-point navigator to access various software applications. ° Create a contact, schedule an appointment, and set up a task that you may want to complete in a near future. Palm Pilot do’s and don’ts Do ° Use only the stylus to tap the screen—no pens, pencils, paper clips, or other sharp objects. ° Charge the battery daily. 
° Use soft reset to recover from freezing problems.
Don't
° Do not carry your Palm Pilot in your back pocket; you might sit on it by mistake.
° Do not expose your Palm Pilot to very hot or cold temperatures.
° Never let your Palm run out of battery.
° Do not install any other applications on your Palm during the entire 8 weeks of the study. Installation of additional software will compromise the study and disqualify you from further participation and any cash rewards. 115
Additional Help!
° You can download how-to manuals for the installed applications on your Palm at: www.msu.edu/~dangchi/palm.htm
° Contact Chi Dang if you have any questions or problems with your Palm ° Phone: 337-7873 ° Email: dangchi@msu.edu 116
Appendix 4: Palm Pilot Study Consent Form
This study is designed to investigate the dynamic usage of Palm Pilots. If you choose to participate in this study, you will be asked to complete a series of survey items and learn how to use a Palm Pilot. You will then be provided a Palm Pilot to take home and will be asked to use it daily at your discretion and answer survey items administered by the Palm Pilot twice a day for eight weeks. In addition, you will be asked to meet the investigator biweekly for a few minutes at pre-scheduled times and locations (a few minutes before the beginning of your Psychology 101 lecture or mutually-agreed times and locations) so that your Palm usage data can be downloaded and secured. At the end of the study, you will return the Palm Pilot to the experimenter and listen to a short debriefing. The first meeting and Palm Pilot training should take approximately 1.5 hours, and the subsequent four meetings and survey responses (about 5 minutes daily) should take a total of 5.5 hours. For your participation, you will receive research credits for subject pool participation. In addition, you will have an opportunity to earn $70. Specifically, you will earn $15 during the first two weeks if (1) you meet the experimenter to have your two-week usage data downloaded AND (2) you answer or provide non-applicable responses for at least 90% of survey items and use the provided Palm Pilot daily at least 12 days out of 14 days, with usage duration at your discretion. Similarly, you will earn $15 during the second two weeks, $15 during the third two weeks, and $25 during the final two weeks, contingent on BOTH conditions (1) and (2) described above. Within a few weeks after you finish the study, you will be informed via email whether you meet the conditions for the cash payment, and a check will be mailed to you if you qualify. Your participation in this research is completely voluntary. You are free to decline to answer any questions or to terminate your participation at any time without penalty. Your participation in this study will be kept confidential to the maximum extent allowable by law. Your data will be included in a summary report with the data from others. The report will not include any information that will allow anyone to identify any of your individual responses. If you have any questions about this study, please contact the responsible investigator, Daniel Ilgen, by phone: (517) 355-7503, email: ilgen@msu.edu, or by regular mail: 340A Psychology Building, East Lansing, MI 48824. You may also contact the secondary investigator, Chi Dang, by phone: (517) 337-7873, email: dangchi@msu.edu, or regular mail: 348 Psychology Building, East Lansing, MI 48824.
If you have any questions or concerns regarding your rights as a study participant, or are dissatisfied at any time with any aspect of this study, you may contact - anonymously, if you wish - Peter Vasilenko, Ph.D., Director of the Human Subject Protection Programs at Michigan State University, by phone: (517) 355-2180, fax: (517) 432-4503, email: irb@‘nrsrr.edu, or regular mail: 202 Olds Hall, East Lansing, MI 48824. 117 PARTICIPANT’S STATEMENT I agree to participate in the Palm Pilot Study. I have been informed that I will learn how to use a Palm Pilot, use it daily at my discretion, answer survey questions twice daily for eight weeks, and return the Palm Pilot to the investigator at the completion of the experiment. My materials will be kept confidential and will not be seen by anyone other than the research team. I consent to having these materials used for research purposes. I was infomied that I will earn $70 if I answer at least 90% of the survey items, use the provided Palm Pilot daily at least 12 days every two weeks, and meet the investigator at mutually-agreed times to have my usage data downloaded biweekly. My participation is voluntary. I may discontinue participation and return the Palm Pilot at any time without penalty. I was informed that all of my individual responses will be kept strictly confidential, and that I will not be identified in any report of this study. Signature: Date: Printed name: Email: Address to which the check should be sent if I were to meet conditions for the cash payment: 118 Agreement of Responsibility 1 agree to participate in the “Adoption of Palm Pilots” study conducted by Dr. Daniel Ilgen and Chi Dang in return for research credits. As part of the study, I received a Palm Pilot and will use it for 8 weeks. I understand that the Palm Pilot is a property of the Management Department at MSU and will use it with care until I return it to the experimenters no later than December 5, 2006. Furthermore, I understand that I will be financially responsible for the market value of the Palm ($100) if it is lost or damaged due to non-accidental causes. Full Name (Print): Signature: Date: 119 Appendix 5: Demographic Information and Individual Differences Demographic Information What is your age? Gender? GPA? SAT score? Or ACT score? What is your ethnicity? (Check all that apply): [:I White/Caucasian E] African America E] Hispanic [:1 Native American [1 Asian What is the estimated total family yearly income? D $20,000 or less [:I $20,001 -40,000 [:| $40,001 ~60,000 E] $60,001 -80,000 E] $80,001-100,000 [:1 $ 1 00,001 - 120,000 C] $120,000 or more NEO Five-Factor Inventory l. I am not a worrier. 2. I like to have a lot of people around me. 3. I don’t like to waste my time daydreaming. 4. I try to be courteous to everyone I meet. 5. I keep my belongings clean and neat. 6. I often feel inferior to others. 7. I laugh easily. 120 9. 10. 11. Once I find the right way to do something, I stick to it. I often get into arguments with my family and co-workers. I’m pretty good about pacing myself so as to get things done on time. When I’m under a great deal of stress, sometimes I feel like I’m going to pieces. . I don’t consider myself especially “light-hearted.” . I am intrigued by the patterns I find in art and nature. . Some people think l’rn selfish and egotistical. . I am not a very methodical person. . I rarely feel lonely or blue. . I really enjoy talking to people. . I believe letting students hear controversial speakers can only confuse and mislead them. . 
I would rather cooperate with others than compete with them. . I try to perform all the tasks assigned to me conscientiously. . I often feel tense and jittery. . I like to be where the action is. . Poetry has little or no effect on me. . I tend to be cynical and skeptical of others’ intentions. . I have a clear set of goals and work toward them in an orderly fashion. . Sometimes I feel completely worthless. . I usually prefer to do things alone. . I often try new and foreign foods. . I believe that most people will take advantage of you if you let them. 121 L’J A V . I waste a lot of time before settling down to work. 3 I. I rarely feel fearful or anxious. 32. I often feel as if I’m bursting with energy. 33. I seldom notice the moods or feelings that different environments produce. 34. Most people I know like me. 35. I work hard to accomplish my goals. 36. I often get angry at the way people treat me. fix. 1' 37. I am a cheerful, high-spirited person. 38. I believe we should look to our religious authorities for decision on moral issues. 39. Some people think of me as cold and calculating. 40. When I make a commitment, I can always be counted on to follow through. 41. Too often, when things go wrong, I get discouraged and feel like giving up. 42. I am not a cheerful optimist. 43. Sometimes when I am reading poetry or looking at a work of art, I feel a chill or wave of excitement. 44. I’m hard-headed and tough-minded in my attitudes. 45. Sometimes I’m not as dependable or reliable as I should be. 46. I am seldom sad or depressed. 47. My life is fast—paced. 48. I have little interest in speculating on the nature of the universe or the human condition. 49. I generally try to be thoughtful and considerate. 50. I am a productive person who always gets the class done. 122 51. 52. 50. 57. 58. 59. 60. I often feel helpless and want someone else to solve my problems. I am a very active person. . I have a lot of intellectual curiosity. . Ifl don’t like people, I let them know it. . I never seem to be able to get organized. At times I have been so ashamed I just wanted to hide. I would rather go my own way than be a leader of others. I often enjoy playing with theories or abstract ideas. If necessary, I am willing to manipulate people to get what I want. I strive for excellence in everything I do. Computer Self-Efficacy I could use computer to do my work. . . . if there was no one around to tell me what to do as I go. . ifl had only the software manuals for reference. . if I had seen someone else using it before trying it myself. . ifl could call someone for help ifl got stuck. . if someone else had helped me get started. . . ifl had just the built-in help facility for assistance. . . . ifsomeone showed me how to do it first. Computer Anxiety Computers do not scare me at all. Working with a computer makes me nervous. I do not feel threatened when others talk about computers. 123 It wouldn’t bother me to take computer courses. Computers make me feel uncomfortable. I feel at ease in a computer class. I get a sinking feeling when I think oftrying to use a computer. I feel comfortable working with a computer. Computers make me feel uneasy. Social In fluence How many people that you know use PDAS (e.g., Palm Pilots or Blackberry) to track their daily and weekly appointments? How many people that you know use PDAs (e. g., Palm Pilots or Blackberry) to organize their to-do lists? How many people that you know use PDAs (e.g., Palm Pilots or Blackberry) to outline and organize their ideas? 
How many people that you know use cell phones to track their daily and weekly appointments? How many people that you know use cell phones to organize their to-do lists? How many people that you know use cell phones to organize and outline their ideas? How many people that you know use computers to track their daily and weekly appointments? How many people that you know use computers to organize their to-do lists? How many people that you know use computers to systematically organize and outline their ideas? 124 ‘!"J V I Kl)- .' L . n I Computer skills I frequently read computer magazines or other sources of information that describe new computer technology. I know how to recover deleted or “lost data” on a computer. I know what a LAN (Local Area Network) is. I know what an operating system is. I know how to write computer programs. I know how to install software on a computer. I know what a database is. I frequently use a computer for getting information on the Internet. I regularly use a computer for word processing Internet skills I know how to navigate the www using hyperlinks or entering its address (URL) in the browser. I know how to search for information using search engine (e. g., google, yahoo) on the Internet. I know how to send and receive email using the Internet. I know how to upload files from my computer to the Internet. I know how to download files from the Internet to my computer. I know how to install applications downloaded from the Internet to my computer. I know how to program a web page using simple or advanced computer language. Technology Use How often do you use a computer? (circle a number) 125 l. I don’t use a computer 5. everyday, for less than 1 hour. [\J . about once a month 6. everyday, for 1 to 3 hours '0) . a few times a month 7. everyday, for more than 3 hours 4. a few times a week How many computers do you have at home? (write 0 if you have no C omputer). How often do you use the Internet? (circle a number) I. I don’t use the Internet 5. everyday, but for less than 1 hour 2. about once a month 6. everyday, for 1 to 3 hours 3. a few times a month 7. everyday, for more than 3 hours 4. a few times a week Do you have Internet at home? (circle one) Yes No If Yes, how do you connect to the Internet? (circle one) 1. Telephone dial up 4. Wireless connection 2. Cable modem 5. T-l fiber optic 3. DSL-enabled phone line 6. Don’t know Do you ever go online using a wireless devise, like a PDA or cell phone? Yes No PDQ you ever go online using a wireless computer or laptop? Yes No Overall, how would you rate your computer skills? 1. No skills 4. Good skills 2. Poor skills 5. Very good skills 3. Average skills 126 Technology Activities There are many things people can do when they use a computer or other technology. Please indicate how often you do each of the following things. 
Response scale: 1 = Never, 2 = Sometimes, 3 = Often, 4 = Very often
1) Create a Web page
2) Surf the Web
3) Talk on a cell phone
4) Text message on a cell phone
5) Use a photo editing program
6) Play videogames on a computer
7) Play videogames on a console
8) Play video games in an arcade
9) Instant message with friends
10) Instant message with strangers
11) Download music files
12) Buy something online
13) Create documents for school
14) Save images/graphics
15) Take a course online
16) Take a survey online
17) Take a test online
18) Visit a chat room
19) Post to a mailing list
20) Use a laptop
21) Read mailing list messages
22) Use an MP3 player
23) Use a DVD player
24) Use email
25) Use a PDA (like Blackberry)
26) Search the Web for information about:
27) health, diet, fitness
28) news, current events
29) religion, spirituality
30) jobs
31) colleges, universities, education
32) entertainment (movies, pop stars)
33) politics, political leaders
34) depression, mood, mental illness
35) government services
36) something I need for a school report
37) an interest or hobby of mine 127
Appendix 6: Debriefing Form
The study in which you just participated examines how beliefs/perceptions about Palm Pilots influence users' reactions to and usage of Palm Pilots. Specifically, the study investigates how beliefs about the usefulness of the technology and one's confidence in his/her capability to use the Palm Pilot change and influence intentions to use, satisfaction with, and usage of Palm Pilots dynamically over time. It also seeks to explain how individual difference factors such as computer anxiety and computer efficacy affect the patterns of usage behaviors. If you are interested in the detailed results of the study, please contact the second investigator in May 2007, when the results are expected to be available. Within a few weeks, when we have had the chance to look at your responses, we will inform you via email whether you meet the conditions for the cash payment, and a check will be mailed to you if you qualify. It is very important that you not discuss your experiences with this study with your friends and classmates for 2 reasons: 1. Talking to your classmates may bias their reactions during the experiment. 2. They may participate in a different study altogether. Thank you very much for your participation today. Research like this cannot go forward without your help. 128
Appendix 7: Outcome Expectation
1. Using Palm Pilot enables me to accomplish tasks more quickly.
2. I would find Palm Pilot useful in my work.
3. Using Palm Pilot would enhance my productivity.
4. If I use Palm Pilot, I would improve my personal performance. 129
Appendix 8: Innovation Self-Efficacy
How well can you use the following applications in Palm Pilot?
1. Calendar 2. Address 3. Task 4. Note Pad 5. Progect 6. HiMoney 7. Diet Helper
Response scale: 1. Not well at all ... 3. Neither ... 5. Very well 130
Appendix 9: Behavioral Intention
1. I intend to use Palm Pilot during the next 24 hours.
2. I predict I would use Palm Pilot during the next 24 hours.
3. I plan to use Palm Pilot during the next 24 hours. 131
Appendix 10: Implementation Intention
1. I have thought about particular situations in which I will use the Palm Pilot for the next 24 hours.
2. I have thought about when I will use the Palm Pilot for the next 24 hours.
3. I already know the situations in which I will use the Palm Pilot for the next 24 hours.
4. I have planned how I will use the Palm Pilot for the next 24 hours.
5. I already know how I will use the Palm Pilot for the next 24 hours. 132
Appendix 11: Outcome Disconfirmation
1. Compared to my expectation expressed in the last survey, the ability of Palm Pilot to improve my performance was (much worse than expected ... much better than expected).
2. Compared to my expectation expressed in the last survey, the ability of Palm Pilot to increase my productivity was (much worse than expected ... much better than expected).
3. Compared to my expectation expressed in the last survey, the ability of Palm Pilot to be useful was (much worse than expected ... much better than expected). 133
Appendix 12: Efficacy Disconfirmation
1. Compared to my expectation expressed in the last survey, my ability to use Palm Pilot is (much worse than expected ... much better than expected).
2. Compared to my expectation expressed in the last survey, my ability to execute various features in Palm Pilot is (much worse than expected ... much better than expected).
3. Compared to my expectation expressed in the last survey, my ability to use different applications in Palm Pilot is (much worse than expected ... much better than expected). 134
Appendix 13: Usage Frequency by Applications over Time
Week  Contacts  Calendar  Tasks  Note Pad  Progect  Diet Helper  Hi Money  Total
1         4         8       4        4        1          0           0       22
2         1         5       4        5        3          2           1       21
3         1         5       5        5        3          3           1       23
4         1         5       5        4        3          2           1       21
5         1         5       5        5        3          3           1       24
6         1         6       5        6        4          3           1       26
7         1         5       4        5        3          2           1       21
8         1         5       5        5        3          2           1       23
Notes: As data were missing for some days of the week, daily averages were first calculated. They were then multiplied by 7 to estimate weekly usage frequencies. 135
Appendix 14: Oblique Rotated Factor Pattern for Behavioral Intention and Implementation Intention Scale
Item                           Factor 1   Factor 2
Implementation intention 1      0.911*     -0.002
Implementation intention 2      0.885*      0.002
Implementation intention 3      0.780*     -0.005
Implementation intention 4      0.904*      0.013
Implementation intention 5      0.922*     -0.004
Behavioral intention 1          0.002       0.879*
Behavioral intention 2          0.003       0.884*
Behavioral intention 3         -0.003       0.879*
Explained Variance              58%         25%
Inter-Factor Correlation                0.36
Factor loadings > .32 are marked with an asterisk 136
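Appendix 14 reports a two-factor, obliquely rotated solution in which the five implementation intention items and the three behavioral intention items load on separate, moderately correlated factors. As a minimal sketch of how such a solution could be reproduced, the Python snippet below fits a two-factor model with an oblique (promax) rotation using the factor_analyzer package. The file name, column layout, and choice of package are illustrative assumptions, not the software actually used for the dissertation analyses.

```python
# Illustrative sketch (not the dissertation's actual analysis code): fit a
# two-factor oblique solution to the eight intention items, as in Appendix 14.
# "intention_items.csv" and its column names are hypothetical placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("intention_items.csv")  # 8 columns: 5 implementation intention, 3 behavioral intention items

fa = FactorAnalyzer(n_factors=2, rotation="promax")  # promax = oblique rotation
fa.fit(items)

# Rotated pattern loadings (rows = items, columns = factors)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["Factor 1", "Factor 2"])
print(loadings.round(3))

# Proportion of variance explained by each rotated factor
_, proportion, _ = fa.get_factor_variance()
print("Proportion of variance explained:", proportion.round(2))
```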
I Appendix 15: Regression Analyses of Effect of Behavioral Intention on Usage Frequency within Persons Person Behavioral Intention 1 1.09 2 0.69 3 0.98 4 0.92 5 0.26 6 1.39 7 1.11 8 1.2 9 0.34 10 0.86 1 1 0.87 12 1.01 13 1.34 14 0.79 15 1.05 16 0.91 17 1.16 18 0.85 19 1.66 20 0.74 21 0.79 22 0.82 23 0.82 24 1.21 25 1.17 26 0.93 27 0.98 28 0.65 29 1.14 30 1.32 31 1.35 32 1.16 33 0.93 34 1.1 137 Appendix 15 (Continued): Regression Analyses of Effect of Behavioral Intention on Usage Frequency within Persons Person Behavioral Intention 35 1.11 36 1.05 37 0.98 38 1.25 39 1.05 40 1.2 41 1.25 42 0.93 43 2.27 44 1.14 45 0.88 46 1.06 47 1.49 48 2.31 49 1.9 50 1.58 51 1.46 53 0.82 54 0.71 55 1.33 56 1 57 0.91 58 0.85 Note: Numbers in bold were insignificant Innovation self-eflicacy slopes 138 Appendix 16: Regression Analyses of Effects of Behavioral Intention and Implementation Intention on Usage Frequency within Persons Person Behavioral Intention Implementation Intention l 0.63 0.79 2 0.58 0.58 3 0.8 0.7 4 0.66 0.67 5 0.25 0.15 6 0.96 0.8 7 0.83 0.69 8 0.98 0.38 9 0.19 0.39 10 0.64 0.38 1 1 0.8 0.35 12 1.03 1.04 13 1.14 0.36 14 0.8 -0.04 15 0.85 0.55 16 0.77 0.46 17 1.06 0.2 18 0.56 0.87 19 1.57 0.15 20 0.66 0.59 21 0.67 0.45 22 0.65 0.51 23 0.82 -0.02 24 1.1 0.49 25 1.13 1.01 26 0.71 0.69 27 0.82 0.54 28 0.6 0.67 29 0.79 0.57 30 1 0.65 31 1.06 0.78 32 0.89 0.52 33 0.68 0.99 34 0.88 0.57 35 0.71 0.75 139 Appendix 16 (Continued): Regression Analyses of Effects of Behavioral Intention and Implementation Intention on Usage Frequency within Persons Person Behavioral Intention Implementation Intention 36 0.93 0.51 37 0.84 0.32 38 1.06 0.36 39 1.04 1.3 40 0.84 0.66 41 1.02 0.45 42 0.95 1.57 43 1.65 0.77 44 0.66 0.75 45 0.64 0.82 46 0.91 0.47 47 1.25 0.43 48 1.7 0.87 49 1.72 0.78 50 0.9 0.71 51 1.07 0.8 53 -0.27 0.91 54 0.64 0.29 55 0.89 0.65 56 0.75 0.49 57 ‘ 0.52 0.65 58 JL 0.69 0.73 Note: Numbers in bold were insignificant implementation intention slopes 140 Appendix 17: Regression Analyses of Effects of Outcome Expectation an Innovation Self-Efficacy on Behavioral Intention within Persons Person Outcome Expectation Innovation SelfiEf/icacy 1 1.77 0.468 2 1.24 -0.102 3 0.63 0.369 4 2.11 0.727 5 1.41 ~0.017 6 1.31 0.561 7 1.06 0.601 8 1.5 0.647 9 0.23 0.053 10 1.49 0.403 1 1 1.53 0.674 12 0.25 0.24 13 0.96 0.387 14 0.9 2.123 15 1.09 0.892 16 0.76 0.879 17 1.08 0.755 18 0.91 0.841 19 1.01 0.388 20 1.97 0.399 21 1.29 0.673 22 g 0.39 0.402 23 0.6 -0.04 24 0.68 0.186 25 1.27 0.273 26 0.77 0.359 27 0.96 0.446 28 1.02 -0.196 29 1.69 0.265 30 0.75 0.757 31 0.52 0.358 32 1.01 0.505 33 0.61 0.316 34 1.07 0.133 141 Appendix 17 (Continued): Regression Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Behavioral Intention within Persons Person Outcome Expectation Innovation Self-Efficacy 35 1.46 0.568 36 0.97 -0.043 37 0.87 0.865 38 0.53 0.464 39 1.2 0.069 40 0.93 0.286 41 0.91 0.68 42 0.71 -0.322 43 0.58 0.354 44 1.18 0.376 45 1.12 0.07 46 0.89 0.322 47 0.87 0.483 48 0.85 0.191 49 0.85 -0.124 50 0.69 0.275 51 0.62 0.793 52 0.01 1.832 53 0.33 0.154 54 1.01 -0.049 55 0.87 0.239 56 1 0.36 57 0.93 0.636 58 0.42 0.383 Note: Numbers in bold were insignificant innovation self-efficacy slopes 142 15: Appendix 18: Regression Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Implementation Intention within Persons Person Outcome Expectation Innovation Self-Efficacy 1 0.212 0.848 2 0.401 0.394 3 0.033 0.504 4 0.703 1.258 5 0.538 1.896 6 1.139 0.448 7 0.899 0.733 8 0.998 
0.821 9 0.133 0.349 10 0.257 0.852 1 1 -0.573 0.949 12 -0.133 0.41 13 0.672 0.93 14 0.51 1.307 15 0.316 1.041 16 0.365 0.537 17 0.203 1.184 18 0.44 0.869 19 0.515 0.926 20 0.376 1.287 21 0.744 0.452 22 0.234 0.426 23 0.29 0.299 24 0.037 0.805 25 0.473 1.439 26 0.487 0.378 27 0.671 0.618 28 0.087 0.127 29 -0.225 0.707 30 0.469 0.835 31 -0.057 1.197 32 0.539 0.61 33 0.169 0.417 34 0.395 0.81 143 Appendix ]8 (Continued): Regression Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Implementation Intention within Persons Person Outcome Expectation Innovation Self-Efficacy 35 -0.107 1.037 36 0.268 0.336 37 0.592 0.855 38 0.526 0.304 39 0.143 0.629 40 0.543 0.836 41 0.619 0.69 42 0.154 1.385 43 0.274 0.749 44 -0.251 0.83 45 0.76 0.439 . 46 0.314 0.441 47 0.71 0.814 48 1.084 0.112 49 0.531 0.145 50 1.174 0.622 51 1.007 0.617 52 0.44 8.124 53 -0.027 0.894 54 0.365 0.407 55 0.116 0.517 56 0.736 0.551 57 -0.609 0.867 58 -0.136 0.472 Note: Numbers in bold were insignificant innovation self-efficacy slopes 144 Appendix 19: Regression Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Usage Frequency within Persons Outcome Expectation Innovation Self-Efficacy Person Slope Slope 1 2.18 0.69 2 1.64 ~0.41 3 1.37 0.74 4 1.47 1.51 5 3.18 1.64 6 3.3 1.33 7 1.43 1.16 8 3.51 0.53 9 0.73 -0.17 10 2.16 0.21 I 1 2.36 0.92 12 0.32 0.8 13 1.85 1.52 14 2.09 2.56 15 1.94 1.7 16 1.18 0.96 17 2.44 0.47 18 1.33 0.93 19 2.92 0.98 20 2.51 0.76 21 1.2 1.12 22 1.14 0.32 23 0.99 -0.06 24 1.3 0.27 25 2.62 0.09 26 1.17 0.7 27 0.9 1.19 28 0.76 —0.5 29 2.42 0.3 30 0.96 1.28 31 1.78 0.49 32 1.96 0.81 33 1.18 0.43 34 0.9 0.92 145 Appendix 19 (Continued): Regression Analyses of Effects of Outcome Expectation and Innovation Self-Efficacy on Usage Frequency within Persons Innovation Self-Eflicacy Person IOutcome Expectation Slope Slope 35 2.19 0.86 36 1.91 -0. 
7 37 1.31 0.89 38 0.95 0.87 39 2.54 -0.31 40 0.85 2.25 41 1.94 0.95 42 1.98 0.24 43 2.64 1.28 44 2.02 0.86 45 1.59 0.32 46 1.5 0.55 47 1.56 1.49 48 2.52 0.81 49 2.07 -0.06 50 2.41 0.86 51 1.41 1.2 52 1.08 7.34 53 1.94 0.26 54 1.85 -0.4 55 1.57 0.76 56 1.49 0.48 57 0.49 0.79 58 0.69 1.21 Note: Numbers in bold were insignificant innovation self-efficacy slopes Numbers in italics were significantly negative innovation self-efi’icacy slopes 146 Appendix 20: Regression Analyses of Effects of Innovation Self-Efficacy on Usage Frequency within Persons over Time Outcome Innovation Time x Innovation Person Time Expectation Self-Efficacy Self-Efficacy l 0.01 2.01 1.01 —0.03 2 —0.04 1.05 0.45 0 3 0.03 1.16 0.37 -0.01 4 0.02 1.37 2.43 -0.05 5 0.04 2.69 -0.59 0.15 6 -0.02 3 .3 0.1 0.05 7 -0.03 1.46 2.65 -0.03 8 -0.02 3.26 1.1 0 9 —0.02 0.49 0.37 0 10 -0.01 1.87 0.54 -0.02 1 1 -0.03 2.16 2.65 -0.05 12 0.03 0.22 0.05 0.01 13 0.01 1.79 0.9 0.03 14 -0.01 1.64 8.19 -0.3 15 -0.02 1.89 2.2 -0.01 16 -0.03 1.14 2.94 -0.05 17 -0.02 2.16 1.21 -0.01 18 -0.01 1.14 2.56 -0.06 19 0.01 2.88 0.04 0.03 20 0.03 2.1 -0.1 0.05 21 0.03 0.64 0.35 0.01 22 0.03 0.74 0.06 -0.01 23 -0.03 0.67 0.27 0.01 24 -0.02 0.99 0.92 -0.01 25 0.02 2.36 -0.54 0.05 26 0.03 0.86 0.35 -0.01 27 0 0.45 2.63 -0.05 28 -0.04 0.32 0.35 -0.01 29 0 2.07 1.01 -0.04 30 -0.05 1.12 3.61 -0.06 31 -0.03 1.57 -0.37 0.07 32 0.01 1.87 1.32 -0.02 33 0.04 0.84 -1.04 0.01 34 0.02 0.91 0.85 -0.02 147 Appendix 20 (Continued): Regression Analyses of Effects of Innovation Self-Efficacy on Usage Frequency within Persons over Time Outcome Innovation Time x Innovation Person Time Expectation Self-Efficacy Self-Efficacy 35 0.02 1.5 2.04 -0.07 36 -0.03 1.37 -0.04 -0.01 37 -0.03 1.4 3.17 -0.06 38 -0.02 0.85 1.53 -0.01 39 -0.04 1.76 2.57 -0.03 40 0 0.93 2.32 0 41 0.04 1.67 1.6 -0.04 42 -0.03 1.4 0.88 0.04 43 0.04 2.56 3.68 —0.07 44 0.02 2.09 1.02 -0.02 45 -0.03 0.97 0.15 0.03 46 0.03 1.11 0.13 -0.01 47 -0.02 1.69 2.29 -0.01 48 -0.03 2.05 4.24 -0.11 49 0.01 1.65 0.95 -0.07 50 -0.01 2.45 1.29 0.00 51 -0.03 1.42 2.6 -0.04 53 0.02 3 -0.94 0.02 54 -0.03 1.18 -0.17 0.01 55 0.04 1.49 -0.6 0.02 56 0.05 1.67 0.7 -0.04 57 0 0.49 1.31 -0.04 58 0.03 1.38 -0.95 0.04 Note: Numbers in bold were insignificant interaction effects 148 Appendix 21: Regression Analyses of Innovation Self-Efficacy Growth over Time Person Time 1 0.023 2 0.025 3 0.017 4 0.012 5 0.004 6 0.006 7 0.009 8 0.008 9 0.021 10 0.019 1 1 0.013 12 0.018 13 0.005 14 0.003 15 0.004 16 0.009 17 0.007 18 0.01 19 0.005 20 0.004 21 0.011 22 0.016 23 0.014 24 0.017 25 0.003 26 0.018 27 0.013 28 0.02 29 0.023 30 0.01 31 0.007 0 32 0.007 33 0.019 34 0.015 149 Appendix 21 (Continued): Regression Analyses of Innovation Self-Efficacy Growth over Time Person Time 35 0.019 36 0.014 37 0.009 38 0.009 39 0.007 40 0.006 41 0.009 42 0.005 43 0.007 44 0.017 45 0.018 46 0.018 47 0.009 48 0.005 49 0.004 50 0.006 51 0.009 52 0.001 53 ' 0.017 54 0.022 55 0.015 56 0.014 57 0.019 58 1 0.014 150 Appendix 22: Regression Analyses of Effect of Usage Frequency on Innovation Self-Efficacy within Persons Person Usage Frequency 1 0.198 2 -0.188 3 0.169 4 0.087 5 -0.019 6 0.02 7 0.064 8 -0.001 9 -0.089 10 0.123 1 1 0.086 12 0.083 13 0.051 14 0.008 1 5 -0.006 16 0.041 17 -0.005 1 8 0.054 19 0.039 20 -0.037 21 0.155 22 0.094 23 -0.099 24 0.14 25 0.007 26 0.159 27 0.1 28 -0. 
23 29 0.176 30 0.047 31 0.019 32 0.029 33 0.085 34 0.115 151
Appendix 22 (Continued): Regression Analyses of Effect of Usage Frequency on Innovation Self-Efficacy within Persons
Person Usage Frequency 35 0.14 36 -0.079 37 0.05 38 -0.003 39 -0.002 40 0.026 41 0.045 42 0.017 43 0.03 44 0.175 45 -0.007 46 0.217 47 0.03 48 -0.007 49 -0.001 50 0.015 51 0.062 52 -0.002 53 0.063 54 -0.134 55 0.153 56 0.175 57 0.208 58 0.116
Note: Numbers in bold were insignificant slopes. Numbers in italics were significantly negative slopes. 152
Appendix 23: Regression Analyses of Effect of Usage Frequency on Outcome Expectation within Persons
Person Usage Frequency 1 0.055 2 0.128 3 0.055 4 0.009 5 0.017 6 0.058 7 0.12 8 0.026 9 0.068 10 0.096 11 0.036 12 -0.005 13 0.047 14 -0.001 15 0.012 16 0.124 17 0.039 18 0.143 19 0.065 20 0.048 21 0.073 22 0.055 23 0.217 24 0.081 25 0.055 26 0.06 27 0.018 28 0.164 29 0.089 30 0.077 31 0.062 32 0.029 33 0.043 34 0.011 153
Appendix 23 (Continued): Regression Analyses of Effect of Usage Frequency on Outcome Expectation within Persons
Person Usage Frequency 35 0.042 36 0.107 37 0.094 38 0.006 39 0.034 40 0.034 41 0.027 42 0.06 43 0.045 44 0.057 45 0.082 46 0.05 47 0.061 48 0.013 49 0.024 50 0.046 51 0.076 52 0.034 53 0.011 54 0.12 55 0.066 56 0.01 57 -0.017 58 0.038
Note: Numbers in bold were insignificant slopes. 154
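Appendices 15 through 23 tabulate person-by-person slopes from regressions estimated separately within each participant. As a rough sketch of how per-person slopes like those in Appendices 19, 22, and 23 could be computed, the snippet below fits one ordinary least squares regression per participant with statsmodels. The long-format file and the column names person, usage, outcome_expectation, and self_efficacy are assumptions made for illustration, not the study's actual variable names.

```python
# Illustrative sketch (not the dissertation's actual analysis code): estimate
# within-person regression slopes of usage frequency on outcome expectation and
# innovation self-efficacy, one OLS model per participant, in the spirit of the
# person-level slopes tabled in Appendices 19-23. Column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

daily = pd.read_csv("daily_surveys.csv")  # hypothetical long-format file with
                                          # columns: person, usage, outcome_expectation, self_efficacy

rows = []
for person, d in daily.groupby("person"):
    fit = smf.ols("usage ~ outcome_expectation + self_efficacy", data=d).fit()
    rows.append({
        "person": person,
        "outcome_expectation_slope": fit.params["outcome_expectation"],
        "self_efficacy_slope": fit.params["self_efficacy"],
        "self_efficacy_p": fit.pvalues["self_efficacy"],
    })

slopes = pd.DataFrame(rows)
print(slopes.round(3))

# Share of participants whose self-efficacy slope reaches p < .05
print("Significant self-efficacy slopes:",
      round((slopes["self_efficacy_p"] < .05).mean(), 2))
```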