MOBILE USER EXPERIENCE RESEARCH METHODS & CONTEXT OF USE: A SYSTEMATIC LITERATURE REVIEW

By

Jennifer Ismirle

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of Digital Rhetoric and Professional Writing—Master of Arts

2018

ABSTRACT

MOBILE USER EXPERIENCE RESEARCH METHODS & CONTEXT OF USE: A SYSTEMATIC LITERATURE REVIEW

By Jennifer Ismirle

With the rapid growth and development of mobile technologies, there is a need to examine current mobile UX research to understand whether the methods being used have evolved or are evolving (e.g., into the wild) and how mobile context of use may be considered for these types of studies. This thesis describes a systematic literature review conducted to understand the types of methods, evaluation settings, procedures, focuses, study designs, tools, and context considerations used for mobile UX research. Overall, the results indicate a further push into the "wild" or field settings and a broadening of mobile UX research and contextual considerations through the use of more open and less restrictive methods, the conduct of research in dynamic environments of use over semi-longitudinal time periods, and the use of a range of methods and tools based on the goals and context of each study to allow for greater understanding of impacts and performance. However, further research and consideration are needed for the inclusion of a broader range of experiences and user or social contexts to continue to understand and meet the needs of humans and their ever-expanding and evolving technological ecosystems.

Copyright by JENNIFER ISMIRLE 2018

ACKNOWLEDGEMENTS

I would like to thank the members of my committee for all their guidance, support, encouragement, and kindness throughout my graduate school experience: Liza Potts, Sarah Swierenga, and Bill Hart-Davidson, as well as Ben Lauren for his helpful advice and direction. And I would like to thank my family and friends, especially my parents, Sheri, and Ginger, for their love and support during this journey.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
Introduction
    Mobile UX Research: Lab versus Field
    Mobile Context of Use
    Mobile UX Research Methods & Consideration of Context of Use
Methods
Results
    Evaluation Type and Procedure
        Field Studies
        Lab Studies
        Both Field and Lab Settings
        Evaluation Setting Not Specified
    Study Focus, Devices, Participants, and Evaluation Tools
        Study Focus
        Devices Used in a Study
        Participant Characteristics
        Evaluation Tools/Techniques to Collect Data
    Mobile Context of Use
        Environment or Physical Context
        User or Social Context
        Task/Activity Context
        Technology or Technical/Information Context
        Temporal Context
Discussion & Conclusion
REFERENCES
LIST OF TABLES

Table 1. Publications initially selected for systematic literature review.
Table 2. Publications and number of articles included in review.
Table 3. Types of evaluations described in publications and specific studies.
Table 4. Mobile context frameworks or models.

LIST OF FIGURES

Figure 1. Facilitation of field studies.
Figure 2. Tasks/activity for field studies.
Figure 3. Facilitation of lab studies.
Figure 4. Tasks/activity for lab studies.
Figure 5. Tools/techniques used to collect data and/or evaluate experiences within a study.

Introduction

Originally, a focus on usability primarily involved the use of usability testing near the end of a process; this focus then broadened to user-centered design (UCD), which includes user research and usability evaluation throughout a design process, and has now broadened further to user experience (UX), which encompasses a larger context of use (Redish & Barnum, 2011).
The International Organization for Standardization or ISO (1998) has defined usability as the "extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." However, as discussed by Sun (2012), usability research often focuses most on ease of use, and the specified context of use often includes only user characteristics, types of tasks, and a physical environment (such as a workplace) rather than dynamic contexts. User-centered design and research focuses on including representative users and considering their needs, goals, and interactions with technology throughout every step (e.g., user requirements analysis, conceptual design, prototyping and implementation, and launch and maintenance) of an iterative development and evaluation process. User experience has been defined by the ISO (2010) as a "person's perceptions and responses resulting from the use and/or anticipated use of a product, system, or service," which includes a user's "emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use." The ISO's (2018) definition of usability was also updated to expand the scope to systems and services and to consider a wider range of goals and outcomes. Furthermore, Sullivan has advocated for a more open approach to UX design and research methods through a focus on relinquishing control and being more "open to the voice of another" and to "encounters," instead of relying only on controlled research contexts and limited inclusion of various types of users (Sullivan, 2017, p. 19), thereby allowing researchers to consider more dynamic and situation-driven contexts and empathy toward participants. Experience architecture (XA) is another example of a push to broaden the focus and approaches of UX design toward more dynamic experiences by "creating experiences that are augmented by technology" and "understand[ing] ecosystems of activity" versus "single activities that envelop us in technology" (Potts & Salvo, 2017, p. 4).

UX practitioners come from many backgrounds, fields, and disciplines, including human-computer interaction (HCI) and technical communication (TC). Within HCI, the focus on usability has grown from first optimizing interaction between humans and machines through usability testing to collect performance metrics and discover points of error, to placing more emphasis on enabling communication between humans and machines and considering cognitive aspects, and now to considering a broader interaction of meaning making and supporting situated actions and varying contexts and interpretations (Harrison, Tatar, & Sengers, 2007). Usability testing in technical communication has involved user, task, and context analysis for the creation of content and documentation to "make complex interactions understandable and usable" (e.g., training documentation to help users avoid errors) and has continued to expand to UCD and UX approaches (e.g., using a variety of research and analysis methods throughout design and development), including the design of products, information design and architecture, content strategy and management, and so on (Redish & Barnum, 2011).
Mobile UX Research: Lab versus Field

Traditional usability testing has often involved a controlled environment in which representative users attempt specific tasks within a specific time frame in the presence of a moderator and/or observer(s) in a laboratory setting, conference room, or similar common location (i.e., participants use a testing computer or device in a research setting), as well as the use of video recordings to capture a participant's actions and body language and the computer screen (Barnum, 2002). In recent years, mobile technology and remote testing options have made it more possible to extend usability evaluation beyond the traditional laboratory setting. For example, a mobile recreation application designed to provide useful information on nearby activities could be tested "in the field" at an actual area for recreation (such as a public lake or park), where users would try tasks on a mobile device with a moderator observing interactions in a representative context or environment (Swierenga, Propst, Ismirle, Figlan, & Coursaris, 2014), thereby potentially increasing the ecological validity of the evaluation (e.g., the study context represents the actual use of the product, which could allow the results to be more representative and generalizable). However, some have argued that a simulated context of use within a lab setting can provide sufficient ecological validity (e.g., Kaikkonen, Kekäläinen, Cankar, Kallio, & Kankainen, 2005).

Although the question of lab results versus field results has been examined within human-computer interaction (HCI), there has not been much consensus on which type of evaluation yields "better" results or which methods to use for mobile user experience research and testing (e.g., evaluation of mobile websites, applications, devices, etc.). For example, some studies have found the same number of issues for a product when testing in the lab and in the field (e.g., Kaikkonen et al., 2005), whereas others have found more issues when testing in the field (e.g., Baillie & Schatz, 2005). However, a push toward conducting research "in the wild" has been evident as mobile technologies, and technologies overall, continue to become embedded in everyday life and increase the need for user experience research to evaluate technologies in the contexts of their use and to understand how they are integrated into daily life (Crabtree et al., 2013).

Coursaris and Kim (2011) conducted a qualitative meta-analytical review of more than 100 empirical mobile usability studies from over a ten-year period (2000-2010). They found only 21% were field studies and 10% were both lab and field studies, and pointed to the need for research involving dynamic environmental factors. Most of these studies involved a multi-method approach (e.g., device data triangulated with questionnaires, observation, and/or interview data). Additionally, Coursaris and Kim (2011) found a lack of research involving open and unstructured tasks (e.g., to understand the types of actions and behaviors of users without constraint) and that the research was limited in the range and frequency of user characteristics studied (e.g., cross-cultural, disabilities, age, gender, etc.), as prior experience was the most prominent user-related variable.
Kjeldskov and Skov (2014) conducted a review exploring the past decade (2004-2014) of mobile HCI research on field and lab evaluation, specifically to investigate responses in the literature to their "Is it worth the hassle?" paper (Kjeldskov, Skov, Als, & Høegh, 2004), in which they had concluded that a field setting provided little added value when compared to a simulated field setting in a lab. For their 2014 review, they divided the publications into those conducted in a lab setting (23) or a field setting (39), those that compared lab and field (16), and those that discussed lab and field (64). The lab studies took place in controlled or artificial environments (14 of them included simulated context in some way), and this type of study was justified by the need for more control, replicability, easier data capture, and less time required. Field studies involved different forms of experimental control in a natural environment or the "real world." Kjeldskov and Skov (2014) found that there is "a general agreement that contextual realism plays an important role when evaluating mobile HCI, but this may be achieved both by simulating contextual factors in the lab or by taking the study 'outside' into the field" (p. 48), although there are no guidelines or definitions for either of these forms of evaluation, as the studies reviewed involved testing of a variety of different products. Overall, Kjeldskov and Skov (2014) describe the tension as more a question of considering and balancing ecological validity and control, and they advocate for moving beyond a focus on usability and controlled settings to "embrace field studies that are truly wild and longitudinal in nature in order to fully experience and explore real world use" (p. 50). By including these types of studies, researchers can learn about and understand how technologies fit into people's lives, their digital ecosystems, and their contexts of use.

Mobile Context of Use

Beyond comparing lab and field settings and considering the physical context of evaluations, there has been a focus on developing a framework for context of use specifically for the mobile arena of UX research. For example, Coursaris and Kim (2011) proposed a contextual usability framework for a mobile computing environment based on the work of several scholars involving context of use and approaches to usability evaluation (e.g., Han, Yun, Kwahk, & Hong, 2001; Kwahk & Han, 2002), which includes contextual factors (user, technology, task/activity, and environment) that impact usability in an outer circle and sixteen usability dimensions in an inner circle. Jumisko-Pyykkö and Vainio (2010) completed a literature review of over 100 articles from the field of HCI to determine definitions and characteristics associated with the context of mobile computing. Although these researchers determined that the components of mobile context presented were similar across the literature (e.g., physical, temporal, social, task, and technical context components), they also found that the field of mobile HCI is still characterizing context of use as a "relatively static phenomenon" instead of considering the "dynamics and heterogeneousness of mobile contexts" (p. 6).
Jumisko-Pyykkö and Vainio (2010) developed their own definitions for the identified context components based on their review, but they also call for additional research focused on the dynamic aspects of context (such as temporal components and transitions between contexts) in relation to mobile in order to continue filling the research gap for mobile usability.

In addition, a push to think more globally and across cultures has been evident. For example, Sun (2012) developed the Culturally Localized User Experience (CLUE) framework as a means to consider contextualized activity, experiences, and meaning-making across a range of localized cultures and subcultures, as well as the dynamic nature of culture. Sun points to the need for the involvement of actual localized users and consideration of their overall and situated experiences within their everyday context of use in order to develop usable and meaningful designs, instead of relying only on shallow changes (e.g., language, color schemes, etc.) to consider and reach a range of cultures. Meloncon (2017), who focuses on developing more multi-dimensional and embodied personas, has also advocated for expanding focus from an ideal user to considering the local, cultural, global, and technological experience differences of users, as well as their physical and psychological presence.

Mobile UX Research Methods & Consideration of Context of Use

With the rapid growth and development of mobile technologies, there is a need to examine current mobile UX research to understand whether the methods being used have evolved or are evolving (e.g., into the wild) and how mobile context of use may be considered for these types of studies.

Methods

To understand the state of mobile UX research, I conducted a systematic literature review (e.g., Al-Ismail & Sajeev, 2014) that involved collection and critical analysis of mobile UX evaluation studies while considering the following major research questions:

• How do HCI/TC researchers conduct mobile UX evaluations?
• How is context considered, described, or addressed in the design of these studies?

To begin the search and collection process, I developed search strings to try across different databases (i.e., ACM, IEEE Xplore, ScienceDirect, Springer, Taylor & Francis, SAGE, Elsevier, and Google Scholar) in order to find mobile UX evaluations that also described "context" in some form. These search strings included:

• Basic Search: (mobile) AND (context)
• Descriptive Search: (mobile) AND ((usability) OR ("user experience")) AND ((evaluation) OR (test) OR (process) OR (method)) AND (context)

Across the databases I used, these initial searches produced results with thousands of articles, which led me to limit the scope of my research to specific publications based on previous systematic reviews and other sources with focuses similar to my own goals, on impact factor and relevance (e.g., user experience research), and with the inclusion of a range of types (e.g., journals and conference proceedings) and different publishers. The publications I selected (Table 1) are aimed toward academic researchers and/or industry practitioners in relation to user experience (UX), specifically within the fields of human-computer interaction (HCI) and/or technical communication (TC).
Table 1. Publications initially selected for systematic literature review.

HCI/UX:
• Human-Computer Interaction (HCI)
• International Journal of Human-Computer Studies (IJHCS)
• International Journal of Mobile Human Computer Interaction (IJMHCI)
• Journal of Usability Studies (JUS)
• Personal and Ubiquitous Computing*
• SIGCHI Conference on Human Factors in Computing Systems (CHI) Proceedings
• SIGCHI International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI) Proceedings

TC:
• Journal of Business and Technical Communication (JBTC)
• Journal of Technical Writing and Communication (JTWC)
• SIGDOC Communication Design Quarterly*
• STC Technical Communication*
• Technical Communication Quarterly*
• Transactions on Professional Communication (TransPC)
• SIGDOC International Conference on the Design of Communication Proceedings (DOC)

*Note: These sources were eventually excluded due to lack of relevance of articles found.

When conducting initial searches of these publications, I used the following filtering criteria, in addition to the search strings described previously, as first-level criteria for inclusion in this research:

• Year of publication: 2013 - 2017
• Peer-reviewed article
• Available to access online (via Michigan State University)
• Written in English

After initial searches of the publications listed above, I found and accessed 248 articles. At this stage, I read the abstract for each article to further determine whether each was relevant to my research (i.e., focused specifically on a mobile evaluation and discussed or described context in some form), which led to the exclusion of a significant number of articles (especially from the technical communication sources) that focused more on theoretical frameworks or mobile designs without including an evaluation. After this round of selection, 63 articles (Table 2) were determined to fit the criteria of this study across the different selected publications. (An illustrative sketch of this screening logic appears at the end of this section.)

Table 2. Publications and number of articles included in review.

Source (Type): Total [2013 / 2014 / 2015 / 2016 / 2017]
• IJHCS (HCI): 16 [1 / 1 / 6 / 2 / 6]
• IJMHCI (HCI): 11 [4 / 2 / 3 / 1 / 1]
• MobileHCI (HCI): 9 [1 / 3 / 2 / 3 / 0]
• HCI (HCI): 8 [0 / 0 / 3 / 2 / 3]
• CHI (HCI): 8 [2 / 4 / 0 / 2 / 0]
• JBTC (TC): 3 [2 / 0 / 0 / 0 / 1]
• DOC (TC): 3 [0 / 0 / 0 / 2 / 1]
• JUS (HCI/UX): 2 [0 / 1 / 0 / 1 / 0]
• TransPC (TC): 2 [2 / 0 / 0 / 0 / 0]
• JTWC (TC): 1 [0 / 0 / 0 / 0 / 1]

A combination of closed and open coding was used to allow clear themes to be evident and to allow other themes to emerge after data were collected. For example, the evaluation setting of a study could be simply coded as field, lab, or both, whereas discussion of context varied widely across publications and required the collection of descriptions for analysis after data collection was completed. To address the research questions described previously, the following data were collected from each publication to understand the types of methods being used for mobile UX evaluations and how context is being described or considered:

• Methods used: What terminology is used to describe the overall study and/or the methods used for the evaluation (excluding analysis methods such as affinity diagramming or thematic analysis)?
• Evaluation setting: Lab, field, or both?
• Multiple: Are multiple studies/methods/phases described? Is quantitative or qualitative data (or both) collected?
• Participants: What are the criteria or characteristics used for participants? How many participants are included?
• Purpose: What is being tested or investigated in the evaluation? Are participants given any introduction to what is being tested (e.g., a trial run or tutorial)?
• Devices: What type of device is used in the evaluation? Is this a testing or personal device?
• Procedure: Is the evaluation moderated or observed, or are participants on their own? Do participants attempt specific tasks, or is the focus on open use? Is a talk aloud protocol used? What is the length of the study? What tools are used to collect data or evaluate experiences?
• Context: How is context of use considered, defined, or discussed?
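To make the screening stage described above concrete, the following minimal Python sketch shows one way the first-level criteria and the descriptive search string could be applied programmatically to collected article metadata. This is an illustrative reconstruction, not a tool used in this review: the record fields, helper names, and keyword matching are my own assumptions (in practice, each database's own search interface applied the strings, and relevance was then judged by reading abstracts).

from dataclasses import dataclass

@dataclass
class Article:
    """One candidate article's metadata (hypothetical field names)."""
    title: str
    abstract: str
    publication: str   # e.g., "IJHCS", "MobileHCI", "JBTC"
    year: int
    peer_reviewed: bool
    online_access: bool
    language: str

# Assumed abbreviation keys for the non-excluded publications in Table 1.
SELECTED = {"HCI", "IJHCS", "IJMHCI", "JUS", "CHI", "MobileHCI",
            "JBTC", "JTWC", "TransPC", "DOC"}

def passes_first_level(a: Article) -> bool:
    """First-level inclusion: selected venue, 2013-2017, peer-reviewed,
    accessible online, and written in English."""
    return (a.publication in SELECTED and 2013 <= a.year <= 2017
            and a.peer_reviewed and a.online_access
            and a.language == "English")

def matches_descriptive_search(a: Article) -> bool:
    """Approximates the descriptive search string as keyword tests:
    (mobile) AND ((usability) OR ("user experience")) AND
    ((evaluation) OR (test) OR (process) OR (method)) AND (context)."""
    text = f"{a.title} {a.abstract}".lower()
    return ("mobile" in text and "context" in text
            and ("usability" in text or "user experience" in text)
            and any(t in text for t in ("evaluation", "test", "process", "method")))

def screen(candidates: list[Article]) -> list[Article]:
    # 248 articles passed this stage in the review; abstracts were then
    # read manually to judge relevance before reaching the final set of 63.
    return [a for a in candidates if passes_first_level(a)
            and matches_descriptive_search(a)]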
Overall, this study is limited to a focus on specific peer-reviewed publications and may be considered more of an initial review to understand common practices across mobile UX research. Additionally, the results included in this study are based on the descriptions of methods included in publications by the authors and not on assumptions (e.g., the evaluation setting was not always specified). Further research is needed with a focus beyond the sources used in this study, or beyond peer-reviewed publications generally, to understand mobile UX research and the consideration of context within practitioner spaces, and to understand what methods are being used to consider how technology fits into and impacts our lives.

Results

In the following section, results are described first by evaluation type (field, lab, both, or unspecified) in regard to procedure, including facilitation, tasks, length, and terminology used. Next, across all the evaluation types, study focus or purpose, devices used, participant characteristics, and evaluation tools or techniques are described. Finally, results regarding how mobile context of use is described or addressed are discussed.

Evaluation Type and Procedure

Across the 63 publications I analyzed, 106 studies were described (i.e., 30 publications included multiple studies). Additionally, 57% of publications included multiple or mixed methods within one study or across the multiple studies described (e.g., combinations of user studies, interviews, focus groups, etc.); 7 publications specifically mentioned using iterative design and/or testing processes, and 4 publications described using participatory design methods and stages. In terms of data collection, 60% of publications included both quantitative and qualitative data, 22% included only quantitative data, and 18% included only qualitative data.

A comparison trend was also evident across the publications, with 70% including some type of comparison, most commonly involving different designs, modes, strategies, or design processes. Other types of comparisons included: different groups, cultures, or countries; different mobile devices, apps, or games; different platforms (e.g., mobile and desktop, digital and physical); conditions using or not using an app; different ways to collect data (e.g., various diary methods); different simulated contexts in a lab; and experiences after different periods of use.

For evaluation settings, both the majority of publications and the majority of specific studies, across both human-computer interaction and technical communication sources, were field studies (Table 3). The following sections provide further details for each study type.

Table 3. Types of evaluations described in publications and specific studies.

Publications (63) [HCI/UX / TC / Overall]:
• Field only*: 65% / 67% / 65%
• Lab only: 9% / 0% / 8%
• Both settings: 15% / 11% / 14%
• Setting not specified: 11% / 22% / 13%

Studies (106) [HCI/UX / TC / Overall]:
• Field only*: 69% / 55% / 68%
• Lab only: 16% / 9% / 14%
• Both settings: 2% / 0% / 3%
• Setting not specified: 13% / 36% / 15%

*Note: Includes studies that are online, such as online surveys.
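Note that the publication-level and study-level figures in Table 3 use different denominators (63 publications versus 106 studies), which is why the two halves of the table differ even though they code the same articles. A minimal sketch of that tabulation (with toy data for illustration, not the review's records):

from collections import Counter

def setting_percentages(settings: list[str]) -> dict[str, int]:
    """Percentage of records per evaluation setting; the denominator is
    whatever unit of analysis the list holds (publications or studies)."""
    counts = Counter(settings)
    total = len(settings)
    return {s: round(100 * n / total) for s, n in counts.items()}

# Toy example only: four coded studies.
print(setting_percentages(["field", "field", "lab", "not specified"]))
# {'field': 50, 'lab': 25, 'not specified': 25}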
Field Studies. The 72 field studies included evaluations with varying levels of control by researchers and were conducted in non-laboratory settings. As shown in Figure 1, over half of these studies were conducted without a moderator or observer, as participants were on their own in their own environments for the research period. Without the presence of a researcher, a variety of tools were used to collect data on a participant's experience or behavior (see Evaluation Tools/Techniques to Collect Data).

Figure 1. Facilitation of field studies.

For studies that were moderated (i.e., a researcher directly facilitated the session with a participant) or observed (i.e., a researcher only observed the user participating in the study), the evaluation setting was a location in which typical use occurs or is likely to occur for the focus of the particular study (e.g., a library, health club, classroom, museum, sports stadium, home, outdoor city or campus environment, public cafe, etc.). "Combo" studies involved a combination of observation and participants being on their own during the research.

As shown in Figure 2, most studies involved participants completing specific given tasks or engaging in open trial use of a prototype such as a mobile application, system, or tool (e.g., participants tried using the prototype or product over a specified amount of time and then reported on and/or discussed their experiences with researchers). A smaller number of studies involved a combination of open trial use and specific tasks, or focused on the investigation of "open behavior" to understand a specific type of activity or technology/service usage within the environment in which it occurs.

Figure 2. Tasks/activity for field studies.

The terminology used to describe these studies and/or the main methods used for the evaluation varied, without much agreement or correlation with the procedure (e.g., facilitation, tasks, etc.). However, the terminology used by authors for these studies conducted in field settings involved one or a combination of the following:

• Overall type of study or main method (e.g., empirical study, experiment, case study, user-centered design study, user evaluation, participatory design study, current use study, autoethnography, etc.)
• Environment (e.g., field, naturalistic, real-world, or home study, online study, etc.)
• Purpose or length of study (e.g., explorative study, technical or feasibility study, longitudinal deployment study, etc.)
• Type of data collection or analysis (e.g., diary or experience sampling method study, qualitative and/or quantitative study, etc.)

Additionally, the length of these studies varied widely, from a single session or first-time use up to 2 years. The most common time frames described for these studies were either first-time use or a single session/survey (either in-person or online), or a period of weeks, most commonly 1 to 4 weeks (although this did not necessarily mean that participants were using a prototype every day of this time frame). Overall, these common ranges reflect the efforts of researchers both to conduct first-time use research in field settings and to go further by researching experiences over continued use periods.

Lab Studies. The 15 evaluations that were specified as laboratory studies were more controlled in terms of the setting, facilitation, and the tasks or activity of participants.
Most of these studies did not describe much about the environment beyond a controlled lab setting; however, one study did involve the use of 3 simulated contexts using different setups and background sounds (Seebode, Schleicher, & Möller, 2014).

As shown in Figure 3, facilitation of research was not always specified, but moderation was the only type of facilitation described for these studies. In addition, the majority of lab studies involved participants completing specific tasks during the research (see Figure 4).

Figure 3. Facilitation of lab studies.

Figure 4. Tasks/activity for lab studies.

Two studies involved combinations of specific tasks, discussion, and open trial use of a prototype within focus groups. One study did involve open trial use of a prototype while also being guided by a moderator, and one study involved open discussion with a moderator (e.g., reactions and feedback were given while the moderator showed the prototype to a participant). For lab studies, the length was limited to first-time use and/or single sessions (e.g., 3 of the studies were focus groups or user forums). However, the range of terminology used to describe these studies and/or the main methods used for the evaluation was similar to that used for field studies:

• Overall type of study or main method (e.g., experiment, focus groups, user-centered design study, usability evaluation, user forums, etc.)
• Environment (e.g., lab study, controlled study, etc.)
• Purpose of study (e.g., technical evaluation, etc.)
• Type of data collection or analysis (e.g., qualitative and/or quantitative study)

Both Field and Lab Settings. Of the 106 studies analyzed, 3 included both field and lab conditions within the study. One of these involved focus group sessions with two different types of groups (novices and experts), with one session conducted in-person in a lab setting and one session conducted online via a mobile chat app (Zhang, 2017). The other two studies involved experiments/user evaluations with conditions in both settings:

• one study included first-time use and specific tasks, with one group participating on their own in an outdoor city environment and another group participating via a simulated environment using virtual reality in a lab setting (Brade et al., 2017);
• and one study was conducted over 7 days, with an initial moderated session with specific tasks conducted in a lab setting and then a continued trial in each participant's home using the app/devices on their own (Lawson et al., 2013).

Evaluation Setting Not Specified. Sixteen of the studies did not specify the setting of the evaluation (although for most it could be assumed that these were lab studies) and were therefore not included within the preceding sections. Similar to lab studies, the facilitation methods that were specified were moderated, most of these studies involved participants attempting specific tasks, and first-time use or interview sessions were the most common. The terminology used to describe these studies and/or the main methods used for the evaluation was limited to either the overall type of study or main method (e.g., experiment, test sessions with users, case study, usability testing, heuristic evaluation, user studies, test/evaluation, interviews, user-centered/participatory design, UX agile project, etc.) or the purpose of the study (e.g., performative evaluation, qualitative study, etc.).
Study Focus, Devices, Participants, and Evaluation Tools

Across the 106 studies, I also collected data on what was being tested or investigated, the types of devices used in the evaluation and whether these were testing or personal devices, the characteristics or criteria given for participants, and the tools or techniques used to collect data and/or evaluate experiences within a study.

Study Focus. The testing of mobile applications, systems, and/or tools was the focus of over half of the evaluations, by far the most common focus across all of the studies. These evaluations included navigation or information guides; methods for data collection such as diary or lifelogging tools; learning or health-related tools; collaboration or crowdsourcing tools; authentication or security tools or systems; games; mobile or video user interfaces; social networking apps; tools for facilitating interactions across devices; and so on, with some specifically mentioning a focus on context-aware apps.

For 17% of the studies, behavior or perceptions related to a specific type of activity, service, or technology was the focus (e.g., use of various mobile devices and technology, methods of locking/unlocking a mobile device, current use of a library, self-tracking methods, etc.). A focus on designs or preferences was evident for 9% of studies (e.g., auditory and haptic aesthetic appeals of different mobile casings and sounds, participatory design of wellness activity gadgets and games, mobile infographics, etc.); 8% of studies focused on the use of different gestures or input techniques with mobile devices; and 7% focused on different types of messaging or mediums of communication, manuals, and content delivery methods.

Additionally, 19% of studies mentioned providing an introduction, instructions, or guidance to participants regarding what they were testing; 19% described providing a demonstration or tutorial to participants before starting; and 6% mentioned allowing participants to explore or familiarize themselves with what was being tested before beginning tasks.

Devices Used in a Study. When mobile devices were used in a study, 42% of studies mentioned using testing devices (e.g., participants were given a testing device to use for the research rather than using their own device), and 28% mentioned having participants use their personal device for the research. Additionally, in five studies participants had the option to use their personal device or a testing device; in three studies participants used a personal smartphone as well as a testing device such as a wearable; and one study mentioned providing participants with smartphones to use for testing as their personal phone, which they could keep after the research ended. For field studies specifically, 47% of studies mentioned participants using personal devices when possible, whereas only 6% of lab studies (i.e., only one study) mentioned participants using personal devices.

Smartphones were the most common type of device used or focused on across all the studies, and when the type of smartphone was specified, Android smartphones were by far the most commonly mentioned (66%) versus iPhones or Windows smartphones. Wearables or sensors (e.g., smartwatches, Microsoft Kinect, Google Glass, etc.) were used in 15 studies, often in conjunction or in comparison with smartphones used in the study as well. Similarly, desktop or laptop computers were mentioned in 10 studies as platform comparisons within a study or as part of a combined setup.
Tablets, nearly always Android versions, were mentioned in 8 studies.

Participant Characteristics. The most common characteristics or criteria for participants were representative experience or activity (53% of the studies), including: use of a specific type of device such as smartphones (16%); a specific type of activity or usage (e.g., wellness management, commuting, cloud storage use) (14%); specific job roles (e.g., rideshare drivers, developers) or expertise (e.g., musical training, business background) (11%); experts and/or novices of a service (6%); and visitors or customers (e.g., of a museum or library) (6%). (Note: Multiple types of participant characteristics may have been described within a single study, and therefore the overall frequency distributions described for participants exceed 100%.)

Additionally, disability or health conditions (e.g., visual impairments, functional limitations, sleep conditions, etc.) were used as a focus and participant criteria (12%), as were focuses on a particular culture/country or comparisons between cultures (e.g., China, Sweden and Latvia, Finland and India, China and the United States, Uganda, etc.) (12%). A few studies focused on age (e.g., seniors, children, etc.) (7%). However, specific characteristics or criteria were not evident for 41% of the studies, as 29% described using a convenience sample (e.g., university students, staff, and/or researchers, friends and/or family of researchers, or researchers themselves) and 11% did not provide a description for participants.

Evaluation Tools/Techniques to Collect Data. A variety of tools and techniques (Figure 5) were used across the different types of studies to collect data and/or evaluate experiences (based on what was mentioned by the researchers for a study). Across field, lab, and unspecified studies, questionnaires (which were completed by participants before, during, and/or after the main research) were the most commonly used tool for collecting data. For some studies, online surveys were used as the main method or in conjunction with other methods.

Figure 5. Tools/techniques used to collect data and/or evaluate experiences within a study.

Observation and diary and experience sampling methods were used exclusively for field studies, and data logs or sensors were used for nearly half of these types of studies. Diary or experience sampling methods involved answering specific prompts, providing subjective ratings about experiences and other measures, and providing comments or other types of information either daily or directly after a specific use or activity (e.g., an app or message could prompt the participant to answer the questions each time) to gain contextual data over a specified period of continued use, such as a week. The use of data logs or sensors involved the collection of usage data (inputs, interactions, time to respond, system/web logs, etc.) for a specific device, system, or tool; behavior or specific activity data; and/or contextual data (e.g., GPS, environmental, and mobility data) via mobile apps for data logging and the sensors of mobile smartphones or wearables. This type of data collection was also used by approximately a quarter of the other types of studies.

Interviews were most often used as a complement within studies, such as when participants were on their own for most of the research. "Discussion" refers to studies that involved focus groups or user feedback sessions that were not task focused. Video and audio recordings were more often used in lab settings to collect data. The collection of photographs, which was mentioned infrequently, involved a wearable camera that automatically took photographs or participants submitting photographs they took based on specific prompts (e.g., a photo diary). Additionally, across all the types of studies, the use of a talk aloud protocol was typically not mentioned.
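Diary and experience sampling procedures of the kind described above are typically driven by a simple prompt scheduler on the participant's device. The sketch below illustrates signal-contingent sampling (random prompts within waking hours, spaced apart so participants are not interrupted incessantly); it is a minimal illustration with assumed parameter values and function names, not a protocol taken from any reviewed study.

import random
from datetime import datetime, time, timedelta

def schedule_prompts(day: datetime, n_prompts: int = 5,
                     start: time = time(9, 0), end: time = time(21, 0),
                     min_gap: timedelta = timedelta(minutes=45)) -> list:
    """Draw n_prompts random prompt times within the day's waking window,
    keeping each prompt at least min_gap from the others."""
    window_start = datetime.combine(day.date(), start)
    window_len = (datetime.combine(day.date(), end) - window_start).total_seconds()
    prompts = []
    while len(prompts) < n_prompts:
        candidate = window_start + timedelta(seconds=random.uniform(0, window_len))
        if all(abs(candidate - p) >= min_gap for p in prompts):
            prompts.append(candidate)
    return sorted(prompts)

def record_entry(prompt_time: datetime, ratings: dict, context: dict = None) -> dict:
    """One diary/ESM entry: subjective ratings or comments plus any context
    captured in the moment (e.g., a GPS fix or activity label)."""
    return {"prompt": prompt_time.isoformat(),
            "ratings": ratings,
            "context": context or {}}

# Example: five prompts for today, at least 45 minutes apart.
for t in schedule_prompts(datetime.now()):
    print(t.strftime("%H:%M"))

A design choice worth noting: the minimum gap and waking-hours window trade data density against participant burden, the same balance that several of the reviewed studies discussed when giving participants some control over when prompts arrive.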
Mobile Context of Use

As described earlier, frameworks of context (Table 4) have been developed as a means to understand how context can be considered and how to plan and conduct UX research that accounts for the dynamic nature of mobile technologies. In addition, a push for considering, or expanding the consideration of, the local, cultural, global, technological, and psychological experiences of users has been described (Sun, 2012; Meloncon, 2017). Using these frameworks and considerations as guides, I reviewed how researchers were describing context, such as through the purpose and setting of their studies and through the types of methods used and types of data collected to understand the context of use of participants.

Table 4. Mobile context frameworks or models.

"A Contextual Usability Framework for a Mobile Computing Environment" (Coursaris & Kim, 2011):
• Environment: Physical (auditory, visual, co-location, experiment type); psychosocial or social conditions
• User: Demographics/culture; knowledge/experience/self-efficacy; perception/cognitions; emotional/psychological context
• Task/Activity: Realism: open (user defines outcome) vs. closed (pre-defined outcome or goal); task descriptions (open/unstructured)
• Technology: Device type; interface - input mode

"A Model of Context of Use in Human-Mobile Computer Interaction" (Jumisko-Pyykkö & Vainio, 2010):
• Physical: Spatial location, functional place, and space; sensed environmental attributes; movements and mobility; artefacts
• Social: Persons present; culture; interpersonal actions
• Task: Multitasking; interruptions; task type
• Technical and Information: Other systems and services; mixed reality; interoperability; informational artifacts and access
• Temporal: Duration; time of day/weeks/year; before - during - after; actions in relation to time; synchronism
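Read as a checklist, a framework like Jumisko-Pyykkö and Vainio's (2010) can be used to audit which context components a given study design addresses. The sketch below is one illustrative encoding of that idea; the structure and the example study are my own assumptions, not an instrument from the reviewed literature.

from dataclasses import dataclass, field

@dataclass
class MobileContextOfUse:
    """Context components from Jumisko-Pyykkö & Vainio (2010), used here
    as an illustrative checklist of what a study design covers."""
    physical: dict = field(default_factory=dict)   # location, environment, mobility, artefacts
    social: dict = field(default_factory=dict)     # persons present, culture, interpersonal actions
    task: dict = field(default_factory=dict)       # multitasking, interruptions, task type
    technical: dict = field(default_factory=dict)  # other systems/services, interoperability, information
    temporal: dict = field(default_factory=dict)   # duration, time of day, before/during/after use

    def uncovered(self) -> list:
        """Components the study design says nothing about."""
        return [name for name, value in vars(self).items() if not value]

# Hypothetical field study of a navigation app, walked through the checklist.
study = MobileContextOfUse(
    physical={"setting": "outdoor city route", "mobility": "walking"},
    task={"task_type": "closed navigation tasks", "interruptions": "uncontrolled"},
    technical={"device": "participant's own Android smartphone"},
    temporal={"length": "single session, first-time use"},
)
print(study.uncovered())  # ['social']: a gap to address or acknowledge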
Environment or Physical Context. Across the different types of studies, 71% were described as being conducted in an actual physical environment of use or behavior, or in an environment or semi-naturalistic setting of likely use or behavior (e.g., in a specific public cafe or while walking a specific route outdoors or indoors), thereby specifically considering this type of context in order to resemble real use as much as possible. The rationale for field studies often involved a discussion of the importance of realistic or uncontrolled environments that do not constrain users and of "testing in ecologically valid contexts over sufficient periods to face real-world challenges" (Panëels, Olmos, Blum, & Cooperstock, 2013, p. 2107). Also, two of the lab studies included simulated field settings (e.g., using virtual reality to simulate an outdoor setting, or simulating bar, office, and lab contexts using background sounds and props).

Across the publications, 54% included at least one study with participants on their own, and for the field studies specifically, 60% involved participation in the research without the presence of a researcher (i.e., allowing the user to participate in their context of use without researcher disruption). Of these "user on own" studies, 79% involved open trial use of an app, system, or similar, or open behavior of participants, instead of restriction to specific tasks only (i.e., allowing for a broader range of usage and activities by the user).

The use of data logs and sensors, most often via the functionalities of mobile devices or the addition of a specific application, was also evident as a way to collect contextual environmental and location data, as well as usage data such as activities, inputs, and so on. Overall, 52% of the publications included at least one study that used this type of contextual tool, although studies varied in the range of data collected (e.g., Filho, Prata, & Oliveira (2016) described collecting data on a user's location, movement status, weather conditions, data connection availability, and spontaneous facial expressions in context).

Additionally, a focus on context-awareness was a trend evident across the publications (35%), either through evaluating a context-aware application or product (e.g., Micallef, Just, Baillie, Halvey, & Kayacik (2015) evaluated a context-sensitive screen locking application that determined when to require a user to unlock their phone based on their context of use and preferences set by the user) and/or through trying to understand how use can differ by context in relation to a design or product or in relation to the collection of data (e.g., context-triggered experience sampling methods).

User or Social Context. In terms of user characteristics, 53% of the studies used representative experience or activity as the criteria for participants, including experience with a specific type of mobile device, job role, or service, or status as a visitor or customer (e.g., of a specific museum). However, few studies included or mentioned other types of characteristics, such as a range of experiences, backgrounds, or use; different user groups (e.g., different cultures or novices and experts); a particular culture or country or a comparison between cultures; disability or health conditions; or various age ranges. Several studies that focused on cultural contexts described the need to consider how appropriate (or not) Western methods may be, such as differences across cultures in how comfortable participants may be speaking aloud or giving criticism of a product (e.g., Jonsson et al., 2016; Rau et al., 2015; Sim, Read, Gregory, & Xu, 2015; Sun, May, & Wang, 2017; Zhang, 2017). Additionally, 29% of studies described using only a convenience sample (such as university students) and 11% did not provide a description of their participants.

Across all types of studies, questionnaires were mentioned by 51% of studies and were used before, during, and/or after usage as a means to collect data from users regarding knowledge, perceptions, experiences, and/or satisfaction/emotion, although the types of questionnaires or questions used were not always described. For field studies in particular, the use of questionnaires, diary or experience sampling methods, and/or online surveys was mentioned for 74% of studies. Diary or experience sampling methods involved asking participants to answer specific prompts or questions either just after a specific activity or use of a device or service, once a day, or after receiving a message or prompt at random. These types of contextual tools can allow for collecting data in the moment and across continued use while also aiming to avoid interrupting a participant incessantly.
Task/Activity Context. The types of tasks or activities described across the different types of studies were somewhat limited, with 51% involving participants attempting specific tasks, 28% including open trial use, 9% focusing on open behavior, and 9% including a combination of these types. However, 71% of studies did involve a field setting, which allows for the inclusion of a real-world or quasi-real-life setting that can include multi-tasking (such as movement or other activities while using a mobile device) and potential interruptions similar to everyday life. Various tools such as questionnaires, diary or experience sampling methods, or data logs and sensors could be used to collect data on different types of activities, the impacts of interruptions, attention patterns, distracting contextual factors, and so on (e.g., Beja, Lanir, & Kuflik, 2015; Grandhi & Jones, 2015; Schröder, Hirschl, & Reichl, 2016). Diary or experience sampling methods (e.g., Chang, Paruthi, Wu, Lin, & Newman, 2017; Hernandez, McDuff, Infante, Maes, Quigley, & Picard, 2016; Rapp & Cena, 2016; Sun, Golightly, Cranwell, Bedwell, & Sharples, 2013) are useful for collecting data on behavior in real time and in context, although a balance was often discussed between considering preferences and providing some control to participants to avoid too much interruption and to fit their everyday situations, as well as the potential for missing data due to selective reporting or the perceptions of the participants. In addition, some studies looked at activities and context settings to understand connections and multi-activity, such as the use of mobile device screen capture data and a wearable camera to capture the actual situations or contexts of use and the types of activities engaged in, as well as to allow for synchronizing with the user's interactions with the mobile device (e.g., Licoppe & Figeac, 2017).

Technology or Technical/Information Context. Overall, the inclusion of or option to use personal devices during the research was more common for field studies (47% of these studies) versus 28% across all of the different types of studies. In addition, the types of devices used (either testing or personal) were heavily Android across the different types of studies, and only 12 studies focused on a comparison or interaction between mobile devices, most often involving smartphones and wearables such as smartwatches (e.g., Braga Sangiorgi, 2014; Chattopadhyay, Salvadori, O'Hara, & Rintel, 2017; Chen, Grossman, Wigdor, & Fitzmaurice, 2014; Sun, Golightly, Cranwell, Bedwell, & Sharples, 2013), demonstrating a lack of focus on these types of interactions and on the use of different types of devices.

Temporal Context. Across the publications, 46% included at least one study conducted over a period of at least 1 to 4 weeks, showing some evidence of a push toward studies focused on use over a period of time. However, across all of the studies, 58% involved first-time use or 1-2 sessions (which includes studies that primarily used focus groups, interviews, or online surveys) and 26% were conducted over a period of weeks. For field studies in particular, this range was less drastic, with 43% involving first-time use and 36% conducted over a period of weeks, with 1 to 4 weeks as the most common time frame. As described previously, data logs or sensors were used to collect usage data, which often included time of day and duration.
Questionnaires and diary or experience sampling methods were also used to collect data before, during, and/or after use.

Including at least one phase or study in a series that is at least semi-longitudinal allows time to understand contexts of use, activity and behavior, different types of technology use, and so on. For example, pilot or controlled studies can be used to test specific variables and ensure the feasibility of a product or study in an earlier phase, and a longitudinal field study conducted over weeks allows "for a reliable estimation of users' evaluation of a product or service in the long run, [as] it is critical that users have time to familiarize themselves with it and [that] they have gained experiences of it in varied situations" (Kujala, Mugge, & Miron-Shatz, 2017, p. 58).

Discussion & Conclusion

The results of this study demonstrate a shift from the findings of Coursaris and Kim (2011) and further demonstrate researchers moving into the wild and considering the environmental or physical context of use: the use of field settings was evident either across all the studies included in a publication or in at least one of the studies within a publication, with varying levels of control within settings outside of a lab. Additionally, the majority of field studies were conducted with participants on their own, allowing for more realistic behavior without the presence of a moderator. Although the use of specific or closed tasks was still common across the different types of studies (as found by Coursaris & Kim, 2011), field studies did represent a shift toward more realistic use, with 40% involving open trial use and nearly half involving participants using personal devices, when possible.

In terms of the length of a field study and temporal context, there was some evidence of the push toward longitudinal use (as recommended by Kjeldskov & Skov, 2014), although it was most common for these studies either to involve first-time use or to involve semi-longitudinal use (most commonly between 1 to 4 weeks) to understand how technology fits into people's lives; some studies also included a demonstration or trial use before testing even began. However, similar to Coursaris and Kim (2011), the most common type of user characteristic involved representative experience or activity, and over a quarter of studies used convenience samples, demonstrating a lack of inclusion of a range of characteristics or user/social contexts (e.g., culture, abilities, ages, etc.).

Questionnaires were the most commonly used type of tool to collect data across the different types of studies, and data logs or sensors were also used across the different types. For field studies, diary or experience sampling methods, as well as data logs or sensors, were at times used as a form of observation, often in conjunction with interviews and questionnaires, to collect data on the perspectives and behaviors of users in context over continued usage. Through the use of these various tools and open trial use or open behavior in the field, researchers can also collect data on multi-activity, typical interruptions or distractions, and potentially the use of different technologies or cross-device activity (although this type of context was not described often). Multiple methods were also commonly used, either across different studies within a publication or within one study, similar to the findings of Coursaris and Kim (2011).
Including a variety of methods, such as the use of both lab and field settings, first-time use and continued use, and moderated and "user on own" setups across different studies in a process, showcases ways to use varying levels of control and to consider different types of context (i.e., one type of study design may be unable to capture such a range of useful findings). Several publications also demonstrated a focus on considering how to provide or increase control or personalization options for users, such as a means to reduce interruptions or burdens on users (e.g., looking at ways to make technology more flexible and to allow for more customization to individual needs and adaptation across different types of contexts or situations for the user). Additionally, at times, cultural differences were considered in determining or customizing study design (e.g., whether use of a think aloud protocol was applicable). Determining relevant metrics beyond the traditional usability focuses of efficiency, effectiveness, and satisfaction was also observed: "Depending on the experience that is of interest, methods and metrics need to be designed to capture that experience—for example, if the objective of the evaluation were to measure fun, then metrics would be required to capture fun" (Sim et al., 2015, p. 268). This flexible or customized approach (i.e., one size does not fit all) was therefore evident in the focus on choosing research methods based on the goal of each study or phase of a study (e.g., concept validation in a lab, observation of user behavior in context, longitudinal deployment with users on their own in their everyday lives, etc.), and it is a further example of broadening research to overall user experiences and beyond.

Overall, a turn to the "wild" or field settings and a broadening of mobile UX research and contextual considerations is evident through this systematic review: a push to be more open and less controlling, the conduct of research in dynamic environments of use over semi-longitudinal time periods, and the use of a range of methods and tools based on the goals and context of each study to allow for greater understanding of impacts and performance. However, further research and consideration are needed for the inclusion of a broader range of experiences and user/social contexts to continue to understand and meet the needs of humans and their ever-expanding and evolving technological ecosystems.

REFERENCES

Ahtinen, A., Isomursu, M., Ramiah, S., & Blom, J. (2013). Advise, acknowledge, grow and engage: Design principles for a mobile wellness application to support physical activity. International Journal of Mobile Human Computer Interaction, 5(4), 20–55. doi:10.4018/ijmhci.2013100102

Ali, A. E., & Ketabdar, H. (2013). Magnet-based around device interaction for playful music composition and gaming. International Journal of Mobile Human Computer Interaction, 5(4), 56–80. doi:10.4018/ijmhci.2013100103

Al-Ismail, M., & Sajeev, A. S. M. (2014). Usability challenges in mobile web: A systematic literature review. In Proceedings of the IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), 50–55. Jakarta, Indonesia: IEEE. doi:10.1109/COMNETSAT.2014.7050525

Alnuaim, A., Caleb-Solly, P., & Perry, C. (2014). Evaluating the effectiveness of a mobile location-based intervention for improving human-computer interaction students' understanding of context for design. International Journal of Mobile Human Computer Interaction, 6(3), 16–31. doi:10.4018/ijmhci.2014070102
Apostu, S., Al-Nuaimi, A., Steinbach, E., Fahrmair, M., Song, X., & Möller, A. (2013). Towards the design of an intuitive multi-view video navigation interface based on spatial information. In Proceedings of the 15th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’13), 103–112. Munich, Germany: ACM. doi:10.1145/2493190.2493240

Baillie, L., & Schatz, R. (2005). Exploring multimodality in the laboratory and the field. In Proceedings of the 7th International Conference on Multimodal Interfaces (ICMI ’05), 100–107. Trento, Italy: ACM. doi:10.1145/1088463.1088482

Banovic, N., Brant, C., Mankoff, J., & Dey, A. (2014). ProactiveTasks: The short of mobile device use sessions. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’14), 243–252. Toronto, ON, Canada: ACM. doi:10.1145/2628363.2628380

Barnum, C. M. (2002). Usability testing and research. New York, NY: Longman.

Beja, I., Lanir, J., & Kuflik, T. (2015). Examining factors influencing the disruptiveness of notifications in a mobile museum context. Human–Computer Interaction, 30(5), 433–472. doi:10.1080/07370024.2015.1005093

Brade, J., Lorenz, M., Busch, M., Hammer, N., Tscheligi, M., & Klimant, P. (2017). Being there again – Presence in real and virtual environments and its relation to usability and user experience using a mobile navigation task. International Journal of Human-Computer Studies, 101, 76–87. doi:10.1016/j.ijhcs.2017.01.004

Braga Sangiorgi, U. (2014). Electronic sketching on a multi-platform context: A pilot study with developers. International Journal of Human-Computer Studies, 72(1), 45–52. doi:10.1016/j.ijhcs.2013.08.018

Burke, R., & Broderick, J. (2017). Navigating the gig: Rideshare drivers and mobile technologies in context. In Proceedings of the 35th ACM International Conference on the Design of Communication (SIGDOC ’17), Article No. 34. Halifax, NS, Canada: ACM. doi:10.1145/3121113.3121233

Chang, Y.-J., Paruthi, G., Wu, H.-Y., Lin, H.-Y., & Newman, M. W. (2017). An investigation of using mobile and situated crowdsourcing to collect annotated travel activity data in real-word settings. International Journal of Human-Computer Studies, 102, 81–102. doi:10.1016/j.ijhcs.2016.11.001

Chattopadhyay, D., Salvadori, F., O’Hara, K., & Rintel, S. (2017). Beyond presentation: Shared slideware control as a resource for collocated collaboration. Human–Computer Interaction, 0(0), 1–44. doi:10.1080/07370024.2017.1388170

Chen, X. A., Grossman, T., Wigdor, D. J., & Fitzmaurice, G. (2014). Duet: Exploring joint interactions on a smart phone and a smart watch. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 159–168. Toronto, ON, Canada: ACM. doi:10.1145/2556288.2556955

Coursaris, C. K., & Kim, D. J. (2011). A meta-analytical review of empirical mobile usability studies. Journal of Usability Studies, 6(3), 117–171.

Crabtree, A., Chamberlain, A., Grinter, R. E., Jones, M., Rodden, T., & Rogers, Y. (2013). Introduction to the special issue of “The Turn to The Wild”. ACM Transactions on Computer-Human Interaction (TOCHI), 20(3), Article No. 13.

Cui, Y., & Honkala, M. (2013). A novel mobile device user interface with integrated social networking services. International Journal of Human-Computer Studies, 71(9), 919–932. doi:10.1016/j.ijhcs.2013.03.004
Faris, M. J., & Selber, S. A. (2013). iPads in the technical communication classroom: An empirical study of technology integration and use. Journal of Business and Technical Communication, 27(4), 359–408. doi:10.1177/1050651913490942

Filho, J. F., Prata, W., & Oliveira, J. (2016). Affective-ready, contextual and automated usability test for mobile software. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI ’16), 638–644. Florence, Italy: ACM. doi:10.1145/2957265.2961834

FitzGerald, E., & Adams, A. (2015). Revolutionary and evolutionary technology design processes in location-based interactions. International Journal of Mobile Human Computer Interaction, 7(1), 59–78. doi:10.4018/ijmhci.2015010104

Grandhi, S. A., & Jones, Q. (2015). Knock, knock! Who’s there? Putting the user in control of managing interruptions. International Journal of Human-Computer Studies, 79, 35–50. doi:10.1016/j.ijhcs.2015.02.008

Gugenheimer, J., De Luca, A., Hess, H., Karg, S., Wolf, D., & Rukzio, E. (2015). Colorsnakes: Using colored decoys to secure authentication in sensitive contexts. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’15), 274–283. Copenhagen, Denmark: ACM. doi:10.1145/2785830.2785834

Güldenpfennig, F., Nunes, F., Ganglbauer, E., & Fitzpatrick, G. (2016). Making space to engage: An open-ended exploration of technology design with older adults. International Journal of Mobile Human Computer Interaction, 8(2), 1–19. doi:10.4018/ijmhci.2016040101

Häkkilä, J. R., Posti, M., Schneegass, S., Alt, F., Gultekin, K., & Schmidt, A. (2014). Let me catch this! Experiencing interactive 3D cinema through collecting content with a mobile phone. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 1011–1020. Toronto, ON, Canada: ACM. doi:10.1145/2556288.2557187

Han, K., Shih, P. C., Bellotti, V., & Carroll, J. M. (2015). It’s time there was an app for that too: A usability study of mobile timebanking. International Journal of Mobile Human Computer Interaction, 7(2), 1–22. doi:10.4018/ijmhci.2015040101

Han, S. H., Yun, M. H., Kwahk, J., & Hong, S. W. (2001). Usability of consumer electronic products. International Journal of Industrial Ergonomics, 28(3–4), 143–151.

Harrison, S., Tatar, D., & Sengers, P. (2007). The three paradigms of HCI. In Proceedings of Alt. Chi. Session at the SIGCHI Conference on Human Factors in Computing Systems (CHI 2007). San Jose, CA, USA: ACM.

Hennes, J., Wiley, K., & Anderson, J. B. (2016). The Trail Reporter mobile application: Methods for UX research and communication design as civic agency. In Proceedings of the 34th ACM International Conference on the Design of Communication (SIGDOC ’16), Article No. 24. Silver Spring, MD, USA: ACM. doi:10.1145/2987592.2987620

Hernandez, J., McDuff, D., Infante, C., Maes, P., Quigley, K., & Picard, R. (2016). Wearable ESM: Differences in the experience sampling method across wearable devices. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’16), 195–205. Florence, Italy: ACM. doi:10.1145/2935334.2935340

International Organization for Standardization. (1998). Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: Guidance on usability (ISO Standard No. 9241-11:1998(E), 1st ed.).
International Organization for Standardization. (2010). Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems (ISO Standard No. 9241-210:2010(en)). Retrieved from https://www.iso.org/obp/ui/#iso:std:iso:9241:-210:ed-1:v1:en

International Organization for Standardization. (2018). Ergonomics of human-system interaction — Part 11: Usability: Definitions and concepts (ISO Standard No. 9241-11:2018(en), 2nd ed.). Retrieved from https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en

Jonsson, O., Haak, M., Tomsone, S., Iwarsson, S., Schmidt, S. M., Martensson, K., Svensson, T., & Slaug, B. (2016). Cross-national usability study of a housing accessibility app: Findings from the European innovAge project. Journal of Usability Studies, 12(1), 26–49. Retrieved from http://uxpajournal.org/

Jumisko-Pyykkö, S., & Vainio, T. (2010). Framing the context of use for mobile HCI. International Journal of Mobile Human Computer Interaction, 2(4), 1–28.

Kaikkonen, A., Kekäläinen, A., Cankar, M., Kallio, T., & Kankainen, A. (2005). Usability testing of mobile applications: A comparison between laboratory and field testing. Journal of Usability Studies, 1(1), 4–16.

Kaptein, M., Markopoulos, P., de Ruyter, B., & Aarts, E. (2015). Personalizing persuasive technologies: Explicit and implicit personalization using persuasion profiles. International Journal of Human-Computer Studies, 77, 38–51. doi:10.1016/j.ijhcs.2015.01.004

Kefalidou, G., & Sharples, S. (2016). Encouraging serendipity in research: Designing technologies to support connection-making. International Journal of Human-Computer Studies, 89, 1–23. doi:10.1016/j.ijhcs.2016.01.003

Kim, J. W., Kim, H. J., & Nam, T. J. (2016). M.Gesture: An acceleration-based gesture authoring system on multiple handheld and wearable devices. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2307–2318. San Jose, CA, USA: ACM. doi:10.1145/2858036.2858358

Kjeldskov, J., Skov, M. B., Als, B. S., & Høegh, R. T. (2004). Is it worth the hassle? Exploring the added value of evaluating the usability of context-aware mobile systems in the field. In S. Brewster & M. Dunlop (Eds.), MobileHCI 2004, LNCS 3160 (pp. 61–73). Berlin Heidelberg, Germany: Springer-Verlag. doi:10.1007/978-3-540-28637-0_6

Kjeldskov, J., & Skov, M. B. (2014). Was it worth the hassle? Ten years of mobile HCI research discussions on lab and field evaluations. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (MobileHCI ’14), 43–52. Toronto, ON, Canada: ACM. doi:10.1145/2628363.2628398

Kujala, S., Mugge, R., & Miron-Shatz, T. (2017). The role of expectations in service evaluation: A longitudinal study of a proximity mobile payment service. International Journal of Human-Computer Studies, 98, 51–61. doi:10.1016/j.ijhcs.2016.09.011

Kwahk, J., & Han, S. H. (2002). A methodology for evaluating the usability of audiovisual consumer electronic products. Applied Ergonomics, 33, 419–431.

Lam, C. (2013). The efficacy of text messaging to improve social connectedness and team attitude in student technical communication projects: An experimental study. Journal of Business and Technical Communication, 27(2), 180–208. doi:10.1177/1050651912468888
Lawson, S., Jamison-Powell, S., Garbett, A., Linehan, C., Kucharczyk, E., Verbaan, S., Rowland, D., & Morgan, K. (2013). Validating a mobile phone application for the everyday, unobtrusive, objective measurement of sleep. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13), 2497–2506. Paris, France: ACM. doi:10.1145/2470654.2481345

Licoppe, C., & Figeac, J. (2017). Gaze patterns and the temporal organization of multiple activities in mobile smartphone uses. Human–Computer Interaction, 0(0), 1–24. doi:10.1080/07370024.2017.1326008

Loorbach, N., Karreman, J., & Steehouder, M. (2013). Verification steps and personal stories in an instruction manual for seniors: Effects on confidence, motivation, and usability. IEEE Transactions on Professional Communication, 56(4), 294–312. doi:10.1109/TPC.2013.2286221

Matthews, M., Murnane, E., & Snyder, J. (2017). Quantifying the changeable self: The role of self-tracking in coming to terms with and managing bipolar disorder. Human–Computer Interaction, 32(5–6), 413–446. doi:10.1080/07370024.2017.1294983

Melicher, W., Kurilova, D., Segreti, S. M., Kalvani, P., Shay, R., Ur, B., Bauer, L., Christin, N., Cranor, L. F., & Mazurek, M. L. (2016). Usability and security of text passwords on mobile devices. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 527–539. San Jose, CA, USA: ACM. doi:10.1145/2858036.2858384

Meloncon, L. (2017). Embodied personas for a mobile world. Technical Communication, 64(1), 50–65.

Micallef, N., Just, M., Baillie, L., Halvey, M., & Kayacik, H. G. (2015). Why aren’t users using protection? Investigating the usability of smartphone locking. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’15), 284–294. Copenhagen, Denmark: ACM. doi:10.1145/2785830.2785835

Musić, J., & Murray-Smith, R. (2016). Nomadic input on mobile devices: The influence of touch input technique and walking speed on performance and offset modeling. Human–Computer Interaction, 31(5), 420–471. doi:10.1080/07370024.2015.1071195

O'Kane, A. A., Rogers, Y., & Blandford, A. E. (2014). Gaining empathy for non-routine mobile device use through autoethnography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 987–990. Toronto, ON, Canada: ACM. doi:10.1145/2556288.2557179

Oh, J., Kim, J., Kim, M., Choi, W., Lee, S., & Lee, U. (2017). Understanding mobile document capture and correcting orientation errors. International Journal of Human-Computer Studies, 104, 64–79. doi:10.1016/j.ijhcs.2017.03.004

Panëels, S. A., Olmos, A., Blum, J. R., & Cooperstock, J. R. (2013). Listen to it yourself! Evaluating usability of “what's around me?” for the blind. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13), 2107–2116. Paris, France: ACM. doi:10.1145/2470654.2481290

Pearson, J., Robinson, S., & Jones, M. (2017). BookMark: Appropriating existing infrastructure to facilitate scalable indoor navigation. International Journal of Human-Computer Studies, 103, 22–34. doi:10.1016/j.ijhcs.2017.02.001

Piasek, P., Irving, K., & Smeaton, A. F. (2015). Exploring boundaries to the benefits of lifelogging for identity maintenance for people with dementia. International Journal of Mobile Human Computer Interaction, 7(4), 76–90. doi:10.4018/ijmhci.2015100105

Potts, L., & Salvo, M. J. (2017). Introduction. In L. Potts & M. J. Salvo (Eds.), Rhetoric and experience architecture (pp. 3–13). Anderson, SC: Parlor Press.
Rapp, A., & Cena, F. (2016). Personal informatics for everyday life: How users without prior self-tracking experience engage with personal data. International Journal of Human-Computer Studies, 94, 1–17. doi:10.1016/j.ijhcs.2016.05.006

Rau, P. L. P., Huang, E., Mao, M., Gao, Q., Feng, C., & Zhang, Y. (2015). Exploring interactive style and user experience design for social web of things of Chinese users: A case study in Beijing. International Journal of Human-Computer Studies, 80, 24–35. doi:10.1016/j.ijhcs.2015.02.007

Redish, J. G., & Barnum, C. (2011). Overlap, influence, intertwining: The interplay of UX and technical communication. Journal of Usability Studies, 6(3), 90–101.

Rozado, D., Moreno, T., Agustin, J. S., Rodriguez, F. B., & Varona, P. (2015). Controlling a smartphone using gaze gestures as the input mechanism. Human–Computer Interaction, 30(1), 34–63. doi:10.1080/07370024.2013.870385

Ryokai, K., & Agogino, A. (2013). Off the paved paths: Exploring nature with a mobile augmented reality learning tool. International Journal of Mobile Human Computer Interaction, 5(2), 21–49. doi:10.4018/jmhci.2013040102

Schröder, S., Hirschl, J., & Reichl, P. (2016). CoConUT: Context collection for non-stationary user testing. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI ’16), 924–929. Florence, Italy: ACM. doi:10.1145/2957265.2962658

Seebode, J., Schleicher, R., & Möller, S. (2014). Affective quality of audio feedback on mobile devices in different contexts. International Journal of Mobile Human Computer Interaction, 6(4), 1–21. doi:10.4018/ijmhci.2014100101

Silvennoinen, J., Vogel, M., & Kujala, S. (2014). Experiencing visual usability and aesthetics in two mobile application contexts. Journal of Usability Studies, 10(1), 46–62. Retrieved from http://uxpajournal.org/

Sim, G., Read, J. C., Gregory, P., & Xu, D. (2015). From England to Uganda: Children designing and evaluating serious games. Human–Computer Interaction, 30(3–4), 263–293. doi:10.1080/07370024.2014.984034

Singh, A., Piana, S., Pollarolo, D., Volpe, G., Varni, G., Tajadura-Jiménez, A., Williams, A. C., Camurri, A., & Bianchi-Berthouze, N. (2016). Go-with-the-Flow: Tracking, analysis and sonification of movement and breathing to build confidence in activity despite chronic pain. Human–Computer Interaction, 31(3–4), 335–383. doi:10.1080/07370024.2015.1085310

Sonderegger, A., & Sauer, J. (2015). The role of non-visual aesthetics in consumer product evaluation. International Journal of Human-Computer Studies, 84, 19–32. doi:10.1016/j.ijhcs.2015.05.011

Strantz, A. (2016). Designing a lab for mobile user experience through NFC-tagged documentation. In Proceedings of the 34th ACM International Conference on the Design of Communication (SIGDOC ’16), Article No. 31. Silver Spring, MD, USA: ACM. doi:10.1145/2987592.2987627

Sullivan, P. (2017). Beckon, encounter, experience: The danger of control and the promise of encounters in the study of user experience. In L. Potts & M. J. Salvo (Eds.), Rhetoric and experience architecture (pp. 17–40). Anderson, SC: Parlor Press.

Sun, H. (2012). Cross-cultural technology design: Creating culture-sensitive technology for local users. New York, NY: Oxford University Press.

Sun, X., Golightly, D., Cranwell, J., Bedwell, B., & Sharples, S. (2013). Participant experiences of mobile device-based diary studies. International Journal of Mobile Human Computer Interaction, 5(2), 62–83. doi:10.4018/jmhci.2013040104
Sun, X., May, A., & Wang, Q. (2017). Investigation of the role of mobile personalisation at large sports events. International Journal of Mobile Human Computer Interaction, 9(1), 1–15. doi:10.4018/ijmhci.2017010101

Swierenga, S. J., Propst, D. B., Ismirle, J., Figlan, C., & Coursaris, C. K. (2014). Mobile design guidelines for outdoor recreation and tourism. In F. F.-H. Nah (Ed.), Human-Computer Interaction in Business, HCII 2014, LNCS 8527 (pp. 371–378). Cham, Switzerland: Springer International Publishing. doi:10.1007/978-3-319-07293-7_36

Teli, M., Bordin, S., Menéndez Blanco, M., Orabona, G., & De Angeli, A. (2015). Public design of digital commons in urban places: A case study. International Journal of Human-Computer Studies, 81, 17–30. doi:10.1016/j.ijhcs.2015.02.003

Tham, J. C. K. (2017). Wearable writing: Enriching student peer review with point-of-view video feedback using Google Glass. Journal of Technical Writing and Communication, 47(1), 22–55. doi:10.1177/0047281616641923

Vandenbroucke, K., Ferreira, D., Goncalves, J., Kostakos, V., & De Moor, K. (2014). Mobile cloud storage: A contextual experience. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’14), 101–110. Toronto, ON, Canada: ACM. doi:10.1145/2628363.2628386

Wang, W., & Reani, M. (2017). The rise of mobile computing for Group Decision Support Systems: A comparative evaluation of mobile and desktop. International Journal of Human-Computer Studies, 104, 16–35. doi:10.1016/j.ijhcs.2017.02.008

Wein, L. (2014). Visual recognition in museum guide apps: Do visitors want it? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 635–638. Toronto, ON, Canada: ACM. doi:10.1145/2556288.2557270

Winckler, M., Bach, C., & Bernhaupt, R. (2013). Identifying user experience dimensions for mobile incident reporting in urban contexts. IEEE Transactions on Professional Communication, 56(2), 97–119. doi:10.1109/TPC.2013.2257212

Xu, Q., Li, L., Lim, J. H., Tan, C. Y. C., Mukawa, M., & Wang, G. (2014). A wearable virtual guide for context-aware cognitive indoor navigation. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’14), 111–120. Toronto, ON, Canada: ACM. doi:10.1145/2628363.2628390

Zemla, J. C., Tossell, C. C., Kortum, P., & Byrne, M. D. (2015). A Bayesian approach to predicting website revisitation on mobile phones. International Journal of Human-Computer Studies, 83, 43–50. doi:10.1016/j.ijhcs.2015.06.002

Zhang, Y. (2017). Assessing attitudes toward content and design in Alibaba’s dry goods business infographics. Journal of Business and Technical Communication, 31(1), 30–62. doi:10.1177/1050651916667530