CREST Outputs

Do smartphone usage scales predict behavior?

Understanding how people use technology remains important, particularly when measuring the impact this might have on individuals and society. However, despite a growing body of resources that can quantify smartphone use, research within psychology and social science overwhelmingly relies on self-reported assessments. These have yet to convincingly demonstrate an ability to predict objective behavior. Here, and for the first time, we compare a variety of smartphone use and ‘addiction’ scales with objective behaviors derived from Apple's Screen Time application. While correlations between psychometric scales and objective behavior are generally poor, single estimates and measures that attempt to frame technology use as habitual rather than ‘addictive’ correlate more favorably with subsequent behavior. We conclude that existing self-report instruments are unlikely to be sensitive enough to accurately predict basic technology use related behaviors. As a result, conclusions regarding the psychological impact of technology are unreliable when relying solely on these measures to quantify typical usage.

(From the journal abstract)


Ellis, D. A., Davidson, B. I., Shaw, H., & Geyer, K. (2019). Do smartphone usage scales predict behavior? International Journal of Human-Computer Studies, 130, 86–92.

Authors: David Ellis, Brittany Davidson, Heather Shaw, Kristoffer Geyer
https://doi.org/10.1016/j.ijhcs.2019.05.004
Behavioral consistency in the digital age

Efforts to infer personality from digital footprints have focused on behavioral stability at the trait level without considering situational dependency. We repeat Shoda, Mischel, and Wright’s (1994) classic study of intraindividual consistency with data on 28,692 days of smartphone usage by 780 people. Using per app measures of ‘pickup’ frequency and usage duration, we found that profiles of daily smartphone usage were significantly more consistent when taken from the same user than from different users (d > 1.46). Random forest models trained on 6 days of behavior identified each of the 780 users in test data with 35.8% / 38.5% (pickup / duration) accuracy. This increased to 73.5% / 75.3% when success was taken as the user appearing in the top 10 predictions (i.e., top 1%). Thus, situation-dependent stability in behavior is present in our digital lives and its uniqueness provides both opportunities and risks to privacy.

(From the journal abstract)


Shaw, H., Taylor, P., Ellis, D. A., & Conchie, S. (2021). Behavioral consistency in the digital age [Preprint]. PsyArXiv.

Authors: Heather Shaw, Paul Taylor, David Ellis, Stacey Conchie
https://doi.org/10.31234/osf.io/r5wtn
Language Style Matching: A Comprehensive List of Articles and Tools

Language style matching (LSM) is a technique in behavioural analytics that assesses stylistic similarities in language use across groups and individuals. The procedure targets the similarity of function words, analysing the way people use language rather than its content. Function words include pronouns, articles, conjunctions, prepositions, and auxiliary verbs, which play a syntactic role in language. To assess the similarity of language use between interlocutors, the percentage of function words used can be compared within and across conversations using a metric designed to calculate the matching of specific word categories and overall LSM (Ireland et al., 2011). It is also possible to assess language style matching against a group’s aggregate style. High language style matching is an indicator of interpersonal and group mimicry and has been shown to influence psychological factors and behavioural outcomes. These are listed in this preprint and categorised by topic. The list aims to be a complete summary of the existing literature on LSM to date; please email the author about any projects or tools not listed below.
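The per-category metric reported by Ireland et al. (2011) can be sketched as follows: each category's similarity is 1 − |p1 − p2| / (p1 + p2 + 0.0001), where p1 and p2 are the percentages of each speaker's words falling in that category, and overall LSM is the mean across categories. The speaker percentages below are hypothetical:

```python
# A minimal sketch of the LSM metric; category percentages are made up.
from statistics import mean

def category_lsm(p1: float, p2: float) -> float:
    """Similarity for one function-word category, given two speakers'
    usage percentages. The 0.0001 term avoids division by zero."""
    return 1 - abs(p1 - p2) / (p1 + p2 + 0.0001)

def overall_lsm(profile1: dict, profile2: dict) -> float:
    """Mean LSM across the function-word categories in profile1."""
    return mean(category_lsm(profile1[c], profile2[c]) for c in profile1)

# Hypothetical function-word percentages for two interlocutors.
speaker1 = {"pronouns": 12.0, "articles": 6.5, "prepositions": 13.0}
speaker2 = {"pronouns": 10.0, "articles": 7.0, "prepositions": 12.5}

print(f"overall LSM = {overall_lsm(speaker1, speaker2):.3f}")
```

Identical profiles yield an LSM of 1, and increasingly divergent function-word use pulls the score toward 0, which is what makes the metric usable as an index of linguistic mimicry.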


Shaw, H., Taylor, P., Conchie, S., & Ellis, D. A. (2019, March 6). Language Style Matching: A Comprehensive List of Articles and Tools [Preprint]. PsyArXiv.

Authors: Heather Shaw, Paul Taylor, Stacey Conchie, David Ellis
https://doi.org/10.31234/osf.io/yz4br
Fuzzy constructs in technology usage scales

The mass adoption of digital technologies raises questions about how they impact people and society. Associations between technology use and negative correlates (e.g., depression and anxiety) remain common. However, pre-registered studies have failed to replicate these findings. Regardless of direction, many designs rely on psychometric scales that claim to define and quantify a construct associated with technology engagement. These often suggest clinical manifestations present as disorders or addictions. Given their importance for research integrity, we consider what these scales might be measuring. Across three studies, we observe that many psychometric scales align with a single, identical construct despite claims they capture something unique. We conclude that many technology measures appear to measure a similar, poorly defined construct that sometimes overlaps with pre-existing measures of well-being. Social scientists should critically consider how they proceed methodologically and conceptually when developing psychometric scales in this domain to ensure research findings sit on solid foundations.


Davidson, B. I., Shaw, H., & Ellis, D. A. (2022). Fuzzy constructs in technology usage scales. Computers in Human Behavior, 133, 107206.

Authors: Brittany Davidson, Heather Shaw, David Ellis
https://doi.org/10.1016/j.chb.2022.107206
