Category: Research Methods and Statistics
Crowdsourcing methods for data collection have been established as an efficient way to gather information from particular groups of people (e.g., participants with specific interests, mental health concerns, or demographic backgrounds; Chandler, Moecchi, & Shapiro, 2013). Prior research has established Amazon’s Mechanical Turk (AMT) as an effective platform for collecting clinical research data (Chandler & Shapiro, 2016), with AMT participants offering reliable data and more diverse backgrounds than the typical college-student sample (Casler, Bickel, & Hackett, 2013). Data were compiled from a 15-day study in which 145 participants were asked to report their daily mood, life satisfaction, substance use, and Internet gaming. Of the 1,086 total diary responses, 15.1% included an entry in an open comments box (164 comments total, from 41 participants; 28.3% of participants). An inductive content analysis was conducted following established guidelines for thematic and content reviews (Hsieh & Shannon, 2005). Two major themes emerged from the 164 comments: positive feedback, both broad (46.2%) and specific (2.5%), and response enrichment (31.7%), which included rationales for behaviors (18.4%), examples and elaborations (12%; e.g., “To explain, I woke up at 8 a.m. to play something and update it then…woke up hours later and check it again.”), and reported changes in behavior (1.3%; e.g., “I have started using nicotine [gum], I feel like cigarette use is the first thing I have to eliminate.”). Additional areas explored include survey access difficulties (1.3%); data accuracy concerns, comprising temporal notes (e.g., “This response is for yesterday”; 11.4%), tracking (e.g., “day four”; 1.9%), and error correction (1.9%); and study improvement suggestions (3.1%).
In addition to data from the main (14-day) diary study, data from a 3-day “false start” of the same survey (ended by an AMT site crash) are presented for contrast; the false-start responses contained a greater proportion of comments related to study improvement (17.6% of total comments), with implications for pilot research and the quality of AMT participant responses. Overall, these comments offered insight into participant behavior that substantiates the inclusion of open-ended comment boxes in online clinical survey research.
Charlotte Beard – Graduate Student, Palo Alto University, Milpitas, California
Travis Hyke – Doctoral Student, Palo Alto University, Sunnyvale, California
Shweta Ghosh – Palo Alto University
Amie Haas – Associate Professor, Palo Alto University, Palo Alto, California