Use, User and Usability Issues
User Research and Methods I
Best practices are neither adequately defined nor fully implemented within cartographic user studies. What constitutes a “best practice,” and why is it relevant to cartographic research? However appealing, the very concept of best practices, where “best” is relative to any comparable practice intended to achieve the same goal, is prone to oversimplification. Generally speaking, every facet of a study purporting to adhere to “best practices” should be reliable and valid, from its design, to its implementation, to the publication of its outcomes. Yet we have never formally determined what cartographic best practices actually are, nor even the conditions necessary for identifying them. Ideally, best practices would pervade every aspect of each cartographic user study. In reality, we employ a diverse body of optimal, sub-optimal, and ad hoc qualitative and quantitative practices to conduct and report on user studies. As established methods of investigation give way to novel approaches and analytical techniques, it is critical that we be able to pair particular research questions with the best methods available.
One long-standing goal of cartographic researchers has been to establish a best practices framework that clarifies how to choose and mix human subjects research methods appropriate to cartography. I examined this broad goal by creating a database of over 200 existing cartographic user studies to analyze the causal relationship between how user studies are reported and how subsequent user studies are designed, focusing exclusively on methods of recruiting, vetting, and assessing human subjects prior to their participation in their respective studies. The database distinguishes “front-end” user study content (content accessible to other researchers, such as refereed articles) from “back-end” content (the design of and methods used in the study), allowing me to gauge similarities between study designs. This, in turn, allows the results and conclusions of one author to be evaluated against the design of ensuing studies by different authors within a single area of study or using a single method of investigation. This capability is particularly valuable given the gap between the diverse set of practices we employ and our ability to consistently identify when and how to select, apply, and report on those practices.
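To make the front-end/back-end distinction concrete, the following minimal Python sketch shows one way such database records and a design-similarity measure could be structured. Everything here is illustrative: the `StudyRecord` schema, the attribute vocabulary, and the choice of Jaccard similarity over back-end attributes are assumptions for exposition, not the database’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class StudyRecord:
    """One catalogued cartographic user study (hypothetical schema)."""
    study_id: str
    # "Front-end" content: what other researchers can read, e.g. the article.
    reported_items: frozenset
    # "Back-end" content: how the study was actually designed and run.
    design_items: frozenset

def jaccard(a: frozenset, b: frozenset) -> float:
    """Similarity of two attribute sets: |A & B| / |A | B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def design_similarity(s1: StudyRecord, s2: StudyRecord) -> float:
    """Gauge how alike two study designs are via their back-end attributes."""
    return jaccard(s1.design_items, s2.design_items)

# Example: compare a source study with an ensuing study that may have
# taken design cues from it (study IDs and attributes are invented).
source = StudyRecord(
    "study-A",
    reported_items=frozenset({"sample-size", "recruitment-method"}),
    design_items=frozenset({"convenience-sampling", "expert-subjects",
                            "pre-study-questionnaire"}),
)
ensuing = StudyRecord(
    "study-B",
    reported_items=frozenset({"sample-size"}),
    design_items=frozenset({"convenience-sampling", "novice-subjects",
                            "pre-study-questionnaire"}),
)
print(f"design similarity: {design_similarity(source, ensuing):.2f}")  # 0.50
```

Comparing only the back-end attribute sets, as above, is what lets a well-reported study be evaluated against ensuing studies even when the front-end articles describe their designs unevenly.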
In taking study design cues from pre-existing published studies without a reliable study evaluation metric, cartographers unwittingly perpetuate the “worst” practices alongside the “best.” Quality and consistency are conjoined: adhering to best practices yields more reliable results, facilitates the comparison of results between studies, and improves the quality and consistency of emerging research. The identification of best user study design practices must therefore consider the effects of both well-reported and poorly-reported source material. A well-designed study is of little use to another researcher if it is poorly reported, and a poorly-reported study can produce many negative outcomes regardless of whether its design was sound. To achieve greater consistency of best practices in the design of studies, we must promote equally rigorous reporting techniques. My findings demonstrate how a user study database can be used to reveal the “best” practices among the many existing practices for designing and reporting on user studies.