Learning from Authoritative Security Experiment Results
Measuring the Success of Context-Aware Security Behaviour Surveys
Ingolf Becker, Simon Parkin and M. Angela Sasse
University College London
Abstract
Background: We reflect on a scenario-based survey methodology that evolved through deployment in two large partner organisations (A & B). In each organisation, scenarios are grounded in real, relatable workplace tensions between security and employees' productive tasks, such as those arising during use of specific security controls, rather than in established but generic questionnaires. Survey responses allow clustering of participants according to pre-defined groups.
Aim: We aim to establish the usefulness of questions about actual controls and problems experienced by employees by assessing the validity of the clustering. We introduce measures for the appropriateness of the survey scenarios for each organisation and the quality of candidate answer options. We use these scores to articulate the methodological improvements between the two surveys.
Method: We develop a methodology to verify the clustering of participants: two annotators code 516 (A) and 195 (B) free-text responses, and inter-annotator metrics are adopted to identify agreement. Further, we analyse 5196 (A) and 1824 (B) appropriateness and severity scores to measure the appropriateness and quality of the questions.
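The inter-annotator agreement metric reported below is Krippendorff's alpha. A minimal sketch of its computation for the two-annotator, nominal-code case (no missing values) is shown here; the function name and the toy codings are illustrative, not the paper's data:

```python
from collections import Counter
from itertools import product

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, nominal categories, no missing data."""
    assert len(coder1) == len(coder2)
    # Coincidence matrix: each coded unit contributes both ordered value pairs.
    coincidences = Counter()
    for a, b in zip(coder1, coder2):
        coincidences[(a, b)] += 1
        coincidences[(b, a)] += 1
    n_total = 2 * len(coder1)
    # Marginal totals per category (row sums of the coincidence matrix).
    marginals = Counter()
    for (c, _), count in coincidences.items():
        marginals[c] += count
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(n for (c, k), n in coincidences.items() if c != k) / n_total
    # Expected disagreement under chance pairing of coded values.
    d_e = sum(marginals[c] * marginals[k]
              for c, k in product(marginals, repeat=2)
              if c != k) / (n_total * (n_total - 1))
    return 1.0 - d_o / d_e

# Hypothetical example: two annotators code four free-text responses.
alpha = krippendorff_alpha_nominal(["a", "a", "b", "a"], ["a", "a", "b", "b"])
```

Values of alpha near 1 indicate strong agreement; the threshold of 0.7 used in the Results is a common reliability cut-off.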
Results: Participants rank questions in B as more appropriate than in A, although the variation in the seriousness of the answer options available to participants is higher in B than in A. We find that the respondents in B are more likely to commit to their answers than in A, suggesting that the survey design has indeed improved. The annotators mostly agree strongly on their codings, with Krippendorff's α > 0.7. A number of clusterings should be questioned, although α improves for reliable questions by 0.15 from A to B.
Conclusions: To be able to draw valid conclusions from survey responses, the chain of analysis needs to be verifiable. Our approach allows us to further validate the clustering of responses by utilising free-text responses. Further, we establish the relevance and appropriateness of the scenarios for individual organisations. While much prior research draws on survey instruments from earlier studies, these are often applied in a different context; in such cases, adding metrics of appropriateness and severity to the survey design helps ensure that results represent security on the ground.