Survey Logic Usability Testing

Overview

  • Company: Feedback Loop (acquired by DISQO)

  • Role: UX researcher

  • Methods: evaluative interviews, unmoderated & moderated usability testing

  • Skills: interviewing, usability testing, qualitative analysis, affinity mapping, whiteboarding

  • Tools: Zoom, Condens, Figma, Figjam, Maze, Google Suite, Pendo

  • Deliverables: presentations (slide decks), results summary, product recommendations

  • Impacts: increased self-serve product (feature) usage, improved user education

This research comprised three phases aimed at enhancing the usability and functionality of survey logic within DISQO’s survey platform, Experience Suite (previously known as Feedback Loop).

In Phase 1, I examined “skip logic” usage within the platform, a feature users had historically applied incorrectly and one that drew a high volume of customer questions and complaints. The objective was to gather feedback on these issues via interviews in order to improve the usability and discoverability of skip logic and inform our “survey builder” roadmap.

Phase 2 centered on the evaluation of our recently introduced “display logic” feature, which had been suggested by users in Phase 1. By conducting customer interviews and moderated usability testing, we sought to understand users' mental models for survey logic, programming processes, and how well the current feature met survey use cases. This feedback was valuable for informing further development of logic capabilities and the implementation timeline.

Finally, in Phase 3, we internally tested an updated logic design within the survey builder that focused on both skip and display logic. The objective was to ensure the clarity and usability of various logic-related actions and features, such as multi-select logic rules, terminology, and the application of skip and display logic when building a survey. I used Maze to provide internal users with tasks to test the usability of the updated survey logic designs.

What is survey logic?

In the context of surveys, "logic" generally refers to the rules that guide how a survey responds to the answers that a respondent provides. These rules allow for the creation of dynamic and responsive surveys that can adjust to individual responses, enhancing the relevance of subsequent questions and overall user experience.

  • Skip Logic: routes survey participants to a future survey question or section based on their responses, skipping irrelevant questions for a smoother survey experience.

  • Display Logic: controls visibility of survey questions or sections based on prior answers, personalizing the survey by only showing relevant questions.
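To make the distinction concrete, the two rule types can be modeled as simple conditions evaluated against a respondent's answers. This is a minimal, hypothetical sketch for illustration only; the function and field names are my own and do not reflect the Experience Suite implementation.

```python
# Illustrative model of skip vs. display logic.
# All names here are hypothetical, not the Experience Suite implementation.

def next_question(current, answers, skip_rules, order):
    """Skip logic: jump ahead to a later question when a rule matches."""
    for rule in skip_rules.get(current, []):
        if answers.get(current) == rule["if_answer"]:
            return rule["go_to"]  # route past irrelevant questions
    idx = order.index(current)
    return order[idx + 1] if idx + 1 < len(order) else None

def is_visible(question, answers, display_rules):
    """Display logic: show a question only if its condition is met."""
    rule = display_rules.get(question)
    if rule is None:
        return True  # no rule means the question is always shown
    return answers.get(rule["if_question"]) == rule["equals"]

order = ["q1", "q2", "q3"]
# Skip q2 entirely if q1 was answered "No"
skip_rules = {"q1": [{"if_answer": "No", "go_to": "q3"}]}
# Only display q2 if q1 was answered "Yes"
display_rules = {"q2": {"if_question": "q1", "equals": "Yes"}}

answers = {"q1": "No"}
print(next_question("q1", answers, skip_rules, order))  # q3
print(is_visible("q2", answers, display_rules))         # False
```

Note how the same survey goal can be expressed either way: skip logic is framed negatively (route *around* a question), while display logic is framed positively (show a question *when* a condition holds), which is part of why the two feel so different to users.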

Experience Suite historically supported “tagging” logic that was programmed in the backend of the survey platform, so customers never had to apply it themselves. In 2022, DISQO released a user-facing platform that allowed customers to build their own surveys. This platform initially included a “skip” logic feature, with subsequent versions shaped by this research.

Objectives

Phase 1: evaluate skip logic feature

Collect feedback on skip logic experiences & issues from internal and external users to improve the feature’s usability and discoverability; use insights to inform future development of survey logic and other needed improvements.

Phase 2: evaluate display logic feature & explore users’ survey & logic mental models

Understand if the new display logic feature meets the needs of users and identify areas needing improvement. Additionally, investigate the process of survey programming and the role of logic while also evaluating user understanding of skip and display logic terms and their significance.

Phase 3: internally test updated skip & display logic designs

Internally test and improve the usability of an updated survey logic design (built based on user feedback provided in Phases 1 & 2). Ensure clarity in logic rules, terminology, and application of skip and display logic.

Approach

Phase 1

  • 5 moderated 30-minute usability interviews with users, evaluating current skip logic functionality

  • Recorded interviews via Zoom and stored/transcribed in Gong

  • Used Figjam for affinity mapping and whiteboarding sessions to discuss feedback and ideas

Phase 2

  • 5 moderated 45-minute usability interviews with customers, including discovery questions around survey programming and logic as well as a usability task involving creating and programming a draft survey with the new display logic feature.

  • Recorded interviews over Zoom and transcribed/synthesized in Condens.

  • Created comprehensive slide deck that incorporated findings & recommendations; presented to design, product, engineering, and customer success teams.

Phase 3

  • Performed an internal unmoderated usability test using Maze with survey tasks ("missions") based on the updated survey logic design.

  • Collected 10 responses with participants sharing their screen and recording their video/audio

  • Created slide deck that incorporated Maze test results & synthesized data; shared findings with design, product, engineering, and customer success teams.

Learnings

Outlined below are the key learnings documented across the various research phases.

  • There was significant user confusion around skip logic, indicating a need for clear and simple guidance and educational resources in-platform.

  • Users’ mental models and terminology for logic vary, likely dependent on how extensive their research backgrounds are. This has implications for how much and what type of support (via help center & customer success) should be offered to users.

  • Users found the application and general understanding of survey logic, especially multi-select conditions, complex and cognitively demanding. The terminology and condition setup used in the platform were unclear to users.

  • Visual management of skip logic was lacking and needed improvement to help users understand which questions included logic (and which did not).

  • A series of specific feature enhancements and additions, including visual indicators, branching logic capabilities, and improved instructions, were suggested by external & internal users.

  • On a positive note, the introduction of display logic was better received by users. The “positive” framing of display logic better aligned with users’ mental models. This was evident in higher usability scores and positive feedback provided during the usability testing.

Research Impact

  • Increased feature usage (AKA increased revenue): after multiple stakeholder presentations and recurring discussions about the design and functionality of survey logic within the Experience Suite platform, the feature underwent several design iterations before reaching its current state. These iterations were directly informed by the findings and recommendations from the evaluative survey logic research I conducted. We tracked feature usage in Pendo and saw a gradual increase, as well as a shift in usage from skip to display logic, further validating the finding that display logic aligns better with users’ mental models than skip logic.

  • Improved usability: the findings highlighted critical usability issues that directly impacted the experience of programming logic in the Experience Suite platform. Addressing these issues made the tool more user-friendly and reduced cognitive load. Making survey logic intuitive to use resulted in higher self-serve engagement and satisfaction, leading to increased adoption and likely reduced abandonment.

  • More user education: the confusion between skip and display logic signaled a gap in user knowledge. Enhancing user education through clear in-platform explanations, onboarding guides, stronger visual management, and help center articles empowered users to use the platform more effectively. An informed user base is likely to make fewer errors, spend less time asking customer success for help, and create surveys more effectively.

  • Iterative product development: user feedback offered valuable insights for survey logic product development. Users suggested improvements such as visual indicators, more detailed information on skip logic, and branching logic capabilities. Implementing these enhancements can streamline the survey creation process, meeting user needs more precisely and giving the product a competitive advantage.

In summary, this project had a substantial impact on survey logic within the Experience Suite product. It informed areas for product improvement, highlighted the importance of user education, drove better customer support, and ultimately enhanced the overall experience of programming surveys on the platform.
