Ad Testing Product Research

Overview

  • Company: DISQO

  • Role: UX researcher

  • Methods: generative & evaluative user interviews, unmoderated usability testing, sales call audit

  • Skills: interviewing, usability testing, qualitative analysis, stakeholder buy-in

  • Tools: Zoom, Condens, Figma, Maze, Gong, Google Suite

  • Deliverables: presentations (slide decks), spreadsheets

  • Impact: successful product launch, feature improvements, happy customers, competitive parity

This research explored the needs of a new subset of DISQO’s customers (“ad testing”) and which product features and upgrades were needed to improve conversion rates among this targeted group. Sales call analysis, generative and evaluative interviews, product demo feedback, and usability testing with a variety of users (trial users, new customers, internal stakeholders) revealed the need to develop a benchmarking scorecard, an industry-standard feature, along with other potential product improvements such as template updates, self-serve audience building, and additional benchmarking levels.

Ultimately, we established a competitive self-serve product that landed new ad testing users while defining and exploring points of differentiation. Beyond product enhancements, the research findings also helped inform service model needs for the customer success and sales teams.

What is ad testing?

Ad testing is the process of evaluating ad concepts by gathering feedback and measuring consumer responses and behavior. It involves presenting different creatives (images, videos) to a representative sample of the target audience via survey and assessing aspects such as effectiveness, standout, interest, and believability. Brands invest heavily in ads due to their proven effectiveness, with pre-campaign ad testing further increasing the chances of a successful return. Testing ads before a launch helps drive campaign impact and mitigate risks.

DISQO’s ad testing solution includes a DIY survey builder, a defined target audience within the survey tool, and a results page with data visualizations. Customers have access to benchmarks via scorecard and raw survey data.
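
To make the scoring mechanics concrete, below is a minimal sketch in Python of how one scorecard metric might be computed from survey ratings and compared to a benchmark. The 5-point scale, the top-two-box scoring convention, and all names (MetricResult, score_metric) are illustrative assumptions, not DISQO's actual implementation.

```python
# Hypothetical sketch: scoring one ad-test metric and comparing it to a
# benchmark. The scale, metric names, and scoring rule are assumptions for
# illustration only, not DISQO's real scoring logic.

from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str           # e.g. "effectiveness", "standout", "interest"
    top_two_box: float  # share of respondents rating 4 or 5 on a 5-point scale
    benchmark: float    # comparable share from a benchmark database
    delta_pts: float    # metric minus benchmark, in percentage points

def score_metric(name: str, ratings: list[int], benchmark: float) -> MetricResult:
    """Score one metric as the top-two-box share of 1-5 ratings."""
    t2b = sum(1 for r in ratings if r >= 4) / len(ratings)
    return MetricResult(name, t2b, benchmark, (t2b - benchmark) * 100)

# Example: 200 simulated ratings for one creative on one metric.
ratings = [5] * 60 + [4] * 70 + [3] * 40 + [2] * 20 + [1] * 10
result = score_metric("effectiveness", ratings, benchmark=0.58)
print(f"{result.name}: {result.top_two_box:.0%} vs. benchmark "
      f"{result.benchmark:.0%} ({result.delta_pts:+.0f} pts)")
```

Top-two-box (the share of 4s and 5s) is a common survey summary; a real scorecard would layer significance testing and multiple benchmark levels on top of a score like this.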

Objectives

  • Explore the needs of ad testing customers to inform future versions of the product

  • Understand customers’ pain points around their current ad testing experiences

  • Collect concept & usability feedback around proposed new features

  • Transform customers’ needs, pain points, & feedback into actionable feature recommendations to improve customer acquisition rates

Approach

The research took place over six months and involved several iterative phases: a sales and demo call audit, generative and evaluative customer interviews, product demos, and unmoderated internal usability testing.

Each phase had distinct objectives, such as understanding customer pain points and needs (discovery), collecting feedback on product iterations (evaluation) via interviews & product demos, and testing new features (design iteration). The research also aimed to inform service model needs for the customer success and sales teams by collecting feedback on self-serve (DIY) features, needed services, and pricing.

Learnings & Recommendations

  • Sales call analysis revealed that customers value customization in ad tests and visibility into the survey and audience-building processes. Customers emphasized the need for highly reliable, trustworthy data for validating their creatives. Some expressed skepticism toward subscription-based pricing and self-serve ad testing tools, surfacing data points for considering alternative pricing models in the future. Customers expect industry-standard benchmarks in creative testing, at varying levels of granularity. Without a benchmarking scorecard, some users developed workarounds, such as using subjective scales, comparing recent tests, or manually creating benchmarks. To stay competitive, offering customer-specific or industry-level benchmarks, or even more granular benchmarks by sub-industry, creative stage, or channel, could be beneficial.

  • Customer interviews provided valuable feedback on the scorecard design. Users liked its clear, easy-to-understand layout and its inclusion of standard metrics. However, areas of confusion included the representation of percentages and benchmarks, the meaning of "brand fit," and potential accessibility issues. Design recommendations from the interviews included changing the color palette, shading high-performing metrics, using icons to distinguish metrics, introducing additional statistical testing, and improving the export functionality. In the final design, customers appreciated the color palette and suggested enhancing the scorecard's interactive functions. They proposed raising the confidence level to 95% and adding a top-two-box metric to the variant view in charts. Customers also expressed interest in cutting the scorecard results by segmentation questions.

  • In the follow-up internal usability testing, despite high usability scores and successful mission completions, users had misaligned expectations about the impact of raising the confidence level to 95% (illustrated in the sketch below). Recommendations included keeping the pencil-icon and drop-down design, making the guidance text more visible, and providing explicit instructions on how the scorecard adjusts when the confidence level changes.
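
The confusion around the 95% confidence level is easy to see with numbers. Below is a minimal sketch, again in Python with only the standard library, of how raising the confidence level from 90% to 95% widens the interval around an observed metric and can flip a "beats benchmark" flag to "not significant." The normal-approximation interval, the whole-interval-above-benchmark rule, and all figures are assumptions for illustration, not DISQO's actual statistics.

```python
# Hypothetical sketch: the same observed metric flagged differently at a 90%
# vs. 95% confidence level. All values and rules are illustrative only.

from math import sqrt

def proportion_ci(p: float, n: int, z: float) -> tuple[float, float]:
    """Normal-approximation confidence interval for a proportion."""
    half_width = z * sqrt(p * (1 - p) / n)
    return (p - half_width, p + half_width)

def beats_benchmark(p: float, n: int, benchmark: float, z: float) -> bool:
    """Flag a metric only if its entire interval sits above the benchmark."""
    low, _ = proportion_ci(p, n, z)
    return low > benchmark

# Observed top-two-box share vs. an assumed benchmark.
p, n, benchmark = 0.65, 300, 0.60

for level, z in (("90%", 1.645), ("95%", 1.960)):
    low, high = proportion_ci(p, n, z)
    verdict = "beats benchmark" if beats_benchmark(p, n, benchmark, z) else "not significant"
    print(f"{level} CI: [{low:.3f}, {high:.3f}] -> {verdict}")
```

At 90% the interval clears the benchmark; at 95% the same data no longer does. That more-conservative behavior is exactly what the guidance text needed to spell out for users.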

[Screenshots: V1, V2, and V3 of the benchmarking scorecard]

Research Impact

This research had both immediate and strategic impact across several areas:

  • Iterative Product Development: the research findings directly influenced the development and improvement of the benchmarking scorecard feature within the ad testing offering. Insights from user feedback not only revealed the need for new scorecard elements, but also guided iterations of existing ad testing features, such as template updates and self-serve audience building. This iterative, insight-grounded approach to product development led to a more user-friendly product that catered to the needs and preferences of both existing and new customers.

  • Customer Acquisition and Retention: by understanding the pain points and needs of the ad testing customers, the research informed strategies to enhance conversion rates and customer acquisition. The continual incorporation of customer feedback into product enhancements also improved customer satisfaction, leading to better retention of DISQO’s existing enterprise customers.

  • Sales and Customer Success Strategy: the research surfaced valuable insights about customers' perceptions of the support needed while programming surveys (DIY vs. service), pricing models, and their preferences for customization (vs. templates) in ad tests. These insights informed strategic decisions in pricing, sales approach, and service models, helping the sales and customer success teams better cater to customer needs and expectations.

  • Competitive Advantage: the research aimed to create a competitive self-serve product that meets the needs of ad testing customers. Customers provided feedback about their experiences (positive and negative) with other ad testing tools. By continuously refining the DISQO ad testing product based on this feedback, DISQO is positioned to gain a competitive edge in the market.

  • Risk Mitigation: testing and refining features based on direct customer feedback reduces the risk of large-scale failures upon launch. This user-centered approach to developing the ad testing product allowed for early detection and rectification of potential issues, ensuring a smoother, more successful product launch.

  • Internal Collaboration: this research fostered new cross-functional collaboration between the UX, product, sales, and customer success teams. Sharing research insights across departments built a shared understanding of our users’ needs, leading to a more aligned, user-centered organizational approach.

In sum, this research drove user-centered product improvements, informed strategic decisions, fostered internal collaboration, and strengthened customer satisfaction and DISQO’s competitive positioning in the ad testing space.
