Written by Kene Anoliefo | August 7, 2024

How to Drive AI Adoption by Breaking through the Blank Box Barrier

In this guest post, veteran UX Researcher Misha Cornes shares how AI companies can drive sustained growth by uncovering and prioritizing user needs.

Misha Cornes is a User Research executive with experience building and scaling research teams at companies like McKinsey and Lyft. This piece was originally published on LinkedIn.

Filling the Blank Box

Have you ever used a large language model (LLM) like ChatGPT or Google Gemini and wondered, “Am I doing this right?” You’re not alone. According to a survey by Adobe Analytics, 53% of Americans have experimented with generative AI since ChatGPT’s debut in 2022, but fewer than half of those who tried it continue to use it regularly.

One explanation for the drop-off between trial and adoption is what I call the Blank Box Barrier. LLMs hold limitless potential for everything from content creation to scientific discovery in a single blank text box, but the wide range of use cases makes it difficult for people to know where to start. After the initial novelty wears off, many first-time users struggle to pinpoint exactly how to integrate LLMs into their daily lives.

To overcome the Blank Box Barrier, LLM companies must refine their products around a narrower set of use cases that can deliver consistent, repeatable value for consumers.

For many AI companies, the end goal is artificial general intelligence, or AGI. Unlike narrow AI systems that are skilled at specific tasks like image recognition or transcription, AGI systems will have the same type of broad, general intelligence that humans have, including the ability to plan, reason and learn. Striking a balance between general intelligence and domain specialization will be a delicate dance: domain specialization makes AI more useful for specific tasks and can drive near-term adoption, but it might do so at the expense of investing in long-term general intelligence.

How do AI companies avoid creating products that are jacks of all trades but masters of none? I recommend building a Use Case Taxonomy to uncover customer needs and prioritize the features that will be most valuable for driving both short-term adoption and long-term intelligence goals.

Building A Use Case Taxonomy

A Use Case Taxonomy is a structured map of the various ways customers use your product. A Use Case begins before a customer ever encounters your product: what problem emerged in their life that caused them to go out into the world and look for a solution? A Use Case details that specific need, along with other important dimensions like goals, workflows and challenges.

Because most products have multiple Use Cases, we create a “taxonomy” to capture the full breadth. The Use Case Taxonomy can be used in many ways, not least of which is to align product, technical and GTM teams on why customers are seeking out your product and what they hope to achieve.

To illustrate, let's consider Notion. Notion has emerged as a powerful, all-purpose tool to create, organize and publish content. An abbreviated version of their Use Case Taxonomy might look something like the following:
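As a rough illustration of the structure (the entries below are hypothetical examples, not Notion's actual taxonomy), each Use Case can be captured as a simple record with the dimensions described above: name, need, goals, workflows and challenges. Here's a minimal sketch in Python:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One entry in a Use Case Taxonomy, using the dimensions described above."""
    name: str
    need: str               # the problem that sent the customer looking for a solution
    goals: list[str]        # what success looks like for them
    workflows: list[str]    # how they use the product to get there
    challenges: list[str]   # friction they run into along the way

# Hypothetical, abbreviated entries for a tool like Notion (illustrative only)
taxonomy = [
    UseCase(
        name="Personal knowledge base",
        need="Notes and ideas are scattered across apps and hard to find later",
        goals=["Capture ideas quickly", "Retrieve them when needed"],
        workflows=["Quick-capture pages", "Tags and linked databases"],
        challenges=["Keeping the system organized over time"],
    ),
    UseCase(
        name="Team documentation",
        need="Institutional knowledge lives in people's heads or in chat threads",
        goals=["A single source of truth the whole team trusts"],
        workflows=["Shared wikis", "Templates for recurring documents"],
        challenges=["Keeping pages up to date", "Permissions and discoverability"],
    ),
]
```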

Creating a Use Case Taxonomy with HEARD

The best way to build a Use Case Taxonomy is through qualitative UX research like in-person interviews and focus groups that allow you to talk to your customers and understand their needs and goals. But one challenge for AI companies doing comprehensive qual research is scale: with millions of customers using their products in thousands of distinct ways, it could take months to interview a representative sample of their user base.

To solve this challenge, I recently used a product called HEARD to build a Use Case Taxonomy for a popular LLM company that wanted to define which audiences to target to drive growth beyond initial power users. 

HEARD uses AI to moderate open-ended, exploratory interviews with your customers. During the interviews, the AI adapts the questions it asks in real-time depending on what people say – just like a human moderator would. As it talks to more customers, it learns from prior answers to ask smarter questions. Instead of taking weeks to schedule, moderate and analyze in-person interviews, HEARD can interview hundreds of people in a day or two. This not only speeds up the research lifecycle, but also enables you to capture enough data to see real patterns and trends emerge at scale (which is notoriously hard to do with qualitative research).
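HEARD's implementation details aren't public, but the general pattern is straightforward to sketch. The following is a minimal, hypothetical illustration of an adaptive interview loop, where each follow-up question is generated from the research goal and everything the participant has said so far; the `ask_llm` helper is a stand-in for whatever chat-completion call backs the moderator, not HEARD's actual code:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whatever LLM backs the moderator."""
    # A real implementation would call a chat-completion API here.
    return "Can you tell me more about the last time that happened?"

def run_interview(research_goal: str, opening_question: str, max_turns: int = 6) -> list[dict]:
    """Chat-based interview in which each follow-up question adapts to prior answers."""
    transcript: list[dict] = []
    question = opening_question
    for _ in range(max_turns):
        answer = input(f"{question}\n> ")  # participant replies in a chat box
        transcript.append({"question": question, "answer": answer})
        # The moderator adapts: the next question is generated from the goal
        # and the full transcript so far, just as a human moderator would probe.
        question = ask_llm(
            f"You are moderating a user research interview about: {research_goal}\n"
            f"Transcript so far: {transcript}\n"
            "Ask one short, open-ended follow-up question that digs into the "
            "participant's underlying need. Do not repeat earlier questions."
        )
    return transcript
```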

Over the course of three days, HEARD conducted over 1,200 short (under 10 minutes each), chat-based interviews with the LLM’s users and then summarized the findings into an insights report. It asked questions designed to collect the data we needed to build the Use Case Taxonomy around customer needs and challenges. After completing the AI interviews, we did a small number of in-person interviews to add further “emotional” texture to the insights. With these insights, we created a taxonomy of the top eight Use Cases driving the LLM’s adoption to date, with a detailed description of each, similar to the table above.

The taxonomy enabled us to segment the product into distinct use cases and see both the overlap and tension between each. For example, we discovered that for some use cases, users wanted to keep their activity on the platform private while for other use cases, users wanted to share their activity with their community. For the first time, leadership could see a full view of customer needs and begin making focused decisions based on the mapping.

Leadership prioritized five use cases based on factors like market demand, competitive landscape, and alignment with the LLM's technical capabilities. For instance, the team decided to prioritize a use case around workplace productivity because close to 25% of the existing customers already used the product in this way, and workplace productivity represents a large potential TAM that could drive growth beyond early adopters. 
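One simple way to make this kind of prioritization explicit is a weighted scoring exercise across the use cases in the taxonomy. The factors, weights and scores below are hypothetical examples, not the ones the team actually used:

```python
# Illustrative weighted-scoring sketch for prioritizing use cases.
FACTOR_WEIGHTS = {
    "market_demand": 0.35,    # e.g. share of existing users already doing this
    "competitive_gap": 0.25,  # how underserved the need is elsewhere
    "technical_fit": 0.25,    # alignment with the model's current capabilities
    "tam": 0.15,              # size of the broader addressable market
}

use_case_scores = {
    "Workplace productivity": {"market_demand": 5, "competitive_gap": 3, "technical_fit": 4, "tam": 5},
    "Creative writing":       {"market_demand": 4, "competitive_gap": 2, "technical_fit": 5, "tam": 3},
    "Coding assistance":      {"market_demand": 3, "competitive_gap": 3, "technical_fit": 4, "tam": 4},
}

def priority(scores: dict[str, int]) -> float:
    """Weighted sum of 1-5 factor scores."""
    return sum(FACTOR_WEIGHTS[factor] * score for factor, score in scores.items())

for name, scores in sorted(use_case_scores.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: {priority(scores):.2f}")
```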

With this new alignment, the company began to coordinate strategy throughout all layers of the business – from how to iterate on the underlying model to what messaging would most resonate with potential users in marketing campaigns. And while the company’s model could still technically “do anything” in the long term, it was able to begin steering consumers toward high-value use cases in the short term.

This case study illustrates the power of UXR in uncovering diverse user needs and guiding product development, and how LLM companies can actually use AI to achieve this at scale. The next challenge is translating these insights into specific features that align with the product roadmap. Teams often grapple with the tension between prioritizing solutions for real user problems and pursuing technical advancements to maintain a competitive edge. In an upcoming article, I'll delve deeper into this critical challenge, exploring strategies that balance user-led design and cutting-edge AI research to bridge the Blank Box Barrier. 

Conclusion

The Blank Box Barrier is a significant hurdle in the path of widespread LLM adoption. While the potential of these tools is vast, users often find themselves at a loss when faced with a blank text box, unsure of how to leverage the technology effectively. By constructing a Use Case Taxonomy, AI companies can identify and prioritize high-value use cases and align them with the product's technical capabilities. In doing so, they can transform initial curiosity into sustained engagement, making LLMs an indispensable tool in the daily lives of users.

