What testing frameworks do you use to evaluate the effectiveness of messaging for a launch?
A few options we’ve employed recently:
Data-based indicators
- A/B testing marketing email subject lines, and evaluating click-to-open rates
- Running multiple variants of ad copy and evaluating CTR & conversion rate
- Making three variants of a landing page (usually a published Coda doc), then (1) supporting each with paid ad spend to evaluate against ad metrics and (2) measuring SEO scores in Ahrefs
- Running experiments on our site with language and/or design updates
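To make the A/B comparisons above more than eyeballing, it helps to check whether the difference between two variants is statistically meaningful. Here's a minimal sketch using a standard two-proportion z-test; the open counts are made-up example numbers, not real campaign data.

```python
# Sketch: is variant B's subject line really beating variant A's open rate?
# All counts below are illustrative placeholders.
from math import sqrt, erf

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 300 opens of 2,000 sends; variant B: 360 opens of 2,000 sends
z, p = two_proportion_z(300, 2000, 360, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 here, so B's lift is unlikely to be noise
```

The same test works for CTR and conversion-rate comparisons on ad copy or landing-page variants; just swap in clicks or conversions for opens.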
Qualitative indicators
- Social media feedback: we’re still hearing about our Enough of this Sheet campaign nearly a year later, which is a strong indicator that this message still resonates
- Phrases used/sentiments shared with our support team via email (usually sorted by topic tagging)
- Search queries in our Doc Gallery
- Talk to (and listen, listen, listen to!) your customers; get on sales calls and take furious notes
- Good ol’ fashioned user surveys
Coding free-response data or reviewing a bunch of raw text can be a bit of a slog, so I often maintain a table in Coda that I can visualize as a word cloud to give me a jumping-off point for deeper exploration and categorization.
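The word-cloud starting point boils down to a frequency tally over the raw text. A minimal sketch of that first pass, assuming the survey responses and stop-word list are illustrative placeholders:

```python
# Sketch: quick word-frequency pass over free-response text as a
# jumping-off point before deeper coding/categorization.
from collections import Counter
import re

# Illustrative stop-word list; expand for real survey data
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "it",
              "i", "we", "for", "but", "are", "was", "me"}

def word_frequencies(responses):
    """Lowercase, tokenize, drop stop words, and tally the rest."""
    counts = Counter()
    for text in responses:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOP_WORDS:
                counts[word] += 1
    return counts

# Placeholder responses
responses = [
    "Love the new templates, setup was fast",
    "Setup took a while but the templates are great",
    "Templates saved me hours of setup",
]
for word, n in word_frequencies(responses).most_common(5):
    print(word, n)  # "templates" and "setup" surface as the dominant themes
```

The high-frequency terms tell you where to dig in manually; they're a prompt for categorization, not a substitute for it.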
We generally start with the launch goals and then define more granular metrics leading up to the launch goals to measure the effectiveness of our messaging. Here are some initial ideas to consider:
- Conversion rates across the funnel - Did the messaging drive the desired actions at each customer touchpoint from initial engagement to sign up/trial/purchase?
- Signups, purchases, and/or adoption (depending on launch goals)
- Landing pages, emails, posts, and ad performance - How did individual assets and channels perform relative to benchmarks?
- Sales asset usage and engagement - Which assets, talking points, and messaging were used, and how did prospects and customers engage? Highspot will show you which sales assets are being used the most and which are driving the most customer engagement, while Gong.io provides sophisticated insights into sales calls to determine what’s working and what’s not.
- Finally, don’t forget to capture qualitative feedback on your messaging from customers, prospects, and internal stakeholders. While you’ve presumably done this pre-launch, another round of feedback post-launch will provide further insights you can quickly apply to post-launch campaigns for a fast follow.
I find UserTesting to be a great tool for getting a quick directional read on how your messaging will perform with your target. I typically test for comprehension and emotional response; in other words, did the target understand the product benefit, and how do they feel about it?
Note that UserTesting will give you about 15 responses, so paid media may be a better bet if you are interested in testing messaging at scale. Also, if you have a customer advisory council or an early beta tester group, test your messaging with that audience as well. This group can give great feedback on whether your product actually delivers on the promise in your message.
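The scale point matters: roughly 15 responses can surface comprehension problems, but detecting a modest lift in conversion takes far more people. A rough sketch of the standard two-proportion sample-size approximation, with illustrative baseline and lift values:

```python
# Sketch: per-variant sample size to detect a lift between two conversion
# rates (two-sided alpha = 0.05, power = 0.80). Rates below are assumptions.
from math import ceil

Z_ALPHA = 1.96   # two-sided 95% confidence
Z_BETA = 0.84    # 80% power

def sample_size_per_variant(p1, p2):
    """Classic two-proportion sample-size approximation."""
    numerator = (Z_ALPHA + Z_BETA) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from a 10% to a 13% conversion rate:
print(sample_size_per_variant(0.10, 0.13))  # well over 1,000 people per variant
```

Hence the suggestion above: paid media (or another high-volume channel) is the right vehicle once you need statistically meaningful reads, while small panels are best for directional comprehension checks.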
One of my biggest criteria for success is how much messaging is repeated, whether that's by your sales team, customers, media/articles, etc. So I tend to bring testing and validation into the messaging process pretty early and collect that qualitative data. I'm a big fan of going directly to account reps and sales engineers, getting their honest feedback, and seeing if it's something they'd actually repeat to customers. If you're able, go directly to customers and see if these are the words they'd use, and whether these are the leading benefits to them. Or go to industry analysts to see if it matches what they've heard from their clients. Because I think getting this in-depth feedback is so valuable, I actually don't over-rotate on frameworks or process here.
In terms of how effective a launch was and the messaging's impact: that depends on the goals of the launch. You can run A/B tests on web, emails, etc., to see if the new messaging is creating more views, clicks, and so on. One of my favorites is to work with SDRs to try out new emails and call scripts with the messaging and see if it increases response rates or meetings secured.
- A/B testing emails and landing pages
- Using an agency to survey audiences for large brand campaigns
- Direct user interviews
- Interviewing your frontline teams
- Competitive research
Sometimes you don't have time to do all of the above and you have to go with your gut and then evolve it over time as you get to know your product, users, and market better.
To measure market messaging, I focus on resonance, clarity, and differentiation. Whenever possible, I use a mix of qual and quant strategies to get data, but in my experience, even a handful of qualitative interviews with target personas provides a treasure trove of insight. What matters most is getting outside the metaphorical building to really pressure test what can often be strongly-held internal opinions.
Measuring sales messaging is also best done as a mix of qual and quant. I'm fortunate to work with a team of thoughtful, strategic sellers who are generous in bringing back feedback from the field. We'll often trade insights from mini-experiments over Slack or in team meetings, or, when we have a larger rollout (like a new pitch deck), we build in dedicated feedback sessions. I also carve out time to listen to Gong calls to see directly a) how messaging is delivered and b) how it's received. The questions that prospects ask, and where they lean in vs. tune out, speak volumes in terms of what's resonating, what's clear, and whether it feels differentiated.
These qual efforts help provide real-time insight. Ultimately, of course, we measure the impact new messaging has on sales stage progression. We have longer, enterprise sales cycles, so this takes time, and of course there are always multiple factors playing into win rate. That said, it's the most important metric we're all looking to impact, so it has to be central.
There are a few:
- The AIDA (Attention, Interest, Desire, Action) framework: measures the success of the messaging in capturing the audience's attention, generating interest, creating desire for the product, and driving them to take action.
- The FAB (Features, Advantages, Benefits) framework: evaluates the messaging based on the features of the product, the advantages it offers over competitors, and the benefits it provides to the customer.
- The PAS (Problem, Agitation, Solution) framework: assesses the messaging by identifying the problem the product solves, agitating the pain points and challenges of the audience, and offering a solution through the product.
- The USP (Unique Selling Proposition) framework: measures the effectiveness of the messaging in highlighting the unique value and benefits of the product, and setting it apart from competitors.
- The HERO (Hope, Empathy, Relevance, Outcome) framework: evaluates the messaging based on its ability to inspire hope, connect with the audience's emotions, be relevant to their needs and goals, and provide a desired outcome.