More than almost anything else, the ability to test, learn, and iterate on marketing messages through low-stakes digital experiments has captured the imaginations of marketers and entrepreneurs over the last decade.
But for most people, the imagination is exactly where all those testing ideas stay. Reporting is clunky. Experiments are poorly designed. It’s nearly impossible to operationalize quantitative learnings into technical processes, let alone the creative process. Cross-channel experimentation is largely in its infancy for most marketing technology vendors, and digital marketing triggers have historically been too limited to get much utility out of them. Following an audience (or suppressing an audience) across those channels remains just a dream for most marketers.
Welcome to part two of “Creating a Culture of Experimentation with Digital Marketing Triggers,” where we’ve been outlining core capabilities and best practices that are necessary to design a team and workflow that favor testing and learning over blind certainty. If you missed part one, click here to catch up. If you’ve already read part one, keep on keeping on!
Before launching into any experiment, consider the unofficial commandments of marketing experimentation:
The 10 Commandments of Marketing Experimentation
- Have a hypothesis.
- Set a goal.
- Stick to a process.
- Test one thing at a time.
- Randomize your groups.
- Have a reasonable test population.
- Set an end-by date.
- Don’t leave them hanging.
- Don’t be afraid of the results.
- Celebrate failure.
Yesterday, we covered the first four, which were mostly about internal considerations. Today, we're diving into the last six, which largely focus on the test audience.
Randomize Your Groups
If you are testing content variables, test groups should be randomly and evenly assigned.
This doesn’t mean you can’t target specific segments for your messaging (e.g., experimenting with different types of content in your abandonment email on your most loyal customers). It means that all members across each group should look — at least statistically — the same.
The best way to get that is through randomized groupings of people within that broader population. To save time and effort, and to avoid bias, teams should make sure they can automatically allocate experiment groups, especially for triggered campaigns.
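As a minimal sketch of automatic group allocation (all names here are hypothetical, not from any particular vendor's API), one common approach for triggered campaigns is deterministic hashing: hashing the user ID together with the experiment name yields an effectively random but stable assignment, so the same user always lands in the same group without any extra state to store.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically assign a user to an experiment group.

    Hashing (experiment + user_id) spreads users evenly and
    pseudo-randomly across variants, and the same inputs always
    produce the same assignment, which matters for triggered
    campaigns that may fire for the same user more than once.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment is a pure function of the IDs, any system that fires the trigger (email, push, on-site) can compute it independently and stay consistent across channels.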
Have a Reasonable Test Population
It is almost always better to run your experiments on a relatively small subsegment of the total addressable population.
If you A/B test two messages against the total population and one is accidentally offensive, you have just offended a full half of your customers. On the other hand, if one variant turns out to be a runaway winner, half of your audience has already received the weaker message, and you've wasted the chance to send your best message to everyone.
Statistical significance is what tells you whether an observed difference between groups is real or just the natural variation you would see even in an A/A test, and reaching it requires a sufficiently large population in each group.
While the populations all have to be of a reasonable size, they don't all have to be the same size. You can create a smaller holdout group, and if you're running multivariate experiments (MVEs), you might allocate smaller populations to the single-variable baselines than to the larger MVE groups.
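To make "reasonable size" concrete, here is a back-of-the-envelope sample size calculation for a two-proportion test, using the standard normal-approximation formula (the function name and defaults are illustrative; dedicated testing tools will do this for you):

```python
from statistics import NormalDist

def sample_size_per_group(p_base: float, mde: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group for a two-proportion z-test.

    p_base: baseline conversion rate (e.g. 0.05 for 5%)
    mde:    minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    alpha:  two-sided false-positive rate; power: chance of detecting mde
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_var = p_base + mde
    pooled = (p_base + p_var) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p_base * (1 - p_base)
                             + p_var * (1 - p_var)) ** 0.5) ** 2
    return int(numerator / mde ** 2) + 1

# Detecting a lift from 5% to 6% conversion needs roughly
# 8,000+ people per group at the default alpha and power.
```

Note how quickly the requirement shrinks as the detectable effect grows: doubling the minimum detectable effect cuts the required sample to roughly a quarter, which is why small holdout groups can still be statistically meaningful for large effects.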
Set an End-By Date
Experiments can go on forever, but in a test-and-learn environment, we want to make sure that we are rapidly iterating and demonstrating improvement against our objectives. This is especially true with rolling abandonment initiatives based on customer triggers. We can’t expect to do a massive send to all abandoned carts at once; instead, we have to wait for a sufficient population to pass through. (Hence the reliance on digital marketing triggers in this and other use cases.)
When it comes to setting a reasonable timeline, we can learn from historical data how long it takes a certain number of shoppers on average to abandon their cart and/or take the action we are trying to drive. That will help identify when we want to look at the results.
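That timeline estimate is simple arithmetic once you know your required sample size and your historical throughput. As a sketch (names hypothetical):

```python
import math

def days_to_reach(sample_needed: int, avg_daily_abandoners: float,
                  num_groups: int = 2) -> int:
    """Estimate how long a triggered experiment must run before
    every group reaches the required sample size, given the
    historical average number of qualifying events per day."""
    total_needed = sample_needed * num_groups
    return math.ceil(total_needed / avg_daily_abandoners)

# e.g. 8,000 people per group across two groups, with ~1,200
# cart abandoners per day:
# days_to_reach(8000, 1200) -> 14
```

The estimate is only as good as the historical average, so it's worth padding it for seasonality before committing to a review date.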
If we wait too long, we either miss the opportunity to deploy a winning approach to a wider audience or burn time we could have spent testing something better. If we stop too soon, we might miss the full picture and throw out a potentially high-impact approach.
Don’t Leave Them Hanging
Abandonment shouldn’t be a one-and-done communication. While single event-triggered communications can certainly be effective (and easy to test), there’s an even bigger opportunity in testing across journeys with digital marketing triggers.
In testing these journeys, we have the opportunity to experiment with things like frequency, story order, channel impact, micro-conversion, calls-to-action, and more.
Pulling these all together into one story allows us to find what the highest-impact, cohesive conversation might be to capture attention, engage, and inspire abandoned customers back to the target goal.
Don’t Be Afraid of the Results
“Business as usual” content can be the last thing some marketing teams want to test. Testing your upcoming holiday campaign has much lower emotional stakes than testing that big-lift welcome series your team built three years ago and very much forgot about. Deep down, it's scary to find out you've been wasting time, money, or both.
Take comfort, however: just because a campaign isn't effective now doesn't mean it never was. Times and customer expectations have changed, and we need to be vigilant about keeping up and serving customers the best experience now, not the one from a year ago. Embrace the change.
Celebrate Failure
No team wants to fail. However, when it comes to making the most out of experimentation, failure should not just be anticipated, but celebrated. Why? Failure means you tried something, learned something, and got a not-so-great idea out of the way.
The most innovative brands understand that marketing is driven by many factors both within and outside of its control. The only way to navigate the complex waters of customer behaviors and preferences (especially when it comes to abandonment) is to throw yourself in and try things out.
As a marketing leader, with buy-in from the top, you have an opportunity to create a culture of experimentation. Celebrating failures and successes tells your team that they are supported to take calculated risks and not fear for their jobs if something doesn’t work.
The excitement and frustration of trial and error build a sense of intellectual (and emotional!) novelty that creates a stronger team bond. Some marketing leaders go so far as to give awards to the biggest “failed” experiment. Others have created a “Wall of Shame” (tongue-in-cheek, of course!) to highlight what didn't work, thus ingraining the lesson into the marketing team as a community, beyond just institutional knowledge.
If you missed part one of this series, click here to catch up and start getting ideas for your digital marketing triggers use cases. If you enjoyed these pieces, then you'll love our white paper on the success that's possible when marketers own their data strategy: How Self-Serve Segmentation Led to a 300x Boost in Engagement