It’s been more than 10 years since experimentation as a digital business practice hit our shores in APAC. Since then, the practice of A/B testing, experimentation and user research has evolved drastically. What started out as simple button colour or copy testing has grown into a revenue-critical enterprise. Leading digital brands release hundreds of tests per year across complex organisational structures, testing everything from UI layouts and personalisation to merchandising and the validation of products and features.
There’s no doubt that companies with the right digital experimentation practices realise more revenue opportunities. This is particularly true as Australia begins to rise from the ashes of COVID-19.
Now is the time to review your experimentation practices and determine the best pathway to optimising all your opportunities. At The Lumery, we often see experimentation programs fall short of their full potential when the teams are not truly fit for purpose.
Over the years, we’ve worked with clients across the spectrum of experimentation practices, including airlines, insurers, publishers, retail and ecommerce. Through this, we have identified three key stages of maturity for an experimentation program, which we define as walking, running and flying.
Stage 1 — Walking
The early stages of an experimentation program are usually led by a CRO (conversion rate optimisation) agency or a digital marketing practitioner. Their core focus is to increase onsite conversion rate, whether that be quote completions, ecommerce transactions or product subscriptions. This starting point is a centralised operational model where experimentation is contained to a small group that nurtures the program and slowly begins sharing results and learnings with wider parts of the business.
This phase focuses on generating fast results with minimal business friction, whilst proving the revenue value of an experimentation program. Usually the complexity of testing within a CRO program is low. This stage focuses on lightweight tactical changes, like adding social proof or tweaking copy to generate quick wins through 1–4 tests per month.
A healthy CRO program should clearly demonstrate the value of experimentation. The right partner will help you prove that value and gain senior-level support.
What does it look like? Optimisation of existing traffic, lightweight experiential testing, and testing strategies focused on a sales or buying funnel.
What’s reported back to the business? Conversion rate, revenue uplift and average order value.
How do you know you’re progressing? A steady testing velocity (1–3 tests per month), winning tests at least 50% of the time, and the institution of a regular forum for sharing insights and engaging the wider business.
Stage 2 — Running
Moving from a CRO-level practice towards deeper business adoption and integration into digital and product streams of work is a sure sign of progress and maturity. At this stage there should be a small team established, consisting of an experimentation professional, data analyst, developer, QA tester and project manager. These experimentation teams work predominantly in a centralised manner, utilising a combination of CRO tactics and design thinking principles, such as the double diamond framework, to support marketing and digital teams, with an aim to improve campaigns and iterate on product releases.
An experimentation team that is ‘running’ should be the central point of testing within the business. They are responsible for generating the majority of the ideas and defining test hypotheses. Velocity should pick up too, scaling out to 5–10 tests per month. These tests should increase in complexity, including split testing, multivariate testing (MVT) and creating holdout groups for onsite personalisation initiatives. The focus of a team in this phase is to get the most out of all media spend through optimised onsite conversion.
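One common way to create the holdout groups mentioned above is deterministic hashing: hash a user ID together with the experiment name so the same user always lands in the same bucket, with no assignment table to maintain. The sketch below is illustrative, not a prescribed implementation; the function name, bucket labels and the 10% holdout share are our own assumptions.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, holdout_pct: float = 0.10) -> str:
    """Deterministically assign a user to 'holdout' or 'test'.

    Hashing user_id with the experiment name yields a stable,
    roughly uniform value in [0, 1); the same user always gets
    the same bucket for a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Interpret the first 8 hex characters as an integer, scaled to [0, 1]
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "holdout" if fraction < holdout_pct else "test"

# The assignment is stable across calls and across services
print(assign_bucket("user-123", "onsite-personalisation"))
```

Because the split is keyed on the experiment name, a user held out of one personalisation initiative can still be eligible for others.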
What does it look like? Expert tactical optimisations to boost the performance of your product or marketing campaign, with a focus on preliminary cross-channel experimentation and increasing test velocity.
What’s reported back to the business? Testing velocity, feature utilisation, funnel penetration, revenue outcomes, testing win rate, causal insights into consumer behaviour and areas for product improvement.
How do you know you’re progressing? Increased testing velocity of 5–10 tests per month, enrichment of your program through data analytics and UX resources, and establishment of developer resources such as code repositories, along with standardisation of QA practices. This means regularly supporting product squads or marketing teams with experimentation through both an ideation and an execution lens.
Stage 3 — Flying
This stage leads into the bleeding edge of digital experimentation as your practice starts to underpin almost everything that touches your digital customer experience.
Seen as a business-critical experimentation support stream, the focus shifts to coaching, training and enabling the wider business to drive strong data-driven experimentation. This means creating a test-and-learn culture that influences strategic direction, product roadmaps, innovation and validation of the overall customer journey.
Experimentation professionals who are “flying” operate within advanced agile/scrum frameworks for digital teams, such as SAFe, and collaborate with data scientists, UX researchers, business analysts and product teams to drive complex change through experimentation. Teams of this calibre enable 20–50 tests per month in a decentralised manner: not all tests start and finish with the core experimentation team; rather, product managers, business owners and channel owners have the ability and resources to execute on their own hypotheses. These experimenters focus on enabling personalisation across channels through measurability, bringing complex customer scenarios into the program by facilitating integrations between data and platforms to activate cross-channel testing. By now, a team that is ‘flying’ has also figured out how to socialise key findings and insights from tests to drive customer centricity across product teams and management.
It is also important to note that program maturity is not measured by velocity alone. Some digital properties don’t have the traffic to run a high number of tests. In this case, a better measure of maturity is win rate. If your win rate sits above 70%, chances are you might not be taking big enough risks. A win rate closer to 50% indicates more data-driven decisioning, larger-scope tests and product testing, as opposed to small feature or merchandising tests.
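A caveat worth keeping in mind when comparing win rates against these thresholds: on a small number of tests, an observed win rate is a noisy estimate. The sketch below (our own illustration, using a standard normal-approximation confidence interval) shows that with only 20 tests, an observed 70% win rate is barely distinguishable from 50%.

```python
import math

def win_rate_interval(wins: int, tests: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a win rate,
    using the normal approximation to the binomial."""
    p = wins / tests
    margin = z * math.sqrt(p * (1 - p) / tests)
    return max(0.0, p - margin), min(1.0, p + margin)

# 14 wins out of 20 tests: a 70% observed win rate whose
# interval still reaches down to roughly 50%
low, high = win_rate_interval(14, 20)
print(f"{low:.2f} - {high:.2f}")  # prints "0.50 - 0.90"
```

In other words, before concluding a program is too conservative (or too reckless), check that the win rate is measured over enough tests for the difference to be meaningful.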
What does it look like? Long-tail strategic thinking; equipping and training product management teams in hypothesis generation, development and QA practices; the ability to scale; and a decentralised experimentation program.
What’s reported back to the business? Testing velocity, win rate, revenue forecasting and experimentation adoption by product and channel teams.
So which stage do you currently fall into? Are you moving from walking to running, or are you about to take flight by thinking about how experimentation can influence decision-making across all levels of your business?