
Curiosity Drives Optimisation

Updated: May 30, 2023

It is intrinsic in all of us to be curious - to want to see what happens if we make a change to something, big or small. As Walt Disney nicely put it - “We keep moving forward, opening new doors, and doing new things because we’re curious and curiosity keeps leading us down new paths.”

This is also how businesses evolve.

By being curious about their products and services and how consumers both purchase and use them.

Asking simply: could we do this differently, better, and before anyone else?

Tesla epitomises this perfectly - the curiosity of how to create an electric vehicle set Elon Musk on a phenomenal journey and a new path for electric transport, not limited to road vehicles.

But this ambitious leap is not the only way to drive change in a business. Small changes can also have big impacts.

For instance, improving a website conversion metric, or an email engagement rate can often have significant and positive commercial consequences.

But the thought process starts at the same point, with a question or hypothesis:

How could we…?

What would happen if we changed…?

Will A work better than B?

This is accompanied by the willingness to test in a controlled way, evaluate the results and predict the impact at a larger scale before solidifying any change, as opposed to rushing headlong into a change based on gut feel or desire.

We have worked with many clients over the years to both coach and implement optimisation strategies, enabling businesses to evolve through evidence-based decisions. In this blog, we cover some core principles of a robust test, learn and optimise approach.

The core principles are set out in the four stages below.

Remember it is a cyclical process, not a single destination: once you have learnt and rolled out that learning, you can continue to test and improve against it.

We’ll use marketing campaigns as the use case, but the stages apply to any use case from store design to the fulfilment of a product or service.

1) Start with a hypothesis

Be clear on what it is you want to know and test.

In marketing, you may look at:

  • The audience (the targeting of your marketing campaigns)

  • The content you are serving to the audience

  • The channel/s in which you are serving the content

  • The timing of your message

These components work together to optimise effectiveness, and a testing strategy can look at one or all of these components.

You may have a “banker” in your arsenal: your best-performing combination of audience, content, channel and timing. This is the combination you want to improve on, and it can form the basis of your control.

Depending on where you are on your optimisation journey, you may want to focus initially on getting the basics right, e.g. finding the right combination of message for different customer groups to drive engagement with your marketing.

Looking at the content, you may start with the subject line. For instance: does a personalised email subject line drive higher engagement than a non-personalised, generic one?

2) Design a test scenario

Against your hypothesis, start to design your test.

For instance: who are you including in your test and why? What will they receive? How will they consume the content? How big does your test cell need to be?

Try not to overcomplicate your test, or you won’t have significant results on which to base an informed decision.

We often work with clients to design a testing tree, to consider every aspect of the test ambition before going further into the specific components, which we then capture in a table format.

A table with the following headers is useful to accompany the testing tree when designing a test, as it lets you build out your variables logically:

  • Audience

  • Selection Criteria

  • Channel

  • Volume Contacts (est.)

  • Offer

  • Content

  • Test Split

  • Cell Code

  • Test Cell Volume
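As a sketch, the same table can be kept as structured data rather than (or alongside) a spreadsheet, which makes it easy to validate and share. Every audience, offer and cell code below is hypothetical, purely to illustrate the shape:

```python
import csv
import io

# Headers from the test-design table above
HEADERS = ["Audience", "Selection Criteria", "Channel", "Volume Contacts (est.)",
           "Offer", "Content", "Test Split", "Cell Code", "Test Cell Volume"]

# Hypothetical rows: one personalised cell vs one generic control cell
design = [
    dict(zip(HEADERS, ["Lapsed 12m", "No purchase in last 12 months", "Email",
                       40000, "10% off", "Personalised subject line",
                       "50%", "A1", 20000])),
    dict(zip(HEADERS, ["Lapsed 12m", "No purchase in last 12 months", "Email",
                       40000, "10% off", "Generic subject line",
                       "50%", "A2", 20000])),
]

# Export the design so the wider team can review it
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=HEADERS)
writer.writeheader()
writer.writerows(design)
```

Keeping the design as data also lets you check simple invariants, such as the test cell volumes summing back to the estimated contact volume.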

We frequently find that some exploratory data analysis helps identify suitable variables for testing and gives the necessary information on customer volumes and past performance which are key in test design. For instance, you don’t want to design a test for it only to be relevant for a handful of recipients.

You’ll also need to ensure you have sufficient volumes in your test groups, so in the same spreadsheet add columns for expected performance, e.g. open, click and conversion metrics. You can then use significance tests (there are lots of free calculators online) to check your test cells are big enough to carry the test and give you the evidence to decide which is the best-performing asset.
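The free online calculators mentioned above typically do the same arithmetic as this minimal sketch, which assumes a two-sided, two-proportion test under the standard normal approximation (the function name and example rates are ours, not a prescription):

```python
import math
from statistics import NormalDist

def cell_size_needed(p_control, p_variant, alpha=0.05, power=0.8):
    """Approximate per-cell volume for a two-proportion test (normal approx.)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_control + p_variant) / 2
    a = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
    b = z_beta * math.sqrt(p_control * (1 - p_control)
                           + p_variant * (1 - p_variant))
    return math.ceil((a + b) ** 2 / (p_control - p_variant) ** 2)
```

For example, detecting an uplift from a 20% to a 22% open rate needs several thousand recipients per cell, whereas a larger expected uplift needs far fewer — which is exactly why small expected differences demand big test cells.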

3) Deploy the test

Where possible, the external factors need to be identical, or at least comparable. In an email broadcast this is easily achieved by scheduling every cell to the same broadcast date and time.

In other scenarios such as testing a new store design in a retail outlet, the test environment needs to be as close as possible to the control but can rarely be identical. You will have accounted for this in your test design.

Ensure you are set up to measure the results of the test before you deploy. Add any tracking codes (cell codes on the data, UTM codes on email links) to your links, unique phone numbers or reference numbers in your content - tailored to what you are trying to test and therefore measure. This makes your evaluation process so much easier.

4) Evaluate the test

The evaluation of the test campaigns should tell you what worked best in the scenario you designed. These learnings should form the basis of your rollout, and of future test controls as you look to improve on your next “banker”.

We always recommend using significance testing tools to be confident in the rollout performance. If the difference between cells is not statistically significant, you may not get the same result in a repeated test.
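The significance check itself can be as simple as a two-proportion z-test, which most free tools implement. A minimal sketch, assuming converter counts and cell sizes for two cells (the function name and example numbers are illustrative):

```python
import math
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (normal approx.)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative: 220/1000 conversions vs 180/1000 conversions
p = two_proportion_p_value(220, 1000, 180, 1000)
significant = p < 0.05  # significant at the 95% confidence level
```

If the p-value is above your chosen threshold, treat the result as inconclusive rather than a winner: roll the learning forward as a new hypothesis, not as a rollout decision.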

Embedding an optimisation mindset

Embedding a test, learn and optimise strategy in your business is not only about test design and execution.

As well as the practical steps to testing and optimisation, there is often a cultural change needed in a business to ensure investment of time and money into a testing strategy.

We touch on the need for cultural adoption in our blog and we also offer business coaching sessions on subjects including test and learn strategies.

The nirvana of test, learn and optimise is an ongoing programme of learning, never standing still. This is harder to adopt than isolated test scenarios, but it delivers significant rewards in improved performance and commercial gains.

If you are looking for guidance in designing or embedding a test, learn and optimise strategy in your business, do get in touch. Just fill out the form on our website or email me directly at

