Nicky Finlay

Test before you invest - Using AI and Machine Learning where it will deliver an impact

There is a lot of noise about AI and the problems it can solve. But it can come with a high price tag - for the technology, the services and the skilled people needed to build the analytics and get the best out of the functionality.


But how do you know if the solution you want to build is actually going to have the impact you are looking for?


This is where test and learn can play a very valuable role.


Simulating a solution to test its potential impact can be a simpler, lower-cost way to measure the value of an idea before embarking on a more costly automated build using machine learning techniques and models.


We delivered just this with one of our clients.


In designing a customer journey, we wanted to give customers relevant, personalised recommendations based on their past experience with the brand and on the behaviour patterns of customers like them.


Creating a scalable solution that could automate a personalised output to tens of thousands of customers at key stages of the customer journey was uncharted territory for our client, and there was no evidence that the investment in such functionality would deliver the positive impact sought.


So we created an experiment.


Using customer insight and product knowledge, we built a simulated recommendation model, drawing on key behaviour patterns identified in our analysis and manually defining the next best actions we wanted customers to take. Whilst not as sophisticated as a machine learning model, it provided a rule structure that could be coded within the marketing platform to deliver relevant content to customers who met the entry criteria.
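
For illustration, here is a minimal sketch of what such a rule structure can look like in code. The fields, thresholds and recommendation names are hypothetical examples, not the rules we built for our client.

# A minimal sketch of a rule-based "next best action" structure.
# The fields, thresholds and recommendation names below are hypothetical
# examples, not the actual rules used in the client engagement.

def next_best_action(customer):
    """Return a recommendation from simple, manually defined rules."""
    # Lapsed customers get a win-back offer
    if customer["days_since_last_purchase"] > 180:
        return "win_back_offer"
    # Frequent buyers of a key category get an upgrade recommendation
    if customer["category_a_purchases"] >= 3:
        return "category_a_upgrade"
    # Recent first-time buyers get a welcome cross-sell
    if customer["total_purchases"] == 1:
        return "welcome_cross_sell"
    # Everyone else falls through to a generic recommendation
    return "generic_recommendation"

print(next_best_action({"days_since_last_purchase": 30,
                        "category_a_purchases": 4,
                        "total_purchases": 12}))  # -> category_a_upgrade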


As this was a simulated experiment, it wasn’t practical to predict every combination of potential recommendations (there were over 1,000 unique outputs), so we created a test environment that let us deliver relevant recommendations to a significant percentage of the base and measure the results. We split the base as follows (a simple illustration of such a split is sketched after the list):


  • ⅓ of the test cohort received an email driven by the recommendation model output

  • ⅓ of the cohort received a generic, non-personalised recommendation

  • ⅓ were held out as a control cell.
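
The sketch below shows one simple way a three-way split like this can be produced. It assumes a plain list of customer IDs and a made-up cohort size rather than our client's actual marketing platform set-up.

# A sketch of randomly splitting a cohort into three equal cells.
# The cohort size is hypothetical; the real split was configured in the
# client's marketing platform.
import random

random.seed(42)  # fixed seed so the split is reproducible

customer_ids = list(range(1, 30001))  # hypothetical cohort of 30,000 customers
random.shuffle(customer_ids)

third = len(customer_ids) // 3
cells = {
    "personalised": customer_ids[:third],       # recommendation model output
    "generic": customer_ids[third:2 * third],   # non-personalised recommendation
    "control": customer_ids[2 * third:],        # held out, no recommendation email
}

for name, ids in cells.items():
    print(name, len(ids))  # roughly 10,000 customers in each cell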


We ran the test for a few months to build enough evidence and statistically robust results to identify the impact a recommendation engine would have on downstream customer behaviour.
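
As an illustration of the kind of check that sits behind "statistically robust", here is a sketch of a two-proportion z-test comparing response rates between two cells. The counts are made up for illustration, not the results of the actual experiment.

# A sketch of a two-proportion z-test comparing response rates between
# the personalised cell and the control cell. The counts below are
# illustrative only, not the results of the actual experiment.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical figures: 800 responders out of 10,000 in the personalised
# cell versus 600 out of 10,000 in the control cell.
z, p = two_proportion_z_test(800, 10000, 600, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")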

And the results more than validated the hypothesis.


Customers receiving the personalised recommendations significantly outperformed both those receiving the generic version and the control cell.


Modelling the potential impact gave us the evidence to build the business case for investment in a machine learning model that creates individual recommendations and delivers personalised communications at scale. The machine learning model has been successfully implemented and since scaled across different parts of the business, and it continues to produce significantly positive results.


So if you think an AI or machine learning model is key to unlocking scalable analytical and predictive outputs, but you are unsure how to secure the investment needed, consider setting up an experiment to measure the potential impact and bolster your business case.


For more information get in touch at hello@wearemojo.co.uk


