
AI winter isn’t coming – it’s time to embrace AI

PwC predicts that artificial intelligence (AI) will add $16trn to the global economy by 2030; McKinsey agrees on the magnitude but puts it closer to $13trn. These figures must seem like huge overestimates to any marketer who has experienced AI as a technology that many professionals can’t get working in their organisations, or for which a use case is hard to identify.

 

The most recent AI buzz has been around for a while – long enough this time to see significant improvements from the power of cloud and parallel data processing before the hype cycle starts going south. We’ve all seen a lot of talk about ‘machine learning’ (ML) and ‘AI-enabled’ products, but under the hood nothing remarkably new is happening. In this blog, when we talk about AI we are really referring to ML.

 

Some feel a renewed AI winter is on the horizon, most recently both despite and because of the impact of Covid-19. On the positive side, machine learning has helped researchers access more data to build models that predict the coronavirus’s prevalence and spread in a way no other analytics method has managed before. Other machine learning methods have been deployed to scan research documents about the features of the disease, saving researchers valuable time.

 

On the flip side, coronavirus has also caused AI massive issues by creating situations never ‘seen’ before in our historical data. Just look at the failings of the algorithms during the toilet paper and cupboard-stocking panic, which saw automated pricing models go horribly awry: a normal $10 bag of rice reached the heady heights of $59.99 on Amazon early in the crisis, according to Amazon price tracker Keepa. With the A-level results scandal in the UK also hitting the headlines, people are warier than ever of models, data and how they are being deployed.

 

So is AI the saviour that consultants claim it is, or the hype to be ignored until the next data trend comes our way? Having worked in the field of analytics for the last 20 years, now more than ever I see the opportunities for AI to unlock value – so you may think I’m biased. However, I’d argue that AI stands or falls by the use cases a brand chooses to experiment with. Common AI myths and misunderstandings abound, too, creating organisational hesitancy about dipping a toe in the machine learning waters.

 

We all encounter AI in some form every week – whether we notice it (proving we can spot traffic lights in a grid picture to confirm we are human) or whether it passes us by (in the production of the food that we put on the table).

 

But beyond identifying a cat versus a dog in a picture to prove we aren’t robots, or NLP for a chatbot, where are the use-case opportunities, and why aren’t we as brands capitalising on them? AI initiatives, like many analytics and data projects, aren’t getting traction. Let me explode some of the myths we come across.

 

Myth #1: AI is not helping us make sense of all of our data

Truth: AI needs a problem to solve first

 

  • Saying AI is not making sense of all of our data is the equivalent of saying that because I own a dictionary, I should write the best poetry. AI can only help when you know what you want to ask of it – it’s a solution to a specific question. It might be a new way to answer a business question you are already asking (e.g. assessing credit risk), or it might exploit new data to answer that question differently (e.g. can we answer the credit-risk question better using speech and image data?). In some instances, data is answering questions that have never been asked before – for example, object detection for an autonomous vehicle – but these situations are more novel and often come as a result of a refocus of wider business strategy. Where you get the power from AI (and thus buy-in) is when it automates one discrete task at massive scale. AI doesn’t exist to answer all questions; its strength is in answering one question very well and very efficiently. At Merkle, framing the question is fundamental to each of our analytics engagements. We talk about our value chain a lot, as seen in this diagram:
[Diagram: the Merkle value chain]

 

When we work with clients, we always look to establish where the true value is created in an organisation and focus on that first. Next, we focus on the actions and levers we can pull to create that value. We then consider which insights you need to reliably move those levers, and lastly we work out what data is needed to generate those insights. It’s a different way of thinking, but it ensures that everything we design has the end user, and the value it creates, in mind. In a similar vein, pragmatic, usable solutions are the basis for making AI work at scale.

 

Myth #2: To adopt AI and ML, a business needs to be a tech firm, or an organisation that has loads of data available

Truth: ML can work on ‘small data’

 

  • Obviously, ML can make sense of lots and lots of data, but ML techniques can be applied to small data too. Many deep learning techniques can find the less obvious relationships in data, so not having lots of it need not be a limiting factor. Ways around a perceived lack of data include putting more value on certain parts of it (weighting our observations), bagging or boosting our models, or even generating synthetic data with Generative Adversarial Networks (GANs – the algorithm you may have seen creating deepfakes and allowing pictures to ‘talk’); a minimal sketch of two of these tactics follows this bullet. There are options for every organisation to use some form of AI without needing to be a business like Amazon. Smaller organisations actually have a lot of great data; it’s often just in harder-to-reach places, and the usual sticking point is getting it somewhere the business can act on it. An external eye often helps here – working with our team of engineers, we can help you set up a ‘turnkey’ cloud analytics platform, fed from the appropriate sources, to get you started on your AI journey without costing millions.
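
To make the ‘small data’ tactics above concrete, here is a minimal sketch – assuming Python with scikit-learn and a toy synthetic dataset – of weighting observations and bagging a model. The data, weights and settings are illustrative only, not a recommended configuration.

```python
# Two 'small data' tactics on a deliberately tiny dataset (200 rows):
# weighting observations and bagging. Everything here is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=42)

# Tactic 1: put more value on certain observations via sample weights,
# e.g. up-weighting the rarer or more valuable class.
weights = np.where(y == 1, 2.0, 1.0)
weighted_model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)

# Tactic 2: bagging -- fit many models on bootstrap resamples and average
# them, squeezing more stability out of the same small set of rows.
# (On scikit-learn < 1.2, use base_estimator= instead of estimator=.)
bagged_model = BaggingClassifier(
    estimator=LogisticRegression(max_iter=1000),
    n_estimators=50,
    random_state=42,
)
print("bagged CV accuracy:", cross_val_score(bagged_model, X, y, cv=5).mean())
```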

 

Myth #3: You must already have a top-down culture that facilitates data-driven understanding and decision-making

Truth: AI can be a discrete solution

 

  • You don’t need a wholesale culture change to be more like Facebook, Spotify or Airbnb to deploy AI successfully. AI is a point solution: the CEO doesn’t need a company restructure to incorporate a facial recognition application into the customer UX. What you do need is a willingness to exploit what AI can offer, and to follow a simple plan: begin with an agreed, well-framed use case, then get the right data into the appropriate data environments (note that you don’t need all of your data available from day one). Next, ensure you have the skills to analyse the data and the skills to get the model into production. Alongside this, create a strong plan for ethics and compliance (more on this in myth #5).
  • Many organisations already use data, and ‘rules’ about that data, to help them make decisions – from triggers on caller wait times to credit-risk decisions for a bank. AI is just a version of this on steroids, as the toy illustration after this list shows. We can guide you through the AI landscape, giving you pragmatic steps that don’t require wholesale organisational change.
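
As a toy illustration of ‘rules on steroids’, the sketch below contrasts a hand-written wait-time trigger with a model that learns the equivalent threshold from historical outcomes. All data and thresholds here are hypothetical.

```python
# A hand-written business rule versus a model that learns the same kind
# of decision from data. Wait times and outcomes below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# The classic approach: a rule with a hard-coded threshold.
def rule_based_escalation(wait_minutes: float) -> bool:
    """Escalate a call once the customer has waited more than 5 minutes."""
    return wait_minutes > 5.0

# The ML approach: learn the threshold from historical outcomes instead
# of hard-coding it -- the same idea as the rule, 'on steroids'.
wait_minutes = np.array([[1.0], [2.5], [4.0], [6.0], [7.5], [9.0]])
was_escalated = np.array([0, 0, 0, 1, 1, 1])  # toy historical labels

model = DecisionTreeClassifier(max_depth=1).fit(wait_minutes, was_escalated)
print(rule_based_escalation(5.5), model.predict([[5.5]])[0])
```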

 

Myth #4: AI is too often an academic vanity project for a business; most projects never make it to production

Truth: AI built with the end user in mind by cross-functional teams offers the best chance of success

 

  • In a lot of companies, data science teams have sprung up offering strong technical skills – great for complex models, but poor for business and end-user understanding. In the race for the greatest technical skills, some of the softer communication and collaboration skills have been overlooked. The saying that the best solutions are designed with the end in mind really does hold true. It’s vital to understand both how end users are going to rely on the automated task at scale, and how the live operational data that will flow through the models in production could differ from the data the models were trained on. Otherwise you are on a path to obsolescence before your new product or process has had its first run around the park.
  • We also see a common stumbling block in the lack of understanding around ‘production-ready’ AI systems. Most data scientists don’t have an understanding of the governance needed around full AI systems. To succeed, it’s vital to take the governance principles of product software (DevOps) and apply them to AI systems driven by ML models. Machine learning operations (MLOps) is the new discipline that bridges this gap – and it is key to getting models live, keeping data flowing through the systems and delivering outputs to end users through their endpoint systems in the right way, with the appropriate governance and controls. A simple example of one such safeguard – a drift check between training and live data – is sketched after this list.
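
As one concrete example of an MLOps safeguard, here is a minimal sketch – assuming Python with NumPy and SciPy – of a drift check that compares a feature’s live distribution against its training distribution using a two-sample Kolmogorov–Smirnov test. The shifted ‘live’ sample and the 0.05 alert threshold are hypothetical; in practice the threshold would be tuned per feature and use case.

```python
# A basic drift check: does a feature seen in production still look like
# the feature the model was trained on?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training data
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)      # drifted in prod

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.05:  # hypothetical alert threshold
    print(f"Drift alert: live data differs from training (KS={statistic:.3f})")
else:
    print("No significant drift detected")
```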

 

Myth #5: We can’t trust AI

Truth: We can trust AI, but we need to find the appropriate use case, build in appropriate controls, educate senior management on the potential risks and establish ethical frameworks before rolling it out

 

  • Systems need to be built with the human in the loop. But keeping the human in the loop doesn’t mean humans are essentially guard rails for AI (the driver in the driverless car) – the human in the loop ensures AI automates the menial task while leaving the strategy and the more nuanced inference to the human. For instance, we shouldn’t try to programme an AI solution to replace a radiographer; rather, we should train the AI to spot the easy cases really well so that radiographers can spend more time on the cases that are harder to interpret. A minimal sketch of this kind of confidence-based routing follows this list.
  • AI can be an artificial idiot savant that gets things very wrong when faced with unexpected input. There is growing concern about the black-box nature of models and how ‘attacks’ on systems can be orchestrated – consider the face tape designed to hinder facial recognition, or chatbots responding inappropriately after ‘learning’ from interactions, as happened with Microsoft Tay in 2016. Time spent designing systems with a full appraisal of the outcomes, assessing what could happen if things go awry and putting in appropriate controls to minimise the risks will be time well spent. Beyond the data side, thought must be given to what happens if the AI makes the wrong decision from an ethical point of view. This means putting processes in place to monitor decisions (to check whether they are adversely impacting any specific groups of your base). The customer-centric process side shouldn’t be forgotten either – your customers should know when AI has generated a decision, and you should tell them about the appropriate processes of appeal.
  • Governance and accountability for the models need to be considered prior to implementation – if a model goes wrong, who is to blame? It can often seem simplest to castigate the person closest to it – i.e. the data scientist who built the model – but is that fair?
  • AI is being silently integrated into many operational systems in our lives – from facial recognition to access your bank account, to search functionality within your photos. Some martech systems offer ‘AI-enabled’ elements within their products, allowing non-technical teams to access the power of AI. In some cases these model outputs feed into other systems and models, so it can be hard to trace through when things start going awry until it’s too late – the river has poisoned the lake before anyone is aware. Many considerations (e.g. control, governance and ethical frameworks) need to be borne in mind when you rely on AI-enabled outputs from third parties.
  • At Merkle, we have years of expertise and learning in this field. Alongside our privacy team, we can help organisations develop their own ethics guidelines to support AI adoption, point out potential pitfalls and advise on appropriate guardrails for monitoring.
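
To show what ‘human in the loop’ can look like in practice, here is a minimal sketch – assuming Python with NumPy – of confidence-based routing in the spirit of the radiographer example above: the model auto-handles only the cases it is confident about and queues the rest for a person. The scores and the 0.90 cut-off are hypothetical.

```python
# Confidence-based routing: the model decides only the cases it is sure
# about; everything else goes to a human reviewer.
import numpy as np

scores = np.array([0.98, 0.55, 0.03, 0.91, 0.62])  # model's P(positive), toy values
THRESHOLD = 0.90  # hypothetical confidence cut-off, set per use case

# For a binary score, confidence is the distance from the undecided middle.
confidence = np.maximum(scores, 1.0 - scores)

auto_handled = confidence >= THRESHOLD  # the easy cases, handled at scale
print("automated case indices:", np.where(auto_handled)[0])
print("for human review:", np.where(~auto_handled)[0])
```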

 

These myths can be dispelled, and the potential gains for your business from a well-thought-through adoption of AI are huge. Start with the basics:

  • Make sure you have a well-defined use case
  • Create a cross-functional team to develop and deploy the solution, always with the end user in mind – you are unlikely to find all the skillsets you need in one person. Don’t look for the unicorn data scientist; bring in external help where needed.
  • Build simple, then iterate.
  • Think ethics, controls and governance frameworks.

 

We have been leading the way in designing AI with the end in mind, and our award-winning team of data scientists and MLOps engineers advises both the data scientists developing clients’ models and the IT infrastructure teams that need to implement them. We are Microsoft Gold Data Analytics & Cloud Platform accredited, Google Cloud Engineer certified and partnered with AWS and Adobe, covering all the cloud-based platforms on which clients wish to deploy.

 

Now is the time to embrace AI and see where it can take you.

 

If you’d like to discuss where you might benefit from adopting AI, do contact us.

 

You may also be interested in our latest paper “Data and analytics must unite to deliver a truly customer-centric and omnichannel set of experiences”.
