Companies often decide that they need a new or revamped ‘data capability’ without a clear idea of the value outcome they should expect in return, or of the path to achieve it.
Typically, the person pushing for the new capability comes up with a high-level evaluation of the business functions that could benefit. Then the decision makers approve the initiative as being the ‘right thing to do’. There is no clear alignment to the business strategy underpinning this decision.
From this point on, investments are made in technology and people with the optimistic belief that, once up and running, this new capability will significantly improve business performance.
Around 6 to 12 months into the project, questions start getting asked about the return on investment, and things can turn sour. This is because the expectation chasm has not been correctly managed.
No business wants to be staring into their own expectation chasm, having sunk hundreds of thousands of pounds and 6-12 months of their team’s effort into a data programme that is failing to live up to expectations.
The problem usually starts with a vague, top-down estimate of the expected benefits, constructed with insufficient detail and rigour, before the programme’s focus turns quickly to investment in technology and people.
Unsurprisingly, the programme then delivers a generic technical data capability, one that isn’t focused on the specifics required to deliver use cases with end-to-end integration into the business domains and a clear path to execution. The opportunity to start generating the expected value is either severely limited or non-existent.
No data programme wants to be staring into an expectation chasm such as this!
Here at The Data Practice we can help you prepare and plan to avoid falling into this unwelcome scenario – and if you’re already there, we’ll get you over it. We have a methodology, the Data Navigator, that’s straightforward and practical to implement.
For a new data capability, the place to start is the high-level assessment of the business functions that could benefit. However, instead of rushing into decisions or solutions, the next step is to look in detail at exactly where better data will make a difference.
Where would better data realistically contribute value? We’re talking identification of specific use cases, which could be totally new, or improvements to existing processes.
Who’s involved in drawing up the use cases? I’d say it’s essential to do it jointly between a business function stakeholder (eg Marketing, Operations), who will be responsible for utilising the new outputs to generate the value, and the data project leader, who will be responsible for delivering the new data outputs to the business stakeholder.
At the prioritisation and planning stage of the use cases it is important that the business function stakeholder puts their name to:
…and they’ll need to get lock-in from the finance team, who are likely to be very interested in how this business-outcomes-focused approach to data contributes towards the overall business strategy.
The initial list of use cases doesn’t need to be complete or perfect. No-one can guarantee that every use case will generate the benefits and value predicted for it.
The discipline known as ‘Data Science’ is not known as ‘Data Certainty’ for a reason. It is a process of experimentation and measurement to prove or disprove a hypothesis, and it’ll require iteration.
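To make that hypothesis-testing point concrete, here is a minimal sketch (the use case and the numbers are entirely hypothetical, not from a real programme) of how a team might check whether a new model-driven target list actually lifts sales conversion, using a plain two-proportion z-test:

```python
# Minimal sketch: did the new model's target list convert better than the old one?
# All figures are illustrative.
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value from the normal CDF
    return p_b - p_a, p_value

# Control group: existing targeting; treatment group: new model-driven targeting.
uplift, p_value = two_proportion_z_test(conv_a=230, n_a=10_000, conv_b=292, n_b=10_000)
print(f"Observed uplift: {uplift:.2%}, p-value: {p_value:.3f}")
# A small p-value supports the hypothesis; a large one sends you back to iterate.
```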
The proposed list of use cases also does not need to be exhaustive at the outset. Just sufficient to justify the investment in the technology and people required to revamp or build the data capability (or deliver the first phase of the capability).
The agreed list of initial use cases should be drawn up into a 6-12 month delivery window. I have found that this is a manageable timeframe for identifying and acquiring technology and people, upskilling, initial use case execution, and managing expectations of the benefits.
Implementing a data strategy can be – and often is – a business transformation in its own right, which means implementation is all about people taking on new tasks and roles: doing things differently in service of the business strategy. Potentially it’s not just a data or IT thing, but a business change thing. An extreme example would be the complete automation of a function, with the consequent change in organisational design and the very real change in people’s day-to-day jobs.
Measuring the benefits is critical both operationally (eg day to day, week to week) and at the aggregate level.
Operational measurement is required to understand whether new insights or models are performing to expectations. Are you seeing increased sales conversions? Are there process cost savings? If not, with proper measurement in place it’s easier to investigate the problem and seek to correct it.
Measurement and tracking are required at the aggregate level as well, so you can report back to the senior decision makers whether the benefits of the new data capability overall are being realised (across use cases).
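A sketch of what that tracking might look like in practice (the figures and structure are hypothetical): record expected versus realised benefit per use case, then roll it up into one number for the senior reporting pack.

```python
# Hypothetical benefit-tracking snapshot: expected vs realised value per use case.
tracking = {
    "Churn early-warning":         {"expected": 400_000, "realised": 310_000},
    "Stock-out prediction":        {"expected": 250_000, "realised": 260_000},
    "Invoice-matching automation": {"expected": 150_000, "realised":  60_000},
}

# Operational view: flag use cases falling short so they can be investigated.
for name, v in tracking.items():
    shortfall = v["expected"] - v["realised"]
    status = "on track" if shortfall <= 0 else f"£{shortfall:,} behind expectation"
    print(f"{name:30} {status}")

# Aggregate view: one number for the senior decision makers.
total_expected = sum(v["expected"] for v in tracking.values())
total_realised = sum(v["realised"] for v in tracking.values())
print(f"Programme-level realisation: {total_realised / total_expected:.0%} of expected benefit")
```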
Once the initial use case roadmap is finalised and delivery is under way, it’s time to start planning again, for the next 6-12 months. How can you extend the scope of the project, perhaps deeper into already supported functions, or more broadly to support other business functions? Here again, taking a use-case-led approach will pay off – both to give the data programme a specific, prioritised steer on where to focus effort, and again to manage and align stakeholder expectations regarding potential benefits.
Want to find out more? Get in touch - we like nothing better than to talk about data.
David May is an associate with The Data Practice. He's a Data and AI strategy consultant with over 30 years’ experience in delivering data, analytics and machine learning solutions to improve business performance. Most recently David was a Global ML Product Owner at Vodafone Global.
Photo credit: Tiana