I recently attended the Chief Analytics Officer Summit, organized by Corinium Intelligence. Corinium runs a rather different type of event, not so much in terms of format as in terms of audience.
As the name suggests, the conference does not simply deal with CAOs but specifically targets them. In other words, it is an event about CAOs, for CAOs, and you might find yourself in a room with 60 CAOs from companies across every industry.
The format itself is quite standard: two days of talks with a few discussion groups, usually held in very nice hotels in central London.
As usual, I will now try to summarize my three personal takeaways from the conference.
I. Analytics is not easy to integrate
Let’s start with a basic definition of the Chief Analytics Officer role:
a CAO is the person in charge of extracting value and insights from data.
You might also call this role ‘Head of Data Science’ (and in fact, I don’t know of any company where the two titles co-exist, because to me they are the same thing), or use other names, but the interesting thing is the practical role the CAO plays within the organization. The CAO is the person in charge of transforming the company into a data-driven organization: the one who manages the data scientists, identifies the priorities to be tackled, and decides which problems to solve.
This value is not delivered in any single, specific way, and indeed CAOs come from disparate backgrounds, unlike their engineering counterparts (i.e., Chief Data Officers), who usually have a strong technical education.
Of course this is not an easy job, and there is a set of recurring problems a CAO needs to deal with (setting aside for a moment the pure data problems and the governance ones, which should sit with the CDO function):
- Budgeting problems: it is hard to justify the expense of a full data science team in advance. C-level management is often skeptical about the value of data analytics and will rarely allocate a proper budget to hire 4–5 data scientists at once (in my opinion, the minimum size of a DS team). Bringing in data experts who will train people internally is a good way to start building analytics capabilities. A second budgeting problem concerns the projects themselves: not all big data projects can be funded, so project prioritization and KPI monitoring are essential;
- Integration problems: it is not easy to decide where a data science team should sit. It is a good idea to have an external Center of Excellence (CoE), physically and operationally separated from the business. However, this can create integration problems within the organization, which can be mitigated by hiring internally in the first place. While internal hires may need extra training to be brought up to speed with big data tools and skills, they also ease communication with other departments and bring solid knowledge of the business and its existing processes. Extra value point: look for different backgrounds, even internally;
- Cultural problems: a data science team should, by definition, adopt a startup culture: its members should be comfortable with failing, know how to deal with uncertainty, and operate through open and transparent processes. The team should follow an agile approach and work across teams and hierarchy layers. This creates a cultural clash within big organizations, so the team leadership carries the burden of smoothing this friction by setting clear goals for the team and establishing collaborative relationships with the rest of the company. Leadership should also work hard at managing expectations and fighting the company’s resistance to change.
II. Not all data scientists are created equal
It does not matter how good your algorithms are or how many different silos of data you have on a single customer: the success of a data science project still depends heavily on the quality of the team working on it.
In reality, the data scientist most people imagine does not exist, because it is a completely new profile, especially at junior levels of seniority. It is therefore necessary to outline this new role, which is still half scientist, half designer, and which includes a series of different skills and capabilities, akin to the mythological chimera.
An ideal profile is provided in the following figure; it essentially merges five different job roles into one: the computer scientist, the businessman, the statistician, the communicator, and the domain expert.
However, identifying the right set of skills (see the full list of skills here) is not enough.
First of all, data science is a team effort, not a solo sport. It is important to hire different profiles as part of a larger team, rather than hiring exclusively for individual abilities.
Second, data scientists come with two different DNAs: the scientific and the creative. For this reason, they should be left free to learn and study continuously on the one hand (the science side), and to create, experiment, and fail on the other (the creative side). They will never grow systematically at a fixed pace; they will grow organically, based on their inclinations and multi-faceted nature. It is recommended to leave them some spare time to follow their ‘scientific inspiration’.
Finally, even though their ultimate goal is to remove companies’ obstacles and foster data fluency internally, not all data scientists are created equal, and they need to be deployed differently. In particular, it seems clear to me (especially in specific sectors such as Financial Services) that there are two big categories: the perfectionists, who are more research-oriented (and should be devoted to longer-term projects), and the ‘quick-and-dirty’ scientists, who make things work and should be used to address and support daily operations.
III. Every company is progressing at a different pace
It emerged from several presentations that many companies are finally becoming good at analytics, although they are not equally good at productionizing it, i.e., embedding data-driven decision making into business processes.
McKinsey Analytics calls this ‘moving from garage to factory’, which is basically another way to emphasize the importance of reaching the right scale for analytics to have an impact.
The Data Stage of Development Structure (DS2) is a roadmap developed to implement a revenue-generating and impactful data strategy. It can be used to assess the current situation of the company and to understand the future steps to undertake to enhance internal big data capabilities.
The table provides a four-by-four matrix: the increasing stages of evolution are labeled Primitive, Bespoke, Factory, and Scientific, and each stage is assessed along four dimensions: Culture, Data, Technology, and Talent. The final considerations are drawn in the last row, which concerns the financial impact of a well-set data strategy on the business.
Many companies are currently either Primitive or Bespoke, and they still struggle to reach the Factory level. The way to move from one stage to the next is through experimentation (from Primitive to Bespoke) and through standardization (from Bespoke to Factory), though the essential ingredient for reaching each next step is increasing support and engagement from senior management.
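To make the idea of self-assessment on the DS2 matrix concrete, here is a minimal sketch in Python. The stage names and the four dimensions come from the framework as described above; the scoring rule, that a company’s overall stage is capped by its least mature dimension, is my own assumption for illustration, not part of DS2 itself.

```python
# Illustrative sketch of a DS2-style self-assessment.
# Stages and dimensions are from the framework; the "least mature
# dimension wins" rule is an assumption made for this example.

STAGES = ["Primitive", "Bespoke", "Factory", "Scientific"]
DIMENSIONS = ["Culture", "Data", "Technology", "Talent"]


def overall_stage(assessment: dict) -> str:
    """Return the company's overall stage, assumed here to be capped
    by its least mature dimension."""
    for dim in DIMENSIONS:
        if assessment.get(dim) not in STAGES:
            raise ValueError(f"missing or invalid stage for {dim}")
    # Pick the dimension whose stage sits earliest in the evolution path.
    return min((assessment[d] for d in DIMENSIONS), key=STAGES.index)


# Example: strong technology, but culture lags behind,
# so the company as a whole is still Primitive.
company = {"Culture": "Primitive", "Data": "Bespoke",
           "Technology": "Factory", "Talent": "Bespoke"}
print(overall_stage(company))  # -> Primitive
```

The design choice reflects the point above: a company cannot claim Factory-level analytics while, say, its culture is still Primitive, which is why senior-management engagement across all dimensions matters.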
Now I’m waiting for the next data event: many more conferences are coming soon, so stay tuned!