No work without work tools. It is impossible to rethink and transform a business (its processes, its performance indicators, its methods, etc.) without rethinking and transforming its underlying tools. This duality, well-known in management sciences, becomes fundamental when, under the pressure created by big data, the focus of innovation gradually shifts from the products to the processes. However, the business line is powerless to conceive its own transformation via data. How do we get out of this trap?
The big data tsunami is flooding through every sector. While the concept (and associated notions such as the new AI, data science and predictive analytics) continues to provoke both rapture and terror, a growing number of experts (and non-experts) assert with certainty that no company will be spared by the data revolution.
True as this may be, such prophecies do nothing to explain how this transformation will take place. To my knowledge, there is no significant research highlighting the considerable difficulties that companies are already encountering in their efforts at digital transformation via AI and data. Let us take, for example, a hot topic which is currently attracting great interest: predictive maintenance. Now let us imagine the situation in which your average maintenance manager must cope with this upheaval. Normally, their job description includes drawing up the maintenance schedule, monitoring execution of the plan, and managing intervention and risk-management teams. There is no denying that in many industries and companies, this is an uncomfortable position: it involves dealing with challenging emergencies and hazards, inadequate staffing and resources, and work tools (e.g. CMMS) that became obsolete long ago, abandoned in favour of simple Excel spreadsheets and paper solutions.
Now, these managers find themselves on the front line of the transformation effort. Their superiors involve them in data-usage projects. They are courted by countless consulting firms and start-ups. Every day, they hear a foreign vocabulary (deep learning…). They feel threatened by the risk of losing control, and frustrated that others do not understand their work, or that they themselves do not understand the technologies they will soon be expected to use. Can we rely on them to conduct this transformation successfully and effectively?
In truth, in the two dozen big data projects I have been involved in over recent years, I have noticed a very high mortality rate. From industrial maintenance to the supply chain, from insurance to law, from aeronautics to transport and mobility, the difficulties are the same, and they have nothing to do with data science. Since I am the only element that all these projects have in common, we might infer that I am the cause of these failures. In the following analysis, I will try to construct an alternative hypothesis.
To understand the nature of the problem, we must first situate the big data wave in its historical context.
Firstly, we should note that this is not the first wave of its kind.
From the 1950s and up to the end of the 1980s, a first wave of rationalization based on statistics and operations research caused similar movements to those we are seeing today. Thus, in “Management in the 80s,” Leavitt and Whisler were already describing the evolution of management, which was making increasing use of “techniques for processing large amounts of information” using “statistical and mathematical methods,” which would allow “higher-order thinking through computer programs.” The astonishing thing about their text is that it was written in 1959! Already, in the 1950s, we can identify exactly the same dream: better management methods thanks to information allowing improved decision-making in operations.
Of course, we could argue that things are different today. Aside from the quantity and availability of data, and the improvement of systems and algorithms, a strategic dimension has come into play over time: for example, the certainty that competition will force directors' hands and compel them to take the subject seriously. However, none of these aspects indicates a change of nature: the phenomenon remains similar, even if the scale has increased significantly. It would therefore be informative to examine how successfully companies took advantage of the first wave to boost their competitiveness.
History tells us that very large research programs were launched in academia and that businesses were quick to set up internal statistics and operations research departments, recruiting the best specialists of their time. However, despite impressive scientific and technical progress, over a period of 30 years, it became evident that the vast majority of the operations research projects launched did not yield the anticipated results. In a text dated 1979 (before the management of the 1980s dreamed of by Leavitt and Whisler had even arrived), Ackoff announced that “Operations Research is dead even though it has yet to be buried.” The paper, one among many works on the crisis of operations research in the 1980s, cites several fundamental reasons. The most important is now very visible in today’s big data projects: a severe discrepancy between the technical substance of OR (algorithms, technical performance criteria, etc.) and the context of integration and organizational usage of these tools. Then, as now, the experts very often did not understand the organizational issues (even when they were recruited by the organization in which they were intervening), and the companies did not have the knowledge necessary to organize the integration of the tool into everyday work.
This confrontation very frequently leads to a dialogue of the deaf, in which the parties involved understand neither each other’s languages nor each other’s priorities and goals.
For a company seeking to define the issues connected with exploiting data, the first point to understand is the nature of the link between the company’s value proposition and the data it can generate. For companies such as the GAFA and their like, data lie at the heart of the value proposition: the company exists because it has been able to develop the skills and technologies required to exploit these data. These companies are not so much digitally transformed as digital-born. In contrast, for companies at the other end of the spectrum – traditional industries not born of information technology – the value proposition is more often based on tangible elements (products, infrastructure, etc.). In concrete terms, this means that data and data processing are not at the heart of a product or a key skill: at best, they are a derived rather than an essential product, and very often an unexploited one. They are a by-product, neither desirable nor undesirable, which is not taken into account in considerations of value and competitiveness.
This orphan nature of the data means that it is impossible for a data scientist to be operational immediately and independently of the organization: their knowledge and experience do not extend to determining the value of these data (i.e. the prediction target and the gains offered by this information). In such a process, they will at best play a supporting role, never a leading one. To identify the leader, we simply need to ask: who owns the process that generates these data?
The answer will, without exception, point to a business.
To move forward while avoiding traps that have been well known for more than half a century, we first need to broaden the restrictive framework offered by the concept of big data, and treat digital transformation as the key concept instead.
From big data with the most sophisticated infrastructures and algorithms, to the digitization of the simplest paper resources, the quest is the same, and has been so for 70 years: the renewal and management of information systems, taking into account the fundamental duality between “tools” and “work.” Basically, there can be no work without work tools and the reorganization of work requires work tools to be redesigned. It is impossible to rethink and transform a business (its processes, its performance indicators, its methods, etc.), without rethinking and transforming its underlying tools. This well-known duality in management sciences becomes fundamental when, under the pressure created by big data, the focus of innovation gradually shifts from the products to the processes.
Only the business line can master the intimate duality between a work process and its tools. Only the business line can conceive its own transformation. This is a general proposition, extending beyond the question of data and AI. However, as we have seen, the transformative power of formal models of rationality (artificial intelligence, operations research, etc.), and more generally of new information and communication technologies, has been known to us for several decades. We might therefore expect the business line to be used to it, and to already hold the keys to such transformations (methods, approaches, experience, good practices, culture, etc.).
In reality, it is the business line that is powerless. More and more often, it is incapable of understanding the value of its own data, incapable of implementing the measures that would allow it to design suitable tools and systems to access this value, and incapable of managing a client-supplier relationship for the implementation of these tools, because it can neither write (nor even conceive) their specifications. In short, the business line is behind from the start, incapable of envisaging the transformation of its own work processes and tools.