Robotics Series – 7 – Expert systems: how far can intelligence be automated?

ParisTech Review / Editors / 2014-10-24

Artificial intelligence (AI) and expert systems are less fashionable in 2014 than they were back in 1974, but they have never ceased to develop since, and the processing power of today's computers opens ever wider prospects. In the same way that robots have changed factories, the rapid advent of expert systems has changed the jobs of many skilled office workers. Some have been transformed, others destroyed. What is at stake is the very existence of our middle classes, the core of modern economies. But the final word has not yet been written, inasmuch as the concept of expertise is itself changing very rapidly.

The definition of an expert system has never really been stable. The underlying ideas appeared in the 1960s with the arrival of artificial intelligence (better known as AI). The very first expert system, called Dendral, dates back to 1965; it was a programme designed to identify the chemical components of a sample material using a mass spectrometer and nuclear magnetic resonance (NMR). According to the American computer scientist Edward Feigenbaum, these programmes were "designed to reason in an intelligent manner about tasks that we think require a considerable amount of human expertise."

Such systems took off in the 1970s, accomplishing increasingly complex tasks but always with human monitoring. Since the 1990s, a preferred definition has been "interactive decision aid systems," i.e., a way to reintroduce man into the computer's decision loops. Expert systems have become increasingly complex and integrate wide-ranging knowledge bases that no search engine can replicate without capitalizing on operational human skills. In short, the more sophisticated the machine, the more its output requires deciphering, and sorting by a particularly well-trained human brain, in order to be of use.

Tools and uses

Expert systems hold promising prospects in every area where the decision-making process has to take a large quantity of data into account. In the business world, managers can be assisted (or even replaced) by expert systems – in the part of their functions we can describe as "knowledge-based" – to collect, consolidate, model and present data. This "decision-oriented computer science" considerably extends managers' vision of their company's work, giving them access to working hypotheses and configurations that they would never have imagined alone. Public management is a potential client here: in Canada there is a plan to use an expert system to determine whether candidates for unemployment benefits actually qualify or not.
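
To make the idea concrete, here is a minimal sketch (in Python, with invented criteria and thresholds, not the actual Canadian rules) of how such an eligibility decision aid might encode its tests while still reporting its reasoning to a human case worker:

```python
# Minimal sketch of a rule-based eligibility check, in the spirit of the
# Canadian project mentioned above. Criteria and thresholds are invented
# for illustration; they are not the actual benefit rules.
from dataclasses import dataclass

@dataclass
class Applicant:
    insured_hours: int            # hours worked in the qualifying period
    lost_job_through_fault: bool
    available_for_work: bool

def assess(applicant: Applicant, required_hours: int = 700) -> tuple[bool, list[str]]:
    """Return (eligible, reasons): every failed rule is reported, so a
    human case worker can review the decision rather than rubber-stamp it."""
    reasons = []
    if applicant.insured_hours < required_hours:
        reasons.append(f"only {applicant.insured_hours}h of insured work "
                       f"(needs {required_hours}h)")
    if applicant.lost_job_through_fault:
        reasons.append("job loss attributable to the applicant")
    if not applicant.available_for_work:
        reasons.append("not available for work")
    return (not reasons, reasons)

eligible, reasons = assess(Applicant(650, False, True))
print(eligible, reasons)   # False ['only 650h of insured work (needs 700h)']
```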

The military are likewise keen to use expert systems. A modern battlefield is, so to speak, an active anthill of information in varied sources and formats that needs to be processed instantaneously. The tactical officers in charge of operations know full well that, in the field, an ergonomic overview can provide a comparative gain over hostile forces. The weapons and ordnance industries have long since moved in this direction, and numerous defence groups have successfully developed rapid decision-making protocols for critical situations, based on intensive use of computers and communications systems integrating huge amounts of data.

In the area of security functions, expert systems are judged in contrasting ways. Early on, they were introduced massively in aircraft flight controls (notably in the auto-pilot (AP) mode). Progress here cannot be denied: the AP flight mode has contributed to improving flight safety, reducing pilot fatigue and warning the crew if a possible incident is detected; AP can keep an aircraft in 'normal' flight conditions even if the crew are incapacitated. Despite this progress, however, a number of aviation experts express doubts, all the more so after certain flight catastrophes. As they see it, automated flight erodes pilots' expertise and dampens their reflex reactions, which can lead to what ergonomists call flight crew disqualification. For these critics, the overall drop in the number of accidents in fact masks the recent occurrence of some highly spectacular ones. Prof. Raja Parasuraman, Chair of Psychology at George Mason University and a specialist in automation, explains that when the auto-pilot malfunctions (or disengages), the pilot is faced with exercising a responsibility he has somewhat 'forgotten' through lack of regular practice, and this leads to sometimes serious, occasionally fatal, pilot errors. The American FAA (Federal Aviation Administration) has issued warnings about excessive dependence on automated flight systems and recommends that companies advise their pilots to fly manually more often.

The fields of medicine and biology are also looking closely at expert systems. We now have virtual intelligent assistants (VIAs), expert systems that analyse a patient's data to detect, as far upstream as possible, symptomatic evidence of a possible illness. A VIA associates the concept of the "Quantified Self" (each individual can analyse his/her own bio-data) with the prodigious power of Big Data mining and processing.
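
As a toy illustration of this upstream-detection idea, the following sketch flags bio-data readings that drift outside an individual's own baseline; the figures and the 3-sigma threshold are illustrative assumptions, not a clinical method:

```python
# Toy sketch of "upstream" detection on personal bio-data: flag any
# reading that drifts well outside the individual's own baseline.
# The data and the 3-sigma threshold are illustrative assumptions.
import statistics

resting_hr = [62, 64, 61, 63, 65, 62, 60, 63, 64, 61]   # personal baseline
baseline_mean = statistics.mean(resting_hr)
baseline_sd = statistics.stdev(resting_hr)

def flag(reading: float, k: float = 3.0) -> bool:
    """True if the reading deviates more than k standard deviations
    from this person's baseline -- a prompt for a doctor, not a diagnosis."""
    return abs(reading - baseline_mean) > k * baseline_sd

print(flag(66))   # False: within normal variation
print(flag(84))   # True: worth a closer look
```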

Sports activities also use expert systems. Physiological monitoring of athletes allows managers and coaches to collect real-time data during training sessions and thus to detect possible weak points or even incipient injuries. The coach can make his/her decisions accordingly. More interestingly still, expert systems make it possible to detect the future sports stars and rare talents on which an entire sector's economy is based. The most famous example is the system set up by Billy Beane in the early 2000s. He used a software package that analysed players' performance using perfectly objective parameters, ignoring all the 'human' presuppositions inherent to baseball. The manager implemented a model-based approach to detect and buy, for his own team, players who were under-valued in the market relative to their real performance potential. This allowed him not only to have his team rated at much better levels than his budget could normally have placed it, but also to make high profits trading his players back into the market.
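
The logic of that approach fits in a few lines. The sketch below (with invented names, statistics and salaries) ranks players by an objective output metric per salary dollar, surfacing the under-priced profiles Beane targeted:

```python
# The Moneyball idea in miniature: rank players by an objective output
# metric per salary dollar and surface those the market under-prices.
# Names, stats and salaries below are invented for illustration.
players = [
    # (name, on-base percentage, annual salary in $M)
    ("Player A", 0.360, 6.0),
    ("Player B", 0.355, 1.2),
    ("Player C", 0.310, 4.5),
    ("Player D", 0.348, 0.9),
]

# On-base percentage bought per million dollars: a crude "value" signal.
by_value = sorted(players, key=lambda p: p[1] / p[2], reverse=True)
for name, obp, salary in by_value:
    print(f"{name}: OBP {obp:.3f} at ${salary}M -> {obp/salary:.3f} OBP/$M")
# Players D and B top the list: comparable output to A at a fraction
# of the price -- exactly the under-valued profile Beane targeted.
```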

A threat to employment?

While accompanying progress in our economies, automation has always raised social concerns. In the textile and agricultural worlds, for example, weaving looms and tractors made many tasks – formerly carried out by men and women – fully automatic and increased productivity tenfold. In industry, robots have become one of the keys to better performance, and in medicine other forms of robots are now used daily to carry out surgical operations with a level of precision and regularity that outperforms what human surgeons can achieve. Wherever automation has been introduced, there has been a productivity leap but also a social trauma proportionate to the number of jobs 'destroyed', inasmuch as they have been replaced by automated devices.

In the process of automation, the workers replaced by machines are often people who, seeking a so-called 'modern' job, had acquired rare skills (at a price and with difficulty) that were recruitment prerequisites of the marketplace at the time. The argument served up to those replaced is that they can now be posted to higher added-value assignments. Unfortunately, this sort of promise takes several generations to be fulfilled, albeit often in a spectacular manner. Returning to the example of mechanized agriculture, which caused tremendous worries in the early 20th century, it was impossible at the time to guess that a century later, health, finance, ICTs, consumer electronics, hotels and leisure would have created far more jobs than the agricultural sector lost.

Since the 1970s, the drop in the cost of computation by a factor of one thousand billion has naturally encouraged employers to replace a costly labour force, wherever possible, by computers at ever decreasing prices. The latter prove excellent at routine task-work: organization, warehousing and storage, data retrieval and handling, and carrying out accurate physical movements in production processes. These tasks qualify as semi-skilled and include accounting, office work and quality assessment. The pervasiveness of computers has reduced demand for these kinds of jobs, but has stimulated demand for workers doing the non-routine tasks that finish off or round out the automated sequences. These "survivor" activities lie at both ends of the scale of professional skills.

“Survivor” activities

At one end of the scale, there are abstract tasks that require solving complex problems, intuitive thinking, persuasion and creativity. These activities are found in jobs where management and creation predominate, such as law, medicine, science, engineering, advertising or design. In these sectors, the professionals have usually benefited from high-level education and training, possess the capacity to make independent analyses and use computers to transmit, organize and process data and information.

For David Autor (professor of economics at MIT), other forms of activity will continue to exist despite computerization, including certain manual tasks that call for a degree of adaptability, visual and vocal recognition and person-to-person interaction. Preparing a meal, driving a lorry through urban traffic, cleaning a hotel room… are all activities too complex for a computer. But they are simple for human beings, requiring only what are in fact our innate skills, such as dexterity, vision, knowledge of a language and a minimum of background training. Workers in these categories of job cannot be replaced by a robot. But their skills are widely available and command low salaries.

Computerization therefore leads to a polarization of jobs, with growth concentrated among the most highly qualified professions and also among the least well paid worker categories. Computerization does not reduce the number of jobs, but instead degrades the qualifications of a great many salaried workers: those with high qualifications can find jobs in abstract, high added-value assignments where growth is assured, while those with lower-level diplomas will find themselves oriented towards manual jobs such as catering, cleaning and security, with low salaries and very few opportunities for promotion. In short, computerization has increased inequality, a phenomenon which has grown over the past two decades in proportions rarely seen before. Economists quibble over the causes of the observed distortion: is it fiscal policies, or salary policies…? Whatever the managerial decisions and political choices involved, we simply cannot ignore this major trend, which strips the economic value from a certain number of professional skills. Artificial intelligence is increasingly competing with, and challenging, human intelligence. That indeed was the thesis defended in a famous book by economist Tyler Cowen, published at the end of 2013 and entitled Average is Over (dealing with the job scene in the USA).

So what could happen tomorrow?

History does not stop here. As machines become more and more powerful and increasingly intelligent, they will be in a position to take over and accomplish ever more highly qualified tasks. A report issued in 2013 by the McKinsey Global Institute (MGI) contains sufficient grounds to really worry the upper middle class around the world: for this advisory group, progress in AI and expert systems will destroy – before 2025 – some 100 to 140 million jobs in the knowledge-based sectors, viz., all persons who "use a computer to carry out tasks calling for complex analyses, fine and substantiated judgment and creative problem solving modes."

Maybe, however, we can nuance this vision.

First of all, the extension of social networks and of procedures that invoke collective intelligence tends to make the power of experts "relative" in every area: building automated systems on yesterday's models of expertise therefore raises some questions. The intelligence of a crowd can surpass that of the most powerful computer systems, as was recently suggested by an experiment conducted by a research scientist at MINES ParisTech, who studied what is sometimes called "human computation."
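
The statistical effect behind such "human computation" can be illustrated with a tiny simulation: many noisy individual estimates whose aggregate lands far closer to the truth than the typical individual does. The numbers below are illustrative, not data from the experiment cited:

```python
# Tiny simulation of the crowd effect: many noisy individual estimates
# of a quantity, yet their median lands close to the truth. The figures
# are illustrative, not data from the experiment cited above.
import random
import statistics

random.seed(0)
true_value = 1000.0
# 500 "crowd members", each making a noisy guess (sigma = 400,
# i.e. typical individual errors of several hundred)
guesses = [random.gauss(true_value, 400) for _ in range(500)]

crowd_estimate = statistics.median(guesses)
typical_individual_error = statistics.median(abs(g - true_value) for g in guesses)

print(f"crowd estimate: {crowd_estimate:.0f} (truth: {true_value:.0f})")
print(f"typical individual error: {typical_individual_error:.0f}")
# The aggregate's error is far smaller than the typical individual's.
```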

Moreover, most expert systems operate according to a formal logic that privileges deductive reasoning: their programmes recognize rules, check whether the conditions for applying them are satisfied and, if so, duly apply them. But we are now witnessing the arrival of new forms of reasoning that rely on trial and error, very different from the hypothetico-deductive reasoning that characterized the expert cognition models. If Big Data processing seems to be the way forward par excellence for inductive logic, 'traditional' expert systems are in an even weaker position than human beings.
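
A minimal sketch of the rule-driven, forward-chaining style of reasoning described above (the medical-flavoured rules are invented for illustration):

```python
# Minimal forward-chaining inference engine of the kind the paragraph
# describes: a rule fires whenever its conditions hold among the known
# facts, and newly derived facts can trigger further rules.
rules = [
    # (set of conditions, conclusion)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "chest_pain"}, "see_doctor"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Apply the rules repeatedly until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"fever", "cough", "chest_pain"}))
# {'fever', 'cough', 'chest_pain', 'flu_suspected', 'see_doctor'}
```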

At the same time, the machines themselves are changing. Up to the mid-2010s, the capacity of a computer to accomplish a task rapidly and at the lowest cost depended on the talent of a human programmer to produce procedural code, such that the machine could take appropriate decisions when faced with the listed contingencies. In the future, with machines connected to Big Data bases, able to learn and repair themselves, will the day not soon come when the machine will be in a position to make a better decision than a human being? A question like this is justified when we consider several current evolutions: the rapid increase in the computational and processing power of computers, automated machine-learning protocols and simplified man-machine interfaces.

A major novelty: the progress embedded in machines will depend less and less on the computer scientist's algorithms. The computer will be able to adjust and improve its own algorithms as a function of its analysis of the data, and also to identify 'invisible' correlations, i.e., invisible to the human brain. The principle is: the more extensive the amounts of data that the processors can handle and examine, the more fine-tuned the algorithm becomes. Installing machines like this could lead, among professions where there is normally high human added value, to a blood-letting comparable to the one that decimated factory workers in the 20th century. The economic impact here has been estimated by MGI at somewhere between 5,200 and 6,700 billion US dollars, on a global scale.
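
The principle can be seen in miniature with an ordinary learning algorithm: the same learner, trained on growing slices of a synthetic dataset, induces finer decision rules without any human rewriting its code. This sketch assumes scikit-learn is available; the data is synthetic, and only the trend is the point:

```python
# Sketch of the "more data, finer algorithm" principle: the same learner,
# fed growing slices of a synthetic dataset, generalizes better on its own.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

for n in (100, 1000, 10000):
    model = DecisionTreeClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>6} examples -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")
# Accuracy climbs with the size of the training set: no human rewrote the
# algorithm, the extra data alone refined the rules the tree induced.
```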
