

Artificial Intelligence: The Fear is from the Unknown

Qin Zengchang / Associate Professor at the School of Automation Science and Electrical Engineering of Beihang University and Vice Director of Beihang Education and Training Centre at Beihang University. / 2016-05-04

“Professionals like us are living in a good time,” said Qin Zengchang, Vice Director of the Beihang Education and Training Centre at Beihang University. Yet only 15 years ago, when he was still a postgraduate student in machine learning and data mining at the University of Bristol, artificial intelligence was far from being in the spotlight.

Artificial intelligence (AI) research has made major headway over the past few decades. The relevant technologies have grown from lab curiosities into mass-market products, and some are now sophisticated enough to take over tasks once reserved for human minds and solve real-world problems. Beyond the expert systems widely adopted in areas such as national defence, finance and medicine, it is search engines, social networks and smartphone apps that have truly allowed the public to feel the massive power of AI.

In both academia and industry, AI specialists sensed that this renaissance would give the technology a different historical significance.

“Communication and information are vital parts of our society. Smart computing technologies can propel the capture, storage, transmission, exchange and understanding of information. Artificial intelligence has a bright future, and it will develop at the same pace as computer technology,” Qin told SJTU ParisTech Review.

With Google, Baidu and other tech giants joining the ranks, the public has interpreted AI from many different perspectives, some of them off track. What stage has AI technology actually reached? Is the arrival of machine intelligence a blessing or a curse? Let’s hear what the experts have to say.



Interview:

Does AlphaGo qualify as a major breakthrough?

SJTU ParisTech Review: The victory of AlphaGo drew great attention from the public, whose interest in AI is growing. Can you tell us more about it?

Qin: Despite the heated discussion in the media and among the public, AI has not received only positive reviews. Investment in the industry has seen ups and downs, while academic research on AI has kept striding forward. Industry and academia talk with each other very often in Western countries. As a sub-branch of computer science, AI is commercialized much faster than other technologies. Research papers published last year may become research assignments for students at top-notch universities around Silicon Valley, and may even be quickly developed into real-life products. This swiftness can be traced back to the nature of the industry: everyone is acting quickly.

Looking at the general research trend, “traditional” AI research was led by computer scientists. Today we see disciplines such as psychology and neuroscience become extensions of AI studies. The ultimate purpose of this academic sprawl is to understand intelligence or to build intelligent machines, hopefully achieving both.

To characterize how scientists conduct research, take the invention of the aircraft: the goal was for mankind to fly in the sky like birds, but the result was not a machine that simply mimics birds. Inventors studied the principles behind birds’ ability to fly and applied them to aircraft design. Likewise, the computers we build are mostly meant to supplement the deficiencies of human brainpower and extend our computing and information-processing abilities, rather than to make machines “think” the same way humans do. Whether in engineering or mathematics, AI tools are designed to be at the service of mankind.

Neuroscience is another extension of research; it focuses on human consciousness and thought and has produced myriads of papers and results. Recently there was an article on how human brains manage to locate places, indicating that some cells function as a GPS system navigating geographic locations. These results and AI research share some common ground.

If you want a deeper look at the sub-branches these fields of research now have, the best way is to flip through articles submitted to the International Joint Conference on Artificial Intelligence (IJCAI), which cover natural language processing, image recognition, knowledge representation, multi-agent systems and more.

As of now, what are the most widely used sub-branch studies?

It is hard to categorize. Those mentioned most often are computer vision, speech recognition and natural language processing. These technologies are frequently used in human-machine interaction and search. The essence of search is processing a great deal of text – and now images and videos as well. Other areas, such as voice, fingerprint and palm-print biometric recognition, have also made major headway at a rapid pace.

What is the relationship between these sub-branches and “machine learning” that we’ve seen in the news?

Machine learning is itself a sub-branch of AI. It emphasizes making computation effective by training on data, with the purpose of “teaching” machines to learn and generalize. Gradually, machine learning has grown into the most successful sub-branch of AI, with a permeating power that affects every aspect of AI technology. Natural language processing and image recognition are both applied technologies supported by machine learning. The development of machine learning has, in fact, not only strengthened machines’ capability to “pick up” new skills, but also been put to work in other applied technologies.

How many different learning methods does a machine have?

The most common is called supervised learning. For example, the machine is provided with a database containing data such as “height”, “weight”, “colour of the hair” and so on. Each record comes with a label such as “boy” or “girl”. The machine is left with the task of building a model that connects the data and the labels. Using this model, the machine can then tell the gender of a new person based on his or her height and weight. This is supervised learning.
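As a minimal sketch of this idea (assuming scikit-learn is available; the numbers are invented for illustration, and the categorical “colour of the hair” feature is left out for simplicity):

```python
# A toy supervised-learning example: labelled data in, predictive model out.
from sklearn.tree import DecisionTreeClassifier

# Each row is one person: [height in cm, weight in kg]
X_train = [[175, 70], [182, 80], [160, 52], [158, 48]]
y_train = ["boy", "boy", "girl", "girl"]   # labels provided up front

model = DecisionTreeClassifier()
model.fit(X_train, y_train)                # build a model linking data and labels

# The trained model can now guess the gender of a previously unseen person
print(model.predict([[159, 50]]))          # -> ['girl'] for this toy data
```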

Unsupervised learning does not come with labels. A machine is trained to categorize data even without any specification of “boy” or “girl”; it must be capable of grouping data with similar features according to certain rules. If a new set of data, say ages, is fed in, it might be sorted into three groups: children, teenagers and seniors (the machine itself cannot tell the real-world meaning of this grouping; the sorting is done purely on the basis of certain features). This type of learning, which needs no labelling beforehand, is called unsupervised learning.
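A minimal sketch of the same idea without labels, again assuming scikit-learn and ages invented for illustration, might use a clustering algorithm such as k-means:

```python
# A toy unsupervised-learning example: no labels, only grouping by similarity.
from sklearn.cluster import KMeans

ages = [[5], [7], [9], [14], [16], [17], [68], [72], [75]]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(ages)        # assign each age to one of three clusters

print(clusters)                            # e.g. [0 0 0 1 1 1 2 2 2]
# The algorithm never learns that one cluster "means" seniors; it only sees
# that those values are close to each other, exactly as described above.
```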

The one employed by AlphaGo is called reinforcement learning. To put it simply, a machine’s action is not rewarded instantly. Only after a series of actions, when a whole process has been completed along some route, does it receive positive or negative feedback with a corresponding reward or punishment value. It resembles playing a maze game: the first step you take is random, and after a series of random choices you may run into a dead end or find the exit. The machine records and traces the entire process in order to choose (learn) the successful model. This is called reinforcement learning.
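A minimal sketch of this trial-and-error loop is tabular Q-learning on a toy corridor “maze”. This is a deliberately simplified illustration of the delayed-reward idea, not AlphaGo’s actual method, which combines deep neural networks with Monte Carlo tree search:

```python
# Toy reinforcement learning: the reward only arrives when the exit is reached.
import random

N_STATES = 5          # positions 0..4; position 4 is the exit of the corridor
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit the best known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # Feedback only arrives when the whole process reaches the exit
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, the learned choice at every position should be +1: step towards the exit
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```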

A key factor in improving machine learning efficiency is the scale of the data. Is it possible that, in the future, machine learning (or AI) could leap over its reliance on data and be achieved directly through simulation of the human brain?

I’m not sure what you mean by “direct human brain simulation”, so I am probably not the best person to answer that. But as for AlphaGo, it faces a huge search space and needs human experience to narrow that space in an orderly way. For example, if you want to find a ping-pong ball on a basketball court, blind searching means combing the court inch by inch. But once you learn certain information – where ping-pong balls are usually kept, the area where players are active, how far a ball can roll on the floor – you are far more likely to find it. Data help you with answers and solutions. Without data or information, it would cost you an awful amount of time to find a tiny ball in a huge space. We must find some constraints to help define a more probable area.

AlphaGo became a big hit recently. Would you define it as a major technological breakthrough?

Of course it has some ground-breaking significance in its own field. Personally, though, I don’t think it is a perfect testimony to a major breakthrough in AI technology as a whole. A machine playing Go is by all means the most sophisticated technology we can think of right now. But can it be applied to other domains? Maybe to some, but I think it is too early to say it represents a big leap for the entire industry.

We often use “computing capability” or “storage capacity” to gauge the development of computer technology. Are there any authoritative measures for the maturity of AI technology?

I don’t think there is a one-size-fits-all standard. We will see new breakthroughs springing up in their respective domains, and at some point AI will surpass human beings in all kinds of capabilities. In other words, a computing technology establishes itself when it outperforms human beings. In work related to searching and archiving, libraries have long been obsolete and can easily be replaced by a simple database; the emergence of the Internet then rendered databases outdated and called for new algorithms designed for search. Human beings have been unable to keep up ever since. Another example is banking security. It has become impossible to rely on human labour for credit card fraud detection. The only workable way is to design an algorithm that tracks changes in the data, draws the contour of a model of normal behaviour, and flags abnormal transactions against it. Machine capacity exceeded human capacity there a long time ago. Technologies in image recognition and autonomous driving are not yet at the same level of development, but they are expected to catch up soon. The problem-solving capacity of human beings will be challenged on multiple fronts.
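The interview does not name a specific fraud-detection algorithm, but as one hedged illustration of “drawing the contour of normal behaviour and flagging what falls outside it”, an off-the-shelf outlier detector such as scikit-learn’s IsolationForest could be sketched as follows (the transaction amounts and hours below are invented):

```python
# Toy anomaly detection: learn the shape of normal transactions, flag outliers.
from sklearn.ensemble import IsolationForest

# Each row: [amount in currency units, hour of day]
normal_transactions = [[25, 12], [40, 18], [12, 9], [33, 20], [28, 13], [50, 19]]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_transactions)          # model the contour of "normal" behaviour

new_transactions = [[30, 14], [9500, 3]]   # an ordinary purchase and a suspicious one
print(detector.predict(new_transactions))  # 1 = looks normal, -1 = flagged as anomalous
```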

What should we really worry about?

The development of AI is often divided into stages such as weak AI, strong AI and super AI. Are these professional terms for the different stages?

These names are mostly popular usages meant to make things easier for the public to understand. They do not come with a scientific definition; you won’t find any academic seminar under the theme of “strong AI” or “weak AI”.

We are expecting a new wave of AI enthusiasm to come soon. From your perspective, how long will this wave last? Are we going to see a retreat right afterwards?

A retreat is inevitable. I see the trend cooling down gradually, which is also a law of nature. Nothing is mighty enough to stay on top forever; there will be ups and downs. But if we look at the broader picture, AI, over its long history, has been growing steadily – in talent training, in public attention and in its role in society.

We hear a lot of discussion about the singularity theory. Believers say AI will experience explosive growth in the coming decades, break through technological limits and ultimately surpass human wisdom. What is your take on that?

I will refrain from commenting on such viewpoints. An AI expert should be defined by the articles he has published, the achievements he has made, and whether he is widely acknowledged by his peers and colleagues. I don’t think publishing a book, raising a few arguments or gaining temporary attention makes someone an expert or a scientist. Of course people may disagree with your research results, but let’s leave that to society for open discussion.

But we can still sense fear of AI among the public.

Judging from the people I know, I don’t sense any panic among professional AI researchers. The source of fear is lack of knowledge – the less you know, the more you fear. To give an example that may sound somewhat off track: we all fear AIDS, yet doctors are not afraid of having contact with AIDS patients, because they understand the disease and know what the worst-case scenario is. Knowing nothing about a situation opens up a large space for wild imagination, and that is the source of fear.

People worry about machines having free will. So far, no scientific evidence supports such a claim. Even human free will remains an enigma, let alone that of machines. Researchers deep inside AI have a crystal-clear picture of the industry’s status quo and of which risks may be unmanageable. The reality is far from what people imagine.

All technologies have two sides. Is there anything related to AI we truly need to worry about?

I think the core problem is how human beings want to deploy the technology. To give a stark example, a robot can save or kill, depending on the order it receives. It follows its commander to the letter and never cares about the purpose or consequences of its actions. By this line of thinking, a robot is the same as a weapon, or even an atomic bomb; AI technology only makes it more automatic. So I believe the key is who uses it and how.

Unlike atomic bombs or nuclear weapons programmes, AI is something that can be developed by anyone. That creates challenges for regulation. Autopilot technology is a good example.

Emerging objects and technologies are destined to challenge industry norms and legislation. It reminds me of cases years ago of personal data being harvested with web crawlers. But was it the crawler’s fault, or the person’s? We do not sentence a technology to jail; obviously it was the person who abused the technology who was at fault. Risk controls should be applied to people, not to the development of the technology. New legislation and rules will phase in as human conduct changes.

AI: Achieving capability and quasi-demand

You mentioned that in Western countries academia and industry talk a lot with each other. How is it in China? Where are we in terms of talent distribution and training?

China’s AI talent clusters in two areas. One is prominent universities and research institutions, but the more important one is tech companies – a similar pattern to what we see in Western countries. I believe the research institute Microsoft set up in China early on played a huge role in bringing talent back from abroad and raising a batch of local talent. With the rise of the Internet in China, companies with huge turnover, such as Baidu, Alibaba and Tencent, have attracted many great young minds who are directly or indirectly involved with AI technology. They are exposed to a large body of relevant AI theory, and they have a huge impact on society.

Top graduates from Stanford or UC Berkeley receive offers from good companies upon graduation, and these high-paying jobs encourage more people to study and master the technology. While I was teaching at Berkeley, many students from outside the field showed a strong interest in AI. Whether it is the novelty and fashionability of the technology or a simple desire to change the way they work, my observation is that the number of interested students is rising.

A deluge of AI start-ups has emerged in recent years thanks to public awareness and capital. Do you think the market will be dominated by big players such as Google or Baidu, or can everyone be a winner regardless of size?

Google grew from a start-up into a conglomerate as well. Big companies, with their abundant resources, are sure to lead such a market. But smaller companies still have growth opportunities, and getting things right can make them huge. Of course there will be a process of elimination as a result of competition; it is just too early to tell.

There has been a lot of discussion of AI arising from its fast development as a trend, and this has, to some extent, facilitated the research and application of these technologies. There are two things companies need to be careful of, whether they are large or small. The first is to avoid pursuing unrealistic products that require futuristic technologies. The second is misreading market demand – the quasi-demand problem, where the market does not actually need the products the company thinks are necessary.

What profound changes do you expect AI to bring to people’s lives in the short run?

Personally, I think AI may start with work such as making simple judgements, where there is huge room for improvement. Through mass data and cross-platform integration, decision-making processes can be optimized. For example, a doctor can only go through a certain number of medical cases in a career. For junior doctors in particular, we need to ask how they can cope with an unprecedented case when doing image-based cancer diagnosis. Information technology can gather and integrate similar cancer cases from across the country, or even around the globe, for automatic recognition, matching the features of a new case against past cases to help doctors judge the nature and severity of the illness.
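As a hedged sketch of this “matching features with past cases” idea, assuming each imaging case has already been reduced to a numeric feature vector (the vectors and diagnoses below are entirely hypothetical):

```python
# Toy case retrieval: find the historical cases most similar to a new one.
from sklearn.neighbors import NearestNeighbors

# Feature vectors extracted from past, already-diagnosed cases
past_cases = [[0.9, 0.1, 0.3], [0.2, 0.8, 0.5], [0.85, 0.15, 0.4], [0.1, 0.9, 0.6]]
diagnoses  = ["benign", "malignant", "benign", "malignant"]

index = NearestNeighbors(n_neighbors=2).fit(past_cases)

new_case = [[0.88, 0.12, 0.35]]            # features of the unprecedented case
distances, ids = index.kneighbors(new_case)

# Show the doctor the most similar historical cases and their confirmed diagnoses
for d, i in zip(distances[0], ids[0]):
    print(f"similar case #{i}: diagnosis={diagnoses[i]}, distance={d:.2f}")
```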
