The year 2016 witnessed one of the most eye-catching technology breakthroughs: the AI program AlphaGo beat the world champion Lee Sedol by a score of 4:1. This remarkable victory was hailed as another milestone in the more than sixty-year history of the artificial intelligence industry. Yet alongside the excitement came doubts and concerns: will robots replace human beings? Will artificial intelligence ultimately bring about the end of the human race?
There is a certain rationale behind such worries, which is why we frequently hear proposals calling for government regulation of AI development, reminiscent of the tightening of gene-technology research a decade ago. The essence of the problem, however, is this: how do we impose effective but not overly aggressive regulations on a threat that is still imagined, one that has not yet become reality?
In fact, the dilemma of regulation is not how to balance the upsides and downsides of the technology, but how to thoroughly understand and contain the potential threats that artificial intelligence brings. In other words, is the menace of artificial intelligence really to be defined as "replacing human beings"? If so, the only rational regulation would be to ban further R&D on the technology altogether. But is it fair to sentence this emerging technology to death while fields as ethically challenging as genetic engineering are still up and running? Artificial intelligence is already pervasive in our lives, widely adopted in search engines, social networks and news reporting. Its very popularity compels us to reevaluate the concerns held against it. If we set aside the fear of replacement, what are the real threats arising from artificial intelligence? Only by answering this question correctly can we design effective regulation. That is the purpose of this article.
What Should We Really Worry About with AI?
Stephen Hawking claimed that artificial intelligence has the potential to end human civilization: it could be the best, but also the worst, development in human history, as he put it in his opening remarks at the inauguration of the Leverhulme Centre for the Future of Intelligence at Cambridge University on October 19, 2016. This was not Hawking's first such caveat; he had expressed similar views in an interview with the BBC in 2014. Since then, Hawking has been active in calling for necessary regulations on AI research. Indeed, the Leverhulme Centre was founded in large part to tackle the risks posed by AI.
Hawking is not the only one to voice such concerns. Elon Musk, founder of Tesla and SpaceX, has repeatedly raised red flags about AI. At the MIT AeroAstro Centennial Symposium, he claimed AI could be the biggest existential threat to humanity. Two years later, at the Code Conference, he warned again that humans could end up as the pets of AI. Following such calls for attention, 892 AI researchers and 1,445 other experts co-signed and published the 23 Asilomar AI Principles to keep the study and application of AI from going astray.
The concerns are not limited to the possibility of AI replacing or even ruling mankind. Critics also believe it will destroy jobs and widen income gaps. Yuval Noah Harari, the author of Homo Deus: A Brief History of Tomorrow, once pointed out that AI will give rise to a string of social problems, including massive unemployment, which will catapult us into an era of extreme inequality: a tiny number of individuals will rise to form an elite with "superpowers", while the majority will be stripped of their economic and political relevance.
On the other hand, some regard these views as mere over-anxiety. Mark Zuckerberg once compared AI to the invention of aircraft: if our ancestors 200 years ago had been paralyzed by the fear of crashes and failure, we would not have airplanes today. Looking back at history, every revolutionary technology rose to its stature amid doubts and worries, be it atomic energy or genetic engineering, yet human society did not tumble into chaos and the human race did not go extinct. This is a fair argument for dismissing the noise around AI as nothing but cacophony.
Admittedly, developing AI entails great risks, so regulators should not let it grow unchecked. As stated earlier, the regulations adopted by the international community a decade ago helped rein in the risks of genetic engineering. Ever since the OECD coined the concept of the "knowledge society" in the 1960s, technology has been reckoned one of the most important indicators of national competitiveness, alongside land and population. Policy makers around the globe should therefore also take care to leave enough room for the development of AI. The essence of the current discussion is no longer whether there should be regulation, but what to regulate and how.
It is not helpful simply to follow the words of the likes of Hawking, Musk or Harari, for they have only "envisioned" the threats of AI without providing scientific explanations, which leaves them unfit to answer the questions of what and how. To offer useful suggestions to policy makers, we must understand the principles, capabilities, potential value and risks of AI. That is the intention of this article.
The Cornerstones of Algorithms: Data and Rules
After its boom in 2016, AI and its algorithms have become tangible to many people: the movies recommended to you on a video website stand a good chance of being your favorites; the facial recognition system at the train station automatically checks whether you hold a ticket for that day; you can register a hospital visit simply by talking to your phone; your DNA information is transmitted to the healthcare system to tailor-make drugs for your disease. The search engines, social networks and chat software we use daily are all manifestations of how AI is creeping into our lives. While a computing device is "consuming" massive amounts of data, it is also providing all sorts of information, products and services relevant to you.
How does this happen? And will algorithm-based AI continue to grow until it ultimately breaks free of human control? Let us look at the algorithms behind AI to answer these questions.
Empowering machines to think like human beings is the ultimate goal of AI. To achieve this, a machine must acquire the ability to learn, which explains why we often equate "machine learning" with AI. Machine learning is a process in which a machine, given an algorithm and known data, builds a model with which it later makes judgments and analyses. Note that machine learning algorithms differ from traditional ones. An algorithm is, in essence, a series of instructions to a computer. A traditional algorithm spells out, in full detail, the actions to take under predetermined conditions, whereas a machine learning algorithm lets the machine adapt its behavior based on historical data. Take walking as an example: with a traditional algorithm, programmers must specify every single step; with machine learning, machines that analyze and learn how humans walk can walk by themselves even in scenarios never seen before.
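The contrast between a hand-coded rule and a learned rule can be sketched in a few lines of Python. The walking example is hard to compress, so this toy sketch uses spam filtering by link count instead; all names, thresholds and data here are illustrative assumptions, not part of the original argument:

```python
# Traditional algorithm: the programmer hard-codes the rule in advance.
def is_spam_traditional(num_links: int) -> bool:
    return num_links > 3  # threshold chosen by a human

# Machine learning: the rule is derived from historical (labeled) data.
def learn_threshold(examples: list) -> float:
    """Pick the integer threshold (0..10) that best separates the examples."""
    best_t, best_correct = 0.0, -1
    for t in range(0, 11):
        correct = sum((x > t) == label for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = float(t), correct
    return best_t

history = [(0, False), (1, False), (2, False), (5, True), (8, True)]
t = learn_threshold(history)  # the machine infers the threshold itself

def is_spam_learned(num_links: int) -> bool:
    return num_links > t
```

The learned classifier behaves like the hand-coded one, but its decisive parameter came from data rather than from a programmer's foresight, which is exactly the shift the walking example describes.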
Of course, the example above describes machine learning only at its most basic level. Only by digging into the details of machine learning can we truly understand how it might materially affect society. Current machine learning algorithms fall into five schools: symbolists, connectionists, evolutionists, analogizers and Bayesians. Each school follows a different logic and philosophy of machine learning.
Practitioners of the symbolist school believe that all intelligence can be reduced to manipulating symbols, making learning a matter of induction from data and assumptions. Starting from data (facts) and knowledge (assumptions), symbolists train the machine in a cycle of raising a hypothesis, verifying it against data, raising a new hypothesis and inducing new rules, so that it can make judgments in new environments. The symbolist school aligns with philosophical empiricism, and its success hinges on the comprehensiveness of the data and the reliability of the preset conditions. In other words, a lack of data or irrational preset conditions will directly undermine the result of machine learning. A typical example is "Russell's turkey". The turkey, having been fed at 9am on ten consecutive days, readily concluded that it would be fed at 9am every day. But ten days is too short a period (incomplete data) to support such a conclusion (accepting a rule after ten days of observations is an unreasonable preset condition). Tragically, on the morning of Thanksgiving, the turkey was slaughtered instead of fed.
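The turkey's failure mode can be made concrete in a minimal sketch. The function name and the ten-day cutoff are illustrative assumptions standing in for the "preset condition" the text describes:

```python
# A naive inductive learner in the spirit of Russell's turkey:
# after enough consistent observations it accepts a universal rule.
def induce_rule(observations: list, min_days: int = 10) -> bool:
    """Accept the rule 'I am fed at 9am every day' once it has held
    for at least min_days consecutive days (the preset condition)."""
    return len(observations) >= min_days and all(observations)

fed_history = [True] * 10            # ten days of being fed on time
rule_holds = induce_rule(fed_history)  # the turkey now trusts the rule

day_11_fed = False                   # Thanksgiving morning
# The induced rule predicts True while reality delivers False:
# incomplete data plus an unreasonable preset condition (that ten
# days suffice) is what produced the faulty generalization.
```

Nothing in the learner is "wrong" given its inputs; the flaw lies entirely in the data it was shown and the acceptance rule it was given, which is the article's point about symbolist learning.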
The problem of data and preset conditions is not exclusive to the symbolists; it is common to the other schools as well. The connectionist school simulates how the human brain acquires knowledge: a network of simulated neurons, trained with the backpropagation algorithm, automatically adjusts the weight of each node to achieve learning. Again, the keys are the completeness of the data and the reliability of the preset conditions. The evolutionist school believes machine learning can be achieved by recombining and testing different rule sets under the guidance of a preset fitness target, so as to find the rule set that best fits the test data: another manifestation of the importance of data and preset conditions. The analogizer school runs in the same vein: it holds that a machine can decide what is reasonable in a new situation based on its similarity to situations found in existing data, so the completeness of the data and the preset definition of similarity between scenarios play a critical role. Compared with the four schools above, the Bayesian school demands less of data coverage, as its strength lies in reasoning about future unknowns: a Bayesian machine rechecks its prior assumptions and retests their credibility against newly arriving data. Even so, the result is still governed by the data and the rules given to the machine beforehand. In other words, data completeness and preset conditions remain the dominant factors even for machines that learn the Bayesian way.
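The Bayesian rechecking of prior assumptions against new data can be sketched as repeated application of Bayes' rule. The two coin hypotheses, their probabilities and the observed flips are all illustrative assumptions:

```python
# Bayesian updating over two hypotheses about a coin:
# "fair" (P(heads) = 0.5) vs "biased" (P(heads) = 0.9).
def update(priors: dict, likelihoods: dict) -> dict:
    """One Bayes step: posterior is proportional to likelihood x prior."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

beliefs = {"fair": 0.5, "biased": 0.5}        # preset prior, chosen by humans
for flip in ["H", "H", "H", "H", "T"]:        # the data fed to the machine
    lk = {"fair": 0.5, "biased": 0.9 if flip == "H" else 0.1}
    beliefs = update(beliefs, lk)
# After four heads and one tail, belief shifts toward "biased", yet the
# answer still depends entirely on the priors and data supplied in advance.
```

Note that the machine never questions the hypothesis space or the prior itself; both are preset conditions, which is why the article concludes that even Bayesian learners remain bounded by human-supplied data and rules.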
In fact, setting the schools aside, every machine learning algorithm can be seen as consisting of three parts: representation, evaluation and optimization. In theory, a machine could optimize itself indefinitely to improve its learning capability and thereby learn anything and everything. But all the data, the evaluation methods and the evaluation principles are fed to it and determined by human beings. It is therefore impossible for machines to replace mankind, though they may evolve to be too complicated for humans to easily understand.
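The three-part view can be written down as a generic learner in which each part is visibly supplied by humans. The toy line-fitting task, the grid of candidate slopes and the squared-error score are illustrative assumptions, not a specific algorithm from the text:

```python
# The three parts of any learner: representation (the candidate models),
# evaluation (a human-chosen score), optimization (search over candidates).
def learn(candidates, score, data):
    """Return the candidate model that scores best on the data.
    Every ingredient here -- candidates, score, data -- comes from humans."""
    return max(candidates, key=lambda model: score(model, data))

# Toy instance: fit y = a * x by exhaustive grid search.
data = [(1, 2.1), (2, 3.9), (3, 6.0)]          # human-supplied data
candidates = [a / 10 for a in range(0, 51)]    # representation: slopes 0.0..5.0
def score(a, pts):                             # evaluation: negative squared error
    return -sum((y - a * x) ** 2 for x, y in pts)
best_a = learn(candidates, score, data)        # optimization: pick the best slope
```

However cleverly the optimization step is replaced (gradient descent, evolution, and so on), the representation and the evaluation criterion remain external inputs, which is the basis of the article's claim that machines cannot bootstrap themselves beyond human-set terms.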
AI Regulation: What Are the Real Challenges?
In an interview with the magazine Wired, Barack Obama expressed the view that AI is still at a preliminary stage and deserves only limited regulation, and that more investment is needed in R&D to bridge basic and applied research. His remarks are in line with the mainstream view, which routinely criticizes the inefficiency of regulation and the rent-seeking of regulators. But if we set aside this bias against regulation, we should agree that the development of AI must be subject to it. In the 23 Asilomar AI Principles endorsed by Hawking, Musk and others, the signatories called for AI development to stay on a beneficial track, a call this paper supports, though not solely because we fear mankind will be replaced by machines.
To answer what and how we should regulate AI, we can find clues in the second part of this essay, where the concepts of machine learning were introduced. Data and preset rules are of bedrock importance to algorithms; thus, governance of data and rules will lie at the core of AI regulation.
The data we provide to machines determine what they can learn and the results they produce. Where do these data come from, and how do machines use them? Incomplete data will lead to learning mishaps, as in the case of Russell's turkey, whereas all-round data collection raises concerns about privacy protection and the allocation of benefits. Hence, data regulation is the precondition of AI regulation: on the basis of protecting individual data rights, we ought to regulate and encourage data sharing and use so as to better guide AI development.
We also need to think about who makes the rules for machine optimization, and through which procedures. Even if current worries about replacement are overblown, AI carries real threats: it is transforming our lives gradually and invisibly. The rules of machine optimization must be sufficiently regulated and overseen to stamp out the possibility of abuse. This problem runs in the same vein as the one surrounding Facebook: how do we guarantee that a news feed is unbiased and free of the preferences of special interest groups? As more people grow accustomed to subscribing to tailor-made news, AI may even acquire the power to sway a presidential election. This is why we argue that concepts such as transparency and open source should be built into AI regulation.
It took AI over 60 years, plus the maturation of the Internet, big data and machine learning, to make this leap in its applications, and it will foreseeably play an even more important role in the future. There is no need to panic over this scenario, but we must exercise caution. Regulating AI, and choosing the right and appropriate measures to do so, should become a focal point of our policy agenda. By narrowing the scope of policy focus, this article aims to address the "general" concerns over AI; more detailed policy suggestions are beyond its scope. Should it attract further attention and insight from scholars and professionals, those discussions can be carried out in future writings.
 Stephen Hawking's latest speech at Cambridge University: Exploring The Impact of Artificial Intelligence http://www.aiweibang.com/yuedu/159796677.html
 Stephen Hawking's latest speech at Cambridge University: Exploring The Impact of Artificial Intelligence http://www.raincent.com/content-10-7672-1.html
 Elon Musk: artificial intelligence is our biggest existential threat. https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
 The 3 craziest things Elon Musk said at Code conference. http://www.usatoday.com/story/tech/news/2016/06/02/3-most-interesting-things-elon-musk-said-code-conference/85285470/
 Singleton, R. (2008). Knowledge and technology: The basis of wealth and power. Introduction to international political economy (4th ed., pp. 196–214). Upper Saddle River, NJ: Prentice Hall.
 Domingos, P. (2016). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Citic Publishing House. p. 12
 Domingos, P. (2016). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Citic Publishing House. p. 361
 Former Facebook Workers: We Routinely Suppressed Conservative News. http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006