What should we do when Musk is worried about the Terminator?


When Musk is worried about the Terminator, how can we make good use of AI? Source: Interface News (Jiemian). Author: Tian Siqi. Picture source: Visual China.

The rapid advance of AI technology is bringing earth-shaking changes to every industry, including war and weapons. Every technological innovation seems to be accompanied by new moral dilemmas, and the humans who created artificial intelligence must hurry to consider how to regulate its use.

On July 18, Elon Musk, often dubbed the real-life "Iron Man", the founders of DeepMind (the artificial intelligence company under Google) and other leaders of the technology world signed a pledge at a joint conference on artificial intelligence, announcing that they would not participate in or support the development, manufacture, trade or use of lethal autonomous weapons. The signatories agreed that the decision to take a human life must never be delegated to a machine.

Before the conference, several American technology companies had come under fire for providing artificial intelligence technology to the government or the military. Examples include Google helping the US Department of Defense analyze drone imagery, Amazon sharing face recognition technology with law enforcement agencies, and Microsoft providing services to Immigration and Customs Enforcement (ICE).

The Future of Life Institute, which hosted the conference, believes that this pledge means the discussion on artificial intelligence safety has finally been turned into action. Max Tegmark, president of the institute, said in a statement:

AI weapons that decide to kill on their own are as disgusting and destabilizing as biological weapons, and they should of course be dealt with in the same way.

In fact, artificial intelligence, or so-called automation technology, has coexisted with humans for a long time. Gregory Allen, author of the Harvard University research report Artificial Intelligence and National Security, noted at the 7th World Peace Forum that the earliest aircraft autopilot technology was developed in the 1930s, so automation already has a history of nearly a century.

Automation has also been applied to military operations almost from its birth. During World War II, the US Army first coupled autopilot technology with the Norden bombsight, producing a system that could automatically fly the aircraft on its bombing run and release its bombs.

In contemporary usage, the most significant update to the concept of artificial intelligence is the addition of machine learning. Allen said that earlier computer automation consisted of logic instructions programmed manually by humans, whereas artificial intelligence can derive its own behavior from machine learning algorithms and training data. When advanced algorithms, massive data and powerful computing are combined, the capability of an automated system multiplies.
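The distinction Allen draws can be sketched in a few lines of code. This is a deliberately tiny, hypothetical example (the "alert threshold" task and all names are invented for illustration): classic automation hard-codes the rule, while a learning system derives the same rule from labelled data.

```python
# Hand-coded automation vs. machine learning: a minimal, hypothetical sketch.
# Toy task: decide whether a sensor reading indicates an "alert" (reading >= 5).

def hand_coded(reading):
    # Classic automation: a human writes the decision rule explicitly.
    return reading >= 5.0

def learn_threshold(examples):
    # Minimal "learning": derive the decision boundary from labelled data
    # instead of hard-coding it. Here we place the threshold midway between
    # the largest negative example and the smallest positive example.
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    return (min(positives) + max(negatives)) / 2.0

training_data = [(1.0, False), (3.0, False), (4.5, False),
                 (5.5, True), (7.0, True), (9.0, True)]
threshold = learn_threshold(training_data)

def learned(reading):
    return reading >= threshold

print(hand_coded(6.0), learned(6.0))  # both classify 6.0 as an alert
```

Both functions end up making the same decision, but only the second would adapt if the training data changed, which is the shift Allen describes: behavior specified by data rather than by a programmer's explicit instructions.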

After decades of information technology development, the third industrial revolution solved the problem of connection: between people and information, between people and goods, and between people themselves. The emergence of artificial intelligence goes further and addresses the fundamental problems that remain after connection.

Jiang Tao, senior vice president of iFLYTEK, said at the World Peace Forum that the best doctors of Peking Union Medical College Hospital can now easily connect with patients around the world, but this still does not solve the fundamental problem of medical care: the treatment that follows the connection depends on the human brain, that is, on the knowledge and experience of experts. Artificial intelligence helps humans with exactly these follow-up problems. For example, an artificial intelligence system passed the national medical licensing examination last year and can now make auxiliary diagnoses, improving the productivity of doctors.

The intelligent medical assistant that passed the national medical practitioner examination in 2017

Like electrical technology, artificial intelligence cuts across industries and has greatly improved human life in many fields. On the other hand, just as electrical technology gave us not only the telegraph but also the electric fuze, which played an important role in war, artificial intelligence is exposing serious security risks of its own, and not all of its applications are morally acceptable.

Looking at the technology itself, He Jinsong, vice president of 360 Group, pointed out that the sensors, training data, deep learning models and software that artificial intelligence relies on all carry security risks, and each can be made to fail. For example, security researchers interfered with a Tesla's sensors in a laboratory environment, causing the car to drive straight into an obstacle 50 cm ahead; using polluted (deliberately tampered) pictures, they could easily make a machine learning system recognize a dog as a pig.
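The "polluted picture" attack He Jinsong describes is usually built with adversarial perturbations. The sketch below illustrates the idea in the style of the fast gradient sign method on a purely hypothetical linear "classifier" (the weights, pixels and labels are all invented): a small, targeted change to each input value flips the model's answer from "dog" to "pig".

```python
# A minimal, hypothetical sketch of an adversarial perturbation (in the
# style of the fast gradient sign method). The linear "image classifier"
# below is invented purely for illustration.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

# Hypothetical linear classifier: score > 0 means "dog", otherwise "pig".
weights = [0.5, -0.8, 0.3, 0.9]

def classify(pixels):
    score = sum(w * p for w, p in zip(weights, pixels))
    return "dog" if score > 0 else "pig"

image = [0.2, 0.1, 0.6, 0.5]   # classified as "dog"
epsilon = 0.6                   # perturbation budget per pixel

# Nudge each pixel against the gradient of the "dog" score. For a linear
# model the gradient with respect to the input is just the weight vector.
adversarial = [p - epsilon * sign(w) for p, w in zip(image, weights)]

print(classify(image))        # "dog"
print(classify(adversarial))  # "pig"
```

Real attacks work the same way on deep networks, except the gradient is computed by backpropagation and the per-pixel change is kept small enough to be invisible to humans, which is why such inputs are so hard to defend against.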

As a new arena of the security game, artificial intelligence not only brings convenience to life but also brings new threats to international security. He Jinsong said that artificial intelligence has greatly enhanced the ability to collect and analyze network intelligence, making cyber espionage more precise and effective, while at the same time lowering the production cost of cyber weapons. Artificial intelligence is also being used maliciously to delete posts or flood forums, to synthesize fake audio and video, and to disrupt the normal order of human life.

Yu Kai, founder of the artificial intelligence start-up Horizon Robotics, warned:

Algorithms can control people's thoughts, which is unprecedented. We are supposed to be the superpower of the universe, controlling other things rather than being controlled by machines. But now algorithms can manipulate and even brainwash us. The fundamental threat is that algorithms will make their own decisions. They are so smart; how can we control them?

On the other hand, if we want to devise measures to avoid these risks at the technical level, we must first recognize the limitations of artificial intelligence technology.

Yu Kai further explained that artificial intelligence is still at a very early stage, and that neural networks and deep learning remain very preliminary:

Compared with an infant, (artificial intelligence) is stupid. When my son was one year old, he could easily tell Mickey Mouse apart from other animals. But if you want to train a neural network to recognize the category "cat", you may need tens of thousands of examples to train the system.

Jiang Tao also said:

It can calculate, but it cannot scheme. It has no overall view, no worldview and no independent consciousness. On the current evolutionary route, we still cannot see the possibility of future machines acquiring independent consciousness, barring major breakthroughs in brain science and neuroscience, and no one can predict when that might happen.

He Jinsong believes that security should run through the whole chain and the whole process of artificial intelligence development: encourage the launch of public security service platforms for artificial intelligence, carry out evaluation and research on the malicious abuse of artificial intelligence and on adversarial attacks against such systems, and strengthen the protection of the massive data behind artificial intelligence to prevent leaks of users' private information.

Nowadays, more and more artificial intelligence systems learn and train on users' personal information such as fingerprints, voiceprints, facial features and even whereabouts. As artificial intelligence grows more capable, our personal information is left effectively exposed. But the development of artificial intelligence cannot come at the expense of users' privacy: collection should be with informed consent, users should retain the right to choose, and use should be strictly protected. This should be a basic principle of privacy security in the era of artificial intelligence.

On the question of whether the source of a network attack can be identified, He Jinsong believes it is possible to distinguish ordinary hacker attacks from state attacks, because organized or premeditated attacks tend to be sustained, whereas many hacker attacks are sporadic and opportunistic. By tracing the source, we can basically tell organized, state-level attacks apart from the rest.

Allen also pointed out that such traceability has improved greatly over the past five years. However, AI is indeed harder to regulate than other technologies. The formulation of international rules must take into account the technology, its cost, the difficulty and means of monitoring, and the similarities and differences between commercial and military technologies. Expensive technologies can only be acquired by states or very well-resourced actors, so they are easier to track; but artificial intelligence talent sits in companies, and standardized products are cheap to use. Regulating the use of artificial intelligence, especially in war, is therefore achievable from a technical point of view, but there are many challenges.

Allen therefore believes that people should learn from the experience and lessons of previous technological revolutions in order to preserve world peace while developing artificial intelligence. For example, arms-control verification during the Cold War is a positive example, and the Biological Weapons Convention (BWC), though it still needs further development, has helped mitigate the threat of biotechnology. These are precedents the international community can draw on for artificial intelligence.

In the view of Zhang Bo, a computer science professor at Tsinghua University, the security problem of artificial intelligence has been excessively politicized:

In fact, humanity shares many common interests regarding the security of artificial intelligence. The biggest security problem at present is that artificial intelligence systems are very fragile: the cost of attacking them is very low, and they are difficult to defend. The whole world should unite to solve this problem, because if it is not solved, no country will be safe; no one should instead try to exploit this security problem to attack other countries.

Jiang Tao also pointed out that the current priority is to help the public understand the relationship between humans and machines in the era of artificial intelligence, so that ordinary people and governments understand how artificial intelligence and human beings can work together, and so that excessive conservatism born of fear and ignorance can be avoided.

In Jiang Tao's view, the General Data Protection Regulation (GDPR) implemented by the EU in May this year is too conservative and hinders the development of artificial intelligence to a certain extent. He said:

What will the human-machine relationship look like in the future era of artificial intelligence? I think it must be human-machine cooperation: humans have emotion, wisdom and passion, while machines can count, analyze and calculate at great speed. Combined, they can bring great progress to mankind. In the future era of artificial intelligence, it will not be artificial intelligence that is more powerful than human beings, but the human beings who master artificial intelligence.
