AI systems should be accountable and unbiased: EU releases AI ethics guidelines


On Monday (the morning of April 9, Beijing time), the European Union released a new set of guidelines on the ethical development of artificial intelligence (AI), intended to guide companies and governments in building ethical AI applications, according to the American technology site The Verge.

These rules are different from the "Three Laws of Robotics" of American science-fiction writer Isaac Asimov: they do not offer a ready-made moral framework for keeping murderous robots in check. Instead, as AI is integrated into fields such as health care, education and consumer technology, they address the vague, diffuse problems that affect society.

For example, if an AI system diagnoses you with cancer at some point in the future, the EU guidelines would ensure that: the software is not biased by race or gender; it does not override the objections of a human doctor; and the patient retains the right to choose whether to hear the AI's diagnostic opinion.

These guidelines are meant to prevent AI misbehavior and to ensure that no Asimov-style murder mystery is recreated at the administrative and bureaucratic level.

To help achieve this goal, the EU convened a group of 52 experts, who put forward seven requirements they believe future AI systems should meet. The details are as follows:

- Human agency and oversight: AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision the software makes.

- Technical robustness and safety: AI should be safe and accurate. It should not be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

- Privacy and data governance: personal data collected by AI systems should be kept secure and private. It should not be accessible to just anyone, and it should not be easy to steal.

- Transparency: the data and algorithms used to build an AI system should be accessible, and the decisions the software makes should be "understood and traced by human beings." In other words, operators should be able to explain the decisions their AI systems make.

- Diversity, non-discrimination and fairness: services provided by AI should be available to everyone, regardless of age, gender, race or other characteristics. Likewise, the systems should not be biased along these lines.

- Environmental and societal well-being: AI systems should be sustainable (that is, ecologically responsible) and should "promote positive social change."

- Accountability: AI systems should be auditable and fall within the scope of existing corporate reporting rules, so that they are protected by regulations already in force. Any potentially negative effects of a system should be acknowledged and reported in advance.
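As an illustration of how the non-discrimination requirement above might be tested in practice, here is a minimal sketch of a demographic-parity audit. This is my own construction, not part of the EU guidelines: the function names, the 0/1 decision encoding and the toy loan data are all assumptions.

```python
# Demographic-parity check: the rate of positive decisions should be
# similar across groups; a large gap is a signal of possible bias.

def positive_rate(decisions, groups, group):
    """Fraction of positive decisions (1s) received by one group."""
    selected = [d for d, g in zip(decisions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: 1 = loan approved, 0 = denied, with a group label per applicant.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups)
print(f"parity gap: {gap:.2f}")  # group a: 0.75, group b: 0.25 -> prints "parity gap: 0.50"
```

A real audit would use statistical tests and several fairness metrics (parity is only one), but even a check this simple shows why the guidelines call for access to the data behind a system's decisions.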

Some of these requirements, readers will notice, are quite abstract and hard to assess objectively (the definition of "positive social change," for example, varies from person to person and from country to country). Others are more straightforward and could be tested through government oversight. Sharing the data used to train government AI systems, for instance, could be a good way to fight biased algorithms.

These guidelines are not legally binding, but they may shape any future legislation the EU drafts. The EU has repeatedly said that it wants to be a leader in ethical AI, and with the General Data Protection Regulation (GDPR) it has shown that it is willing to enact far-reaching laws protecting the public's digital rights.

To some extent, however, circumstances have forced the EU into this role. It cannot compete with the United States and China, the world leaders in AI, on investment or cutting-edge research, so it has chosen ethics as its best bet for shaping the technology's future.

As part of this effort, Monday's report includes a so-called "trustworthy AI assessment list" to help experts identify potential weak points or dangers in AI software. Questions on the list include "Is the behavior of the system verified in unexpected situations and environments?" and "Have the type and range of data in the data set been assessed?"
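The first checklist question above, about verifying behavior in unexpected situations, lends itself to a concrete test harness. The sketch below is my own illustration, not taken from the EU list; the `risk_score` model, its input ranges and the helper names are all hypothetical.

```python
# A tiny harness that feeds a scoring function out-of-range inputs and
# checks that it rejects them instead of silently producing a score.

def risk_score(age, income):
    """Hypothetical model: refuse clearly invalid input, else score in [0, 1]."""
    if age < 0 or age > 130 or income < 0:
        raise ValueError("input outside supported range")
    return min(1.0, income / 100_000) * 0.5 + min(1.0, age / 100) * 0.5

def behaves_safely(fn, bad_inputs):
    """True if every unexpected input is rejected rather than scored."""
    for args in bad_inputs:
        try:
            fn(*args)
            return False          # silently scored nonsense input
        except ValueError:
            continue              # rejected, as desired
    return True

# Degenerate cases: negative age, impossible age, negative income.
bad_inputs = [(-5, 40_000), (200, 40_000), (30, -1)]
print(behaves_safely(risk_score, bad_inputs))  # True
```

In practice this kind of check would be automated with property-based testing over randomly generated inputs, but the principle is the same one the checklist question is probing: the system's failure behavior has to be verified, not assumed.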

This assessment list is only a preliminary measure: the EU will gather feedback from companies over the next few years and deliver a final report by its 2020 deadline.

Fanny Hidvégi, policy manager at the digital rights group Access Now and one of the experts who helped write the guidelines, said the assessment list is the most important part of the report. It "provides a practical and forward-looking perspective" on mitigating the potential harms of artificial intelligence, Hidvégi told The Verge.

"in our view, the EU is capable and at the forefront of this work," sidweji said, "but we believe that the EU should not stay at the level of ethical guidelines... But should rise to the legal level."

Others suspect that the EU is trying to steer the development of AI worldwide through influential ethics research.

Eline Chivot, a senior policy analyst at the Center for Data Innovation, a think tank, told The Verge: "We are skeptical of the approach being taken, that by creating a gold standard for ethical AI the EU can secure its place in global AI development. To be a leader in ethical AI, you must first be far ahead in artificial intelligence itself." (Si Mei)
