The EU proposes a new AI regulation bill; violating companies could be fined 6% of revenue

Smart Things (public account: zhidxcom)

Compiled by | Qu Wangmiao

Edited by | Jiang Xinbai

Smart Things reported on April 22 that, according to foreign media, the European Union’s executive body on Wednesday proposed a bill that would restrict the use of certain artificial intelligence technologies and create a list of so-called “high-risk uses” of AI. Companies in the affected industries that seriously violate the rules could be fined up to 6% of their total annual revenue. It is one of the most ambitious efforts to date to regulate high-risk applications of artificial intelligence.

However, before the proposal can become law, it will need several years to win approval from the European Council and the European Parliament, and its scope and specific regulatory measures may be adjusted along the way.

The General Data Protection Regulation (GDPR), the privacy law the EU enacted in 2018, provided a template for broadly applicable data rules backed by heavy fines. Observers predict, however, that this bill will not necessarily have the same effect.

1. Police barred from facial recognition, but judges can grant broad exemptions

The bill directly prohibits certain practices. In addition to banning social credit scoring systems, the proposal would prohibit the use of “subliminal techniques” or artificial intelligence systems that exploit people with disabilities to “distort a person’s behavior” in ways that may cause physical or psychological harm.

EU officials hope to use the bill to restrict the police from using facial recognition technology.

Police would generally be barred from using so-called “remote biometric identification systems,” such as facial recognition, in public places, but judges could approve exemptions in cases such as child kidnappings or terrorist threats, and the technology could also be used to locate suspects in crimes such as fraud and murder.

“The scope of the exemptions is incredible,” said Sarah Chander, a senior policy adviser at European Digital Rights, a network of non-governmental organizations. Such a list of exemptions, she said, runs “somewhat contrary to the stated purpose of banning facial recognition.”

2. Providers of high-risk AI must “show their work,” with banks taking the lead

Providers of artificial intelligence systems used for high-risk purposes would need to submit documentation to regulators explaining how their systems work. Margrethe Vestager, executive vice-president of the European Commission, the EU’s executive body, said such systems would also need to demonstrate “appropriate human oversight” in how they are designed, how they are used, and in the quality of the data used to train the AI.

Large banks have taken the lead in walking regulators through their artificial intelligence algorithms, a review process meant to help guard against another global credit crisis. According to Andre Franca, a former director on Goldman Sachs’ model risk management team and now director of data science at the artificial intelligence startup causaLens, more companies will eventually adopt the same approach.

For example, Dr. Franca said that over the past 10 years banks have had to hire teams to walk regulators through the mathematical code behind their artificial intelligence models, sometimes running to more than 100 pages per model.

Dr. Franca said the EU could also send supervisory teams to companies to review in person whether their artificial intelligence algorithms fall into the high-risk categories specified in the proposal, such as systems that recognize faces or fingerprints, or algorithms that could affect a person’s safety. He added that regulators at the European Central Bank often review the code submitted by banks in workshops lasting several days.

The EU also said the new bill would not create new rules for most artificial intelligence applications, such as video games and spam filters. But some low-risk artificial intelligence systems, such as chatbots, would need to inform users that they are not real people.

“We have to make it clear to users that they are interacting with a machine,” Vestager said.
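To make that transparency requirement concrete, here is a small illustrative sketch of our own (it is not from the bill, the Commission, or any company named in this article) of how a chatbot might disclose at the start of a conversation that it is a machine; the function and message wording are hypothetical.

    # Hypothetical illustration only: one way a chatbot could satisfy a
    # "tell users they are talking to a machine" transparency rule.
    DISCLOSURE = "Note: you are chatting with an automated assistant, not a human."

    def reply(user_message: str, first_turn: bool) -> str:
        """Return the bot's answer, prefixed with a machine disclosure on the first turn."""
        answer = f"You said: {user_message}"  # placeholder for real bot logic
        return f"{DISCLOSURE}\n{answer}" if first_turn else answer

    print(reply("Hello!", first_turn=True))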

Deepfakes, that is, software that swaps faces in videos, would also need to carry such labels. Ukraine-based NeoCortext, developer of the popular face-swapping app Reface, said it already labels its content and will work to follow the EU guidelines. “For fast-growing start-ups, the challenge now is keeping up with the rules,” said NeoCortext CEO Dima Shvets.

3. Opinions diverge, and the new bill will take time to come into effect

Although some digital rights activists welcome parts of the proposal, they say other parts appear to contain too many loopholes. Others in the industry argue the proposal will hand an advantage to companies in China that do not face such restrictions.

“This will make building artificial intelligence in Europe very expensive or even technically infeasible,” said Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, a technology think tank. “The United States and China will look on with interest as the European Union hobbles its own start-ups.”

However, some technology-industry lobbyists believe the bill is not overly strict, since it imposes tight oversight only on certain so-called high-risk artificial intelligence applications, such as software used in critical infrastructure and algorithms the police use to predict crime.

“It is a good thing that the European Commission has adopted this risk-based approach,” said Christian Borggreen, vice president and head of the Brussels office of the Computer & Communications Industry Association, which represents many large technology companies, including Amazon, Facebook, and Google.

Julien Cornebise, an honorary associate professor of computer science at University College London and a former Google research scientist, said the new rules will not necessarily have the same impact as the GDPR, the General Data Protection Regulation the EU introduced in 2018, which constrains the large number of organizations that collect, transmit, retain, or process personal information connected to the EU’s member states.

“Artificial intelligence is a moving benchmark,” he said. “Our mobile phones are doing things that would have been considered ‘artificial intelligence’ 20 years ago. One risk of the new bill is that the definition of ‘artificial intelligence’ it regulates keeps changing, so the rules could quickly become inapplicable or obsolete.”

Conclusion: High-risk applications of artificial intelligence urgently need regulation

In recent years, the European Union has sought to take the lead in drafting and enforcing new rules aimed at curbing the alleged excesses of large technology companies and containing the potential dangers of new technologies, in areas ranging from digital competition to online content moderation. The EU’s GDPR previously provided a template for broadly applicable regulation, and other countries, as well as some U.S. states, have since adopted similar measures.

As Vestager put it, “Our regulations are designed to address the human and social risks associated with specific uses of artificial intelligence. We think this is urgent. The European Union is the first to propose such a legal framework.” As artificial intelligence develops further and is more widely applied, more regulation of this kind may follow.