
Artificial Intelligence Regulation: The challenges ahead

Philip Micallef, Sunday, 14 April 2019, 09:17

The Government has just announced that it will be regulating Artificial Intelligence (AI). But let us be smart when it comes to Artificial Intelligence regulation.

The premise is that we are going towards an AI-based future, mostly for the common good. The progress of AI needs to be encouraged through investment, training and education and, when it is incorporated into existing applications, the regulators should consider the risks as well as the benefits before intervening: the regulation of AI should not be used to arbitrarily burden or slow down its development. That said, however, safety and ethics should be primary concerns as we move AI systems from the laboratory into the much more unpredictable real world.


The difficulty of Artificial Intelligence regulation

Unsurprisingly, however, not everything is straightforward. The subject of AI regulation poses considerable challenges. Issues of fairness and transparency are key, and two distinct concerns come to mind:

  • The need to prevent automated systems from making decisions that discriminate against certain groups or individuals.
  • The need for transparency in AI systems, in the form of an explanation for any decision.

A US Congress report on the regulation of AI stated:

Use of AI to make consequential decisions about people, often replacing decisions made by human-driven bureaucratic processes, leads to concerns about how to ensure justice, fairness and accountability...

Transparency concerns focus not only on the data and algorithms involved, but also on the potential to have some form of explanation for any AI-based determination. Yet AI experts have cautioned that there are inherent challenges in trying to understand and predict the behaviour of advanced AI systems.

The European Union has been thinking along the same lines. In its own policy document, apart from the usual point about what personal data can be collected, the same two concerns are raised and legal counter-measures are proposed.

The risk is that attempting to regulate for fairness could effectively outlaw any fully automated system from making a decision about a person. Equally, the requirement for a "right to an explanation of the decision reached after algorithmic assessment" could lead to other unintended consequences.

Non-discrimination

Non-discrimination is a right, and if discrimination is intentional then legal measures need to be taken. But discrimination can also be accidental, as in the case of a fully automated system that is trained rather than precisely encoded. Early image recognition systems attracted some bad press when dark-skinned people were labelled as 'gorillas'. This was most likely due to an imbalance in the training set; Google apologised and fixed the problem.

Now just imagine if, instead of being a fun app for labelling someone's photographs, it had been a medical application, perhaps a dermatology app or device trying to identify potentially cancerous moles. If most of the training data came from light-skinned people, the app could malfunction on samples from dark-skinned patients. The same could happen with very wrinkled skin, or the skin of someone with a rare condition that affects the 'normal appearance' (whatever that means) of skin. In all of these cases, it is important for the training data to contain examples of both benign and malignant moles on the widest possible sample of skin types.

The best way to address the issue of fairness is to start from a more diverse, and therefore larger, dataset. Notice that the goal is to ensure that smaller groups are treated fairly: this is a statistical argument, not an attempt to deal with individual cases.
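To make that statistical argument concrete, here is a minimal sketch in Python, using scikit-learn on purely synthetic data; the 'shift' parameter is an invented stand-in for any systematic difference in appearance between groups. Evaluating accuracy per group, rather than overall, is what exposes accidental discrimination, and the gap narrows as the under-represented group gains training samples:

```python
# Minimal sketch on synthetic data only; not a real medical model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features; `shift` stands in for a systematic
    # difference in appearance between groups (e.g. skin tone).
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=n) > 2 * shift).astype(int)
    return X, y

def per_group_accuracy(n_minority):
    X_maj, y_maj = make_group(5000, shift=0.0)        # well-represented group
    X_min, y_min = make_group(n_minority, shift=3.0)  # under-represented group
    model = RandomForestClassifier(random_state=0).fit(
        np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min]))
    # Score each group separately: an overall average would hide the gap.
    return (model.score(*make_group(2000, shift=0.0)),
            model.score(*make_group(2000, shift=3.0)))

for n in (20, 200, 2000):
    maj_acc, min_acc = per_group_accuracy(n)
    print(f"minority samples={n:4d}  majority acc={maj_acc:.2f}  minority acc={min_acc:.2f}")
```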

Transparency

The right to an explanation, by contrast, is less of a right: it is impossible to achieve in practice, so it should not be legislated.

Two arguments can be made. Firstly, we do not ask the same of human-based decisions. Ask your bank why a request for a loan was rejected, and you will be told that "your credit score was too low", or that the loan officer took the decision. Not a very transparent answer! In the first case you are faced with a very simplistic model, based on a couple of variables such as your income, credit rating or age. It does not capture your full financial picture, but it is simple enough to fit in our heads.
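To see how little such a model captures, consider this entirely hypothetical scoring rule, sketched in Python; the variables, thresholds and weights are invented for illustration, not taken from any real bank:

```python
# Hypothetical toy rule, invented for illustration; no real bank
# works from exactly these numbers.
def toy_credit_decision(income, credit_score, age):
    points = 0
    points += 2 if income >= 30_000 else 0       # invented cut-off
    points += 3 if credit_score >= 650 else 0    # invented cut-off
    points += 1 if 25 <= age <= 60 else 0
    return "approve" if points >= 4 else "reject"

print(toy_credit_decision(income=28_000, credit_score=640, age=31))  # -> reject
```

Precisely because it ignores almost everything about the applicant, a rule like this is trivially 'explainable': the explanation is the whole model.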

In the second case, you must deal with a very experienced loan officer who decided to reject your application. You are now faced with a human mind, one that is probably not fully aware of all the biases and misperceptions affecting it. The officer sizes up an applicant by their look, dress or accent, and all of those judgments get lumped under "experience"; but where is the true explanation? Even if you demand a more precise one, it will be an after-the-fact rationalisation.

We accept the human mind as an inscrutable black box, even though we are fully aware of its limitations. With AI, we are struggling with a new type of black box. The same debate is happening around autonomous cars: we accept 1.3 million deaths in traffic accidents per year worldwide, most of them due to human error, yet we instinctively demand perfection from the new black boxes.

Secondly, attempting to extract an explanation out of a modern Deep Learning model is bound to fail. Think for a second about the problem of deciding which advertisement to show, or which video to suggest, both problems best solved with AI. The companies that solve them have access to a very large amount of personal information about their users: browsing and search history; age, gender and education level; and many more personal attributes that they either know or can easily infer.

The new models can ingest and make effective use of thousands or millions of variables, and are vastly better than the simple graphs of yesterday, where a couple of lines delineated 'good' from 'bad'. The decision on what to suggest is now based on lessons learned from the data of millions of other people: which advertisement was shown to whom, and who clicked on what. This is utterly impossible to explain in one sentence, a paragraph, or a 1,000-page book. We cannot explain a really complex mathematical function learned from a mountain of data in a way that will satisfy a human. This is what we are facing, and legislating the need for an explanation will not make that contradiction disappear.
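A back-of-the-envelope calculation shows the scale involved. The layer sizes below are invented, and modest by industry standards, yet the resulting model already carries tens of millions of learned numbers:

```python
# Hypothetical network shape: 100,000 user/context features feeding
# two hidden layers and a single output score.
layer_sizes = [100_000, 512, 256, 1]

params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    params += n_in * n_out + n_out   # weights plus biases for one dense layer

print(f"{params:,} learned parameters")  # 51,332,097
```

Every one of those fifty-odd million numbers was tuned by the behaviour of millions of users; no sentence or paragraph can enumerate them meaningfully.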

Functional regulation

The AI genie is out of the bottle for good. AI is rapidly becoming one of the top competitive advantages for companies and countries, and once it makes its way deep enough into other industries, it may become the primary competitive advantage. Countries that implement overly aggressive AI regulation will fall behind, with serious economic and security consequences.

It is therefore important that regulators are very clear about what needs to be regulated: the inner workings of a Deep Learning model are a poor choice, because such AI regulation is tantamount to attempting to regulate mathematics. Instead, we should focus on specific applications of AI and regulate based on the performance of the function, not on how it is achieved. Autonomous cars should be regulated as cars: they should safely deliver users to their destinations in the real world and reduce the overall number of accidents; how they achieve this is irrelevant.
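As a closing illustration, here is a minimal sketch of what such function-based certification could look like; every name, scenario and threshold is hypothetical:

```python
# Hypothetical function-based certification: the regulator treats the
# system as a black box and measures outcomes only.
from typing import Callable

def certify(drive: Callable[[dict], str], scenarios: list[dict],
            max_incident_rate: float) -> bool:
    """Pass or fail on measured performance; the internals of `drive`
    (rules, Deep Learning or anything else) never enter the test."""
    incidents = sum(1 for s in scenarios if drive(s) != s["safe_action"])
    return incidents / len(scenarios) <= max_incident_rate

# Invented benchmark scenarios and threshold, for illustration.
scenarios = [{"obstacle": True, "safe_action": "brake"},
             {"obstacle": False, "safe_action": "continue"}]
black_box = lambda s: "brake" if s["obstacle"] else "continue"
print(certify(black_box, scenarios, max_incident_rate=0.01))  # True
```

The benchmark and the threshold are published; how any particular manufacturer meets them is, as argued above, irrelevant.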

 

