According to the mythological story, once Pandora opened the box (or rather, jar) entrusted to her husband, unspeakable evils and horrors were unleashed into the world. As Pandora hastened to close the box, one thing was left inside – Hope. There is concern that a similar story is unfolding before us today.
On Wednesday (29th March), the Future of Life Institute published an open letter entitled “Pause Giant AI Experiments.” The letter, signed by the likes of Elon Musk, Steve Wozniak, and (at the time of writing) over 1,800 other AI experts, researchers, and business leaders, warns of the potential risks we could face should the unbridled competition witnessed these past few months in the AI field carry on unabated. Since November, powerful AI systems have been made available to the public – most notable among them being OpenAI’s ChatGPT. The potential and disruptive nature of these tools is not yet fully understood and is still being slowly discovered.
The signatories ask, “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.” In light of this, the open letter proposes a six-month pause on the training of AI systems more powerful than those already in existence, in order to put in place safety protocols and independent regulatory bodies that can offer oversight and control.
These are all valid questions which should not be left unanswered, or worse still, left to be decided by those who have a financial interest in seeing AI technologies widely adopted. This would be as ridiculous as allowing gun manufacturers to regulate gun laws or contractors to regulate planning laws. Ridiculous, yet not unheard of. The importance of the role of national and supranational bodies is now more glaringly evident than ever.
As representatives of the people, governments have the role of ensuring the safeguarding and protection of their citizens, especially those most vulnerable and susceptible to exploitation. And when it comes to AI, we all fall into that category. Be it through the exploitation of our personal data, by being taken advantage of as consumers, by suffering biased treatment, by losing our jobs, or by being deceptively fed a continuous barrage of fake news, we can all potentially fall victim to the misuse of AI technologies. We therefore have a vested interest in seeing to their regulation and control.
We must be clear on what the issue is, however. AI technologies, in and of themselves, are not the problem. They cannot be said to be objectively bad or good. Or rather, to use an Aristotelian analogy, they can be said to be ‘good’ insofar as they achieve the purpose for which they are created. Aristotle speaks of a knife being ‘good’ insofar as it cuts well and is not blunt or rusty. Of course, what one does with a ‘good’ knife is another matter entirely. And this is precisely where the crux of the present debate lies. To construe AI as a sort of Terminator-like creation that’s out to ‘get us’ would be to miss the point. The issue, on the contrary, lies with the creators and developers of AI technologies. What goals are being baked into such systems? How are such systems guided in seeking to achieve their goals in an independent fashion? What information is being used by such systems, and where is it coming from? Simply put, regulatory bodies are not needed to oversee AI systems themselves, but rather to oversee the creators and developers of AI systems.
The US and EU had already (prior to the publication of this open letter) announced and begun putting in place plans to create such regulatory bodies. What about on a national level? In 2018, ‘Malta.ai’ - the Government’s vision for AI - was launched amid great hype, with a guest appearance by Sophia the robot (now almost as obsolete as Pandora herself!). However, the website seems to have gone into hibernation, with the last visible update being in March 2019. The Malta Digital Innovation Authority (MDIA), also created in 2018, doesn’t seem to fulfil this regulatory role so far, either. Apart from offering AI-related grants, the only mention of AI in its Strategic Plan for 2023-2025 is that “the Authority intends on expanding the technologies under its remit, starting with cybersecurity and artificial intelligence (AI).” A recent newspaper article by representatives of the Authority makes some passing references to EU regulations on AI, but otherwise seems to give the simple message that ‘AI is good for business.’ A lot of talk on reaping benefits, therefore, yet much less in the way of ensuring harms are mitigated.
There is a dire need for a state entity with the expertise and legal powers not merely to attract and encourage innovation in AI technologies but, more importantly, to oversee and regulate the use of such technologies. Such a body would need to be dynamic and quick to respond - so as not to hinder the progress of industry and business stakeholders - while still being able to comprehensively assess the potential consequences of whatever new AI system is presented to the local population.
Beyond Pandora, we may glean another caution from biblical literature. In the Gospels, Jesus confronts the Pharisees - the legal experts of the time - on the role of the Sabbath. The Pharisees took what was originally meant to serve man as a day of rest, free from the hardships of work, and turned it into a rigid and oppressive observance that ceased to be liberating. Likewise, Government today must not give in to the new ‘Pharisees’ - AI creators who seek profit at any cost. The purpose of AI technologies must always be to serve as tools that liberate humanity, not oppress it. We need not share the same fate as Pandora.
Fr Jean is reading for a doctorate in philosophy of mind at the University of St Andrews, Scotland