When we talk about artificial intelligence (AI) – which we have done a lot recently, including in my outline of liability and regulation issues on The Conversation – what do we actually mean? AI experts and philosophers are beavering away on the issue. But having a usable definition of AI – and soon – is vital for regulation and governance, because laws and policies simply will not operate without one.
This definition problem
This definition problem crops up in all regulatory contexts, from ensuring truthful use of the term “AI” in product advertising right through to establishing how next-generation automated weapons systems (AWSs) are treated under the laws of war.
True, we may eventually need more than one definition (just as “goodwill” means different things in different contexts).
But we have to start somewhere so, in the absence of a regulatory definition at the moment, let’s get the ball rolling.
For regulatory purposes, “artificial” is, hopefully, the easy bit. It can simply mean “not occurring in nature or not occurring in the same form in nature”. Here, the alternative given after the “or” allows for the possible future use of modified biological materials.
The knottier problem of “intelligence”
From a philosophical perspective, “intelligence” is a vast minefield, especially if treated as including one or more of “consciousness”, “thought”, “free will” and “mind”. Although traceable back to at least Aristotle’s time, profound arguments on these Big Four concepts still swirl around us.
In 2014, seeking to move matters forward, Dmitry Volkov, a Russian technology billionaire, convened a summit of leading philosophers – including Daniel Dennett, Paul Churchland and David Chalmers – on board a yacht.