Artificial Intelligence (AI) has been making its way into our society for several decades, from smart hoovers and virtual personal assistants to refrigerators as smart as their owners. Looking around, it is not difficult to find many use cases where AI assists the user on a voluntary basis. But AI is also present in many other, more hidden domains: what we get to see on our social media, the offers we get (or don’t get) for credit and risk insurance…
Does this trend also appear in the law? The “legaltech” experience shows that this is certainly the case for legal practitioners. But is this also the case for the litigant?
Artificial intelligence can best be described as a computer system that enables machines to perform tasks that usually require human intelligence. Spontaneously, we think of speech recognition, automatic lawn mowers or personally suggested YouTube videos.
But it doesn’t stop there. Thanks to AI, robots and computers are able to anticipate how people think and function. This allows them to discover certain truths (or lies). That such skills, even if they come from a machine, are a very useful tool in proving one’s case, seems evident.
Today, it is already possible to reconstruct distorted fingerprints or check the authenticity of signatures in a matter of seconds. This seems a sound use of the technology, one that could benefit legal certainty and the handling of cases.
Explainable Artificial Intelligence
The biggest problem today is that these technologies are not always objective or transparent. Much depends on whether the AI is explainable (XAI, or Explainable Artificial Intelligence) or not.
If it is, the software will set out its entire reasoning, or at least offer the possibility to do so.
Suppose it is an application for trademark protection: the software will then state which conditions are met or not, and why, before delivering its judgment.
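The idea can be illustrated with a minimal sketch. Everything here is invented for illustration: the two conditions, their wording and the function name are hypothetical simplifications, not an actual trademark test. The point is only that an explainable system records a human-readable reason for every condition it checks, so its conclusion can be read back.

```python
# Hypothetical sketch of an "explainable" rule-based check: alongside its
# verdict, it returns the reason for every condition it tested.
# The conditions and their wording are invented for illustration only.

def check_trademark(sign: str, is_distinctive: bool, conflicts_with_earlier_mark: bool):
    reasons = []        # the written-out reasoning, built up step by step
    satisfied = True    # becomes False as soon as any condition fails

    if is_distinctive:
        reasons.append(f"'{sign}' is distinctive: condition met.")
    else:
        reasons.append(f"'{sign}' is not distinctive: condition failed.")
        satisfied = False

    if conflicts_with_earlier_mark:
        reasons.append(f"'{sign}' conflicts with an earlier mark: condition failed.")
        satisfied = False
    else:
        reasons.append(f"'{sign}' does not conflict with an earlier mark: condition met.")

    verdict = "registrable" if satisfied else "not registrable"
    return verdict, reasons

verdict, reasons = check_trademark("ACME", is_distinctive=True,
                                   conflicts_with_earlier_mark=False)
print(verdict)              # registrable
for reason in reasons:      # the reasoning behind the verdict
    print("-", reason)
```

A non-explainable system would return only the verdict; here the list of reasons is what would let a court, or a litigant, scrutinise the outcome.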
Spontaneously, the slogan “Justice must not only be done, it must also be seen” echoes in everyone’s mind.
The law of evidence: beyond reasonable doubt
Like AI, the law is also evolving (but at a slower pace). In 2019, new legislation on the law of evidence was announced. The main objective was to cope with new technological developments. These new rules came into force on 1 November 2020.
The legislator has created an exhaustive list of means of evidence by which facts can be proven. These include, for example, deeds and extrajudicial confessions.
The importance of these means of evidence will depend on the context, the value and the object of the dispute.
For example, a dispute concerning a factual act valued at EUR 2,000 in a B2B context will be governed by a system of free evidence. This means that it is not the law but the judge who determines the value of the evidence provided.
The crucial point is that every piece of evidence must be lawfully obtained: it must be lawful in itself and also have been obtained in a lawful manner. This is where the shoe pinches. After all, how reliable is a programme that was specifically designed for a case by the plaintiff? Questions of objectivity and transparency will therefore have to be answered in great detail before the evidence can be accepted.
No soul in science – justitia ex machina?
The question now arises whether fledgling artificial intelligence can also play a part in the work of the magistracy in its judicial function.
Judging is, of course, based on the law, but it is also a deeply human activity. It has become clear how much AI can accomplish, but the genuine human touch is something that is missing. Victims will find it hard to accept that a computer will decide on compensation for personal injury after a person has been paralysed in a tragic accident. The judicial system would be completely written off in the media as inhumane if a computer were to decide on its own which parent has exclusive and autonomous custody of his or her child.
So, as it looks today, the essential role of the judge will remain unaffected and may even become more attractive. For less challenging and often repetitive aspects of the job, the judge will be able to be assisted by AI. Based on certain facts entered by the judge, an algorithm can make a proposal for a new ruling from a database of previous rulings. For a human being, this process can easily take many days, but with the help of AI, it could be reduced to a few minutes.
Artificial intelligence is, after all, dependent on previous judgments to generate its own proposals. New challenges will therefore remain reserved for human judges, as will new insights into existing challenges.
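The assistance described above can be sketched in a few lines. This is a deliberately naive illustration under invented assumptions: the tiny "database" of past rulings, the word-overlap (Jaccard) similarity score and the function names are all hypothetical, standing in for whatever retrieval method a real system would use.

```python
# Hypothetical sketch: given facts entered by the judge, retrieve the most
# similar past ruling from a small database using word-overlap (Jaccard)
# similarity. The rulings and the scoring method are invented for illustration.

def jaccard(a: str, b: str) -> float:
    """Similarity between two texts as overlap of their word sets."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    union = words_a | words_b
    return len(words_a & words_b) / len(union) if union else 0.0

# Toy database: facts of earlier cases mapped to the ruling that was given.
past_rulings = {
    "rear-end collision at low speed with material damage only":
        "damages of EUR 1,500 awarded",
    "late delivery of goods under a commercial contract":
        "contract dissolved at the seller's expense",
    "defamatory statements published on social media":
        "removal ordered plus symbolic damages",
}

def propose_ruling(facts: str):
    """Return the ruling of the most similar past case, plus that case."""
    best_case = max(past_rulings, key=lambda case: jaccard(facts, case))
    return past_rulings[best_case], best_case

proposal, precedent = propose_ruling(
    "collision with material damage to the rear of the car")
print(proposal)   # -> the ruling from the most similar past case
```

The dependence on precedent is visible in the code itself: `propose_ruling` can only ever return something that is already in `past_rulings`, which is exactly why genuinely new questions stay with the human judge.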
Human error: trained by AI?
And isn’t that where the shoe pinches? For what happens when the AI again and again makes the perfect preparation of a verdict, and the judge only has to “confirm” it?
Do we not then create the risk of an automatism, whereby after dozens of balanced and fair judgements prepared by AI, the judge “authorises” a new proposal for judgement as too obvious and with insufficient critical scrutiny?
A fair trial?
Another problem is compatibility with procedural principles and human rights. Article 6 ECHR guarantees every citizen a fair trial. This implies, among other things, that the judgment must be accessible and must state the reasons on which it is based. A judgment is only reasoned if it sets out the grounds for the decision. For AI to be useful in the judicial function, it must therefore be fully explainable!
In well-defined and rather technical areas of law (e.g. traffic offences), AI could already reach a decision entirely on its own. Even here, however, this will be difficult to legitimise. After all, according to Article 6 ECHR, a judge must make the decision independently and impartially. The judge may therefore not be influenced by other persons or cases and has to make a neutral decision. We can therefore already assume with a fair degree of certainty that a judge will never be bound by the outcome of a predictive algorithm. Recently, the plea for a fundamental right to human intervention in the decision-making process has grown louder and louder.
To what extent the judge will be able to be assisted by AI in the future is a question that only the European Court of Human Rights will be able to answer with certainty.
Bear in mind that, given the average age of magistrates today, this generation was already fully educated when the information society was just beginning. They therefore often have little or no faith in the technical revolution. The AI will have to be highly explainable if it is to overcome their scepticism.
XAI is hot today, but it is also a necessary condition for developers to realise anything with a social impact outside their R&D labs. If you have any questions about this, please contact us at firstname.lastname@example.org.
Written by Charles Deberdt, Trainee deJuristen, and Kris Seyen, Partner deJuristen