An artificial personality, able to think and to make decisions. This definition of artificial intelligence makes it difficult to assign an adequate legal qualification to it. What about the attribution of legal responsibility? What about the allocation of liability for damage caused by the use of AI? In summary, should AI and robots have rights and obligations?
Currently, robots are qualified as ‘legal objects’, because they cannot be regarded as natural or legal persons. After all, they have no civil status, assets, legal capacity or liability. Should robots have legal personality? Is it possible to grant legal personality to objects so that they can take part in legal transactions with their own rights and obligations?
For the time being, not at all. Belgian and European law currently offer no possibility of giving an object a legal identity. Nevertheless, the idea is being considered: the robot Sophia, for example, has already been granted citizenship status in Saudi Arabia.
In 2018, the European Parliament was already considering granting robots an ‘electronic personality’, a legal status similar to that of natural persons. In a resolution, Parliament wrote that robots should be held (partially or fully) liable for their actions. Robots would then be given legal personality and would be able to act in legal proceedings, as plaintiffs or as defendants.
Experts in robotics, artificial intelligence, law, medical science and ethics strongly advise against this. By assigning legal personality to AI, manufacturers and other actors could escape their legal responsibilities, which could lead to abuse.
The ethical implications of AI are closely intertwined with liability. For example, who is liable in the event of an accident with a self-driving car?
On the basis of the Belgian Product Liability Act (Wet Productaansprakelijkheid), the manufacturer is strictly liable for damage caused by a defective product. A robot, under its current qualification as a legal object, falls squarely within the scope of that Act. The liability of the custodian of a defective item (article 1384, paragraph 1 of the Belgian Civil Code) may also be invoked. This is where it gets tricky for owners and manufacturers of robots: a robot can cause damage as a result of an autonomous decision, one the owner or producer seems to have had nothing to do with. Is the robot then necessarily defective? This line of reasoning argues, of course, in favour of attributing electronic personality, but it creates a striking contradiction: robots would be given rights and duties precisely in order to protect people’s rights.
What would happen if robots were qualified as legal subjects in the future? Could the existing regimes still be applied? It does not seem easy to hold a robot liable on the basis of article 1382 of the Civil Code, since the subjective element of fault must be present. Indeed, for an act to qualify as faulty, the perpetrator must have been capable of fault at the relevant time, and the act must be imputable to him.
We promise, this is the last question: can robots be at fault? In other words, are they aware of their actions, and do they have the necessary control over them?
The societal debate is far from settled. If you have any questions on this matter, you can always contact us via email@example.com.
Written by Emiel Koonen, Legal Adviser theJurists, and Kris Seyen, Partner theJurists