
The European Parliament is looking into the possibility of classifying advanced robots as electronic persons. Many scientists consider this inappropriate. Both sides are right, in a way.
As heise reports, autonomous, self-learning systems could be granted 'personhood' rights in the future. A decisive criterion could be awareness of one's own existence. As if science fiction classics in books and film and the Boston Dynamics robots hadn't stirred us up enough already, autonomous, self-learning robots could now be granted rights and duties similar to those of humans.
Politicians usually lag months or even years behind current social and ecological problems. This is one of the rare occasions where politicians want to act before there is an acute need. But why?
A question of liability
The European Parliament cites the question of damage caused by robots as its main motivation. A stumbling block was certainly the first fatal accident involving a self-driving car, where speculation immediately arose as to whether the driver or the vehicle was to blame.
What if a robot makes its own decisions and causes damage to people or property? Is the machine liable? Or the person who originally equipped it with its artificial intelligence and let it loose on the public? And what if it is impossible to determine what led to the machine's fatal decision, so that no one can be held responsible?

It's not necessarily scenes from Robocop or Westworld that we have to think of here. China, the country of cheap labor, has equipped entire bank branches with robots instead of human employees. If such a robot advises on financial investments in the future and makes a mistake, the bank probably bears the risk. Things get more difficult with the Japanese robot lady who would like to become mayor of Tama City in western Tokyo. A mayor's decisions can have far-reaching consequences for the community. The makers of the robot are, of course, convinced that their machine can make better decisions than a human being. That remains to be seen.
Robotics scientists oppose the idea
The scientists who object to the EU Parliament's initiative in an open letter do not accept the liability argument and plead for a "decision without haste or bias" in this matter. They point out that some form of civil rights could be derived from personhood status, which would be incompatible with the Convention for the Protection of Human Rights and Fundamental Freedoms. And personhood status would not automatically solve the liability problem anyway. To date, 221 scientists have signed the letter, and their number is growing steadily.
Both sides are right somehow
The fact is that the question of liability should ideally be settled legally before the first case arises. In this respect, it is prescient that the EU Commission is dealing with the issue. Quite apart from the fact that there cannot be a solely European answer to this problem, it is simply wrong to rush into legal regulation of robots that are nowhere near as advanced as such a law would suggest.
It is troubling to imagine how humanity would handle this matter if we elevated machines to electronic persons and, after a serious mistake, simply punished or "euthanized" them like a fighting dog. If we release the person behind the machine entirely from responsibility, it cannot end well for the general public.

Dunja Hélène Ruetz's favourite subject is security. As a technology junkie, she is convinced that safety should always come first. The prospect of intelligent autonomous robots in everyday public life makes her shudder. After all, even Data got out of control a few times and became a danger to his colleagues and the Enterprise.