Lawyer Monthly Magazine - May 2019 Edition
Sounds like a ridiculous question, doesn’t it? A question that sounds as though it belongs in The Terminator is slowly becoming our reality. With robots and machines expected to perform half of all productive functions in the workplace by 2025, companies may be wondering whether they need to develop regulations to protect their shiny new automated workers. If we fast forward to a time when AI has advanced enough to create a world of robots that can convincingly simulate human intelligence (think I, Robot), the judiciary and governments will face an array of challenging questions regarding robots’ rights. Can a robot be harassed or abused? Can it be discriminated against? Should robots be compensated for their work and, if so, how? Do robots have a right to form their own union, and if they invent something, do they own that IP?

We can take Amazon as an example of how robotics is not as simple as we think it is. Back in October 2018, the multinational tech company faced bad press after its machine-learning specialists realised that their AI recruitment ‘robot’ - which they had been working on since 2014 - was discriminating against female applicants. Effectively, the AI tool had taught itself that male candidates were supposedly preferable; it was trained to vet applicants by observing patterns in previous applications and CVs and, unsurprisingly, as the tech industry was male-dominated, most of the CVs it analysed were from men. Thus, even though the tool was devised to be neutral and unbiased, the AI singled out phrases it deemed less desirable because they rarely appeared on the successful applications it had analysed, such as the word ‘women’ in the phrase ‘Chair at Women in Technology International’ listed among the recognised accreditations on a CV. So where the AI system was supposed to remove human bias altogether, and was specifically programmed not to be biased against women, it failed, leaving Amazon in a potentially sticky situation which could have led to litigation revolving around employment discrimination.

But if things had escalated, who would have been held liable? The machine-learning specialists specifically designed their tool to eradicate human bias. Were they negligent, having not tried hard enough? Or, if they had programmed it correctly, does responsibility fall on the AI’s shoulders, since, after all, the system was the one picking and choosing? Or would it land in Amazon’s giant lap, as surely there is at least one human, somewhere, vetting the process?

There are even controversial talks within the EU about giving robots a ‘personhood’ status. It feels like a surreal topic to be debating, but EU lawmakers are considering whether the human inventor or the robot should be held accountable for the robot’s actions. In this debate, the EU is toying with the notion that if personhood status were given to robots, they could be insured individually and held liable for damages; thus, if something went wrong at work, they would face the consequences. Some say this is common sense and would not grant them the same human rights, such as marriage, but would put them on a par with corporations. Others believe it is just a new way for manufacturers to avoid responsibility for the actions of the machines they create. If responsibility lands on robots’ shoulders, they could also be given the same rights as humans, especially in workplace situations, in order for them to better integrate and work symbiotically with their human colleagues.
This could potentially overhaul employment legislation, affecting unions, workplace violence, termination, IP and a whole array of other employment issues. And even though the philosophical elements of robots’ legal rights are far from a priority at this moment, security, privacy and safety are all high on the list of questions that robotics will bring to the surface in the regulatory world over the next decade. From whether (human) employees should be informed they are dealing with a robot - as it may not be obvious when they are liaising with someone outside their own department, such as HR - to who is liable for workplace bullying when the robot is not a legal person but is technically the harasser, robots’ workplace rights are more complex than we might ever have thought.
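For readers curious about the mechanics behind the Amazon example, the sketch below is a purely hypothetical, toy illustration - not Amazon’s actual system - of how a recruiting model trained on historically male-dominated hiring data can end up penalising the word ‘women’ even though gender is never an explicit input. All CV snippets, outcomes and figures here are invented for demonstration.

```python
# Purely hypothetical illustration, not Amazon's actual system: a toy CV
# screener trained on invented, historically biased hiring data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented CV snippets and past outcomes (1 = hired, 0 = rejected).
# The "hired" examples deliberately mimic a male-dominated hiring history.
cvs = [
    "captain of the chess club, built compilers",                     # hired
    "led the robotics society, systems programming experience",      # hired
    "chair at women in technology international, built compilers",   # rejected
    "women's coding network mentor, systems programming experience", # rejected
]
outcomes = [1, 1, 0, 0]

# Turn the text into word counts and fit a simple classifier on that history.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, outcomes)

# The learned weight for the token "women" comes out negative: the model has
# "taught itself" to mark down CVs containing that word, even though gender
# was never an explicit input - the pattern described in the article above.
idx = vectoriser.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])
```

Run on this toy data, the script prints a negative weight for ‘women’, which is the essence of the problem: the bias sits in the historical data the model learns from, not in any deliberately programmed rule.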