
Artificial intelligence and its legal implications in the workplace

Published on Saturday, 03 Jun 2017

Robots and the ever-increasing use of artificial intelligence (AI) are currently a blazing hot topic. On a daily basis, we hear of everything from driverless transport systems to highly effective intelligent tutoring systems that adapt to an individual student’s cognitive needs.

Close to home, the Taiwanese manufacturer Foxconn has introduced more than 40,000 robots, known as “foxbots”, into its factories in China. While this is in the context of a Chinese workforce of more than a million people, it is part of Foxconn’s evolving plan towards full automation. Foxconn, the world’s largest manufacturer, has said it has the capacity to introduce more than 10,000 foxbots each year into the Chinese workplace.

All sectors of business are affected by the AI revolution. IBM’s AI platform, Watson, is advising doctors on treatments in several US hospitals and will be used to review complex medical histories in Germany so as to identify potential diagnoses. Meanwhile, in financial services, RBS and NatWest recently announced they will be using the virtual chatbot “Luvo” to deal with simple customer service queries in the UK. Initially, the robot will be able to answer 10 questions, but it is intended to assist with increasingly complex issues by learning from human interactions.

The issue of robots and AI is also a highly politicised debate, with many different approaches and agendas playing out. Even Donald Trump has entered the fray: “We’ll make robots too. It’s a big thing. Right now we don’t make robots. We don’t make anything. But we’re going to. I mean, look, robotics is becoming very big and we’re going to do that,” The New York Times reported him as saying last November.

There are clearly differing views about the impact of AI in the workplace and whether or not it is a force for good. Whatever an individual’s current view, though, it seems abundantly clear that all businesses – from banking to retailing – are likely to be affected by AI, and employers will need to work through the issues. The primary issue is, of course, the impact of any robotic or AI development on human employees.

Whenever a workplace is in flux and the needs of a business change, there will be winners and losers. For those whose skills are no longer required, there will be the inevitable redundancies, with the costs, human challenges and personal casualties that always follow. We are also likely to see key changes in the skills needed and the composition of the workforce, as AI is likely to depress wages for lower-skilled work while human AI innovators become more highly paid.

Beyond the issue of increased productivity, there is the matter of workplace integrity. For example, what happens if a robot plays a part in harassment or degrading behaviour? Microsoft’s chat bot, Tay, was removed from Twitter soon after its launch in March 2016 because it had learned and tweeted racist and offensive remarks.

The purely legal analysis currently focuses on the fact that a robot is probably not capable of harassing an individual because it is not a legal “person”. That, however, is clearly not the end of an organisation’s responsibilities legally, morally, or reputationally. Even within the current legal confines (which are running well behind the pace of technological innovation), we consider that a robot may well be capable of creating or contributing to a hostile workplace environment. As AI develops and becomes more ubiquitous, will the law change to allocate responsibility and provide redress for such conduct? Employers may at the very least need to deal with grievances and employee engagement issues in such circumstances.

AI is also being developed to read facial expressions and body language. In the context of recruitment, if this kind of data regarding job applicants is captured, employers will need to consider data protection issues when storing the information. Prospective employers will also need to be aware of potential discrimination issues with the way such information is used. For example, “scanning” applicants in this way may identify physical distinctions and other characteristics which could fuel allegations of less favourable treatment at the selection stage.

Robots do not have a legal personality and it seems improbable that this will change in the near future. However, the European Parliament suggested in May 2016 that robot workers should be classed as “electronic persons”, and Microsoft founder Bill Gates has been calling for robots that replace human workers to be taxed. While neither idea has yet received widespread support, it is clear that this debate, and how we move forward legally and morally, will have an impact on us all.

This article appeared in the Classified Post print edition as Artificial intelligence and its real-world impact.