Artificial intelligence (AI) profoundly affects our lives. From helping to decide which advertisements we see, to predicting the weather, to detecting cancer, AI is all around us and is rapidly transforming how we live.
Recognizing that some uses of AI, unchecked, may negatively impact society, the European Commission has proposed regulations to address the proliferation of AI. One area where the Commission views the use of AI as being particularly risky is in decisions concerning employment — i.e., systems that are utilized for selecting, promoting, terminating, monitoring and evaluating employees.
The goal of the proposed EU regulations is to unify the legal framework for the development, marketing and use of AI. The regulation pursues a high level of protection of general interests such as health, safety and fundamental rights.
Recital 36 of the proposed regulation states that “AI-systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons.”
The technology will enable human resources personnel to objectively analyze candidates' and employees' skills and weaknesses. For example, the German government recently enacted a new law modernizing the German Works Council Act. The new law gives employee representatives explicit co-determination rights over the introduction and use of AI in the workplace.
In France, algorithms are being used to predict which companies could go bankrupt as a result of the COVID-19 pandemic. Spain's government has just reached an agreement with trade unions and business associations requiring employers to inform workers' legal representatives about any algorithm or AI system used to manage employees that may affect their working conditions. This exemplifies how important AI technology has become for policymakers.
The impact of the new AI regulation will depend on the final wording developed in the ongoing policy-making process. Littler Mendelson P.C. expects the rules will be strict, as fundamental rights are affected and as the EU is also protective of such rights already in other areas (e.g., data protection).
Due to the shortage of skilled workers in the labor market, which is exacerbated by demographic change, new methods of hiring are fundamentally important for recruiting and for keeping people successfully employed in the long term. That is where AI comes into play and can be useful.
After the COVID-19 pandemic, both managers and employees will be used to working virtually. As a result, hiring could shift from in-person job interviews and assessment centers to online selection of employees. There are many possible applications in this field: some systems use biometric identification, while others simply match keywords against the materials applicants provide. Both kinds, however, are classified as high-risk systems under the proposed EU regulations.
For instance, chatbots, autonomous matching through a job portal, and pre-selection of application documents are systems that do not use biometric identification.
Chatbots can be used to answer candidates’ questions about the application process on the website of the employer, such as: “What documents do I need to submit?” This has the advantage that applicants can communicate with companies regardless of the day or time.
Additionally, it is possible to autonomously match an employer and an employee through AI on a job portal. On the basis of documents uploaded by the applicant, the system suggests companies and jobs for the user that should be a good match. On the other hand, recruiters who have a position open are also presented with a list of possible candidates compiled by the algorithm.
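The matching described above can be illustrated with a minimal sketch. This is not how any particular job portal works; it simply assumes that applicant documents and job postings have already been reduced to keyword sets, and ranks jobs by keyword overlap (Jaccard similarity). All names and data are hypothetical.

```python
# Illustrative sketch of job-portal matching: candidate profiles and job
# postings are reduced to keyword sets and ranked by Jaccard similarity.
# All keywords, job titles and function names are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Overlap between two keyword sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_jobs(candidate_keywords: set, jobs: dict) -> list:
    """Return job titles sorted from best to worst match for the candidate."""
    scores = {title: jaccard(candidate_keywords, kws) for title, kws in jobs.items()}
    return sorted(scores, key=scores.get, reverse=True)

candidate = {"python", "sql", "machine learning", "english"}
jobs = {
    "Data Analyst": {"sql", "python", "statistics"},
    "HR Manager": {"recruiting", "english", "labor law"},
    "ML Engineer": {"python", "machine learning", "docker"},
}

print(rank_jobs(candidate, jobs))  # best matches first
```

A real system would of course extract keywords from uploaded documents automatically and handle synonyms and weighting, but the core idea, scoring overlap between two profiles and ranking the results for both sides, is the same.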
Systems that use keywords to screen applications are particularly common. In such systems, the algorithm makes an initial pre-selection for HR personnel by searching the received applications for certain keywords relevant to the open position. These systems are called “applicant tracking systems” (ATS).
The first part of the application process is handled electronically: by sorting the candidates, the system filters out the most suitable ones. Such systems could technically be used to send automatic rejections to candidates who do not meet the requirements (whether the law will allow this is another question). When an employer is looking for special qualities and wants the system to read between the lines (e.g., to identify an applicant with strong leadership qualities), the AI has to form an impression of that candidate. To do so, the system may, for example, analyze the applicant's personality via a voice or video sample, with the aim of preventing mismatches and discrimination. This type of system therefore involves biometric identification.
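The keyword pre-selection step can be sketched in a few lines. This is a simplified illustration, not any vendor's actual ATS: the keyword list, threshold and application texts are hypothetical assumptions, and a real system would parse CV files and handle synonyms rather than searching raw strings.

```python
# Illustrative sketch of an ATS keyword screen: each application is scored
# by how many of the position's required keywords it contains, and those
# below a threshold are flagged for rejection.
# Keywords, threshold and application texts are hypothetical.

REQUIRED_KEYWORDS = {"project management", "budgeting", "leadership"}
THRESHOLD = 2  # minimum number of keyword hits to pass pre-selection

def screen(application_text: str) -> tuple[int, bool]:
    """Count keyword hits and decide whether the application passes."""
    text = application_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits, hits >= THRESHOLD

applications = {
    "A": "Five years of project management and budgeting experience.",
    "B": "Recent graduate interested in marketing roles.",
}

for name, text in applications.items():
    hits, passed = screen(text)
    print(f"{name}: {hits} hits, {'pass' if passed else 'reject'}")
```

The sketch also shows why such systems raise the concerns discussed below: an applicant who simply copies the right keywords into a cover letter scores well regardless of actual qualification.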
Biometric identifiers include facial recognition, fingerprints, voice recognition, iris recognition, retina scanning, finger geometry, DNA matching, digital signatures, walking gait, typing patterns, physical gestures (such as hand motion) and computer mouse use.
Biometric ID systems could be used in assessment centers, where they can lead or support the process. For example, if a video interview is conducted, whereby the applicant creates a recording in which they answer the questions modeled by the manager, the AI can transcribe what is said, analyze the intelligibility of the voice along with language competence, and capture visual signals using emotion tracking software or facial recognition.
Those biometric ID systems and the above-mentioned AI systems will be considered high-risk applications under the proposed EU regulations if they are used for employment purposes. In the workplace, such systems may become relevant for tracking employee performance, controlling access to the workplace, securing confidential information or timekeeping. Therefore, they must meet increased requirements.
It is crucial to guarantee that such systems are accurate. Before placing a high-risk AI system on the EU market or operating it otherwise, the system must undergo a so-called “ex-ante third-party conformity assessment,” which will include documentation and human monitoring requirements.
While the rules for law enforcement regarding biometric ID are very strict, the public debate currently focuses on the use of such systems by other public authorities or within the private sector. Critics argue that the current wording of the proposed regulations is not strict enough for these areas and needs further revision and stricter regulation. Employers will need to monitor future revisions of the EU regulations to understand to what extent they may use biometric ID in the workplace. However, it is difficult to predict whether or when the regulation will come into force — first negotiations will take place in the European Parliament and the European Council.
If the two institutions fail to reach an agreement, a conciliation committee is convened. Only when the text agreed by the conciliation committee is acceptable to both institutions is the legislative act adopted. If the legislative proposal is rejected at any stage of the procedure, or a compromise cannot be reached, the proposal is not adopted and the procedure ends.
Using biometric ID in hiring is not yet common in the EU, because capturing a person's personality is extremely difficult. That is why some companies have discontinued pilot projects with this kind of software. Furthermore, there is a risk of abuse: candidates could submit a carefully scripted text that the AI system would rate exceedingly positively, and a market for “perfect” texts that applicants rehearse and submit could quickly form. Additionally, the system works only with data provided to it by humans, which means it adopts the prejudices contained in that data. Discrimination can therefore occur if the system's template does not cover all relevant dimensions (e.g., race, sex, age, etc.).
Finally, the transformation can succeed only if HR personnel are prepared for the new applications. Even the best digital technology is of no use if its users cannot understand and operate it. Implementing these technologies in the workplace requires the right personnel training and development strategy.
Significance and Potential Effect on U.S. Employers
The EU appears to be aiming to recreate the regulatory influence it gained with the General Data Protection Regulation (GDPR). Therefore, the impact of the legislation could be similar to that of the GDPR, which has become the global privacy norm for the world's top corporations. Many of the concepts in the proposed regulations are directly inspired by the GDPR. The regulations are not meant to conflict with or contradict the GDPR. They are rather intended to work in combination with the GDPR, which restricts the use of remote biometric ID systems and controls the design, development and use of certain high-risk AI systems.
The regulations could affect American employers in the form of European regulators demanding access to a company's data, source code and algorithms. While this approach may have precedent in certain restricted instances, it is a wide regulatory expansion that could lack crucial safeguards. It could, for example, leave important intellectual property and commercial secrets vulnerable to hacking. So-called gatekeepers (for example, American corporations) could be compelled to share their data and algorithms with European counterparts.
Additionally, regulators around the world, including the U.S., are grappling with how to regulate the proliferation of AI — the 20th century tools we currently have are ill-suited for 21st century issues. Regulators abroad will no doubt study the EU regulations intently as they form their own policies.
About the Authors
Jan-Ove Becker is a shareholder at vangard|Littler advising on labor relations matters and newly emerging legal playing fields like robotics, artificial intelligence and automation.
Maria Rutmann is a legal research assistant at vangard|Littler.