The recent enactment of New York's AI employment law has spurred similar regulatory efforts in other states and countries. What does this mean for the use of AI in hiring?
Any groundbreaking technology that permeates so many facets of our lives naturally invites a healthy dose of skepticism. In human resources, a field where decisions made by people shape the futures of other people, that skepticism is especially warranted.
Governments worldwide are enacting legislation to closely monitor this technology, and the latest measure affects recruiters and talent acquisition professionals in the city that never sleeps. As of July 1, hiring teams in New York City face new regulations governing the use of AI in hiring, via what the Department of Consumer and Worker Protection terms "automated employment decision tools" (AEDTs). The measure aims to ensure that companies use AI fairly and impartially in their hiring processes, addressing concerns about potential bias.
The law, passed by the City Council in 2021, has drawn both support and criticism. As it stands, the regulation covers potential bias based only on gender, race, and ethnicity, leaving factors such as age and disability outside its scope.
Under the law, organizations must disclose to job applicants that AI is part of the decision-making process. They must also undergo yearly audits by independent entities to verify that the software is free of bias, and they must make the audit results public. Violations are subject to fines.
On the applicants' side, job seekers have the right to ask whether a potential employer uses AI to collect and analyze their data, and what specific information is being analyzed.

This New York law is the first of its kind in the United States, marking a transformative juncture in the rapidly evolving age of AI, and it could serve as a blueprint for other jurisdictions worldwide seeking to ensure equitable hiring practices. The District of Columbia, along with states such as California, New Jersey, and Vermont, is formulating its own frameworks for overseeing AI in hiring. Illinois and Maryland have already enacted legislation limiting particular AI applications in contexts such as workplace surveillance and candidate screening. The European Union has introduced its own version of an AI act, covering not only recruitment but also areas such as transportation and education. These regulations are in their infancy and are likely to mature over time.