California's draft AI privacy rules signal an ambitious approach that surpasses those of other states. Released on Nov. 27 by the state's privacy agency, the proposal would cover a broad spectrum of automated tools, including artificial intelligence, resume-screening filters, and facial recognition.
The rules would grant California residents the right to prevent their personal data from being used in automated decision-making technology. This is particularly significant because the agency's home state includes Silicon Valley, so the rules could shape how tech companies develop AI, which relies heavily on training data sets that may include personal information.
While the formal process to adopt these regulations has yet to begin, the draft stands out for its comprehensive scope and reflects the agency's ambition to take a leading role on privacy. Board members plan to discuss the draft, which takes a broader approach than other state privacy laws.
The draft defines "automated decision-making technology" broadly, covering anything that uses computation to facilitate human decision-making, regardless of the degree of human involvement. This contrasts with laws like the EU's, which target only fully automated systems that operate without human intervention. Consumer advocates favor the broader definition, arguing that a human reviewer can simply rubber-stamp an AI system's output, so a human-involvement carve-out would create a loophole.
The draft also sets out detailed "pre-use" notice requirements, which would oblige businesses to inform state residents of their opt-out rights before processing any personal data. Consumer advocates praise the provision, while tech groups argue the opt-out should instead come after the automated decision is made, to preserve business efficiency.
California's agency also seeks to expand the situations that trigger opt-out rights beyond those in other states, potentially covering the profiling of students, employees, and people in public spaces, as well as behavioral advertising, minors under 16, and the use of data to train AI. The draft gives businesses some leeway, exempting them from providing opt-out or information-access rights in certain situations, such as when automated tools are used for security, fraud prevention, or safety.
However, critics argue that these exceptions could let companies circumvent opt-out rights, raising concerns that the security carve-out in particular could be misused. Further drafts are expected to refine the exceptions and address the challenges posed by advanced AI tools whose internal workings are opaque, such as ChatGPT.