Yale Students Support Connecticut AI Employment Bill

Yale Law School students from the Worker and Immigrant Rights Advocacy Clinic testified before the Connecticut Labor and Public Employees Committee on March 10 in support of S.B. 435. Representing the Connecticut AFL-CIO, the students urged the legislature to require employers to disclose use of automated employment-related decision systems, to treat algorithmic discrimination as discrimination, and to vest enforcement authority with the Attorney General. Their written and oral testimony recommended strengthening transparency, meaningful human oversight, and enforcement mechanisms to prevent discriminatory hiring, firing, and surveillance practices by AI systems in both private and public-sector workplaces.
What happened
A team of students from Yale Law School's Worker and Immigrant Rights Advocacy Clinic (WIRAC) submitted written and oral testimony to the Connecticut Labor and Public Employees Committee on March 10 in support of S.B. 435, an act to regulate automated employment-related decision systems. The students were testifying on behalf of the Connecticut AFL-CIO and recommended stronger transparency and enforcement to guard against discriminatory outcomes. "S.B. 435 offers a chance to change that and take a step towards the economic future that Connecticut needs," the students wrote.
Technical details
The testimony focuses on three enforcement and governance pillars embedded in S.B. 435: disclosure, accountability, and limits on public-sector use. Under the bill, employers would have to disclose when they use automated employment-related decision processes and report any adverse actions taken based on those systems. The bill explicitly affirms that algorithmic discrimination is still discrimination and would bar state agencies from deploying AI that "materially affect[s] Connecticut residents' rights, liberties, and public benefits" without legal authorization. The students highlighted the bill's vesting of investigatory and enforcement authority in the Attorney General and proposed tightening its provisions on meaningful human oversight and transparency.
- Enforcement authority granted to the Attorney General for investigations and remedies
- Mandatory disclosure of automated decision system use and any adverse actions to affected workers
- Explicit prohibition on algorithmic discrimination, treated as conventional discrimination
- Public-sector restrictions preventing unauthorized AI that materially affects rights
Context and significance
This testimony fits into a broader trend of state-level regulation aimed at operational controls over AI in hiring, management, and surveillance workflows. For employers and vendors of hiring tools this raises near-term compliance obligations: documented provenance, impact assessments, audit logs, retention policies, and demonstrable human-in-the-loop safeguards. For designers and ML teams, the practical consequence is that bias testing, explainability, and recordkeeping must move earlier into development lifecycles and deployment pipelines.
What to watch
Follow whether Connecticut's legislature adopts the students' recommended enhancements to enforcement and transparency, how the Attorney General interprets investigatory scope, and whether similar bills in other states adopt comparable disclosure and enforcement models.
Scoring Rationale
This is notable state-level AI policy activity with concrete enforcement and transparency proposals that affect employers, vendors, and ML teams. The scope is important regionally but not yet a national precedent, reducing its overall systemic impact.