Report finds AI developers failing to meet LGBTQ+ user needs

QueerTech's report, produced in partnership with Abacus Data and covered by BetaKit, surveyed 100 Canadian AI product developers in December 2025 and found notable inclusion gaps: nearly 20% of respondents said they had never encountered a safety consideration for LGBTQ+ people, and less than half believed their AI products meet the needs of LGBTQ+ users, versus 65% who said their products meet the needs of the general population. QueerTech told BetaKit that 11% of submitted survey responses were homophobic, transphobic, or hateful; co-founder and CEO Naoufel Testaouni attributed roughly half of those to ignorance and half to malice. BetaKit also quotes AI Minister Evan Solomon warning that narrow teams building for narrow use cases will produce narrow results.
What happened
QueerTech released a report, produced in partnership with Abacus Data and reported by BetaKit, based on an online survey of 100 Canadian AI product developers conducted in December 2025. Per BetaKit, nearly 20% of respondents said they had never encountered a safety consideration for LGBTQ+ people, and less than half believed their AI products meet the needs of LGBTQ+ users, compared with 65% who said their products meet the needs of the general population. QueerTech told BetaKit that 11% of submitted survey responses were homophobic, transphobic, or generally hateful in tone; published samples ranged from the dismissive ("why would we?") to the hostile ("not high on our list, sounds like a DEI woke nightmare company"). QueerTech co-founder and CEO Naoufel Testaouni told BetaKit that about half of the responses could be attributed to ignorance and the other half were malicious. BetaKit additionally quoted AI Minister Evan Solomon: "If AI is built around narrow teams and narrow use cases, [by] people with narrow experiences, they will give narrow results." BetaKit noted that the sample was small but that respondents worked across leadership, engineering, product, operations, and responsible AI roles.
Editorial analysis - technical context
The findings point to two distinct risks that practitioners routinely monitor: dataset and labeler bias, and design decisions made without representative domain expertise. When product teams lack exposure to LGBTQ+ experiences, blind spots tend to surface in data collection, annotation guidelines, and evaluation metrics. Teams that have tried to operationalize inclusive gender representation commonly emphasize explicit dataset auditing, inclusive label taxonomies, and targeted user testing as mitigations; a minimal sketch of the first step follows below.
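To make "dataset auditing" concrete, here is a minimal, hedged sketch of a pre-training corpus audit that measures how often different pronoun and identity-term groups appear at all. The term lists, the audit_corpus function, and the toy corpus are illustrative assumptions, not anything from the QueerTech report; a production audit would use linguist-reviewed lexicons and disambiguate terms like "they," which doubles as a plural pronoun.

```python
# Minimal dataset-audit sketch: report the share of documents mentioning
# each term group, to surface representation gaps before training.
# Term lists and the toy corpus are illustrative assumptions only.
import re
from collections import Counter

TERM_GROUPS = {
    "binary_pronouns": ["he", "she", "him", "her"],
    # NOTE: "they"/"them" also serve as plural pronouns; a real audit
    # would disambiguate with context rather than bare token matching.
    "nonbinary_pronouns": ["they", "them", "xe", "ze"],
    "lgbtq_terms": ["lesbian", "gay", "bisexual", "transgender",
                    "queer", "nonbinary"],
}

def audit_corpus(corpus: list[str]) -> dict[str, float]:
    """Return, per term group, the fraction of documents that mention it."""
    hits = Counter()
    for doc in corpus:
        tokens = set(re.findall(r"[a-z']+", doc.lower()))
        for group, terms in TERM_GROUPS.items():
            if tokens & set(terms):
                hits[group] += 1
    return {group: hits[group] / len(corpus) for group in TERM_GROUPS}

if __name__ == "__main__":
    corpus = [
        "She updated her profile and shared it with them.",
        "He reviewed the nonbinary style guide before the release.",
        "The queer community group met on Tuesday.",
    ]
    for group, share in audit_corpus(corpus).items():
        print(f"{group}: {share:.0%} of documents")
```

Low coverage for a group does not prove harm by itself, but it flags where annotation guidelines and evaluation sets deserve a closer look.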
Industry context
Developer attitudes, whether hostile or indifferent, raise the odds that exclusionary behavior persists through product design and deployment pipelines. For practitioners, the operational consequence often shows up as higher false positive or false negative rates on gendered classifications, misgendering by language models, or insensitive personalization; disaggregated evaluation, sketched below, is the standard way to surface such gaps. BetaKit's reporting that HR professionals and recruiters have observed discrimination linked to pronoun use further underscores the downstream workplace and hiring implications highlighted in the report.
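As a rough illustration of what disaggregated evaluation looks like in practice, the sketch below computes per-group false positive and false negative rates over binary predictions. The group names, record format, and toy data are hypothetical assumptions for illustration, not figures from the report.

```python
# Hedged sketch of disaggregated evaluation: compute false positive (FPR)
# and false negative (FNR) rates per user group, the failure mode the
# analysis above describes. Group labels and toy records are assumptions.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples, binary labels."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            if y_pred == 0:
                c["fn"] += 1  # missed positive
        else:
            c["neg"] += 1
            if y_pred == 1:
                c["fp"] += 1  # spurious positive
    return {
        group: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else float("nan"),
            "fnr": c["fn"] / c["pos"] if c["pos"] else float("nan"),
        }
        for group, c in counts.items()
    }

if __name__ == "__main__":
    toy = [
        ("general", 1, 1), ("general", 0, 0), ("general", 0, 0), ("general", 1, 1),
        ("lgbtq", 1, 0), ("lgbtq", 1, 1), ("lgbtq", 0, 1), ("lgbtq", 0, 0),
    ]
    for group, rates in error_rates_by_group(toy).items():
        print(group, {k: round(v, 2) for k, v in rates.items()})
```

A material gap between groups on either rate is the quantitative signature of the blind spots the report describes, and is the kind of metric an inclusive-testing requirement would ask teams to track.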
What to watch
Indicators worth following include whether future, larger surveys replicate the 11% hateful-response rate BetaKit reported; whether teams adopt formal gender-diverse evaluation rubrics or adjust annotation guidelines; and whether regulators, procurement teams, or major customers begin requiring demonstrable inclusive-testing practices for AI products. Reporting to date includes no public statements on remedial steps from the named organizations, and BetaKit characterizes the sample as small, so replication with larger samples will be important for assessing scale and generality.
Scoring rationale
The report highlights measurable developer-level bias and inclusion gaps that affect model fairness and safety, a notable concern for practitioners. The sample size is small, limiting immediate generality, so the story is important but not industry-shaking.