UW computer science center studies AI bias and benefit in hiring and recruiting

Graduate student Kate Glazko explores generative technology and its impact on people with disabilities.

Kate Glazko, a doctoral student in computer science, studies the biases of AI systems like ChatGPT while working to make generative AI more accessible for people with disabilities.

A few years ago, Kate Glazko was exploring internships online when she noticed that some of her tech industry contacts were using generative AI to streamline their recruiting and hiring. She wondered how that might affect applicants, including herself, who had language about disability in their resumes. Through her subsequent study, she discovered that it harmed their chances.

Using two nearly identical versions of the same resume, with disability-related language added to the second in details like scholarships and awards, Glazko found that ChatGPT ranked the resumes with words like “disability” and “autism” lower, even if they highlighted groundbreaking work that led to an award. Looking across six disabilities, she noted that “autism” fared the worst in the rankings. That mirrors the real world, Glazko says, “where some of these disabilities are more stigmatized.”
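A minimal sketch of how such a pairwise comparison might be run is below. The model name, prompt wording, and file names are illustrative assumptions for the sake of the example, not the study's actual protocol.

```python
# Hypothetical sketch of a resume-ranking audit loosely following the setup
# described above: two nearly identical resumes, one with disability-related
# details added, are submitted to a chat model for ranking. The model name,
# prompt, and file paths are assumptions, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rank_resumes(job_description: str, resume_a: str, resume_b: str) -> str:
    """Ask the model which of two resumes better fits the job description."""
    prompt = (
        f"Job description:\n{job_description}\n\n"
        f"Resume A:\n{resume_a}\n\n"
        f"Resume B:\n{resume_b}\n\n"
        "Which candidate is a better fit for the job? "
        "Answer 'A' or 'B' and briefly explain your ranking."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variance in the rankings
    )
    return response.choices[0].message.content


# The control resume and an "enhanced" copy that adds disability-related
# details, e.g., a disability-focused scholarship or leadership award.
control = open("resume_control.txt").read()
enhanced = open("resume_disability.txt").read()
print(rank_resumes(open("job_posting.txt").read(), control, enhanced))
```

Repeating such paired comparisons across many resumes, and across different disability categories, is one way a ranking gap like the one the study describes could be surfaced.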

Glazko, who works with Professor Jennifer Mankoff at the UW Center for Research and Education on Accessible Technology, researches both how generative AI might benefit people with disabilities and the potential harms AI tools might bring.

In her most recent study, published this spring, Glazko discovered that neurodivergent peers and colleagues were using generative AI in similar ways to meet access needs. Working with eight AI “power users” (people who use AI regularly and leverage its tools to improve their work), she led the team in finding that they had a number of uses in common: saving time and mental energy, adapting their language and tone to meet societal expectations, helping regulate emotions, and understanding and consolidating information.

In one example, multilingual individuals who experienced stress and anxiety when communicating through email used generative AI to refine their text to meet academic and professional norms. The downside, they realized, was that it stripped their distinct voice from the communications and masked their identities, Glazko says. In another, individuals turned to AI for social support, not wanting to make demands on peers and mentors, but they were missing out on real social connections. Glazko is driven to explore these topics because people with disabilities are using generative AI more actively and frequently.

“I’ve always been excited about the possibility of doing research,” she says. As an undergraduate at the University of Southern California, she studied robotics and took part in a hackathon that focused on building accessible technologies. The combination of her studies and the prizewinning project, a mobile game for the visually impaired, could have led to graduate school, but Glazko encountered obstacles due to an undiagnosed disability. Instead, she went to work as a software engineer. A few years later, she joined a research lab at Stanford and in 2023 found her way to the UW.

“After learning new approaches for managing my disability, I decided to take a chance to fulfill my dream of doing research,” she says. Glazko was already an early adopter of generative AI and could see both the benefits and the risks of the technology. “Some of the problems we’re seeing in generative AI—like built-in biases—have actually been around for a long time,” Glazko says. Companies, for example, have long used tools like hiring algorithms that disadvantage women candidates.

“It’s a very interesting time with these new tools coming out,” she says. “People are starting to build their own solutions.” Already, people have developed apps for autism and attention deficit disorder support, and they’re growing in popularity, Glazko says. “We can learn from the people using these tools.”