Computer scientist is helping the deaf and blind feel at home in a wired world

Richard Ladner is having computer problems. The volume is turned up and the “mute” box is unchecked, but there's no sound coming out of his speakers.

Somehow it’s reassuring to know that even the UW’s Boeing Professor in Computer Science and Engineering encounters a recalcitrant PC now and then. The deeper irony is that Ladner’s own work is aimed at making technology easier to use — although he’s not addressing garden-variety computer glitches. Instead, Ladner is developing a variety of accessibility technologies to help people who are blind or deaf use computers, communicate and — perhaps closest to his heart — learn.

The work is starting to attract notice. Last year, Ladner received a $10,000 Purpose Prize, an award for social innovators over age 60. One of his projects recently won a $50,000 award from the Mellon Foundation.

Witness Ladner’s reaction to his misbehaving desktop machine, and the reason for his success becomes obvious. Over the next hour, Ladner fires off an e-mail to ask about the status of a server, pops under his desk to check that the speakers are fully plugged in, tries to do the same operation on a different computer — all while carrying on a wide-ranging conversation. Finally, he restarts his machine. Tentative diagnosis: too many browser tabs open.

Ladner has been trying to demonstrate WebAnywhere, an application that helps blind people navigate the Internet by transforming Web pages into speech. Unlike other screen readers, WebAnywhere, the brainchild of Ladner’s graduate student Jeff Bigham, is free, requires no special software and can be used on any computer. It doesn’t just make the Web accessible to the blind; it opens up an entire lifestyle that sighted people often take for granted — looking up a bus schedule on a friend’s computer or checking e-mail at an Internet café.

Another project, MobileASL, would make sign language conversations possible over cell phones, enabling deaf people to communicate while on the go. The tricky part is that a sign language conversation needs to be transmitted at 10 frames per second in order to be intelligible. So Ladner’s group has been collaborating with electrical engineering professor Eve Riskin to improve data compression for the relatively slow U.S. cell phone network.
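
A back-of-envelope calculation suggests the scale of the problem. The numbers below are illustrative assumptions rather than figures from the MobileASL project, but even a small video frame sent uncompressed at 10 frames per second would overwhelm an older cellular data link many times over:

    # Rough estimate of why compression matters for sign language video on a
    # slow cellular link. All numbers below are illustrative assumptions,
    # not MobileASL specifications.

    frame_width = 176        # pixels (QCIF, a common small video frame size)
    frame_height = 144       # pixels
    bits_per_pixel = 12      # uncompressed YUV 4:2:0 video
    frames_per_second = 10   # the intelligibility threshold cited above

    raw_bits_per_second = frame_width * frame_height * bits_per_pixel * frames_per_second
    raw_kbps = raw_bits_per_second / 1000
    network_kbps = 30        # assumed usable throughput on an older cellular data link

    print(f"Uncompressed stream: {raw_kbps:.0f} kbit/s")
    print(f"Assumed network budget: {network_kbps} kbit/s")
    print(f"Required compression ratio: roughly {raw_kbps / network_kbps:.0f}:1")

Run as-is, the sketch reports a stream of roughly 3,000 kbit/s against a 30 kbit/s budget, a compression ratio on the order of 100 to 1.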

Ladner, the hearing son of two deaf parents, has seen firsthand the difference that technology can make in the lives of the disabled. “I remember when my parents got a TTY” in the early 1970s, he says. A TTY, or teletype, machine enables deaf people to communicate over a telephone land line. “It changed their lives forever.”

Anna Cavender, a doctoral student in computer science and engineering, signs “see you later” on a cell phone using the MobileASL system developed by Richard Ladner.

Yet his own interest in working on accessibility technologies came much later. Ladner joined the UW faculty in 1971, the ink on his Berkeley mathematics Ph.D. barely dry. For the better part of three decades, he wrestled with arcane questions of theoretical computer science: How fast can computers multiply two numbers? How can we make them multiply faster?

Then, in 2002, a graduate student named Sangyun Hahn joined the computer science department. Hahn, who became blind as a young child in South Korea, was smart and driven, but struggled in his classes. A machine converted the words in his textbooks to Braille, but the graphs had to be painstakingly translated into tactile format, one by one, by hand. He couldn’t keep up with his classmates, because he simply didn’t have access to the material.

“I think both of my parents hit the deaf glass ceiling,” Ladner recalls. Teachers at a school for the deaf, “they were in the highest profession they could be in at the time.” Now, as an adult, Ladner was witnessing one of his students encounter a similar barrier. And now he could do something about it.

Of the slow and tedious translation of graphics, Ladner says, “I thought it could be done faster.” He and some students developed a computer program, the Tactile Graphics Assistant, to automate the process. The key, in this case, was an algorithm that enables the computer to recognize the text in a graphic so that it can be automatically stripped out, converted to Braille, and reinserted into the image. “We can take a whole bunch of images that are similar and do them in a batch. And that’s where you get the high speed,” Ladner explains.
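
In outline, that workflow can be sketched in a few lines of code. What follows is a minimal illustration under assumed data structures, not the Tactile Graphics Assistant itself: the figures are modeled as simple records, the Braille “translation” is a placeholder lookup so the example runs on its own, and in the real system OCR locates the labels before a full Braille translator converts them.

    # Illustrative sketch of the batch workflow described above, not the actual
    # Tactile Graphics Assistant code.

    BRAILLE = {"x": "⠭", "y": "⠽", "a": "⠁", "b": "⠃"}   # placeholder character map

    def to_braille(text):
        """Placeholder Braille translation: map each character we know."""
        return "".join(BRAILLE.get(ch, ch) for ch in text.lower())

    def translate_figure(figure):
        """Replace the recognized text labels in one figure with Braille versions."""
        braille_labels = [
            {"position": label["position"], "text": to_braille(label["text"])}
            for label in figure["labels"]    # in the real system, found by OCR
        ]
        return {"graphic": figure["graphic"], "labels": braille_labels}

    def translate_batch(figures):
        """Handle a whole set of similar figures in one pass, the source of the speed-up."""
        return [translate_figure(fig) for fig in figures]

    # Two similar graphs from the same textbook chapter
    figures = [
        {"graphic": "parabola.png",
         "labels": [{"position": (10, 5), "text": "y"}, {"position": (90, 95), "text": "x"}]},
        {"graphic": "line.png",
         "labels": [{"position": (12, 6), "text": "y"}, {"position": (88, 94), "text": "x"}]},
    ]
    print(translate_batch(figures))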

Hahn, who expects to receive his Ph.D. later this year, had finished most of his course work by the time the software program was completed, but calls Ladner “the best adviser I have ever had. Since I met him here, he has been trying to find a solution, together with me, to any issues regarding my study as a blind person. He has been a great supporter of my study.”

Next, Ladner would like to create a user community for the Tactile Graphics software, to enable people to share the work they have done. “I think this is the future of accessibility,” he says: creating networks to link people together, and getting more people — not necessarily just deaf and blind people — involved in the work.

Indeed, several of Ladner’s projects have a social networking component. The ASL-STEM forum, for example, is a kind of Wikipedia for specialized sign language relating to science, mathematics and engineering. People can post videos of signs they know, or propose a new one, and then users discuss which one is best.

Granted, there aren’t that many blind people who need to be able to read graphs relating to higher mathematics, or deaf people who need to know the sign for “algorithm” — yet.

“That’s something I’d like to change,” Ladner says, “to make these professions more accessible.”