I had many exciting plans for the end of my sabbatical year. Breaking my elbow wasn’t among them. Suddenly, all of my work as a computing and information-science professor — writing, and especially programming — had to be done with one hand or by voice. It was a pain. At the same time, it provided a strong reminder of why I do what I do — studying our individual and collective struggle to understand computing and harness it for play, power, equity and justice — and accelerated my desire to develop a truly accessible programming language.
Computer programming has never been easy. The cryptic documentation, the obscure syntax and the confusing error messages are all things we just seem to tolerate. But being unable to use my dominant hand underlined the fact that programming caters mainly for non-disabled people. My temporary disability meant that my work could no longer keep up with my thoughts. Even speech-recognition software customized for coding was error prone and slow. My inability to type two-handed keyboard shortcuts meant I had to reconfigure numerous settings and memorize dozens of new shortcuts.
People with permanent disabilities know these challenges well — at every turn, programming deters people with disabilities from participating fully, and therefore deters them from participating in science. Some of the most popular platforms for learning to code require a mouse, and so exclude people with motor disabilities. Most code-editing programs, including those used in science, assume users have sight, excluding anyone who is blind or visually impaired. And the Internet, which is an essential tool for finding documentation and help when programming, is broadly incompatible with screen readers, which are commonly used by people who are blind, visually impaired or dyslexic.
The difficulties extend beyond physical abilities. Programming languages and tools are built around assumptions about natural-language skills — in particular, that users can read and write in English. Programming-language keywords, documentation and online help are almost always written in English first, and are rarely translated into more than a few other common languages. As a result, anyone whose first language isn’t English — that is, the majority of people on the planet — is at a strong disadvantage, even when learning the basics. And if they don’t speak English, and rely on speech input or screen readers, they are much more likely to struggle, because these tools rarely support languages other than English.
Even before my injury, I had been giving these problems a lot of thought. I was using my sabbatical to develop a new programming language called Wordplay, which strives to avoid assumptions about ability or natural-language fluency. Others have tried this before, albeit in more focused efforts. The Japanese programming language Dolittle (in Japanese, doritoru), for instance, enables users to write code in that language directly, and the language Quorum caters specifically for people with visual impairments. Hedy, which is used to teach programming concepts to children, has been translated into 39 languages. But to my knowledge, none has tried to address ability and language fluency universally, striving for a kind of equitable design that serves everyone, regardless of their language or abilities.
Inventing a new language to meet these goals wasn’t easy. It meant reimagining every part of the programming experience: removing all natural-language keywords (such as ‘if’ and ‘while’); allowing programming identifiers (such as variable and function names) to have multiple, language-tagged names; and enabling both left-to-right and right-to-left characters to coexist in code, to support bilingual users. It required a programming editor that can automatically translate code between languages, while preserving the code’s behaviour, to support multilingual teams and classrooms. It meant displaying code in a way that can be navigated, screen-read and edited using a mouse, keyboard and speech, as well as other accessible technologies. And it required the invention of new forms of interactive text-based program output that could be automatically translated into other languages and described by a screen reader, like a form of live captioning.
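Wordplay's own syntax isn't shown in this article, but the idea of multiple, language-tagged names for one identifier can be sketched in a few lines. The following is a hypothetical illustration only, not Wordplay's implementation: it models an identifier as a map from language tags (such as 'en', 'ja' or 'ar') to names, so that an editor could display the same program in each reader's preferred language.

```python
# Hypothetical sketch -- not Wordplay's actual design -- of an identifier
# that carries several language-tagged names at once.
from dataclasses import dataclass, field


@dataclass
class Identifier:
    # Maps language tags (e.g. 'en', 'ja', 'ar') to names for this identifier.
    names: dict[str, str] = field(default_factory=dict)

    def name_for(self, lang: str, fallback: str = "en") -> str:
        """Return the name in the requested language, or fall back gracefully."""
        return (self.names.get(lang)
                or self.names.get(fallback)
                or next(iter(self.names.values())))


# One variable, three names: English, Japanese and Arabic.
count = Identifier(names={"en": "count", "ja": "数", "ar": "عدد"})
print(count.name_for("ja"))  # 数
print(count.name_for("fr"))  # no French name, so falls back to 'count'
```

Because every name refers to the same underlying identifier, renaming or translating it changes only its display, never the program's behaviour, which is what makes automatic translation between languages possible.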
The changes have been about more than just broader support for different inputs, outputs and languages, however. Some of the most fundamental concepts in programming-language design are deeply colonized. The ideas and the words ‘true’ and ‘false’, for example, stem from the strict logic of the nineteenth-century mathematician George Boole and from discrete mathematics. Ideas such as ‘false’ don’t always translate cleanly to other languages or cultures. Even choosing symbols to represent these concepts risks giving primacy to one culture over another. Selecting symbols with no widely recognized natural-language meaning (⊤ and ⊥, from logic) seemed more inclusive, even at the expense of clarity in any particular language.
Although Wordplay is still in development, my preliminary work on it is promising. I’ve been able to write programs with my one functional hand using speech input, even on my smartphone. I hope to release the new language this autumn, offering a vision and example of a more equitable future for code. With luck, some of these ideas will carry over into more widely used languages and tools, especially in science, and will therefore lower the barrier to entry for many would-be programmers.
If we want science that serves everyone, and we think representation is part of achieving this goal, we must begin creating tools that are accessible to everyone — including those of us with broken elbows.
This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged.
The author declares no competing interests.