Signing avatars are an essential component of automatic spoken/signed translation systems. When a written or spoken phrase or sentence is translated, the result needs to be displayed in sign language, and this is what signing avatars do: they display sign language as computer animation. In this role, signing avatars have the potential to increase accessibility and the use of sign languages in everyday life. For instance, the flexibility and interactive nature of an ideal signing avatar mean that an avatar’s signing speed or appearance can be changed, its content can be modified to meet the demands of the situation, and simple interactions can be programmed according to individual user needs.
What makes a good avatar?
The sign language displayed by an avatar must be not only understandable but also easy to read and acceptable to deaf communities. This means that a signing avatar must communicate clearly, just as animated characters do in movies and cartoons. However, animated characters in movies are limited to saying only their lines from the script, which is comparable to a tourist being limited to the phrases in a phrasebook. To be useful in translation systems, an avatar must have the flexibility to sign new sentences, not just repeat sentences from a phrasebook. In other words, a signing avatar needs the flexibility of a video game avatar, but it needs to do much more than perform a video game's set of actions.
A look behind the scenes of the EASIER project
Meeting the challenge of keeping an avatar's signing clear while giving it the flexibility to generate new sentences is one of the reasons that current signing avatars have not yet matched the responsiveness and natural movements of human signers. Deaf and hearing researchers in the EASIER project are using multiple strategies to develop an avatar whose signing is acceptable and clear yet can produce new sentences. These are:
- Involving deaf communities and gathering their feedback. This is an essential part of the process! Without continued feedback from deaf users, there is no good understanding of what needs to improve. The EUD, along with deaf community members, has been supportive and generous in sharing concerns and priorities. This invaluable feedback forms the guiding principles for setting priorities within the EASIER project.
- Research using sign language corpora. Studying recordings of human signers provides information about the timing and coordination of hand and body movements during signing. This information can feed into statistical analyses that ultimately help improve how avatars generate completely new sign language sentences without pre-existing video recordings (a simplified illustration of this kind of timing analysis appears after this list).
- Linguistic knowledge about sign languages. Descriptions of the structure and processes of sign languages, created by linguists, can be used to describe sign language animation. Computer scientists can convert these linguistic rules into the mathematics of motion that drives the animation (a toy example of such a conversion also appears after this list).
- Collaboration with deaf communities. Once more, this collaboration is an essential part of the process. But EASIER's collaboration with deaf communities goes beyond gathering feedback to exploring new, creative technologies that make providing that feedback easier and more convenient. For example, the deaf and hearing researchers of the EASIER project have retooled online questionnaires to be more accessible and deaf-friendly by relying on video recordings of sign language rather than on written text. Having more accessible online questionnaires has resulted in more feedback, which in turn shapes the continued improvements to our avatar technology.
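To give a concrete feel for the corpus research mentioned above, here is a minimal sketch of the kind of timing analysis involved. The annotation format and the numbers are invented for illustration only; they are not EASIER data, and real corpus tools carry far richer information than a simple list of glosses and times.

```python
from statistics import mean, stdev

# Hypothetical corpus annotations: (gloss, start_ms, end_ms) for one signer,
# listed in the order the signs were produced. The glosses and times are
# made up purely to illustrate the calculation.
annotations = [
    ("INDEX-1", 0, 280),
    ("WANT", 350, 720),
    ("COFFEE", 800, 1250),
    ("TOMORROW", 1330, 1820),
]

# How long each sign lasts, and how long the transitions between signs take.
durations = [end - start for _, start, end in annotations]
transitions = [next_start - prev_end
               for (_, _, prev_end), (_, next_start, _) in zip(annotations, annotations[1:])]

print(f"mean sign duration:   {mean(durations):.0f} ms (sd {stdev(durations):.0f})")
print(f"mean transition time: {mean(transitions):.0f} ms")
```

Statistics like these, gathered over many recordings and signers, are the kind of information that can inform how long an avatar holds a sign and how quickly it moves between signs.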
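And as a rough, hypothetical illustration of turning linguistic descriptions into the mathematics of motion, the sketch below maps a toy phonological description of a sign onto interpolated hand-position keyframes. Everything here is deliberately simplified: a real avatar pipeline also has to handle handshape transitions, non-manual features such as facial expressions, and timing informed by corpus data.

```python
from dataclasses import dataclass

@dataclass
class SignDescription:
    """A toy linguistic description of a sign: handshape, start and end
    locations in 3-D signing space, and a movement type."""
    handshape: str
    start_location: tuple  # (x, y, z) in avatar space
    end_location: tuple
    movement: str           # e.g. "straight"; only straight paths are handled here

def to_keyframes(sign: SignDescription, duration_ms: int, fps: int = 25):
    """Turn the description into hand-position keyframes by interpolating
    between the start and end locations over the given duration."""
    frames = max(2, int(duration_ms / 1000 * fps))
    keyframes = []
    for i in range(frames):
        t = i / (frames - 1)  # 0.0 at the start of the sign, 1.0 at the end
        position = tuple(a + t * (b - a)
                         for a, b in zip(sign.start_location, sign.end_location))
        keyframes.append({"time_ms": t * duration_ms,
                          "handshape": sign.handshape,
                          "position": position})
    return keyframes

# Example: a sign moving outward from the chest over 400 ms.
give = SignDescription("flat-B", (0.0, 1.2, 0.1), (0.0, 1.2, 0.5), "straight")
for keyframe in to_keyframes(give, duration_ms=400):
    print(keyframe)
```

The point of the sketch is only the general shape of the conversion: a symbolic description written by linguists goes in, and numeric motion data that an animation engine can play goes out.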