Animating Online Text Into Sign Language


Uploaded by CUNYMedia on 06.04.2012

Transcript:
What happens is, this material's electrical resistance changes when it's bent. And
that's how it interprets how much it should show the joint bending right there.
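The sensor described here behaves like a flex sensor: its resistance rises as it bends, and reading it through a voltage divider gives a value that can be mapped to a joint angle. Below is a minimal Python sketch of that mapping; the divider wiring, ADC range, and calibration constants are assumptions for illustration, not the lab's actual hardware.

```python
# Illustrative sketch: mapping a bend sensor's resistance to a joint angle.
# The sensor sits in a voltage divider read by an ADC; all wiring details
# and calibration constants below are assumed, not taken from the lab.

V_SUPPLY = 3.3        # divider supply voltage (volts), assumed
R_FIXED = 10_000.0    # fixed resistor in the divider (ohms), assumed
R_FLAT = 25_000.0     # sensor resistance when flat (ohms), calibration guess
R_BENT_90 = 70_000.0  # sensor resistance at a 90-degree bend, calibration guess

def adc_to_resistance(adc_value: int, adc_max: int = 1023) -> float:
    """Convert a raw ADC reading into the sensor's resistance."""
    v_out = V_SUPPLY * adc_value / adc_max
    # Voltage divider: v_out = V_SUPPLY * r_flex / (R_FIXED + r_flex)
    return R_FIXED * v_out / (V_SUPPLY - v_out)

def resistance_to_angle(r_flex: float) -> float:
    """Linearly interpolate between the flat and 90-degree calibration points."""
    return 90.0 * (r_flex - R_FLAT) / (R_BENT_90 - R_FLAT)

reading = 800  # example raw ADC sample
print(f"Estimated bend: {resistance_to_angle(adc_to_resistance(reading)):.1f} degrees")
```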
My research is in the area of designing animations of American Sign Language that would have
benefits for people who are deaf or hard of hearing and who use American Sign Language
to communicate.
If you look at educational statistics for people who are deaf graduating from high
school, the average literacy skill in written English for deaf adults is around the
level of a fourth-grade child.
Many people who grow up deaf do not develop enough skill at reading and writing in English.
So part of it has to do with the idea that, normally, infants will hear their parents
using the language around them. And they practice and learn those patterns.
For a deaf child, they're never going to experience spoken English being used around them,
in context, as an infant, and they're not going to develop that skill in English.
And if you can produce animations that are easier to understand, and move more in the
way that humans do, then people who are deaf might be able to understand the information
that's presented in the form of sign language more easily than if it had been presented
in English text.
People have been trying to design animation systems for American Sign Language for
about fifteen years or so.
Right now, what you'd have to do to make a really beautiful sign language animation
is do the same kind of work that a movie studio would do to make an animated picture,
where, for a minute of video, you have dozens of hours of a human planning all the
movements of the character.
What our laboratory would like to do is let you tell us: "I want a sentence with these
six signs." And our software will produce a natural-looking sentence with those six
signs.
So, we use a variety of equipment. We use motion-capture data gloves. We also use a
body suit that looks like a spandex suit that has little sensors attached to it, similar
to the technology in the Wii remote that's part of people's video game systems.
They also wear a head-mounted sensor that tells us where their head is in the room.
So, that way, we know where their eyes are aimed, and we know where their head is, and
so we can tell where they're looking in the room.
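One way to combine those two measurements is to treat the eye's direction as a ray anchored at the tracked head position and intersect it with a surface in the room. The sketch below illustrates that geometry in Python; the coordinate conventions and the wall plane are assumptions, not the lab's actual calibration.

```python
# A minimal sketch (not the lab's actual pipeline) of combining a head-mounted
# position sensor with an eye tracker: rotate the eye's direction from head
# coordinates into room coordinates, then intersect the resulting gaze ray
# with a target plane (here, a wall at a fixed z).
import numpy as np

def gaze_point_on_wall(head_pos, head_rot, eye_dir_head, wall_z=2.0):
    """Return the (x, y) point on the plane z = wall_z hit by the gaze ray.

    head_pos:     head position in room coordinates, shape (3,)
    head_rot:     3x3 rotation matrix, head frame -> room frame
    eye_dir_head: unit gaze direction in the head's own frame, shape (3,)
    """
    direction = head_rot @ eye_dir_head          # gaze direction in the room
    if abs(direction[2]) < 1e-9:
        raise ValueError("Gaze ray is parallel to the wall")
    t = (wall_z - head_pos[2]) / direction[2]    # distance along the ray
    hit = head_pos + t * direction
    return hit[0], hit[1]

# Example: head 1.6 m off the floor facing the wall, eyes glancing slightly right.
head_pos = np.array([0.0, 1.6, 0.0])
head_rot = np.eye(3)                             # head aligned with the room
eye_dir = np.array([0.2, 0.0, 1.0])
eye_dir /= np.linalg.norm(eye_dir)
print(gaze_point_on_wall(head_pos, head_rot, eye_dir))
```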
Within the U.S., our laboratory is the leading site for studying how to produce animations
of American Sign Language.
I think what's unique about us, within the United States, is that we have a lot of participation
from people who are deaf in our project, and we collect a lot of samples of signing from
those people, so that the movements of our animated characters are based mathematically
on the ways that actual humans were moving to do this signing.
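As one illustration of what "based mathematically on human movement" can mean, several motion-capture recordings of the same sign can be time-normalized and averaged into a single joint-angle trajectory that drives the character. The following toy sketch shows that idea; it is not the lab's published method.

```python
# Illustrative sketch: several motion-capture trials of the same sign are
# resampled to a common length and averaged frame by frame, giving one smooth
# joint-angle trajectory to drive an animated character.
import numpy as np

def average_trajectory(trials, num_frames=60):
    """Resample each recorded trial to num_frames and average them.

    trials: list of arrays, each shape (frames_i, num_joints) of joint angles.
    Returns an array of shape (num_frames, num_joints).
    """
    common_t = np.linspace(0.0, 1.0, num_frames)
    resampled = []
    for trial in trials:
        t = np.linspace(0.0, 1.0, len(trial))
        # Interpolate every joint channel onto the common timeline.
        cols = [np.interp(common_t, t, trial[:, j]) for j in range(trial.shape[1])]
        resampled.append(np.stack(cols, axis=1))
    return np.mean(resampled, axis=0)

# Example with two toy recordings of a one-joint movement.
trial_a = np.linspace(0, 90, 50).reshape(-1, 1)   # 50 frames, 0 -> 90 degrees
trial_b = np.linspace(0, 80, 70).reshape(-1, 1)   # 70 frames, 0 -> 80 degrees
avg = average_trajectory([trial_a, trial_b])
print(avg.shape, avg[-1])   # (60, 1), final angle ~85 degrees
```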
I have been working with Matt on this project for four years, and for the first three years,
we worked hard to collect the motion-capture corpus using the equipment. I find it very
interesting to work with deaf students, and I find my work is very useful for them.
(So, what I'd like you to do is aim your head, and sort of focus it towards the center, and
then hold your head still, but then just look with your eyes, at each of these points. So,
are you ready? Yes. OK.)
This is my first year. I've always liked human-computer interaction, and I hope to use
all that I'm learning to help people.
(Number two.)
Up to now, I have been working on this and reading about how it is possible to represent
facial expressions, and what tools we have to use to model them.
(Number eight.)
I think one of the big advances that's going to come about from sign language animation
technology is that we're going to see animated characters with signing in more places.
Basically, it's producing a smarter dictionary of signs that lets someone, with less effort,
produce the customized version of each sign that they need to create a particular sentence
or message.
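A toy sketch of what such a smarter dictionary might look like: instead of storing one fixed clip per sign, each entry is a template that can be customized when it is assembled into a sentence. The sign names and parameters below are invented for illustration and are not the lab's actual data model.

```python
# Hypothetical "smarter dictionary" of signs: each entry is a template that
# can be customized per sentence (here, just by speed and hand location).
from dataclasses import dataclass, field

@dataclass
class SignTemplate:
    name: str
    base_duration: float                 # seconds at normal signing speed
    defaults: dict = field(default_factory=dict)

    def customize(self, speed: float = 1.0, **overrides) -> dict:
        """Produce one customized instance of this sign."""
        params = {**self.defaults, **overrides}
        return {"sign": self.name,
                "duration": self.base_duration / speed,
                "params": params}

DICTIONARY = {
    "BOOK": SignTemplate("BOOK", 0.8, {"location": "neutral"}),
    "GIVE": SignTemplate("GIVE", 1.0, {"location": "neutral"}),
    "ME":   SignTemplate("ME",   0.5, {"location": "chest"}),
}

def build_sentence(requests):
    """Turn a list of (sign, overrides) requests into a timed sign sequence."""
    timeline, t = [], 0.0
    for name, overrides in requests:
        instance = DICTIONARY[name].customize(**overrides)
        timeline.append((t, instance))
        t += instance["duration"]
    return timeline

# "BOOK GIVE ME", with GIVE directed toward the left for verb agreement.
for start, sign in build_sentence([("BOOK", {}),
                                   ("GIVE", {"location": "left"}),
                                   ("ME", {})]):
    print(f"{start:4.1f}s  {sign['sign']:5s} {sign['params']}")
```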