So, at the moment, what we do is that people can call in sign language interpreters to their Zoom, Teams, or other online conversations, and the interpreter joins the conversation and does the interpretation. Because these video channels also have automatic subtitling, after some time we will be able to collect the correspondence between Taiwan Sign Language and the subtitles. We can then train models on that data, so that an avatar can help with the signing when a sign interpreter is not available, or the interpreter can become a coach to the signing avatar, and so on. That will then enable us to support one more national language, this sign language, in our everyday conversations.
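To make the idea of collecting that correspondence a bit more concrete, here is a minimal sketch, assuming hypothetical file names, a constant signing lag, and a simplified subtitle format, of how auto-generated subtitle cues could be paired with the interpreter's video feed to form (text, signing clip) training pairs. It is only an illustration of the pairing step, not the actual pipeline described above.

```python
# Minimal sketch (file names, formats, and the fixed lag are assumptions):
# pair each auto-generated subtitle cue with the interpreter-video segment
# covering roughly the same time span, yielding (text, clip) training pairs.
from dataclasses import dataclass


@dataclass
class Cue:
    start: float   # cue start time in seconds
    end: float     # cue end time in seconds
    text: str      # auto-generated subtitle text


@dataclass
class SignSample:
    text: str      # sentence from the subtitle track
    video: str     # path to the interpreter video the clip is cut from
    start: float   # clip start, shifted by the assumed signing lag
    end: float     # clip end, shifted by the assumed signing lag


def build_parallel_corpus(cues: list[Cue], interpreter_video: str,
                          lag: float = 0.5) -> list[SignSample]:
    """Pair subtitle cues with interpreter-video time spans.

    `lag` is an assumed constant offset (seconds) standing in for the
    delay between the speaker and the interpreter; a real system would
    need a learned or manually verified alignment instead.
    """
    samples = []
    for cue in cues:
        samples.append(SignSample(
            text=cue.text,
            video=interpreter_video,
            start=cue.start + lag,
            end=cue.end + lag,
        ))
    return samples


# Example usage with made-up cues and a hypothetical video file:
cues = [Cue(3.2, 6.8, "Good morning, everyone."),
        Cue(7.0, 11.5, "Today we will discuss the schedule.")]
corpus = build_parallel_corpus(cues, "meeting_interpreter_feed.mp4")
for s in corpus:
    print(f"{s.start:.1f}-{s.end:.1f}s  ->  {s.text}")
```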
