
SG Com Roles

Normally, the audio going into SG Com is used to generate lip-sync animation, making it appear that the animated character is producing the speech. Alternatively, you can generate animation that makes it appear that the character is hearing the speech and reacting to it emotionally in real time. This is similar to idle behavior in that there is no lip sync, but with the added benefit of nonverbal reactions to the speech. These two possibilities are called roles, and the two roles are speaker and listener.

A character’s role can switch many times over the course of a conversation, whenever the character goes from speaking to listening or back again. This requires feeding multiple alternating voices into the same Engine.
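
As a rough sketch of that alternation, the Python below feeds each turn’s audio to both characters and flips each character’s role depending on whose voice is active, so a single engine per character receives every voice in turn. The `CharacterEngine`, `set_role`, and `feed_audio` names are hypothetical stand-ins for illustration, not actual SG Com API calls.

```python
from enum import Enum, auto


class Role(Enum):
    SPEAKER = auto()   # audio drives lip sync and expression
    LISTENER = auto()  # audio drives only nonverbal reactions


class CharacterEngine:
    """Hypothetical wrapper for one character's engine; the class and its
    methods are illustrative stand-ins, not real SG Com calls."""

    def __init__(self, character: str) -> None:
        self.character = character
        self.role = Role.LISTENER

    def set_role(self, role: Role) -> None:
        self.role = role

    def feed_audio(self, chunk: bytes) -> None:
        mode = "lip sync" if self.role is Role.SPEAKER else "reaction"
        print(f"{self.character}: generating {mode} animation")


def run_conversation(turns, engines) -> None:
    """turns: list of (voice_name, audio_chunk) pairs in conversational order.
    engines: dict mapping character name -> CharacterEngine.
    Every engine receives every voice; the owner of the active voice takes
    the speaker role, and everyone else listens."""
    for voice, chunk in turns:
        for name, engine in engines.items():
            engine.set_role(Role.SPEAKER if name == voice else Role.LISTENER)
            engine.feed_audio(chunk)


# Example: two characters alternating turns.
engines = {"Ann": CharacterEngine("Ann"), "Bob": CharacterEngine("Bob")}
run_conversation([("Ann", b"..."), ("Bob", b"..."), ("Ann", b"...")], engines)
```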

As you switch a character’s role, you can also manipulate the mapping of its auto modes to reflect the emotional interaction between speaker and listener. For example, if speaker and listener are friendly, their auto modes might be aligned: both happy when the speaker’s voice is Positive, both sad when it is Negative. This is empathetic mirroring. If the interaction is one of aggression and intimidation, however, the mappings might diverge: when the voice is Positive, the speaker might be manic while the listener is unamused, and when the voice is Negative, the speaker might be angry while the listener is afraid.
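
One way to picture these two mappings is as small lookup tables keyed by the voice’s valence. The Positive/Negative labels come from the text above; the table structure and the specific mood names are illustrative assumptions, not a documented SG Com configuration format.

```python
# Auto-mode mappings keyed by the speaker's voice valence.
# Each value is (speaker_mood, listener_mood); the mood names are
# placeholders, not SG Com identifiers.

EMPATHETIC_MIRRORING = {
    "Positive": ("happy", "happy"),     # friendly: the listener mirrors the speaker
    "Negative": ("sad", "sad"),
}

AGGRESSIVE_INTIMIDATION = {
    "Positive": ("manic", "unamused"),  # speaker gloats, listener is not amused
    "Negative": ("angry", "afraid"),    # speaker rages, listener is afraid
}


def moods_for(mapping: dict, valence: str) -> tuple:
    """Look up the (speaker, listener) moods for the current voice valence."""
    return mapping[valence]


# Example: an intimidating exchange in which the voice turns Negative.
speaker_mood, listener_mood = moods_for(AGGRESSIVE_INTIMIDATION, "Negative")
```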
