# soulgraph presence

{% hint style="info" %}
try it out in the playground at <https://soulgra.ph>
{% endhint %}

**why?**

agents will never feel truly alive if they are confined to text; real-time voice (and soon, video) with emotion, personality, and presence is where it's at. for this you need low latency (500-800ms voice-to-voice) and interrupt handling with word-level accuracy, or the conversation stops feeling real-time or human-like. we found out the hard way that running the infra for this sucks, and it's expensive too. so, with *soulgraph presence*, developers can build natural, responsive voice interactions with just a few api calls.
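to make "word-level accuracy" concrete, here is a minimal, illustrative sketch (not the soulgraph presence api; the `Word` type and `heard_transcript` helper are hypothetical): when a user barges in mid-utterance, the agent should only remember the words it actually finished saying, which requires per-word timestamps from the tts stage.

```python
# illustrative sketch of word-level interrupt handling -- hypothetical
# types, not the soulgraph presence api. given per-word tts timings and
# the moment the user interrupted, keep only the words the user actually
# heard, so the agent's record of "what it said" stays accurate.
from dataclasses import dataclass


@dataclass
class Word:
    text: str
    start: float  # seconds from utterance start
    end: float


def heard_transcript(words: list[Word], interrupt_at: float) -> str:
    """Return the part of the agent's utterance fully spoken before the interrupt."""
    return " ".join(w.text for w in words if w.end <= interrupt_at)


words = [
    Word("the", 0.0, 0.2),
    Word("weather", 0.2, 0.6),
    Word("today", 0.6, 1.0),
    Word("is", 1.0, 1.1),
    Word("sunny", 1.1, 1.5),
]

# user interrupts 0.9s in: only fully-spoken words survive
print(heard_transcript(words, 0.9))  # -> the weather
```

without this, an interrupted agent "remembers" saying things the user never heard, and the conversation drifts out of sync.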

**how?**

*soulgraph presence* provides the distributed infra and orchestrates the WebRTC transport, speech-to-text, text-to-speech, and agent interface. voice and video interactions are moderated by the personality defined in soulscript, creating a natural feedback loop: when users communicate through voice, they share richer emotional context, which produces more accurate memories in soulgraph memory, which in turn enables richer personality evolution in the agent.

<figure><img src="https://1700925831-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvJsON4P9HGLHLjt0y4y9%2Fuploads%2F80XejSFhIMqxExxLcuPx%2Fshapes%20at%2024-12-17%2001.46.40.png?alt=media&#x26;token=8b45f107-1e15-4dcf-9fac-180aca550fa0" alt=""><figcaption><p>soulgraph real-time voice infra</p></figcaption></figure>

we build on a number of great open-source libraries to make this happen. soulgraph's contribution is the orchestration layer and a purpose-built api, designed for agent frameworks, that makes it as easy as possible to get up and running.
