Voice agent observability with tracing
Are you using tracing in your voice agent? I thought about this today because the team at LangChain built voice AI support into their agent debugging and monitoring tool, LangSmith.

LangSmith is built around the concept of "tracing." If you've used OpenTelemetry for application logging, you're already familiar with tracing. If you haven't, think of it like this: a trace is a record of an operation that an application performs.

Today's production voice agents are complex, multi-model, multi-modal, multi-turn systems! Tracing gives you leverage to understand what your agents are doing. This saves time during development, and it's critical in production. You can dig into what happened during each turn of any session. What did the user say, and how was that processed by each model you're using in your voice agent? What was the latency for each inference operation? What audio and text was actually sent back to the user?

You can also run analytics using tracing as your observability data, and you can use traces to build evals.

Tanushree is an engineer at LangChain. Her video below shows using a local (on-device) model for transcription, then switching to the OpenAI speech-to-text model running in the cloud. You can see the difference in accuracy. (With Pipecat, switching between different models is a single-line code change.) Also, the video is fun! It's a French tutor, which is a voice agent I definitely need.

How to debug voice agents with LangSmith (video): https://youtu.be/0FmbIgzKAkQ

LangSmith Pipecat integration docs page: https://docs.langchain.com/langsmith/trace-with-pipecat

I always like to read the code for nifty Pipecat services like the LangSmith tracing processor. It's here, though I think this nice work will likely make its way into Pipecat core soon: https://github.com/langchain-ai/voice-agents-tracing/blob/main/pipecat/langsmith_processor.py
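To make the idea concrete, here's a minimal, self-contained sketch of per-turn tracing. This is not the LangSmith or Pipecat API; the `Span` and `TurnTrace` names are hypothetical, and the STT/LLM calls are stubs. It just shows the shape of the data a tracer captures for each turn: operation names, timings, and inputs/outputs.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    # One traced operation, e.g. a single STT or LLM inference call.
    name: str
    start: float = 0.0
    end: float = 0.0
    attributes: dict = field(default_factory=dict)

    @property
    def latency_ms(self) -> float:
        return (self.end - self.start) * 1000

@dataclass
class TurnTrace:
    # All spans recorded during one conversational turn.
    turn_id: int
    spans: list = field(default_factory=list)

    def record(self, name, fn, **attrs):
        # Time an operation and keep its inputs/outputs for later review.
        span = Span(name=name, start=time.monotonic(), attributes=attrs)
        result = fn()
        span.end = time.monotonic()
        span.attributes["output"] = result
        self.spans.append(span)
        return result

# Usage: trace one turn of a (stubbed) voice pipeline.
trace = TurnTrace(turn_id=1)
text = trace.record("stt", lambda: "bonjour", audio_len_s=1.2)
reply = trace.record("llm", lambda: f"You said: {text}", model="stub")
for s in trace.spans:
    print(s.name, s.attributes["output"], f"{s.latency_ms:.1f}ms")
```

A real tracer (OpenTelemetry, or the LangSmith processor linked below) does the same thing with nesting, propagation across services, and export to a backend, which is what lets you run the per-turn analytics and evals described above.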
Zadarma with livekit or pipecat in selfhosted?
Hello, I'm new in the community :) I'm playing with Pipecat and LiveKit self-hosted, and the problem I have is that I need compatibility with Zadarma SIP (https://zadarma.com/) and I can't get it working :( I need to use Zadarma because Twilio and Telnyx are so expensive, and I also can't buy phone numbers from a specific part of Spain with them. Does anyone use Zadarma SIP with these frameworks? Thanks!! 😁
What is the best way to handle local DIDs and inbound call routing in EU?
I’m working on a system that needs to receive inbound calls through local phone numbers in Europe, and I’m trying to understand the simplest and most reliable way to set this up across different telecom providers. My main questions:

- What’s the best way to connect provider-supplied DIDs to a voice agent?
- Do you typically use SIP trunks, call forwarding, or another method?
- Any tips for keeping latency low and routing stable in Europe?
- Anything important to know about compliance or number assignment requirements?

If you’ve built something similar, I’d really appreciate any insight; even quick notes help. Thanks!
5 Months. 16 Repos. 1900+ Commits. This Is How.
Think "Spec Kit" finely tuned for startup apps... video below!
open source outbound AI
Is there any fully free, open-source method to make outbound AI voice calls? If anyone has an idea or experience with this, please share.
Open Source Voice AI Community
skool.com/open-source-voice-ai-community-6088
Voice AI made open: Learn to build voice agents with Livekit & Pipecat and uncover what the closed platforms are hiding.