Pinned
Want to Host a Live Session?
I’m planning a few LiveKit and Pipecat live sessions over the next weeks, and I’d love to open them up for community contributions. If you’re interested in hosting a session or sharing your expertise, feel free to DM me. Here are some topic ideas to spark inspiration:

- Latency optimization: strategies to achieve sub-600 ms latency
- Interruption handling
- Industry-specific use cases: real estate, dental, medical, HVAC, restaurants, hotels, etc.
- Integrations with niche software that rarely gets covered: ServiceTitan, FieldEdge, and Housecall Pro for HVAC and home services; Dentrix, Open Dental, and Eaglesoft for dental practices; Epic, Athenahealth, and Oracle Health for medical practices; Toast, Aloha, and Square for restaurants; Opera PMS, Cloudbeds, and Mews for hotels; Buildium, Propertyware, and Follow Up Boss for real estate and property management
- Telephony provider integrations beyond Twilio: Telnyx, Zadarma, RingCentral, Ringba
- Custom PBX integrations
- Multilingual implementations: best voices for specific languages, best transcription models and prompting strategies for specific languages, deep dives into a single language
- Non-technical Voice AI topics: project management, hiring and evaluating developers, marketing Voice AI products and services, finding clients, content creation, proposal and contract creation, compliance

You’re also welcome to use your session to showcase your own product or service, as long as it aligns with the theme of open-source Voice AI.
Pinned
Best Time for Live Calls?
We are an international group with members across multiple time zones, which makes it challenging to find a time that works for everyone. I’d still like to identify a slot that most of you are likely to attend. Please mark your available times on this calendar. The tool will then suggest the best options. No login is required (but recommended, so you can make changes later): https://community-scheduler.com/#/event/3a8549cf-8c82-40e6-855b-1a2fed0afe20
Voice agent observability with tracing
Are you using tracing in your voice agent? I thought about this today because the team at LangChain built voice AI support into their agent debugging and monitoring tool, LangSmith.

LangSmith is built around the concept of "tracing." If you've used OpenTelemetry for application logging, you're already familiar with tracing. If you haven't, think of it like this: a trace is a record of an operation that an application performs.

Today's production voice agents are complex multi-model, multi-modal, multi-turn systems! Tracing gives you leverage to understand what your agents are doing. This saves time during development, and it's critical in production. You can dig into what happened during each turn of any session: What did the user say, and how was it processed by each model in your voice agent? What was the latency for each inference operation? What audio and text were actually sent back to the user? You can also run analytics on traces as your observability data, and you can use traces to build evals.

Tanushree is an engineer at LangChain. Her video below shows using a local (on-device) model for transcription, then switching to the OpenAI speech-to-text model running in the cloud. You can see the difference in accuracy. (In Pipecat, switching between models is a single-line code change.) Also, the video is fun! It's a French tutor, which is a voice agent I definitely need.

How to debug voice agents with LangSmith (video): https://youtu.be/0FmbIgzKAkQ

LangSmith Pipecat integration docs page: https://docs.langchain.com/langsmith/trace-with-pipecat

I always like to read the code for nifty Pipecat services like the LangSmith tracing processor. It's here, though I think this nice work will likely make its way into Pipecat core soon: https://github.com/langchain-ai/voice-agents-tracing/blob/main/pipecat/langsmith_processor.py
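To make the span idea concrete: this is not LangSmith's or OpenTelemetry's actual API, just a minimal hand-rolled sketch of per-turn tracing in Python. All names here (`Span`, `TurnTrace`, `record`) and the example pipeline are invented for illustration.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    # One traced operation, e.g. a single STT, LLM, or TTS inference call.
    name: str
    start: float = 0.0
    end: float = 0.0
    attributes: dict = field(default_factory=dict)

    @property
    def latency_ms(self) -> float:
        # Per-operation latency, the kind of number you'd chart in production.
        return (self.end - self.start) * 1000

@dataclass
class TurnTrace:
    # All spans recorded during one conversational turn.
    turn_id: int
    spans: list = field(default_factory=list)

    def record(self, name, fn, **attributes):
        # Wrap an operation: time it, capture metadata, keep its output.
        span = Span(name=name, attributes=dict(attributes), start=time.monotonic())
        result = fn()
        span.end = time.monotonic()
        span.attributes["output"] = result
        self.spans.append(span)
        return result

# Hypothetical usage: trace one turn of a toy voice-agent pipeline.
trace = TurnTrace(turn_id=1)
text = trace.record("stt", lambda: "bonjour", model="whisper")
reply = trace.record("llm", lambda: f"You said: {text}", model="gpt-4o")
print([(s.name, s.attributes["output"]) for s in trace.spans])
# → [('stt', 'bonjour'), ('llm', 'You said: bonjour')]
```

A real tracing processor sits inside the pipeline and records spans automatically for every frame, but the data model is the same: named, timed operations with attributes, grouped per turn and per session.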
SupaAgent AI Customer Support New Build
Hi all, I'm new to the group and wanted to share a project I've been working on. This past week, I built a Voice AI and text chatbot customer support platform that lets businesses easily set up their own AI customer support agents. Check out the demo video below and let me know what you think of it so far.

Demo: https://www.loom.com/share/d39cbafebe664efda3ff059e0226fd4c

Tech stack:

AI & Voice
- LLM framework: Agno (for the text agent)
- Voice framework: LiveKit Agents SDK
- LLM provider: OpenAI (GPT-4)
- TTS/STT: OpenAI (Whisper for STT, TTS-1 for speech)
- Voice options: OpenAI voices + ElevenLabs integration
- Real-time communication: LiveKit (WebRTC)

Integrations
- Calendar: Google Calendar API, Microsoft Exchange (planned)
- SMS/Voice: Twilio
- WhatsApp: Twilio (via WhatsApp Business API)
- Instagram: Twilio Channels (planned)
- Security: custom encryption for credentials (Fernet)
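On the Fernet mention: Fernet is the symmetric, authenticated encryption scheme from Python's `cryptography` library, a common choice for encrypting stored third-party credentials at rest. Here's a minimal sketch of that pattern; the helper names and the example token are mine, not from the project above.

```python
from cryptography.fernet import Fernet

# Generate the key once and store it outside the database
# (e.g. an environment variable or a secrets manager);
# anyone holding the key can decrypt the credentials.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_credential(plaintext: str) -> bytes:
    # Returns an authenticated, timestamped ciphertext token,
    # safe to store alongside the tenant's account record.
    return fernet.encrypt(plaintext.encode("utf-8"))

def decrypt_credential(token: bytes) -> str:
    # Raises cryptography.fernet.InvalidToken if the ciphertext
    # was tampered with or encrypted under a different key.
    return fernet.decrypt(token).decode("utf-8")

# Hypothetical usage with a made-up credential value.
token = encrypt_credential("twilio-auth-token-example")
assert decrypt_credential(token) == "twilio-auth-token-example"
```

The upside of Fernet over rolling your own AES wiring is that it bundles key handling, IVs, and an HMAC integrity check into one opaque token, so there's less room to get the crypto details wrong.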
Zadarma with LiveKit or Pipecat, self-hosted?
Hello, I'm new to the community :) I'm playing with Pipecat and LiveKit in a self-hosted setup, and the problem I have is that I need compatibility with Zadarma SIP (https://zadarma.com/) and I can't get it working :_( I need to use Zadarma because Twilio and Telnyx are too expensive, and they also won't let me buy phone numbers from a specific part of Spain. Does anyone use Zadarma SIP with either framework? Thanks!! 😁
Open Source Voice AI Community
skool.com/open-source-voice-ai-community-6088
Voice AI made open: Learn to build voice agents with Livekit & Pipecat and uncover what the closed platforms are hiding.