📣 Pretty Much Done - Here's The Release Scoop
So, it's been a long weekend. We pretty much rewrote everything from scratch on a more scalable infrastructure to set us up for long-term growth and get us near, if not at, 100% uptime on both chat and voice.
𝐈𝐧𝐛𝐨𝐮𝐧𝐝 𝐕𝐨𝐢𝐜𝐞:
Inbound latency testing came in at 800 ms to 1 second to pick up, consistently, with a 100% pick-up success rate across all IDs, meaning your clients will never miss an AI inbound call ever again*. Asterisk because nothing in life is 100%, but we'll get it close enough lol
𝐎𝐮𝐭𝐛𝐨𝐮𝐧𝐝 𝐕𝐨𝐢𝐜𝐞:
Latency and load testing went great on outbound too. Same story: we were able to get a call out in about 800 ms to 1.5 seconds consistently, with all available information, without fail. This was rewritten to include per-user / per-account rate limits. Even though it could theoretically take unlimited volume, we also want to protect you from spamming if you forgot to add a drip to your workflow.
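For the curious, per-account rate limiting like this is commonly done with a token bucket. Here's a minimal sketch; the function names, limits, and burst sizes below are illustrative assumptions, not our actual implementation.

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity`, refilling `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity            # start full
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# one bucket per account, so one noisy account can't starve the others
buckets: dict[str, TokenBucket] = {}

def allow_outbound_call(account_id: str) -> bool:
    # hypothetical limits: 1 call/sec sustained, bursts up to 5, per account
    bucket = buckets.setdefault(account_id, TokenBucket(rate=1.0, capacity=5.0))
    return bucket.allow()
```

A workflow that forgot its drip would burn through its burst allowance and then get throttled to the sustained rate, instead of blasting unlimited calls out.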
𝐂𝐡𝐚𝐭:
Chat is astonishingly fast, and the connection to HighLevel is reinforced with timeouts and error handling. Same metrics here, plus some contingencies for handling pre-March 2024 thread IDs. We also put in a system that constantly checks LLM provider status (roughly every second) so we can decide where to redirect traffic and protect against vendor outages.
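The redirect logic boils down to "poll each provider, route to the first healthy one, fall back in order." A rough sketch of that idea, with made-up provider names and the actual health check abstracted away (the real system presumably hits vendor status endpoints):

```python
import time
from typing import Callable

# preference order: primary first, fallbacks after (names are illustrative)
PROVIDERS = ["primary-llm", "backup-llm"]

def pick_provider(is_healthy: Callable[[str], bool]) -> str:
    """Return the first healthy provider, falling back in order."""
    for name in PROVIDERS:
        if is_healthy(name):
            return name
    # everything looks down: degrade to the last fallback rather than fail outright
    return PROVIDERS[-1]

def monitor(is_healthy: Callable[[str], bool],
            interval: float = 1.0, ticks: int = 3) -> list[str]:
    """Poll every `interval` seconds and record where traffic would route."""
    routed = []
    for _ in range(ticks):
        routed.append(pick_provider(is_healthy))
        time.sleep(interval)
    return routed
```

With a roughly one-second interval, a vendor outage only affects requests for about a second before traffic shifts to the backup.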
𝐊𝐧𝐨𝐰𝐥𝐞𝐝𝐠𝐞 𝐁𝐚𝐬𝐞:
We got most of the RAG system moved over; we're having trouble with one or two file types but are working on that. This lets chat pull knowledge faster and more contextually on a per-run basis. The ability to pull live data, or refresh data like a site scrape on a 24-hour basis, is also in the works.
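The 24-hour refresh piece is conceptually just a staleness check on each scraped source. A tiny sketch of that idea (field names and the interval are assumptions for illustration):

```python
import time
from typing import Optional

REFRESH_INTERVAL = 24 * 60 * 60  # 24 hours, in seconds

def needs_refresh(last_scraped: float, now: Optional[float] = None) -> bool:
    """True if a scraped source is at least 24 hours old and should be re-scraped."""
    now = time.time() if now is None else now
    return now - last_scraped >= REFRESH_INTERVAL
```

A background job can sweep the knowledge base on a schedule and re-scrape only the sources this flags, so fresh site data lands in RAG without re-ingesting everything.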
𝐆𝐨𝐇𝐢𝐠𝐡𝐋𝐞𝐯𝐞𝐥 𝐀𝐜𝐭𝐢𝐨𝐧𝐬:
GHL actions have been reconfigured (like Make AI Call, Context Action, and Get AI Call), plus we're adding a couple more to handle edge use cases like transcript summaries and transcript translation into any language inside of GHL to send to clients / for reporting. Get Conversation Logs etc. have been reformatted and moved over too for speed and reliability.
Also, failed calls will now be connected and sent to your call center for triage.
+ more.
--
I wanted to release this weekend, and even though the tests look awesome, there are a couple of things I want to do first: test weird / edge cases (because those come up a ton, more than you may think), and wait until after business hours to mitigate any 'cold starts' on the endpoints moving over, meaning there can be a 5-10 minute delay after the switch before those changes 'take effect'.
I'm exhausted, but dude, am I excited. This puts us on track to offer SLAs of 99.9999% uptime on both chat and voice, which are SEPARATE from the front-end application. So even if the front end goes down, chat and voice won't. The initiative after this is deployment resources and ease-of-use tools, like an Easy Prompt builder that calls tools for you, etc.
So, there you go: on track, but waiting until after business hours before this takes effect, which gives us more time to test, run some scenarios, and add more handling.
Jorden Williams
Assistable.ai (skool.com/assistable)