Memberships

- LeadIndicator.ai • 437 members • Free
- Assistable.ai • 2.6k members • Free

72 contributions to Assistable.ai
Was there an update?
Random tool narration out of nowhere... anyone else or just me?
Hey Boris, quick things to check:
- Make sure your tool calls are marked as internal in your prompt (e.g. "this is an internal tool, don't say this to the user").
- Re-save your assistant and refresh the session -> sometimes after an update, assistants need to be re-synced to respect the prompt.
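If it helps to see the idea outside of Assistable's UI, here's a minimal sketch using the OpenAI Python SDK. The tool name and prompt wording are made up for illustration; the point is that the system prompt explicitly marks tools as internal so the model never narrates the call to the caller.

```python
# Hypothetical sketch, not Assistable's internals: the system prompt marks
# tools as internal so the model never narrates the tool call to the caller.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a phone receptionist. Any tool you call (e.g. book_appointment) "
    "is an internal tool. Never mention, read aloud, or describe a tool call "
    "to the user. Only speak the final natural-language answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Can I book something for tomorrow at 3pm?"},
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "book_appointment",  # hypothetical tool name
            "description": "Books an appointment slot. Internal: never narrated.",
            "parameters": {
                "type": "object",
                "properties": {"time": {"type": "string"}},
                "required": ["time"],
            },
        },
    }],
)
print(response.choices[0].message)
```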
OpenAI API key and rebilling
@Jorden Williams I'm still not really clear on how this works. Do I need to create an OpenAI account and add money to that wallet? I noticed that when I didn't, the bot failed (this was all on my own website). I watched the rebilling video, but one part is still confusing: does Assistable track the chat time and rebill the client, with that showing up as a credit in my Stripe account, from which I then pay my OpenAI account? Or do I need to ask the customer to create their own OpenAI account and top it up when the balance runs low?
Steven, this is the bit that trips a lot of people up. You don't ask your client to create their own OpenAI account. Here's how it works on Assistable:
- You (your agency) create an OpenAI API key under your own account. That's where the charges land.
- You can enable rebilling for subaccounts in Assistable. With rebilling enabled, the subaccount's usage runs on your key and the client is billed for that usage, so the cost flows back to you.
- If rebilling is disabled, the subaccount will try to use its own wallet balance in Assistable. If that balance is zero and no rebilling happens, the bot stops working (which is what you saw).

So the possible configurations are:
1. Your OpenAI key + rebilling on -> recommended; you stay in control and customers just pay you for usage.
2. Subaccount balance only -> they top up their Assistable wallet directly, with no rebilling through you.

You don't have to ask your customers to create OpenAI accounts; you handle everything on your agency side.
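To make the money flow concrete, here's a tiny hypothetical example of the rebilling math. The per-token rate and markup are placeholders, not Assistable's or OpenAI's actual pricing: the agency pays OpenAI for usage on its own key, then the client's wallet is billed that cost plus a markup.

```python
# Hypothetical rebilling math, not Assistable's actual billing code.
# The agency pays OpenAI on its own API key, then bills the subaccount
# for that usage at a marked-up rate.

OPENAI_COST_PER_1K_TOKENS = 0.002   # placeholder rate; check real pricing
AGENCY_MARKUP = 1.5                 # hypothetical 50% markup

def rebill_amount(tokens_used: int) -> float:
    """What the client's subaccount is charged for this usage."""
    agency_cost = tokens_used / 1000 * OPENAI_COST_PER_1K_TOKENS
    return round(agency_cost * AGENCY_MARKUP, 4)

# Example: a 12,000-token conversation
print(rebill_amount(12_000))  # agency pays ~$0.024 to OpenAI, client is billed $0.036
```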
🌎 Language Switching
Hey y’all! I have an international business that is interested in an AI receptionist that can speak whatever language the user calls in. Are there limitations with how the assistant switches languages on the fly at the moment? I just want to make sure the experience feels natural and professional, not like the assistant barely scraped by in high school Spanish 😀
Hi Jon, the assistant will switch languages, but there are a few limitations to keep in mind:
- Recognition: it notices when someone starts speaking another language and switches over, but it usually takes a sentence or two to adjust.
- Consistency: once it switches, it tends to stick with that language; rapid back-and-forth can get messy.
- Quality: for major languages (Spanish, French, German, Portuguese) it sounds very natural; for less common ones it can feel a little rough.
- Voices: using a multilingual-friendly voice gives the smoothest experience.
Request too large?
Anyone know what this is and how to fix it?
Hi Eric, that error means the request sent from your end to the model exceeded your current limits. Specifically:
- Your org has a tokens-per-minute (TPM) limit of 30,000.
- The request attempted 45,709 tokens, which went over that limit.
- The model rejects it because it can't accept more than your limit allows.

A couple of ways to fix it:
1. Shrink the prompt: if your assistant prompt or knowledge base is very large, trim it down or split it into several steps so each request stays small.
2. Keep outputs shorter: if generated responses are longer than you need, adjust the settings so the model's outputs stay short.
3. Raise your rate limits: you can request a higher TPM limit from OpenAI if you regularly need bigger requests: https://platform.openai.com/account/rate-limits

Short answer: either shrink the request or raise the account's limit; otherwise the model will keep returning this error whenever the limit is hit.
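If you want to check how close a prompt is to that cap before it ever reaches the model, a quick local count with OpenAI's tiktoken tokenizer works. The 30,000 figure below just mirrors the limit from the error message, and the file name is a placeholder for wherever your prompt text lives.

```python
# Rough pre-flight check of prompt size against a tokens-per-minute cap.
# tiktoken is OpenAI's open-source tokenizer; install with: pip install tiktoken
import tiktoken

TPM_LIMIT = 30_000  # the limit quoted in the error message

def count_tokens(text: str, model: str = "gpt-4o-mini") -> int:
    try:
        enc = tiktoken.encoding_for_model(model)
    except KeyError:
        enc = tiktoken.get_encoding("cl100k_base")  # fallback for unknown models
    return len(enc.encode(text))

prompt = open("assistant_prompt.txt").read()  # placeholder: your full prompt / KB text
n = count_tokens(prompt)
status = "over" if n > TPM_LIMIT else "within"
print(f"{n} tokens -> {status} the {TPM_LIMIT:,} TPM limit")
```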
Response speed + Sensitivity to Interruption
Curious what you guys have found to be ideal? I'm currently using Emily with 0.98 response speed and 0.6 sensitivity to interruption but going to play with reducing both
Hi Boris. The "best" mix varies with the style you want from your AI, but here are some general rules of thumb we've found to hold:
- Response speed: around 0.9–0.95 feels quick but natural. Close to 1.0 or above feels rushed, and going much lower makes the AI feel slow.
- Sensitivity to interruption: 0.5–0.65 is a good range. Lower values make the AI more patient (it waits until a caller fully finishes), while higher values make it cut in a little too early.
Your current settings (0.98 speed, 0.6 interruption sensitivity) are solid, but if you want smoother, more natural-sounding speech, dropping response speed slightly (0.9–0.92) often helps. It's also fine to tune per audience: some verticals suit a quick pace (e.g. sales), while others prefer a calmer, slower one (e.g. wellness and medical).
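Written down as rough starting points, these are just the ranges above organized by vertical, not official Assistable presets:

```python
# Rough starting points derived from the ranges above; tune per client.
PRESETS = {
    "sales":    {"response_speed": 0.95, "interruption_sensitivity": 0.65},  # quick pace
    "general":  {"response_speed": 0.92, "interruption_sensitivity": 0.60},
    "wellness": {"response_speed": 0.90, "interruption_sensitivity": 0.50},  # calmer pace
}
```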
Hari Prathap Balamurugan
Level 4 • 90 points to level up
@prathap-balamurugan-9582
Assistable Skool Group Moderator & Support Team Member | AI Agent Builder | Expert in Assistable AI & GoHighLevel | Helping members through support.
Active 1h ago • Joined Aug 14, 2025