Hi
I have built a scheduling and answering workflow with ElevenLabs and n8n.
I developed calendar workflows (no AI) in n8n and exposed them via webhooks as tools to ElevenLabs. The ElevenLabs system prompt provides all the details to the ElevenLabs LLM (Gemini 2.5 Flash). This is working well, but sometimes I feel I could use a stronger model like Gemini 2.5 Pro; however, it is not available and would likely be slow. I also notice delays in responses when multiple tool calls are made.
I'm wondering whether my approach is the best one. In particular, would it be better to build an orchestration agent in n8n that uses the same (non-AI) n8n tools, but as sub-workflows instead of webhooks? In that setup, the main system prompt would live in the n8n agent, which would make quick local tool calls.
The LLM at ElevenLabs would then handle the voice conversation with the user and delegate most requests to the n8n orchestration agent (via a single webhook). That way I could use a very strong model on the n8n side, and tool calls would be faster. The ElevenLabs system prompt would also be much simpler.
I haven't tried this yet, but I wonder if anybody has a preference between the two approaches. The current approach lets ElevenLabs access the remote tools individually, while the proposed approach means calling a remote agent for almost everything, though that agent would have faster access to local tools. One downside: the n8n agent may need to ask for extra information, and that round trip would be slow.
My goals for the proposed change are to use a stronger LLM and to reduce the number of remote tool calls, and thereby reduce delays.
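To make the latency reasoning concrete, here is a rough back-of-the-envelope sketch (plain Python, not the real ElevenLabs or n8n APIs, and the millisecond figures are assumptions, not measurements). It compares N separate webhook round trips from the voice agent against one delegated webhook call to an orchestration agent that runs the same tools as local sub-workflows:

```python
# Hypothetical latency model: numbers are illustrative assumptions only.
REMOTE_ROUND_TRIP_MS = 300   # assumed cost of one webhook call from ElevenLabs
LOCAL_TOOL_MS = 20           # assumed cost of one in-n8n sub-workflow call

def current_approach(num_tool_calls: int) -> int:
    """Voice agent makes every tool call over a separate webhook."""
    return num_tool_calls * REMOTE_ROUND_TRIP_MS

def proposed_approach(num_tool_calls: int) -> int:
    """Voice agent makes one webhook call; the n8n orchestration
    agent then runs each tool as a local sub-workflow."""
    return REMOTE_ROUND_TRIP_MS + num_tool_calls * LOCAL_TOOL_MS

for n in (1, 3, 5):
    print(f"{n} tools: current={current_approach(n)}ms, "
          f"proposed={proposed_approach(n)}ms")
```

Under these assumed numbers the delegated approach wins whenever more than one tool call is needed per user request, but it adds a fixed hop even for single-tool requests, and any clarifying question from the n8n agent adds a full extra round trip.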
Any ideas?
Regards
Shadi Ghaith