How to Queue and Rate Limit LLM Calls in n8n (One Request per Minute from Batch)
Hi everyone, I’ve built a workflow where, given a company’s industry code, an LLM generates a sector analysis and updates a specific field in our CRM.
My problem is that multiple bulk requests can arrive at the same time, and I want to implement a queue system to avoid hitting the LLM's tokens-per-minute limit.
I tried using the Batch node to build a queue, which works for grouping items, but I can't manage to enforce a fixed interval of one request per minute. Is there a reliable way to process one item from the batch every 60 seconds? A sketch of the behavior I'm after follows below.
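To make the goal concrete, here's the timing I'm after, sketched in plain Node.js (`callLlm`, `updateCrm`, and the sample data are just placeholders, not my actual workflow):

```javascript
// Sketch only: process queued items one at a time, one request per minute.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical stub standing in for the real LLM sector-analysis call.
async function callLlm(industryCode) {
  return `Sector analysis for industry ${industryCode}`;
}

// Hypothetical stub standing in for the real CRM field update.
async function updateCrm(companyId, analysis) {
  console.log(`CRM ${companyId} updated:`, analysis);
}

async function processQueue(companies) {
  for (const company of companies) {
    const analysis = await callLlm(company.industryCode);
    await updateCrm(company.id, analysis);
    await sleep(60_000); // wait a full 60 seconds before the next request
  }
}

processQueue([
  { id: 1, industryCode: "62.01" },
  { id: 2, industryCode: "47.91" },
]);
```

Inside n8n I'd obviously want to express this with built-in nodes rather than one long-running Code node.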
I'd love suggestions or patterns you've used to handle this kind of issue with LLMs inside n8n!
Thanks in advance 🙏
Emanuele