Building a Self-Hosted AI Chatbot with n8n: Is My Plan Feasible?
Hello everyone,
I'm planning to build a self-hosted AI chatbot using n8n and would appreciate your feedback on my proposed setup and whether it can handle the expected workload.
Project Overview:
Core Technology: Self-hosted n8n on AWS.
Expected Workload: Approximately 400-600 messages per day.
AI Model: Gemini 2.5 Flash or 2.0 Flash (first tier).
Planned AWS Instance: I'm considering upgrading to a t4g.medium (2 vCPU, 4 GB RAM) or a c6d.large. I'm looking for advice on which would be more suitable.
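For context on the scale involved, the stated workload works out to a very low average request rate. A quick back-of-envelope (the 8-hour window and 10x peak factor are my own assumptions, not from the plan):

```javascript
// Back-of-envelope throughput for 600 messages/day (the upper bound above).
const messagesPerDay = 600;
const avgPerSecond = messagesPerDay / 86400;        // spread over 24 hours
console.log(avgPerSecond.toFixed(4));               // ~0.0069 requests/second

// Even if all traffic lands in an 8-hour business window with a 10x
// burst factor, the peak rate stays well under 1 request/second:
const peakPerSecond = (messagesPerDay / (8 * 3600)) * 10;
console.log(peakPerSecond.toFixed(2));              // ~0.21 requests/second
```

At these rates the binding constraint is far more likely to be RAM for the n8n process and its database than CPU throughput.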
My Proposed Workflow is as follows:
A webhook receives an incoming message.
The message is passed to an n8n AI agent powered by Gemini.
This agent will have simple memory capabilities and access to a company database for context-aware responses.
A code node will then be used to clean and format the AI's output.
Finally, the cleaned response is returned to the caller via a Respond to Webhook node.
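For the cleanup step, here is a rough sketch of the kind of logic the Code node could run. The field name `output` is an assumption (n8n's AI Agent node usually places its reply there, but check your agent's actual output schema); the logic is factored into a plain function so it can be tested outside n8n:

```javascript
// Sketch of the cleanup/formatting logic for the Code node.
// Assumes the agent's reply text arrives under item.json.output —
// adjust the field name to match your own agent's output.
function cleanReply(raw) {
  return String(raw ?? '')
    .replace(/```[\s\S]*?```/g, '')   // drop stray code fences
    .replace(/\*\*(.*?)\*\*/g, '$1')  // strip bold markdown
    .replace(/[ \t]+\n/g, '\n')       // remove trailing whitespace on lines
    .trim();
}

// Inside the n8n Code node, the body would then be roughly:
// return items.map(item => ({ json: { reply: cleanReply(item.json.output) } }));
```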
Future Enhancements:
I also plan to implement per-user usage limits to prevent spam and abuse, which will add a few extra nodes to the workflow.
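One common pattern for those usage limits is a sliding-window counter keyed by user. A sketch, assuming the incoming webhook payload carries a `userId` field (in n8n you would persist the counters with `$getWorkflowStaticData('global')` or an external store rather than the in-memory map used here for illustration):

```javascript
// Sliding-window rate limiter sketch for a Code node.
// WINDOW_MS and MAX_MESSAGES are example values, not from the plan.
const WINDOW_MS = 60 * 60 * 1000; // 1-hour window
const MAX_MESSAGES = 20;          // messages allowed per user per window

const counters = new Map();       // userId -> array of message timestamps

function allowMessage(userId, now = Date.now()) {
  // Keep only the timestamps that still fall inside the window.
  const recent = (counters.get(userId) ?? []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_MESSAGES) {
    counters.set(userId, recent);
    return false;                 // over the limit: route to a "blocked" branch
  }
  recent.push(now);
  counters.set(userId, recent);
  return true;
}
```

In the workflow, a false result would branch (via an If node) to a polite "slow down" reply instead of calling the model, which also caps your Gemini spend.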
My Questions:
Can a self-hosted n8n instance on an AWS server, such as a t4g.medium, comfortably handle 400-600 messages per day with this workflow?
Is the t4g.medium a good choice, or would the c6d.large be significantly better for this use case?
Are there any real-world examples of companies successfully using a self-hosted n8n setup for their AI chatbots?
I'm open to any suggestions or best practices you might have. Thanks in advance for your help!