DON'T use the AI Agent node in n8n
Hi guys! I’ve spent a lot of time building AI agents for real clients and found that agents built with the basic n8n node are very bad at tool selection, remembering context, and following prompts. I use only OpenAI LLMs because they provide the widest range of features, and I interact with them directly through OpenAI nodes or HTTP requests. After 3–5 projects, I noticed that simply replacing the AI Agent node with the OpenAI Assistant node (without changing prompts or tools) significantly improves agent performance. I decided to research how the AI Agent node works internally, and I want to share what I found.

n8n is open-source, so anyone can check how each node works under the hood. I read the AI Agent node's code and explored how it operates. It uses the LangChain framework to keep the code LLM-agnostic and to add features like memory, tools, and structured output to any LLM.

Key Points:

⏺️ Universal Convenience, Performance Cost

The AI Agent node lets you swap between AI models easily, but it relies on LangChain’s “universal” format. This means your messages get converted back and forth, which can lose detail and miss optimizations specific to OpenAI.

⏺️ Main Bottlenecks

1. Message Format Translation. OpenAI is optimized for its own format, ChatML (roles like system, user, assistant). LangChain uses its own message objects (HumanMessage, AIMessage), which must be converted to ChatML before reaching OpenAI’s API. This leads to subtle translation losses and missed improvements.

2. Context Management. OpenAI’s API handles conversation history and context much more effectively than LangChain’s generic approach, especially in long chats. OpenAI applies smart summarization to older messages, while the AI Agent node only takes the last X messages from the memory node.

I’m not saying you should strictly avoid the AI Agent node, but you need to understand the pros and cons of each approach so you can pick the right one for each case.
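To make the translation point concrete, here is a rough sketch of what that extra conversion layer looks like. The classes below are minimal stand-ins, not LangChain's real `HumanMessage`/`AIMessage` implementations, and the mapping is deliberately simplified: the point is just that every framework-level message object has to be flattened into the ChatML role/content dicts OpenAI actually accepts.

```python
# Simplified sketch of the message translation an AI-Agent-style
# LangChain layer performs before calling OpenAI. These dataclasses
# are illustrative stand-ins, NOT LangChain's actual classes.
from dataclasses import dataclass


@dataclass
class SystemMessage:
    content: str


@dataclass
class HumanMessage:
    content: str


@dataclass
class AIMessage:
    content: str


def to_chatml(messages):
    """Flatten framework message objects into ChatML role/content dicts."""
    role_map = {
        SystemMessage: "system",
        HumanMessage: "user",
        AIMessage: "assistant",
    }
    return [{"role": role_map[type(m)], "content": m.content} for m in messages]


history = [
    SystemMessage("You are a helpful support agent."),
    HumanMessage("What's the status of order #123?"),
    AIMessage("Let me check that for you."),
]
print(to_chatml(history))
# → [{'role': 'system', ...}, {'role': 'user', ...}, {'role': 'assistant', ...}]
```

Calling the OpenAI node (or the API) directly means you build these dicts yourself and skip the round trip entirely, so nothing gets dropped or reshaped along the way.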
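The context-management difference can be sketched in a few lines too. This is a hedged illustration of "take the last X messages" windowing, not n8n's actual memory-node code; the function name and window size are made up for the example.

```python
# Naive window memory: keep only the most recent N messages, which is
# roughly what a "last X messages" memory node does. Older turns are
# dropped outright rather than summarized, so long-chat context is lost.
# Names and defaults here are illustrative, not n8n's real implementation.

def window_memory(history, max_messages=5):
    """Return only the most recent messages; everything older is discarded."""
    return history[-max_messages:]


chat = [{"role": "user", "content": f"message {i}"} for i in range(1, 11)]
trimmed = window_memory(chat, max_messages=5)
print([m["content"] for m in trimmed])
# → ['message 6', 'message 7', 'message 8', 'message 9', 'message 10']
```

With this approach, anything the user said in messages 1–5 is simply gone, no matter how important it was. A summarization-based strategy would instead compress those older turns into a short recap message before trimming.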