Just found out that n8n released a new node called "Model Selector" last week! It lets you pick a chat model based on conditions. This has several benefits:

- Optimization: Route tasks to the most cost-effective or fastest model depending on your needs, saving on API costs or boosting response times.
- Resilience: Automatically fall back to alternative models if your primary service is down or rate-limited, keeping your workflows robust.
- Experimentation: Easily A/B test different models and compare outputs to find the best fit for your use case.

Track release updates here: https://github.com/n8n-io/n8n/releases
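The routing-plus-fallback idea behind the node can be sketched outside n8n as a plain function. This is a minimal illustration of the concept, not the node's actual implementation; the model names, task fields, and thresholds here are all hypothetical:

```typescript
// Hypothetical task descriptor — fields are illustrative, not n8n's schema.
type Task = { promptLength: number; needsReasoning: boolean };

// Return an ordered preference list: index 0 is the primary model,
// later entries are fallbacks to try if the primary is down or rate-limited.
function selectModel(task: Task): string[] {
  if (task.needsReasoning) {
    // Quality-sensitive tasks go to a stronger (pricier) model first.
    return ["gpt-4o", "claude-sonnet-4"];
  }
  if (task.promptLength > 4000) {
    // Long prompts: prefer a cheap long-context model.
    return ["gemini-flash", "gpt-4o-mini"];
  }
  // Default: cheapest model first.
  return ["gpt-4o-mini", "gemini-flash"];
}
```

A caller would walk the returned list, trying each model until one responds, which covers both the cost-routing and resilience cases from the list above.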