Just completed my submission for the “Build the Ultimate Web Crawler Agent with Firecrawl” challenge 🚀
💡 What the challenge is about
The idea is simple (but powerful):
👉 Use Firecrawl + AI Agent in n8n
👉 Crawl the web
👉 Enrich the data
👉 Turn it into something actually useful
Not just scraping… but building a decision-making agent
🧠 What I built (Case: Aisha – Package Evaluator)
I built an agent that answers:
👉 “Should I use this npm package or not?”
Instead of manually checking:
- npm downloads
- GitHub activity
- issues
- docs
👉 The agent does everything automatically and sends a ready-to-use report
⚙️ How it works
• Firecrawl → finds npm + GitHub URLs dynamically
• GitHub API → stars, issues, last commit
• npm API → weekly downloads
• AI Agent → generates insights + recommendation
• Slack → clean output for decision-making
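To make the npm → GitHub hop concrete, here is a minimal sketch of the lookup step in plain JavaScript. The registry and API endpoints are the real public ones, but the function names and the URL-normalizing regex are my own illustrative choices, not the workflow's actual nodes:

```javascript
// npm registry metadata for a package (public endpoint)
const npmMetaUrl = (pkg) =>
  `https://registry.npmjs.org/${encodeURIComponent(pkg)}`;

// Weekly download counts (public npm downloads API)
const npmDownloadsUrl = (pkg) =>
  `https://api.npmjs.org/downloads/point/last-week/${encodeURIComponent(pkg)}`;

// The registry's "repository.url" field comes in several shapes, e.g.
// "git+https://github.com/expressjs/express.git" or "git://github.com/...".
// Normalize it to an "owner/repo" slug for the GitHub REST API.
function repoSlug(repositoryUrl) {
  const m = /github\.com[/:]([^/]+)\/([^/#?]+)/.exec(repositoryUrl || "");
  return m ? `${m[1]}/${m[2].replace(/\.git$/, "")}` : null;
}

// Repo stats (stars, open issues, pushed_at) live here
const githubRepoUrl = (slug) => `https://api.github.com/repos/${slug}`;
```

From there it's one HTTP request per endpoint; the agent only has to reason over the small JSON payloads that come back.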
📊 Output (this is the cool part)
Instead of raw data, it gives:
• Risk Score (Low / Medium / High)
• Adoption Level (Very popular / Niche)
• Issue Health
• Alternatives with trade-offs
• Final recommendation → Use / Consider / Avoid
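The scoring step above can be sketched as a simple classifier. The thresholds and field names here are hypothetical, just to show the shape of the logic, not the exact rules my agent uses:

```javascript
// Hedged sketch: map raw metrics to the report's labels.
// All cutoffs (1M downloads, 180/365 days, issue counts) are illustrative.
function evaluatePackage({ weeklyDownloads, openIssues, daysSinceLastCommit }) {
  const adoption = weeklyDownloads >= 1_000_000 ? "Very popular" : "Niche";

  let risk = "Low";
  if (daysSinceLastCommit > 365 || openIssues > 2000) risk = "High";
  else if (daysSinceLastCommit > 180 || openIssues > 500) risk = "Medium";

  const recommendation =
    risk === "Low" ? "Use" : risk === "Medium" ? "Consider" : "Avoid";

  return { adoption, risk, recommendation };
}
```

In the real workflow the AI layer adds nuance on top of rules like these (e.g. weighing alternatives), but hard thresholds keep the output consistent from run to run.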
👉 Basically… a mini tech decision engine
😅 Challenges I faced
• Scraping didn’t work for JS-rendered data (npm downloads)
• AI-only approach was slow and inconsistent
• Getting correct GitHub repo dynamically was tricky
• Handling invalid packages / edge cases
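For the invalid-package edge case, a cheap guard is to validate the name before hitting any API. This pattern is a simplified version of npm's published naming rules (lowercase, URL-safe, optional @scope/, max 214 chars), not the exact check my workflow runs:

```javascript
// Hedged sketch: reject obviously invalid npm package names up front,
// so the agent never wastes a crawl or API call on them.
function isValidPackageName(name) {
  if (typeof name !== "string" || name.length === 0 || name.length > 214) {
    return false;
  }
  // Optional "@scope/" prefix, then a lowercase URL-safe name;
  // neither part may start with "." or "_".
  return /^(?:@[a-z0-9-~][a-z0-9-._~]*\/)?[a-z0-9-~][a-z0-9-._~]*$/.test(name);
}
```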
🔑 Biggest takeaway
👉 The real magic was combining:
Firecrawl (discovery) + APIs (accuracy) + AI (reasoning)
🤔 Curious
If you had this tool…
👉 Would you actually use it before picking a library?
Drop a package name below 👇 I’ll run it through the agent 😄