Hey folks 👋 just wanna share a small lifehack — haven’t seen anyone post this combo here yet.
I’ve been using **Serper.dev** (Google Search API) with **n8n** to automate Google dorks, and it’s super useful for **enriching company or key people profiles** without any manual searching. 💡 The cool part is it’s **not limited to LinkedIn or profiles**; that’s just a sample use case. You can point it at any kind of public data indexed by Google (GitHub, company sites, job pages, etc.).
How it works:
- Run your Google dork → Serper.dev returns clean results (minimal call sketched after this list)
- n8n grabs those links → fetches the public fields you want
- Save it to Google Sheets or CSV
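To make that concrete, here's a minimal standalone sketch of the search step in TypeScript (Node 18+, so `fetch` is built in). The `https://google.serper.dev/search` endpoint, the `X-API-KEY` header, the `SERPER_API_KEY` env var, and the `organic` field in the response are my assumptions about Serper's API; adjust the path if your responses use a different key. The dork itself is a placeholder.

```ts
// Minimal sketch: send one dork to Serper.dev and print the result links.
// Assumes a POST to https://google.serper.dev/search with an X-API-KEY header
// and an `organic` array in the JSON response (verify against your own output).
const SERPER_API_KEY = process.env.SERPER_API_KEY ?? "";

interface OrganicResult {
  title: string;
  link: string;
  snippet?: string;
}

async function searchDork(dork: string): Promise<OrganicResult[]> {
  const res = await fetch("https://google.serper.dev/search", {
    method: "POST",
    headers: { "X-API-KEY": SERPER_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ q: dork }),
  });
  if (!res.ok) throw new Error(`Serper request failed: ${res.status}`);
  const data = (await res.json()) as { organic?: OrganicResult[] };
  return data.organic ?? [];
}

// Example: run one (placeholder) dork and print whatever links come back.
searchDork('site:example.com "head of growth"').then((results) => {
  for (const r of results) console.log(r.link, "-", r.title);
});
```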
Example dorks:
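A few illustrative patterns (placeholders; swap in the sites and keywords you actually care about):

- `site:linkedin.com/in "growth marketing" "Berlin"`
- `site:github.com "we're hiring" "backend engineer"`
- `site:example.com/careers intitle:"open positions"`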
Mini n8n flow (a standalone sketch of the same steps follows the list):
- HTTP Request → Serper.dev (`q` = your dork + `serper_api_key`)
- Set/Function → extract `organic_results[].link`
- SplitInBatches (size 1) → HTTP Request (fetch HTML)
- HTML (Extract) → pick your fields (title, headline, location, etc.)
- Output → Google Sheets / CSV
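And here's a rough standalone equivalent of the fetch → extract → save steps. I'm only pulling the `<title>` tag with a crude regex as a stand-in for whatever fields you'd configure in the HTML (Extract) node, and `enriched.csv` / `extractTitles` are names I made up for the sketch; feed it the links from the Serper step.

```ts
// Sketch of the "fetch + extract + save" half of the flow, outside n8n.
// Only the <title> tag is extracted here; a real HTML (Extract) node would
// use CSS selectors for headline, location, etc.
import { writeFileSync } from "node:fs";

async function extractTitles(links: string[]): Promise<string[][]> {
  const rows: string[][] = [["url", "title"]];
  for (const url of links) {
    const html = await (await fetch(url)).text();
    const match = html.match(/<title[^>]*>([^<]*)<\/title>/i);
    rows.push([url, match ? match[1].trim() : ""]);
  }
  return rows;
}

// Quote every field so commas/quotes inside titles don't break the CSV.
function toCsv(rows: string[][]): string {
  return rows
    .map((r) => r.map((f) => `"${f.replace(/"/g, '""')}"`).join(","))
    .join("\n");
}

// Usage: pass in the links returned by the Serper step, write enriched.csv.
extractTitles(["https://example.com"]).then((rows) =>
  writeFileSync("enriched.csv", toCsv(rows))
);
```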
⚡ Pros:
- Easy setup, no scraping headaches
- Works with any public Google-indexed data
- Great for enrichment or research tasks
⚠️ Cons:
- Limited to **public** indexed pages only
- HTML structure may change → extraction needs maintenance
- Add delays and respect ToS / robots.txt to stay safe (tiny pacing sketch below)
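On the delays point: inside n8n a Wait node between batches does the job; outside n8n, something as simple as this works. The 1.5–3 s jitter and the User-Agent string are arbitrary choices of mine, not recommendations from Serper or n8n, and it only handles pacing, not robots.txt checks.

```ts
// Pacing sketch: jittered delay before every page fetch so the target site
// isn't hammered. Check robots.txt / ToS separately; this only handles timing.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function politeFetch(url: string): Promise<string> {
  await sleep(1500 + Math.random() * 1500); // 1.5–3 s, arbitrary example values
  const res = await fetch(url, {
    headers: { "User-Agent": "enrichment-sketch/0.1 (contact: you@example.com)" },
  });
  return res.text();
}
```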
Links:
- Serper.dev: https://serper.dev
- n8n: https://n8n.io
If anyone’s interested, I can drop a ready-to-import n8n JSON for this setup.