Today I want to give you the unfiltered reality of what's going on in the market: why there are so many LLM rank trackers, why some are cheap, some are expensive, and some are now going to extremes.
Let me break it down.
THE TWO TYPES OF LLM TRACKING
Whenever someone says they're going to "track LLMs," there are really only two ways anyone in the world can do it:
- API calls: with web search enabled, or without web search
- UI-based (frontend): which comes in two flavors, logged-in sessions and non-logged-in sessions
That's it. Whatever tools you hear about on the internet, they're just selecting one of these and building their infrastructure on top of it.
And the cheaper tools? They're almost always making API calls. So you might ask: how are API calls different from the UI-based versions?
HOW LLMs ACTUALLY WORK (CORE PRINCIPLES)
To understand the difference, we need to understand how LLMs work at the core.
Let's say you're building an LLM from scratch. What you'd do is find a massive amount of data, push it into your model, and process it.
The simplest way I can explain it: imagine you turned off your internet and hit the search button in your Windows file finder. You're just trying to find a file on your own machine. That's essentially what an LLM does with its training data.
- Offline data: whenever a non-web API call is made, this offline/training data is what gets queried.
- Web-enabled data: whenever a web-enabled API call is made, the LLM can also go search the live internet.
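To make the two modes concrete, here's a minimal sketch of how the same tracking prompt could be sent both ways. The payload shape loosely follows OpenAI's Responses API, but treat the exact field names (the "web_search" tool type, the model name) as assumptions to check against the current API reference:

```python
# Sketch: the same question asked two ways. Field names and the model
# choice are assumptions -- verify against the provider's API docs.

def build_request(prompt: str, web_search: bool) -> dict:
    """Build a request body for a tracking query, with or without web access."""
    body = {
        "model": "gpt-4o",  # hypothetical model choice
        "input": prompt,
    }
    if web_search:
        # With this tool attached, the model may fetch live web results;
        # without it, answers come purely from frozen training data.
        body["tools"] = [{"type": "web_search"}]
    return body

offline = build_request("best personal injury attorney in NYC", web_search=False)
online = build_request("best personal injury attorney in NYC", web_search=True)
```

The only difference between the two requests is that one line attaching the search tool; everything a cheap tracker reports downstream hinges on which of these two payloads it sends.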
"SEARCH THE WEB" BUT WHERE ARE THEY SEARCHING?
You and I have Google or Bing to search. But for LLMs, where are they actually looking?
Think of it like this: imagine the entire internet is your PC, and all that data has been converted into an index, like the index pages of a book. You look at the index, it says "if you want to learn about X, go to page 247." That's how the model extends its knowledge.
The LLM starts with its initial training data. Let's say it has 10 documents. Those 10 documents contain links, external references, bits of information. From those, it pulls more. Now think about the compounding effect: 10 documents with 10 links each become 100, then 1,000, then millions, then billions of connections. An entire network gets created.
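That compounding can be sketched in a few lines. The numbers below are the toy figures from the example (10 seed documents, 10 outgoing links each); real crawl graphs are messier, with duplicate links and dead ends, so read this as an upper-bound illustration rather than a crawl model:

```python
# Toy illustration of the compounding effect: start with a handful of
# seed documents and assume every document links out to 10 more.

def reachable_documents(seeds: int, links_per_doc: int, hops: int) -> int:
    """Count documents reachable after a given number of link-following hops."""
    total = seeds
    frontier = seeds
    for _ in range(hops):
        frontier *= links_per_doc  # every doc in the frontier fans out
        total += frontier
    return total

# 10 seeds, 10 links each: one hop reaches 110 docs, three hops over 11,000.
print(reachable_documents(10, 10, 3))  # -> 11110
```

Each extra hop multiplies the frontier by ten, which is exactly why a small seed set balloons into a network of billions of connections after only a handful of hops.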
That's one way LLMs reference knowledge back to sources. And some LLMs, like ChatGPT, have been found to integrate with Bing, so they're likely using Bing's search index for web retrieval. But LLMs use both methods, and their preferred path will always be the one that's easiest to access: opening their own book first, then going to the web.
If you want to see this in action yourself, go to OpenAI's developer playground. Select any model that doesn't have web access. You can practically explore everything I just described.
And if you're even more curious about how the training data gets built, there's a platform called Common Crawl. It's basically a nonprofit that crawls the web and packages it up like a fresh book every month. It's cheap for LLM builders to get data from sources like this because they're getting everything in one place. There are many sources like this, and there's a massive network operating behind the scenes.
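You can poke at those monthly "books" yourself: Common Crawl publishes a public index per crawl that you can query for any domain. The sketch below just builds the query URL; the endpoint pattern follows their CDX index API, but the crawl ID is a placeholder, so look up a current one on commoncrawl.org before using it:

```python
from urllib.parse import urlencode

# Common Crawl exposes a queryable index per monthly crawl. The crawl ID
# below is a placeholder -- check commoncrawl.org for the latest one.

def cc_index_query(domain: str, crawl_id: str = "CC-MAIN-2024-10") -> str:
    """Build a query URL asking which of a domain's pages a crawl captured."""
    params = urlencode({"url": f"{domain}/*", "output": "json"})
    return f"https://index.commoncrawl.org/{crawl_id}-index?{params}"

print(cc_index_query("example.com"))
```

Fetching that URL returns one JSON record per captured page, which is a quick way to see whether (and how often) a given site made it into the "book" an LLM may have trained on.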
That's how the entire API-based tracking works.
NOW THE UI-BASED TRACKING — THIS IS THE CRITICAL PART
This is where our ideal customer actually lives.
Non-Logged-In UI Tracking:
You might have recently seen tools claiming they do "UI-based tracking", tools like Radarkit (radarkit.ai), for example. What they're doing is basically automating browser profiles at scale in the cloud. If you've ever heard about bots automating your browser, it's something similar, just happening at scale. Think of it like opening an incognito tab, going to ChatGPT, typing a query, scraping the results, and feeding them into a dashboard. That's the non-logged-in version.
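The browser-driving half of that pipeline needs a framework like Playwright or Selenium, but the second half, turning a scraped answer into rankable data, is plain string work. Here's a minimal, framework-agnostic sketch of that step; the function name and sample text are hypothetical, not anything from a real tool:

```python
import re
from urllib.parse import urlparse

# Hypothetical post-scrape step: once the raw answer text has been pulled
# out of a non-logged-in session, extract which sites were cited.

def extract_cited_domains(answer_text: str) -> list[str]:
    """Return unique domains from URLs found in a scraped answer, in order."""
    urls = re.findall(r"https?://[^\s)\"'>]+", answer_text)
    seen, domains = set(), []
    for url in urls:
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host and host not in seen:
            seen.add(host)
            domains.append(host)
    return domains

scraped = ("Top options include https://www.examplelaw.com/nyc and "
           "https://injuryfirm.example/attorneys (see https://examplelaw.com).")
print(extract_cited_domains(scraped))
```

Run the same query across many incognito sessions, feed each answer through a step like this, and you have the raw counts a "visibility" dashboard is built on.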
Logged-In UI Tracking:
This is where all of our ICPs actually are.
Let me give you a real example. Say I'm serving a client who's a personal injury attorney in NYC. If my client's ideal lead is someone who urgently needs an emergency lawyer and searches in ChatGPT while logged into their account, that's a logged-in session.
What happens in a logged-in session? Personalization filters get applied on top of everything.
This session is heavily influenced by the previous history of that particular user's account. You've probably noticed it yourself: sometimes when you search in ChatGPT, it throws in extra details pulled from your past memory.
In our example, let's say the lead is originally from Spain. The model might start surfacing Spanish-speaking attorneys or lawyers with experience serving Spanish clients. Or maybe from past interactions it learned that this person isn't a high-budget client, so it automatically adjusts its recommendations that way.
This is the environment where your ideal customer is actually searching.
WRAPPING UP
Whenever you see rank trackers claiming "we do this, we do that," everything they offer falls into one of these categories: API (with or without web search), UI non-logged-in, or UI logged-in.
Now you need to make a decision about what your actual requirement is and which type of tracking fits your needs.
I'm wrapping up here for now. But if anyone's interested, let me know which topic you want me to go deeper on next:
- Fan-out queries
- LLM cache versions (the temporary memory LLMs build about your website)
- Listicle LLM manipulations
- LLM manipulation for e-commerce
- LLM manipulation for local SEO
- Deep Research: what's actually happening under the hood
- SERP vs AI Mode: why top-ranking Google results aren't showing up in AI overviews