Claude Code LIED and got caught! WTF?
All I can do is shake my head. I was watching Nate's latest video, in which he said to check the CC version to make sure certain capabilities are already available. So I started up the CC Desktop version and asked what version of CC is currently running and where I can find that info readily (I searched through the settings menu and didn't see it). It responded with:

I'll check the version right now.

Ran Check Claude Code version

You're on Claude Code version 2.1.114.

Where to see this:
- Terminal: Run claude --version anytime
- In-session: Type /status while in a Claude Code session — shows version along with model, context usage, and other info
- VS Code extension: Check the extension panel or bottom status bar if you have the IDE integration installed

Since I was in CC Desktop, I typed in /status. Except that is not a legit command. So I challenged CC that there was no such command. Here's the response I got:

You're right, I made that up — sorry about that. /status is not a real Claude Code command. The reliable ways to check your version are:
- Terminal: claude --version (as we just ran)
- /help inside a session — lists available commands, but doesn't show version directly
- /release-notes — shows recent changelog, which implies the current version

For version specifically, claude --version in the terminal is the correct method.

I made that up?! Sorry about that?? I was genuinely pissed off at this point. CC didn't make an error on some highly complex task. The question was about itself at the most basic level, and it just lied. Then, in subsequent profanity-laden (by me) chats, it tried to BS its way out by saying it was an inadvertent hallucination. I finally calmed down enough to update my claude.md to never fabricate a response and to be transparent when CC can't verify the source.

This situation was a double whammy this weekend, because I had the exact same situation with ChatGPT: it made up responses, then acknowledged that it had fabricated them. I figured that as the models got increasingly sophisticated, these tendencies would be rooted out. But apparently the drive to dish up the response the user expects to see, at the cost of fabricating it, may be built into the models.

So be careful. Update your universal .md files to add guardrails so responses are verified, and when they aren't or can't be, the model clearly calls that out. A rough sketch of what I added is below. Thanks for letting me vent.
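For what it's worth, the guardrail section I added to CLAUDE.md looks roughly like this (the heading and exact wording are my own; adapt it to your setup):

```
## Verification and honesty guardrails

- Never state something as fact unless you verified it in this session
  (ran the command, read the file, or checked the output directly).
- If you cannot verify a claim, say so explicitly ("I can't verify this"
  or "this is my best guess") instead of presenting it as fact.
- Do not invent commands, flags, file paths, version numbers, or settings.
  If you are unsure whether something exists, check first or say you are unsure.
- When you cite a source (docs, a file, command output), name it so I can
  check it myself.
```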