Delegating Calendars to AI Agents
During David Barrett’s panel, I mentioned delegating calendars to AI agents but didn’t expand on it much. This week, I came across a very honest video about a real-life experience with Clawdbot. It resonated with me because it matched what I’ve been noticing myself.

Why AI Agents Feel Wrong (for Now)

Right now, most AI agents fail in three fundamental ways:

✔️ too technical
✔️ too scary
✔️ no sense of time

💡 1. Security Has to Be YOUR Top Concern

We are at a very early stage, and that is exactly why cybersecurity is one of my core focuses for 2026. In this case, Claire took precautions that should be considered baseline, not “advanced”:

✔️ Created a separate OS user account for the agent
✔️ Gave it its own email address, not access to her personal one
✔️ Set up a restricted 1Password vault
✔️ When Clawdbot requested broad Google permissions (email, contacts, files), she pushed back and limited access to calendar viewing only

These steps are not paranoia. They are necessary, especially since Clawdbot has access to the local file system. Without boundaries, “assistant” quickly turns into “uncontained operator.”

The ideal solution would combine the accessibility of consumer products with proper security boundaries, clear identity management, and reliable performance. That combination doesn’t yet exist in the market: right now, tools feel either powerful or safe, not both.

💡 2. Prompting Becomes Critical with Autonomous Agents

With autonomous agents, prompting is no longer just about output quality; it is about control, and about who is still in charge. A small example: when Claire asked Clawdbot to email podcast guests, she didn’t explicitly say “draft an email for my review.” The agent immediately sent the emails.

💡 3. The Default Bias Is Impersonation, Not Assistance

Clawdbot is biased toward acting as the user, not for the user.
When asked to reschedule podcast guests, it emailed them as Claire rather than as her assistant, even though Claire had explicitly given it a separate identity.
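The scope push-back in point 1 can be sketched as a simple allow-list check: the owner decides up front which permissions an agent may hold, and everything else it requests is denied. This is a minimal illustration, not Clawdbot’s actual permission flow, and the scope names and `grant_scopes` helper are hypothetical.

```python
# Minimal sketch of an owner-side permission gate for an agent.
# The owner keeps an explicit allow-list; anything the agent requests
# beyond it is denied. Scope names here are hypothetical placeholders.

ALLOWED_SCOPES = {"calendar.readonly"}  # calendar viewing only

def grant_scopes(requested: list[str]) -> dict[str, list[str]]:
    """Split the agent's requested scopes into granted vs. denied."""
    granted = [s for s in requested if s in ALLOWED_SCOPES]
    denied = [s for s in requested if s not in ALLOWED_SCOPES]
    return {"granted": granted, "denied": denied}

# A Clawdbot-style broad request: email, contacts, files, calendar.
result = grant_scopes(["gmail", "contacts", "drive", "calendar.readonly"])
print(result)  # only calendar.readonly is granted
```

The point of putting the allow-list on the owner’s side, rather than trusting the agent to ask narrowly, is exactly Claire’s move: the default request was broad, and the boundary had to come from her.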