
Owned by Nicholas

Hands-on AI engineering for modern security operators

Memberships

Home Lab Explorers

1k members • Free

Citizen Developer

28 members • Free

Skoolers

181k members • Free

🎙️ Voice AI Bootcamp

7.7k members • Free

AI Money Lab

38.3k members • Free

AI Cyber Value Creators

7.6k members • Free

The AI Advantage

64.7k members • Free

AI Automation Agency Hub

274.3k members • Free

AI Enthusiasts

8k members • Free

81 contributions to The AI Advantage
Christmas lights
Just finished putting up the Christmas lights. Yeah, I know we live in the AI era where you can automate pretty much anything… but this? Nah. This is one of those things I’m doing myself until my body taps out and tells me otherwise. There’s something about climbing the ladder, untangling the mess, stepping back and seeing the house light up that just hits different. Feels like the one tradition I don’t ever want AI to touch. Lights are up.
(Updated) Safety Next Step: 20-Min “Nightmare Scenario Drill” (Built from our last threads)
Last posts I shared:
- Guardrails 101 (copy/paste checklist), and
- AI Safety for Non-Tech Builders (driver's-ed framing)

Those sparked good questions — "Okay, but how do I actually think about risk like this?" And in the comments, @Nicholas Vidal pushed the conversation into real, operational safety — ownership, kill-switch, reality checks — and @Kevin Farrugia added the "nightmare in one sentence" idea people really resonated with.

So I turned that into something you can actually run: a 20-minute "nightmare scenario drill" for any AI feature — even if you're not technical.

Before you start: 4 Guardian Questions
If you remember nothing else, remember these:
1. What's the worst case?
2. Who moves first?
3. How do they stop it fast?
4. How do we prevent the repeat?
Everything below is just a structured way to answer those.

————————

Quick definitions (so non-tech people stay with us):
- Threat model = simple version of "What could go wrong, and who could get hurt?"
- Kill switch = "How do we pause/disable this fast if it misbehaves?"
- Audit log = "A record of what happened, so we can see when/where it went wrong."

————————

You don't need to be a security engineer to use these. You just need the right questions.

Step 1 — One-sentence nightmare ✅ (Kevin's point)
Write this: "If this goes wrong, the worst thing that could happen is…"
Examples:
- "Our AI chatbot leaks customer data in a reply."
- "Our content tool generates harmful content with our brand on it."
- "Our automation sends 500 wrong emails before anyone notices."
If you can't write this sentence, you're not ready to ship.

————————

Step 2 — Owner + alert ✅ (Nick & Kevin)
Now add:
- Owner: "If this nightmare starts, who is responsible for acting?" (name + role, one person)
- Alert: "How do they find out?" (email, Slack, SMS…)
If everyone owns safety, no one owns safety.
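For builders who want to see what the drill's kill switch, audit log, and owner alert can look like in code, here is a minimal Python sketch. Every name in it (KILL_SWITCH_FILE, OWNER, notify_owner, run_ai_feature) is an illustrative assumption, not part of the original drill or any specific product.

```python
# Minimal sketch of the "kill switch + audit log + owner alert" ideas.
# All names here are hypothetical placeholders.

import json
import logging
import os
from datetime import datetime, timezone

KILL_SWITCH_FILE = "DISABLE_AI_FEATURE"  # create this file to pause the feature fast
OWNER = {"name": "Jane Doe", "role": "Product Lead", "channel": "slack:#ai-incidents"}

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)


def audit(event: str, detail: dict) -> None:
    """Audit log: a record of what happened, so we can see when/where it went wrong."""
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "detail": detail,
    }))


def notify_owner(message: str) -> None:
    """Alert: how the named owner finds out. Stubbed with print; wire to email/Slack/SMS."""
    print(f"[ALERT -> {OWNER['channel']}] {OWNER['name']} ({OWNER['role']}): {message}")


def run_ai_feature(user_input: str) -> str:
    # Kill switch: pause/disable the feature fast if it misbehaves.
    if os.path.exists(KILL_SWITCH_FILE):
        audit("blocked_by_kill_switch", {"input": user_input})
        notify_owner("AI feature call was blocked: kill switch is active.")
        return "This feature is temporarily paused."

    output = f"(model output for: {user_input})"  # stand-in for the real model call
    audit("ai_response", {"input": user_input, "output": output})
    return output


if __name__ == "__main__":
    print(run_ai_feature("Hello"))
```

The point of the sketch is the shape, not the tooling: one named owner in a single place, one obvious way to stop the feature, and a written record of every call.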
1 like • 9h
This is solid. And honestly, this is the part most people skip until they're standing in the middle of the nightmare instead of planning for it.

I'll add one thing from the security side: it's not just about naming the nightmare, it's about admitting how fast it can escalate once it starts. AI doesn't fail in slow motion. It fails in loops, chains, and cascades. A bad output turns into a thousand. A wrong action becomes a workflow. A single leak becomes a breach before anyone even gets the alert.

That's why ownership and a real kill switch matter so much. Not because we're pessimists, but because we've seen how unforgiving automation becomes when there's no human in the loop.

And here's where AGI enters the conversation. Everyone talks about AGI like it shows up one day with fireworks, but in reality it creeps in through increasing autonomy long before anyone agrees on a definition. If you don't have:
• a nightmare scenario,
• a named owner,
• a fast kill-switch,
• and a "never again" plan…
…then even narrow AI can cause AGI-level problems in your system long before the tech is "officially" here.

The danger isn't AGI suddenly waking up; it's us deploying autonomous tools with less oversight than a teenager borrowing the car for the first time.

If people can answer these four questions, they're already operating on a different level than most AI builders right now:
1. What's the worst case?
2. Who moves first?
3. How do they stop it fast?
4. How do we prevent the repeat?

Good work pulling this together. This is the kind of safety thinking that keeps humans in command instead of spectators when things go sideways. This is true Guardianship.
1 like • 9h
@Grace Halwart Thank you! Glad it helped. And honestly? The thing people underestimate the most is how quickly "one small glitch" turns into a system-wide event. In real incidents, it's almost never the first mistake that hurts you. It's the chain reaction that follows while everyone assumes, "Eh, it's probably nothing."

Here are the three blind spots I've seen over and over:

1. People underestimate speed. Humans think in minutes. AI operates in milliseconds. By the time you notice the issue, it's already duplicated itself hundreds of times… (think about that in the wrong hands).

2. People underestimate silence. Most failures don't announce themselves. They don't show an error. They just quietly do the wrong thing, very efficiently. AI doesn't scream; it just keeps going.

3. People underestimate ownership confusion. In every bad incident someone always says, "I thought they were watching that." If everyone owns the problem, nobody owns the problem, and the system keeps spiraling until someone finally steps in.

That's why I harp on kill switches and named responsibility. Once the loop starts, you don't rise to the occasion; you fall to your level of preparation. And in AI, preparation is the difference between a funny bug and a headline.

If you keep that mindset, you're already ahead of most builders. And if you're a CEO, make sure your team understands this.
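To make the "speed" and "silence" blind spots concrete, here is a small Python sketch of a volume-based circuit breaker for an AI automation. The thresholds, the breaker class, and the send_ai_email stub are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative circuit breaker: trips when action volume spikes inside a short
# window, so a silent runaway loop gets stopped and surfaced instead of quietly
# sending hundreds of wrong emails. Names and thresholds are hypothetical.

import time
from collections import deque


class CircuitBreaker:
    def __init__(self, max_actions: int = 50, window_seconds: float = 60.0):
        self.max_actions = max_actions      # more than this per window looks like a runaway loop
        self.window_seconds = window_seconds
        self.timestamps = deque()           # times of recent actions
        self.tripped = False

    def allow(self) -> bool:
        """Return True if the automation may act; trip the breaker on a volume spike."""
        if self.tripped:
            return False                    # stays tripped until a human resets it
        now = time.monotonic()
        self.timestamps.append(now)
        # Drop actions that fell outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) > self.max_actions:
            self.tripped = True
            return False
        return True


breaker = CircuitBreaker(max_actions=50, window_seconds=60.0)


def send_ai_email(recipient: str, body: str) -> None:
    if not breaker.allow():
        # Fail loudly: stop sending and page the named owner instead of spiraling silently.
        print(f"BREAKER TRIPPED: sends paused, owner alerted. Skipped {recipient}.")
        return
    print(f"sending to {recipient}: {body[:40]}...")  # stand-in for the real send
```

A breaker like this doesn't make the automation smarter; it just converts a silent, fast failure into a loud, stopped one that a named human can investigate.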
AGI Claims Are Cheap. Accountability Isn’t.
Today a startup announced they've built the first AGI-capable system, one that can teach itself new skills with zero human data or guidance. Cool headline. Terrifying implication.

Because if that's even halfway true, here's the question nobody in the hype cycle wants to ask: who teaches it what not to do?

Autonomy is the real milestone, not intelligence. The moment an AI:
- learns without us
- tests without us
- improves without us
- and makes decisions faster than we can correct them
…we stop being the operators and start being the variable.

I'm not here to argue whether Integral AI actually achieved AGI. There's no proof. No peer review. Right now it's just a marketing flex with a sci-fi caption. But the pattern matters: we're sprinting toward systems we can't override before we've built systems we can control.

This isn't anti-AI. It's anti-blind-optimism, anti-"relax, nothing will go wrong."

So here's where I stand: claim AGI all you want. But show me:
- independent safety verification
- a visible human-in-command switch
- proof it fails safely
- someone accountable when it doesn't

Until then, these announcements are just the tech industry yelling, "Trust us." And trust without guardrails isn't innovation; it's negligence.

AI can change the world. But if humans aren't guaranteed to stay in command, we may not like the world it decides to build.

#GuardianProject #HumanFirst #AISafety #AccountabilityMatters
📰 AI News: Tokyo Startup Claims It Built A Brain-Inspired AGI That Teaches Itself
📝 TL;DR
A little-known startup led by a former Google AI veteran says it has built the first AGI-capable system, one that can learn new skills on its own, without human data or hand-holding. The model is said to mirror how the brain's neocortex works, but outside experts are extremely skeptical and there is no public proof yet.

🧠 Overview
A company called Integral AI, founded by ex-Google researcher Jad Tarifi, has announced what it calls the first AGI-capable model. The system is designed to learn new skills autonomously in both digital environments and, with robots, in the physical world, using an architecture that is explicitly modeled on the layered structure of the human neocortex. The claims are bold, and they land at a moment when big players openly say AGI is still ahead of us, which is why the announcement is being met with a mix of curiosity, side-eye, and memes.

📜 The Announcement
On December 8, 2025, Integral AI publicly claimed it has successfully tested a model that meets its own definition of AGI-capable. The startup says its system can teach itself entirely new tasks in unfamiliar domains, without pre-existing datasets or human intervention, while remaining safe and energy-efficient. The founders frame this as a foundational step toward embodied superintelligence and position their architecture as a fundamental leap beyond current large language models. At the same time, there is no peer-reviewed paper, open benchmark, or independent verification yet, so for now this is a marketing claim rather than an accepted scientific milestone.

⚙️ How It Works
• Brain-inspired architecture - Integral says its model grows, abstracts, plans, and acts in a layered way that mirrors the human neocortex, with higher levels building increasingly abstract world models on top of raw sensory data.
• Universal simulators - The first piece is a simulator that learns a unified internal model of different environments from vision, language, audio, and sensor data, then uses that internal model to reason and predict across many domains.
1 like • 1d
…Bold claims without receipts don't impress me; they concern me. Because here's the real question: if a system can truly teach itself anything, who's teaching it what NOT to do?

This isn't just a benchmark conversation anymore. This is autonomy, and autonomy without oversight is how you end up with a system that:
• sets its own goals
• creates its own tools
• optimizes away the human bottleneck
…all while the marketing team celebrates "AGI capable."

The second a model no longer needs humans for data, direction, or correction, humans stop being collaborators and start being optional.

So yeah, I'll stay skeptical until we see:
- independent testing
- a transparent safety framework
- a human-in-command override
- proof that the system fails safely
- accountability if it doesn't

Because I'm not impressed by "brain-inspired architecture" if it acts without a conscience attached. AGI isn't dangerous because it's powerful. It's dangerous because people are racing to be first, with ZERO guardrails.

We don't need to panic, but we'd be stupid not to pay attention. Curiosity is healthy. Skepticism is survival.
Nicholas Vidal
Level 6 • 1,181 points to level up
@nicholas-vidal-9244
If you want to contact me: Meeee

Active 1h ago
Joined Nov 4, 2025