Owned by Nicholas

Hands-on AI engineering for modern security operators

Memberships

Home Lab Explorers

1k members • Free

Citizen Developer

27 members • Free

Skoolers

180.4k members • Free

🎙️ Voice AI Bootcamp

7.7k members • Free

AI Money Lab

38.2k members • Free

AI Cyber Value Creators

7.5k members • Free

The AI Advantage

64.5k members • Free

AI Automation Agency Hub

273.8k members • Free

AI Enthusiasts

7.9k members • Free

71 contributions to The AI Advantage
It’s been real, AI fam…
I just got a message saying my posts about AI safety were “battling against AI” and “not useful.” Translation: only talk about the shiny upside, or don’t talk at all.

Let me be very clear: I’m not anti-AI. I’m anti-blind-optimism. You don’t build safe systems by ignoring the uncomfortable questions. You build disasters that way. History has a name for the people who raise concerns early: we call them the ones who were right.

If an AI community bans conversation about risk, then it’s not a community learning about AI; it’s a marketing team cheering for it.

AI is absolutely the future. But if that future doesn’t include humans staying in command, then congratulations, we just built our own replacement.

Positivity doesn’t protect people. Guardrails do. And if talking about guardrails is considered a threat… that should scare everyone a lot more than the post they deleted.
Security isn’t built by people who only talk about the positive outcomes.
As someone who has spent years in cybersecurity and risk management, I don’t have the luxury of pretending everything is fine when it isn’t. My job, and my responsibility, has always been to look at the threat before it becomes the headline. If pointing out real risks in AI gets labeled as “fear-based,” then maybe the fear isn’t in the message; maybe it’s in how uncomfortable the truth is.

Security isn’t built by people who only talk about the positive outcomes. It’s built by people willing to ask:
- What happens when this fails?
- Who stays accountable?
- How do we shut it off?
- What is the worst-case scenario?

That’s not negativity. That’s due diligence.

I will always raise the red flags when I see them, not because I want to scare people, but because I’ve seen what happens when no one does. And that’s the truth: AI can transform the world, yes. But transformation without guardrails is how you end up with unrecoverable mistakes.

If being direct about the dangers makes some people uncomfortable, that’s a sign the message needs to be heard even louder. Guardians ask the hard questions. It’s literally what keeps people safe.

#GuardianProject #CybersecurityMindset #RiskManagement #HumanFirst
2 likes • 8h
@Alya Naters Exactly. Accountability isn’t negativity; it’s leadership. Everyone loves to talk about how AI will “change the world,” but nobody wants to talk about what happens when it changes the world in ways we didn’t approve. If raising the hard questions gets labeled as fear-based, then we’re prioritizing comfort over survival. And that’s how you end up learning lessons the hard way… with no rewind button.

You’re right: the ones who build the guardrails aren’t the people clapping at the finish line. They’re the ones standing at the edge making sure the track doesn’t end in a drop.

I’ll keep raising the flags, because I’d rather be the guy who annoyed a few optimists today than the guy who said nothing and watched everyone get blindsided tomorrow. Appreciate you being one of the people paying attention.
1 like • 8h
Fair question. Here’s how I see it: I’m not here to argue with the cheerleaders or “beat” them. I don’t need to; reality does that on its own. My responsibility, with the background I have, is to keep speaking up before reality hits. That’s it.

Some people are wired to celebrate what’s possible. I’m wired to call out what’s dangerous. Both lanes matter. But someone has to be willing to say, “Hey… we might not want to sprint into this blind.” If that makes me the guy who asks the uncomfortable questions, good. Those are the questions that keep people alive in every other domain I’ve worked in.

So I’m not overcoming anyone. I’m staying exactly where I’m needed: right at the point where excitement meets consequences.
Let’s educate… what does history show?
Every time in history when people tried to silence the warning, the warning won. We’ve seen this play out over and over:

- Challenger: Engineers said the O-rings would fail in cold weather. Leadership called it “negative thinking.” The shuttle exploded on national TV.
- Chernobyl: Technicians flagged design flaws and rushed tests. Warnings were buried to protect the narrative. Half of Europe paid the price.
- Boeing 737 MAX: Engineers screamed about broken software. They were told it would “hurt business.” Two planes went down. Hundreds dead.

Same cycle every time:
- Someone raises a technical risk
- They get labeled “overreacting”
- The risk shows up anyway
- Everyone pretends it was unpredictable

We don’t fail because we don’t know the danger. We fail because someone decides the danger isn’t “comfortable enough” to talk about. And now we’re doing it again with AI. The only difference this round? If we ignore the warnings here, we don’t get another chance to correct it.

Again, I’ve spent my career in cybersecurity and risk management. My entire job has been seeing the threat before it becomes the headline. So if calling out the dangers of AI gets tagged as “fear-based,” then fear isn’t the problem; denial is.

I’m not here to hype AI. I’m here to protect the humans who have to live with it, so we don’t repeat history on a bigger scale. Silencing the warnings never saved anyone. Listening to them has. The biggest threat to humanity isn’t AI; it’s humans too scared to talk about what AI can actually do.

#GuardianProject #HumanFirst #AISafety #CyberSecurityMindset
Funny thing about “AI safety” conversations…
The fastest way to get a post removed is to talk about the actual risks. Not the fluffy “AI can help you be productive” talk. Not the “10 cool automations for your business” talk. But the uncomfortable truth that if we don’t pay attention, AI will happily outperform us… and then out-prioritize us.

Some folks call that “fear-based.” I call it paying attention. Silencing people who raise concerns doesn’t make the concerns go away. It just makes sure we face them later, unprepared.

We’re told: “Focus on the positive. Stay constructive.” Cool. But guess what: every guardrail ever built started with someone saying a negative outcome was possible. If we only allow conversations where everyone nods and smiles, then congratulations, we’ve already automated the most dangerous thing of all: critical thinking.

AI doesn’t have to censor us. We’re doing a pretty good job of that ourselves.

N1X… put that in your pipe and smoke it…
Educational conversation
Hey, I saw the feedback, and I’ve gotta be honest: if a post raises real risks that impact everyone using AI, it shouldn’t be labeled as “fear-based.” That’s called responsible conversation. We can’t sit here and only cheerlead the positive use cases while silencing the uncomfortable parts. Innovation doesn’t die from criticism; it dies from groupthink.

The post wasn’t doom for the sake of doom. It was a reminder that:
- Accountability matters
- Guardrails matter
- Humans staying in command matters

If a community about AI can’t handle a discussion about the actual dangers of AI, that’s a red flag, not a guideline. I’m all for productive, educational conversation; that’s exactly what I was doing. But “only talk about the exciting parts” isn’t safety, it’s denial. We can’t avoid the hard conversations and then act surprised when the problems blow up. If we want to engage the community safely, we have to talk about all of it: the potential and the risk.

I’m not posting to scare people. I’m posting to wake people up before we sleepwalk off the cliff.

— N1X
Nicholas Vidal
Level 6
1,238 points to level up
@nicholas-vidal-9244
If you want to contact me… Meeee

Active 51m ago
Joined Nov 4, 2025