I need to be honest. When I started getting serious about AI and automation, I treated AI like a speed-boost button: crank up output, slash costs, pat myself on the back. If a bot could replace a full team, great. One more win on the KPI board. I was bringing the future into the present by replacing all these boring, repetitive tasks with AI agents.
When I presented these projects, I was totally hyped about all the cool enhancements they brought to the table. The human element was barely in the picture.
Then someone wiser than me came to me one day and said: "Didac, you are scaring people. They don't really understand what you are doing, and they only see it as a threat." It hit me, and at that moment I also realised why people hadn't been helpful and were reluctant to collaborate with me on these projects. It all made sense then. It was an eye-opening moment, and it marked a before and after in my view of AI ethics and my approach going forward.
Then, recently, a colleague looked me in the eye and asked, “Sure, we’re making things faster and more productive, and we’re saving on new hires and recruitment fees… But what happens to the people we just made ‘extra’?”
That simple question hit me like a punch. I went home, stared at my laptop, and pictured a future factory floor, lights humming, screens glowing, no voices, no laughter. Efficient, yes… but hollow and sad.
Unfortunately, that’s where we’re headed unless we draw a hard line right now.
So before I wrote another line of automation code, I put together a nine-point code of conduct, a promise that every agent I build will lift humans up, not push them out. I’m calling it the Human-First AI Automation Manifesto.
I’m sharing it publicly because promises kept in private are easy to break. Hold me to it. Hold each other to it. Let’s prove that tech can serve people, families, whole communities, and not the other way around.
📣 Tag any builder, practitioner, PM, or exec who needs the reminder that spreadsheets don’t feel fear, but humans do.
If we get this right, the future of work will buzz with more purpose, more creativity, and more belonging. If we ignore it, well… the dark arrives quicker than we think.
This only works if we are all part of it and hold ourselves accountable.
So PLEASE, if you share these principles, share this post and/or the Manifesto with your network, get a copy of it, and put it on the front page of your proposals.
Let’s choose the light. 🌱
🔖 The Human-First AI Automation Manifesto 🔖
(for everyone who designs agents, orchestrations, and workflow automations)
Preamble
We stand at a turning point: code can now act, decide, and speak at machine speed.
Yet technology’s only legitimate claim to exist is that it enlarges human possibility.
Therefore we, the builders, consultants, tinkerers, and dreamers, pledge ourselves to the following articles.
Our loyalty is to people first, profit second, and hype never.
Article I – Purpose Before Payroll
We will not ship an automation whose chief benefit is cutting jobs without also creating equal or greater opportunity for the very people displaced.
Article II – Co-Creation With the Affected
We will invite frontline workers into design from day one, treating them as partners, not edge-cases to be surprised by launch day.
Article III – The Inviolability of Human Override
Every critical step shall have a clear, immediate path to a human decision-maker, and every agent a visible kill switch that anyone responsible may press.
Article IV – Radical Transparency
Agents must announce themselves, disclose their limits, and log their reasoning in language a non-engineer can follow. No hidden bots.
Article V – Data Minimalism
We will collect and retain only the data strictly required for the task, anonymising whenever possible and deleting when the task is done.
Article VI – Bias as a Show-Stopping Bug
We will test for disparate impact with the same rigor we test for crashes, and we will block deployment until unfair patterns are fixed or the task reverts to humans.
Article VII – Safety Over Speed
Deadlines do not outrank dignity. We will red-team for jailbreaks, prompt injection, and fraud before launch, and re-test on a living schedule.
Article VIII – Continuous Human Impact Monitoring
Metrics of success will include morale, up-skilling hours, customer trust, and community well-being, not just latency and cost savings.
Article IX – Named Stewardship
Every deployed agent will have a single, accountable steward empowered to pause it, audit it, and answer for its actions.
📢Call to Action📢
Read these nine articles aloud at project kickoff.
Pin them beside the Kanban board.
Hold one another to them, because automation that forgets humanity forgets its reason to exist.