
Owned by Angel

CTO · 27y. Decision space for Platform Engineering, AIOps & AI DevSecOps leaders. Board-ready output every session. No theory. EN/ES.

Memberships

Ingresos Creativos

32 members • Free

AI Automation Mastery

28.2k members • Free

WaLead.ai en Español

723 members • Free

IA Desde Cero

236 members • Free

Acción Digital

44 members • Free

Fernando SmallCaps

7 members • Free

Emprendedores.com

8k members • Free

Instituto AMZ™

7.9k members • Free

[Archived] KubeCraft (Free)

11.2k members • Free

10 contributions to AI Platform Engineering IDP
I watched a business owner spend 45 minutes doing something that now takes 3.
He wasn't doing anything wrong. He just hadn't seen what was possible yet. That's the thing about running a business: you get so deep in the day-to-day that you stop questioning how things get done. The biggest shifts don't always come from working harder. Sometimes it's just seeing your own process from the outside. What's something in your business you've never questioned but probably should?
1 like • 28d
This resonates a lot: you get so used to the daily friction that you start treating it as “just how it works.” The biggest leap often comes from stepping back, mapping the flow end-to-end, measuring where time actually goes, and then automating what’s repetitive. What’s one task in your business you’d be surprised to measure in real minutes/hours today?
Automation isn't just for big companies with big budgets.
That's the biggest misconception I keep hearing. Small teams. Local businesses. Solo founders. They're quietly saving hours every week without hiring anyone new or spending a fortune. The gap between "too small for this" and "this changed everything" is usually just one conversation. What's a tool or process that surprised you with how much it actually helped your business?
0 likes • 30d
Automation is valuable, but the real issue is context and scale. For small teams, local businesses, and narrow use cases, platforms such as n8n, Zapier, or Make can be effective for rapid proofs of concept and lightweight workflows. They are useful when the objective is speed, simplicity, and a very specific operational outcome.

However, that is a very different discussion from enterprise-grade automation. Those of us who have spent decades in automation have already seen many generations of tools and approaches. When you are dealing with Kubernetes clusters, hybrid IT, cloud-native architectures, millions of concurrent users, and business-critical transactions, these low-code automation platforms are often not enough. They typically lack the scalability, performance, resilience, governance, and architectural depth required for mission-critical environments.

They can look impressive in demos, but a critical core process in a major enterprise is not something serious organizations delegate to lightweight orchestration tools. At that level, the requirements are entirely different: robustness, observability, security, compliance, performance under load, and long-term maintainability.

So yes, these tools have their place. They can deliver value in contained B2C scenarios or for very specific services with limited complexity. But when enterprise IT is operating at scale, with high financial and operational risk, they are often not the answer.

What I see every day across clients is that much of this market noise creates unrealistic expectations. It can consume time and budget, while adding little value to the environments where automation truly matters.
0 likes • 30d
I agree that automation can create value, but the conversation changes completely when we move from small-scale workflow convenience to enterprise-critical operations. For small teams, local businesses, or narrowly defined processes, tools such as n8n, Zapier, Make, or similar platforms can be useful. They are fast to deploy, relatively inexpensive, and good for proofs of concept, departmental workflows, marketing automations, notification chains, or lightweight integrations.

However, that is not the same as enterprise automation. Those of us who have spent decades in automation have already seen multiple generations of "easy" tools presented as strategic answers. The reality is that once you are operating Kubernetes clusters, hybrid IT estates, cloud-native services, complex integration layers, and business-critical processes with millions of concurrent users, the requirements are fundamentally different. At that level, scalability, resilience, governance, observability, security, auditability, latency control, and performance under load are not optional. They are the baseline.

That is where many low-code automation platforms start to show their limits. They are often excellent for isolated use cases, but they are not designed to be the foundation of large-scale, mission-critical enterprise systems. A serious organization will not delegate core banking flows, real-time supply chain orchestration, large-scale citizen services, or revenue-critical transaction processing to tools built primarily for convenience and speed of assembly.

They are not useless. They simply belong to a different category. They can work well for a WhatsApp-based customer interaction, a lead-routing workflow, a simple website launch, or a back-office notification process. But once the system has to support enterprise-grade throughput, strict SLAs, regulatory controls, and deep operational accountability, these tools are usually insufficient on their own.
What I see across clients is not a lack of innovation, but a repeated confusion between tactical automation and strategic automation. The first can be solved with lightweight tools. The second requires engineering discipline, architecture, platform thinking, and solutions designed for performance and scale from day one.
DORA Metrics
Throughput and instability

DORA's software delivery performance metrics focus on a team's ability to deliver software safely, quickly, and efficiently. They can be divided into metrics that show the throughput of software changes and metrics that show the instability of software changes.

Throughput

Throughput is a measure of how many changes can move through the system over a period of time. Higher throughput means the system can move more changes through to the production environment. DORA uses three factors to measure software delivery throughput:
- Change lead time: the amount of time it takes for a change to go from committed to version control to deployed in production.
- Deployment frequency: the number of deployments over a given period, or the time between deployments.
- Failed deployment recovery time: the time it takes to recover from a deployment that fails and requires immediate intervention.

Instability

Instability is a measure of how well software deployments go. When deployments go well, teams can confidently push more changes into production, and users are less likely to experience issues with the application immediately following a deployment. DORA uses two factors to measure software delivery instability:
- Change fail rate: the ratio of deployments that require immediate intervention, typically resulting in a rollback of the changes or a "hotfix" to quickly remediate the issue.
- Deployment rework rate: the ratio of deployments that are unplanned and happen as a result of an incident in production.

Taken together, these two factors of software delivery performance (throughput and instability) give teams a high-level understanding of how they are delivering. Measuring them over time provides insight into how software delivery performance is changing. These factors can be used to measure any application or service, regardless of the technology stack, the complexity of the deployment process, or its end users.
If AI could save you time in one area of your business… what would it be?
A lot of businesses aren't short on ideas. They're short on time, speed, and consistency. AI is changing that fast. If you could use AI to make one part of your business easier, faster, or less manual… what would it be?
0 likes • Mar 15
If AI could save us time in one area, it would be incident triage + RCA (MTTR). In real delivery teams, the bottleneck isn't a lack of alerts; it's the manual work of correlating logs/metrics/traces, identifying the likely root cause, and coordinating the next best action. AI can de-noise alerts, correlate signals across services, suggest probable causes, propose runbook steps, and auto-draft the post-incident report + Jira tasks: faster recovery, fewer repeat incidents, and measurable risk reduction.

If AI could save time in one more area, I'd pick turning a business idea into a production-ready change (end-to-end T2M). But the win isn't "AI alone"; it's the stack:
• Spec-Driven Development: keep specs as code, with an Agent.md that defines requirements, acceptance criteria, NFRs (security/reliability), and the evidence needed.
• GenAI agents: generate the user-story breakdown, test cases, IaC changes, and release notes from the spec, consistently.
• MCP Server (governed tool access): agents safely interact with Jira/Git/CI/CD/KBs without leaking secrets.
• IDP (golden paths): standardized templates for pipelines, environments, and observability so teams don't reinvent delivery.
• DevOps/DevSecOps + SRE: automated quality/security gates, progressive delivery, SLO-based monitoring, and fast rollback.

Executive outcome: faster throughput with lower change risk and better MTTR, measured via DORA metrics + SLOs.
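The "de-noise and correlate" step mentioned above can be sketched very simply: group alerts from the same service that fire within a short window, so one incident surfaces as a single triage candidate instead of a storm of pages. This is an illustrative toy, assuming a hypothetical `Alert` record; real AIOps correlation would also use topology and trace context.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    service: str
    fired_at: datetime
    message: str

def correlate(alerts: list[Alert], window: timedelta) -> list[list[Alert]]:
    """Group alerts per service that fire within `window` of the previous one,
    so a noisy incident becomes one candidate for triage/RCA."""
    groups: list[list[Alert]] = []
    for alert in sorted(alerts, key=lambda a: (a.service, a.fired_at)):
        last = groups[-1] if groups else None
        if (last and last[-1].service == alert.service
                and alert.fired_at - last[-1].fired_at <= window):
            last.append(alert)   # same service, close in time: same incident
        else:
            groups.append([alert])  # new incident candidate
    return groups
```

Each resulting group is what an agent would then enrich with logs/metrics/traces before proposing a probable cause and runbook step.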
🚀 CloudBees Unify: the DevSecOps "control plane" for enterprises that do NOT want migrations
If your organization lives with Jenkins + GitHub Actions + GitLab + a thousand other tools, the problem is no longer "doing CI/CD"… it's governing it: visibility, risk, compliance, traceability, and friction-free releases. CloudBees Unify proposes exactly that: a shared language for the entire SDLC, connecting your current toolchain and applying policies, approvals, and auditable evidence without "rip & replace".

✅ What catches my attention most (in practical terms):
🔌 Connects your tools (repos, pipelines, tickets, tests, scanners) into a single model.
🧠 Normalizes the "delivery truth": readiness, risk, and activity in a single view.
🛡️ Security & compliance by policy: gates, separation of duties, exceptions… with continuous evidence.
🧪 Smart Tests (AI) to speed up feedback (CloudBees claims up to an 80% reduction in test execution time).
🎛️ Feature Management for safer releases (controlled rollouts with flags).
🧩 Release Orchestration for complex multi-team/multi-component releases.
📊 Analytics as a "single source of truth" for metrics and status.

📈 Impact claims (per CloudBees):
- 7-10x faster deployments
- up to 21,000 engineering hours saved per year
- 100x lower cost of fixing vulnerabilities before prod

💬 Question for the group: What's holding you back most today: (1) visibility, (2) compliance, (3) quality/tests, or (4) release orchestration?

#DevSecOps #PlatformEngineering #CICD #Governance #CloudBees #SoftwareDelivery https://www.cloudbees.com/unify
Angel Carrasco
2
11 points to level up
@angel-carrasco-4648
CTO 27y | AIOps (Agentic MCP), DevOps, DevSecOps, CloudOps, FinOps, SDD, GitOps TDM, SAFe, DASA, Platform Engineering - Internal Development Platform.

Active 4d ago
Joined Jan 25, 2026
Madrid