IAG AI Lab Opportunity
Roch, in our last conversation you asked me to reflect on what assumptions would need to be true for this role to be interesting and viable. It was a thought-provoking prompt. After reviewing my notes from our calls, I've shaped seven assumptions I'd like to explore further with you and your team.
If these hold, the AI Lab can execute on its immediate commitments while building the foundation for something best-in-class: delivering IMS value, building an AI product management function, reforming governance, establishing embedded teams, and creating the operating model for commercial domain expansion.
To ground these assumptions, I've attempted an assessment of current capabilities against what a best-in-industry function requires. A caveat: this is based solely on our conversations and publicly available information, so I'm working from a partial picture and have assuredly missed things. What follows is a working draft, offered as a starting point for discussion; I'd rather share something we can react to together than wait for perfect information.
Here's what I heard: the AI Lab has built genuinely impressive technology. Models work. The IMS platform delivers value. But getting AI from prototype to production takes too long. Adoption depends on relationships rather than systems. Governance, designed for traditional software, creates friction at every turn.
These symptoms point to a common pattern. The AI Lab has invested deeply in the center of the ecosystem: AI/ML engineering, data science, model development. That investment has paid off. But the surrounding capabilities that turn models into products, and products into business outcomes, haven't been built at the same pace. This isn't a criticism. It's a natural stage of maturity.
Consider what needs to happen for IMS to deliver the €500M+ in value you're targeting. Someone has to define what success looks like for each application. Someone has to design experiences so maintenance technicians actually use it. Someone has to navigate governance without getting stuck in review cycles. Someone has to drive adoption across four airlines with different priorities. These aren't engineering problems. They're product problems, organisational problems, change management problems.
When I assess AI organisations, I typically use a capability matrix that covers the full ecosystem required to move from working models to business outcomes. It spans technical foundations (engineering, data science, platform), product disciplines (product management, design, research), operational enablers (governance, change management), and delivery mechanisms (federated teams, adoption support). This framework helps identify where an organisation is strong, where gaps exist, and where investment will have the highest leverage.
Based on our conversations, here's my initial read on the AI Lab:
| # | Capability | Status | Notes |
|---|---|---|---|
| 1 | AI Product Management | Gap | No dedicated PM function. Engineers making product decisions by default. |
| 2 | Governance | Gap | Portfolio decisions, resource allocation, value tracking, security review: mechanisms designed for traditional software create friction. |
| 3 | Change Management | Gap | Adoption depends on individual relationships, not systems. |
| 4 | Embedded Teams | Gap | No permanent presence at OpCos. Central lab model only. |
| 5 | Platform & Infrastructure | Developing | Path-to-production exists but slow. Data access negotiated per-project. |
| 6 | AI/ML Engineering | Strong | Core strength. IMS applications built and working. |
| 7 | Data Science | Strong | Ben Diaz research team exists. Need to better understand research roadmap. |
AI Product Management
This is the biggest gap, and honestly the one I find most exciting. Everything else in this document depends on getting this right.
Everything starts with the business and user problem, not the technology. The foundation of product management is understanding what users actually need, what business outcomes matter, and how to prioritise ruthlessly across competing demands. Without this, even brilliant engineering produces solutions looking for problems.
I see four key competency areas that combine differently here than in traditional product management:
1. Core Product Craft
User research, ideation, prioritisation, requirements definition, experimentation, go-to-market. These foundations need to be adapted for AI contexts: user research that surfaces trust concerns and anxiety about AI changing how people work, prioritisation that weighs data availability and model feasibility alongside business value.
2. Technical Fluency
AI PMs don't write code, but they need genuine technical fluency to collaborate credibly with data scientists and ML engineers. This means understanding how models are deployed and integrated via APIs, the basics of different algorithm types and their tradeoffs, and what 'MLOps' actually means in practice.
3. Leadership and Cross-Functional Influence
This is arguably where the gap matters most. The AI PM sits at the center of a complex stakeholder ecosystem: data scientists, ML engineers, UX designers, legal and compliance, business units, front-line users. The PM's job is to translate AI capability into business value for executives, broker consensus when priorities conflict, and engage empathetically with users whose workflows are changing.
4. AI Product Lifecycle Awareness
This is distinct from technical fluency. It's understanding the product journey from data to deployment to monitoring, and knowing what questions to ask at each stage. During data engineering: is the data relevant, compliant, high-quality? During model development: do the evaluation metrics align with business goals? During validation: what's the go/no-go threshold for deployment?
We've developed detailed competency frameworks and assessment rubrics for AI PM roles across these dimensions. Happy to share if useful as you think about what this function should look like.
This doesn't even account for the emergence of agentic AI within the product development cycle itself. As AI agents become collaborators in building AI products, PMs will need to understand how to orchestrate human-AI teams, evaluate agent outputs, and maintain quality standards in increasingly automated workflows.
Governance
Think of governance as the central nervous system of the AI Lab. It doesn't do the work, but it ensures the right conditions exist for work to succeed and monitors whether value is being realised. This includes how funding decisions get made, how talent gets allocated across initiatives, how we track whether investments are paying off, how security and compliance get embedded without becoming blockers, and how we ensure AI is developed responsibly and ethically.
Portfolio Governance
Before we get to security review, there's a bigger governance question: how does the AI Lab decide what to build, what to kill, and where to allocate scarce resources? Portfolio governance means having clear criteria for prioritisation, regular checkpoints to assess whether initiatives are delivering value, and the discipline to stop work that isn't panning out.
Value Tracking
IMS has ambitious value targets. But how do you know if you're capturing that value? Value tracking means defining upfront what success looks like in business terms, instrumenting systems to measure it, and creating feedback loops so the organisation learns what's actually working.
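To make that concrete, here's a minimal sketch of what defining success upfront could look like as a metric registry. Everything in it is illustrative: the application name, metric, and numbers are invented, not real IMS figures.

```python
from dataclasses import dataclass

@dataclass
class ValueMetric:
    """One business outcome an AI application is accountable for."""
    application: str  # which IMS application owns this metric
    name: str         # a business-facing outcome, not a model metric
    baseline: float   # measured before the application launched
    target: float     # what success was defined as, upfront
    current: float    # latest value from instrumented systems

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap captured so far."""
        gap = self.target - self.baseline
        return (self.current - self.baseline) / gap if gap else 0.0

# Invented numbers for illustration only.
metric = ValueMetric(
    application="predictive-maintenance",
    name="unscheduled_groundings_per_month",
    baseline=40.0, target=28.0, current=35.0,
)
print(f"{metric.name}: {metric.progress():.0%} of targeted value captured")
```

The point isn't the code; it's that value capture becomes a number someone owns and reviews on a cadence, rather than a target mentioned in a business case and never revisited.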
Security and Compliance
You mentioned cyber can't inspect 200+ agents. That's a scaling challenge that won't solve itself. One approach I've seen work: risk-tiered governance where high-stakes AI gets deep scrutiny while internal tools get lighter touch, with security embedded in CI/CD rather than reviewing at the end.
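Here's a sketch of how risk tiering could be encoded so that CI/CD routes each application to the right gate automatically. The tier criteria below are my assumptions, not a proposed rubric; the real one would be agreed with cyber and compliance.

```python
from enum import Enum

class Tier(str, Enum):
    HIGH = "high"      # deep human security review before release
    MEDIUM = "medium"  # automated checks plus sampled spot-checks
    LOW = "low"        # automated checks only; ship on green

def risk_tier(touches_safety_critical_ops: bool,
              handles_personal_data: bool,
              external_facing: bool) -> Tier:
    """Classify an AI application so the pipeline picks its gate.
    Criteria are illustrative assumptions, not an agreed rubric."""
    if touches_safety_critical_ops:
        return Tier.HIGH
    if handles_personal_data or external_facing:
        return Tier.MEDIUM
    return Tier.LOW

# An internal analytics agent ships on automated checks alone, while
# anything touching maintenance operations routes to deep review.
assert risk_tier(False, False, False) is Tier.LOW
assert risk_tier(True, False, False) is Tier.HIGH
```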
Change Management and Embedded Teams
IMS adoption currently depends on OpCo relationships. When a relationship is strong, things work. When it's not, AI products sit on the shelf. One model that's worked well in my experience: embedded teams at each operating company with dual reporting, accountable to OpCo leadership for local relevance while connected to the central lab for technical quality.
The AI Production Line
You mentioned that getting AI from prototype to production takes longer than it should. That's usually friction accumulating across multiple stages of what I call the AI production line. I'm assuming this is where the AI Lab is strongest, so I'll be briefer here, but these stages matter for IMS because every week of deployment delay is value that doesn't reach the business.
Data Engineering
This is where you source and prepare the raw materials. Four airlines likely means four different data platforms, four different schemas, four different security postures. If every new initiative requires bespoke integration work, you're paying a tax on every project.
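One way to stop paying that tax is a canonical data model: each OpCo gets one adapter, written once, and every subsequent initiative consumes the shared shape. A minimal sketch, with invented field names standing in for whatever the real BA and Iberia schemas actually contain:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MaintenanceEvent:
    """Canonical record every OpCo feed is mapped into exactly once."""
    opco: str
    aircraft_id: str
    event_type: str
    occurred_at: str  # ISO 8601 timestamp

# Hypothetical source field names; the real schemas would differ.
ADAPTERS: dict[str, Callable[[dict], MaintenanceEvent]] = {
    "BA": lambda r: MaintenanceEvent(
        "BA", r["tail_number"], r["defect_code"], r["reported_utc"]),
    "IB": lambda r: MaintenanceEvent(
        "IB", r["matricula"], r["tipo_evento"], r["fecha_hora"]),
}

def normalise(opco: str, raw: dict) -> MaintenanceEvent:
    """Integration cost is paid here once, not per project."""
    return ADAPTERS[opco](raw)

event = normalise("BA", {"tail_number": "G-XWBA",
                         "defect_code": "HYD-LEAK",
                         "reported_utc": "2025-01-15T09:30:00Z"})
```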
Model Development
This is where data scientists build and refine prototypes, and likely the Lab's deepest strength. The business risk here isn't capability but coordination: as the team scales, shared standards for experiment tracking and versioning become essential.
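To illustrate what a shared standard could mean in practice, here's a minimal experiment record. In reality a tool like MLflow typically fills this role; the model name, parameters, and scores below are assumptions for illustration.

```python
import hashlib
import json
import time

def log_experiment(model_name: str, params: dict, metrics: dict,
                   data_snapshot: str) -> dict:
    """Shared convention: every run records what was trained, on which
    immutable data snapshot, with which parameters, and how it scored,
    so any result can be reproduced and compared across the team."""
    record = {
        "model": model_name,
        "params": params,
        "metrics": metrics,
        "data_snapshot": data_snapshot,
        "timestamp": time.time(),
    }
    record["run_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()[:12]
    return record

# Invented example run for illustration.
run = log_experiment("turnaround-delay-v3",
                     params={"learning_rate": 0.01, "depth": 6},
                     metrics={"auc": 0.87},
                     data_snapshot="ims-events-2025-01")
```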
Model Validation
Before anything goes to users, it needs quality control: does it perform reliably, does it work fairly across different user groups, is it secure? This is where governance friction typically shows up. The business impact: work either queues behind security review, delaying value, or teams find workarounds, creating risk.
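A hedged sketch of what that go/no-go check might look like, with illustrative thresholds and invented per-OpCo scores:

```python
def validation_gate(scores_by_group: dict[str, float],
                    floor: float = 0.80,
                    max_spread: float = 0.05) -> bool:
    """Go/no-go before deployment: every user group must clear the
    performance floor, and no group may trail the best-served group
    by more than the allowed spread. Thresholds are illustrative."""
    worst = min(scores_by_group.values())
    best = max(scores_by_group.values())
    return worst >= floor and (best - worst) <= max_spread

# Invented scores: the model clears the floor everywhere, but the
# gap between BA and Vueling users is wide enough to block release.
scores = {"BA": 0.91, "Iberia": 0.90, "Vueling": 0.83, "Aer Lingus": 0.89}
print(validation_gate(scores))  # False: spread of 0.08 exceeds 0.05
```

Codifying thresholds like these is what turns validation from a review queue into a pipeline step.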
Model Deployment
This is where the validated model actually reaches users. With four OpCos, this likely means coordinating with four different IT organisations, four different infrastructure environments. The business impact: handoff friction that multiplies as the portfolio grows.
Model Monitoring
AI models degrade over time as the real world drifts from the data they were trained on. Without automated monitoring, models that worked six months ago may be underperforming now, and no one knows. The business impact: silent erosion of the value you thought you were capturing.
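To show what automated monitoring can look like, here's a sketch using the Population Stability Index, a common drift score that compares live data against the training distribution. The 0.2 alert threshold is a widely used rule of thumb, and the data below is simulated.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training-time distribution
    (expected) and live production data (actual)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_pct = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(np.clip(actual, edges[0], edges[-1]),
                         edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # what the model was trained on
live = rng.normal(0.5, 1.2, 10_000)   # drifted production traffic
if psi(train, live) > 0.2:            # rule-of-thumb alert threshold
    print("Drift alert: flag model for retraining review")
```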
Frontier AI Strategy
The field is moving fast. Beyond LLMs: world models that can simulate and predict, reasoning architectures, agentic systems. I'd want a deeper conversation with Ben Diaz to understand IAG's position. Are there opportunities to explore world models for operations simulation in parallel with current LLM work?
I tend to envision transformations like this as a multi-phase journey: delivering near-term value on existing commitments while building the capabilities that enable longer-term ambition. In this case, that might mean meaningful IMS EBITDA impact while gradually broadening the portfolio to support Commercial Domains, starting with BA. No small feat.
This is a lot to take on. Generally I like to approach work like this with an agile mindset. Just like our AI products, our capabilities can develop as MVPs with a test-and-learn mentality. We won't get everything right the first time. The goal is to build, measure, learn, and iterate.
I'd typically take this on in program increments with thoughtful load-balancing to maximise impact while maintaining delivery confidence. There's art and science to sequencing work of this complexity. I have perspectives on how to approach it, but the specifics need to be developed collaboratively with the right stakeholders at the table.
The Transformation Journey
As part of overall governance, I typically work with an AI Product Ops function to help track sequencing, prioritisation, and cross-team dependencies. Together we develop a high-level roadmap that communicates the transformation arc to stakeholders while remaining adaptable as we learn.
To illustrate how that might look, I've included a completely notional sketch below. Please disregard specific sequencing, milestones, and timelines; this is simply a visual representation of how I tend to structure capability-building alongside delivery commitments. It may mirror roadmaps you already have, but I thought it might be useful to share as a conversation starter.
The specific sequencing, timing, and scope would need to be developed together based on what I learn in the first months, stakeholder alignment, operational feasibility, resource allocation, and other prioritisation factors.
Resource Implications
A rough resource sizing to support this journey, with significant caveats:
| Function | Est. Headcount | Role |
|---|---|---|
| Central Platform | 60-70 | AI/ML Engineering, Data Science, AI PM (new), Design (new) |
| Embedded OpCo Teams | 80-90 | Distributed across BA, Iberia, Vueling, Aer Lingus, Loyalty |
| Shared Services | 25-30 | Portfolio Governance, Value Tracking, AI Lab Product Operations |
| Total | 165-190 | |
Working backwards from that target state, here's how I'd likely approach the first 90 days. The temptation in a new role is to move fast and demonstrate impact. I've learned to resist that. Strong perspective, loosely held.
Days 1-30: Listen and Diagnose
Month one is about understanding: immerse in IMS, learn which applications are closest to delivering value and which are stuck, understand what's actually blocking adoption. Deep-dive with each AI Lab team. Meet OpCo stakeholders. Technical conversation with Ben Diaz on research roadmap.
Days 31-60: Design the Operating Model
Three workstreams run in parallel: AI Lab Operations and Governance (portfolio governance, value tracking, resource allocation, and the AI product development lifecycle), AI PM Function Design (role definition, competency requirements, career paths), and Embedded Team Model (conversations with OpCo leadership about partnership).
Days 61-90: First Hires and Foundation
Begin recruiting for the AI PM function. The first 3-4 hires set the culture. Establish the first embedded team at BA. Launch the governance operating model. Create a value tracking dashboard that connects what the AI Lab ships to business outcomes.
| Phase | Focus | Key Outputs |
|---|---|---|
| Days 1-30 | Listen and diagnose. IMS immersion. Stakeholder relationships. | Gap assessment validated. Ben Diaz conversation. Path-to-production map. |
| Days 31-60 | Design operating model. AI Lab governance. PM function. Embedded teams. | Governance framework. PM role definition. BA partnership alignment. |
| Days 61-90 | First hires and foundation. Build the team. Launch operating model. | 3-4 AI PMs hired. BA embedded team. Value tracking live. |
The IAG AI Lab has built something real. Strong AI/ML engineering, a platform that works, demonstrated value in IMS. Many organisations are still trying to get to where IAG already is.
What seems to be missing is the surrounding ecosystem: product management to translate technical capability into adopted products, governance that enables rather than blocks, change management to drive adoption, and a federated operating model that scales beyond individual relationships. These gaps appear solvable.
I've worked on similar challenges before, at National Grid and at LATAM, building product organisations that bridge technical capability and business impact. The context was different, but the pattern feels recognisable. I'm genuinely excited about the possibility of being part of building what comes next.
I look forward to continuing the conversation.