Confidential • Draft for Discussion

IAG AI Lab Opportunity

Initial perspectives and a framework for discussion
Anil Podduturi
December 2025

Roch, in our last conversation you asked me to reflect on what assumptions would need to be true for this role to be interesting and viable. It was a thought-provoking prompt. After reviewing my notes from our calls, I've shaped seven assumptions I'd like to explore further with you and your team.

1. Capability building and IMS execution must happen in parallel. You were clear that IMS is the priority, and I appreciate that focus. However, I also believe that to deliver IMS successfully, we'll need to build the surrounding capabilities alongside it: product management, change management, embedded teams. Executed right, these shouldn't be distractions from IMS. They should accelerate IMS adoption and value capture.
2. The foundation is real. IMS, the engineering talent, the management committee's ambition. You have IMS applications already built. The technology works. The gap is getting it adopted.
3. Leadership is committed to the shift from AI Engineering Lab to AI Product Lab, with all that implies for how work gets prioritised and how success gets measured.
4. There's support to build what's missing. AI product management, design, change management, embedded teams. Not just advice, but a mandate to actually build the capabilities and hire the people.
5. Governance can evolve to match the pace of AI development. Portfolio decisions, resource allocation, value tracking, security review. These mechanisms can become enablers rather than bottlenecks, helping teams move fast while maintaining appropriate oversight.
6. OpCos will accept embedded teams as partners in value creation, not central overreach. The federated model only works if it's welcomed.
7. There's runway to execute. IMS adoption is a multi-year journey, not a quick win. Capability building happens over 12-15 months, but value delivery should be iterative throughout.

If these hold, the AI Lab can execute on its immediate commitments while building the foundation for something best-in-class: delivering IMS value, building an AI product management function, reforming governance, establishing embedded teams, and creating the operating model for commercial domain expansion.

To ground these assumptions, I've attempted an assessment of current capabilities against what a best-in-industry function requires. A caveat: this is based solely on our conversations and publicly available information. I'm working from a partial picture and have undoubtedly missed things. What follows is my current understanding, offered as a starting point for discussion.

This is a working draft. I've certainly got things wrong. But I'd rather share something we can react to together than wait for perfect information.

Here's what I heard: the AI Lab has built genuinely impressive technology. Models work. The IMS platform delivers value. But getting AI from prototype to production takes too long. Adoption depends on relationships rather than systems. Governance, designed for traditional software, creates friction at every turn.

These symptoms point to a common pattern. The AI Lab has invested deeply in the centre of the ecosystem: AI/ML engineering, data science, model development. That investment has paid off. But the surrounding capabilities that turn models into products, and products into business outcomes, haven't been built at the same pace. This isn't a criticism. It's a natural stage of maturity.

Consider what needs to happen for IMS to deliver the €500M+ in value you're targeting. Someone has to define what success looks like for each application. Someone has to design experiences so maintenance technicians actually use it. Someone has to navigate governance without getting stuck in review cycles. Someone has to drive adoption across four airlines with different priorities. These aren't engineering problems. They're product problems, organisational problems, change management problems.

When I assess AI organisations, I typically use a capability matrix that covers the full ecosystem required to move from working models to business outcomes. It spans technical foundations (engineering, data science, platform), product disciplines (product management, design, research), operational enablers (governance, change management), and delivery mechanisms (federated teams, adoption support). This framework helps identify where an organisation is strong, where gaps exist, and where investment will have the highest leverage.

Based on our conversations, here's my initial read on the AI Lab:

1. AI Product Management (Gap): No dedicated PM function. Engineers making product decisions by default.
2. Governance (Gap): Portfolio decisions, resource allocation, value tracking, security review; mechanisms designed for traditional software create friction.
3. Change Management (Gap): Adoption depends on individual relationships, not systems.
4. Embedded Teams (Gap): No permanent presence at OpCos. Central lab model only.
5. Platform & Infrastructure (Developing): Path-to-production exists but slow. Data access negotiated per project.
6. AI/ML Engineering (Strong): Core strength. IMS applications built and working.
7. Data Science (Strong): Ben Diaz research team exists. Need to better understand the research roadmap.

AI Product Management is the biggest gap, and honestly the one I find most exciting. Everything else in this document depends on getting this right.

Everything starts with the business and user problem, not the technology. The foundation of product management is understanding what users actually need, what business outcomes matter, and how to prioritise ruthlessly across competing demands. Without this, even brilliant engineering produces solutions looking for problems.

I see four key competency areas that combine differently in AI product management than in traditional PM:

1. Core Product Craft: the foundational PM skills, adapted for AI contexts
2. Technical Fluency: enough engineering understanding to collaborate credibly
3. Leadership & Influence: translating AI capability into business value
4. AI Lifecycle Awareness: understanding the journey from data to deployment

1. Core Product Craft

User research, ideation, prioritisation, requirements definition, experimentation, go-to-market. These foundations need to be adapted for AI contexts: user research that surfaces trust concerns and anxiety about AI changing how people work, prioritisation that weighs data availability and model feasibility alongside business value.

Are engineers or technical leads playing some of these foundational PM roles today, even informally? That's often where the seeds of a PM function already exist.

2. Technical Fluency

AI PMs don't write code, but they need genuine technical fluency to collaborate credibly with data scientists and ML engineers. This means understanding how models are deployed and integrated via APIs, the basics of different algorithm types and their tradeoffs, and what 'MLOps' actually means in practice.

Without this fluency, PMs become bottlenecks or get bypassed entirely. What's the current dynamic between technical and business stakeholders?

3. Leadership and Cross-Functional Influence

This is arguably where the gap matters most. The AI PM sits at the centre of a complex stakeholder ecosystem: data scientists, ML engineers, UX designers, legal and compliance, business units, front-line users. The PM's job is to translate AI capability into business value for executives, broker consensus when priorities conflict, and engage empathetically with users whose workflows are changing.

With four OpCos, this coordination complexity multiplies. How are cross-OpCo priorities negotiated today?

4. AI Product Lifecycle Awareness

This is distinct from technical fluency. It's understanding the product journey from data to deployment to monitoring, and knowing what questions to ask at each stage. During data engineering: is the data relevant, compliant, high-quality? During model development: do the evaluation metrics align with business goals? During validation: what's the go/no-go threshold for deployment?

This lifecycle awareness is what separates AI PMs from traditional PMs who've read a few articles. Is there clarity today on who owns these decisions?

We've developed detailed competency frameworks and assessment rubrics for AI PM roles across these dimensions. Happy to share if useful as you think about what this function should look like.

This doesn't even account for the emergence of agentic AI within the product development cycle itself. As AI agents become collaborators in building AI products, PMs will need to understand how to orchestrate human-AI teams, evaluate agent outputs, and maintain quality standards in increasingly automated workflows.

Think of governance and the operational enablers as the central nervous system of the AI Lab. They don't do the work, but they ensure the right conditions exist for work to succeed and monitor whether value is being realised. This includes how funding decisions get made, how talent gets allocated across initiatives, how we track whether investments are paying off, how security and compliance get embedded without becoming blockers, and how we ensure AI is developed responsibly and ethically.

Portfolio Governance

Before we get to security review, there's a bigger governance question: how does the AI Lab decide what to build, what to kill, and where to allocate scarce resources? Portfolio governance means having clear criteria for prioritisation, regular checkpoints to assess whether initiatives are delivering value, and the discipline to stop work that isn't panning out.

How are portfolio decisions made today? Is there a regular cadence for reviewing what's working and what isn't?

Value Tracking

IMS has ambitious value targets. But how do you know if you're capturing that value? Value tracking means defining upfront what success looks like in business terms, instrumenting systems to measure it, and creating feedback loops so the organisation learns what's actually working.

Is there a clear framework for tracking IMS value realisation? Who owns the connection between technical delivery and business outcomes?
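
To make "instrumenting for value" concrete, here is a minimal sketch of how an IMS application could be tied to an agreed business baseline and target. The application, metric, figures, and owner are hypothetical placeholders for discussion, not IAG data.

```python
from dataclasses import dataclass


@dataclass
class ValueMetric:
    """Links one AI application to a measurable business outcome."""
    application: str      # e.g. an IMS application (hypothetical name below)
    business_metric: str  # the outcome we claim to move
    baseline: float       # pre-deployment baseline, agreed with the OpCo
    target: float         # committed target for this period
    owner: str            # single accountable owner on the business side

    def value_captured(self, observed: float) -> float:
        """Share of the committed improvement actually realised so far."""
        committed_delta = self.target - self.baseline
        if committed_delta == 0:
            return 0.0
        return (observed - self.baseline) / committed_delta


# Hypothetical example: an unscheduled-maintenance metric for one IMS app
metric = ValueMetric(
    application="IMS predictive maintenance (example)",
    business_metric="unscheduled engine removals per 1,000 flight hours",
    baseline=1.20,
    target=0.95,
    owner="OpCo engineering lead",
)

print(f"{metric.value_captured(observed=1.05):.0%} of committed value realised")
```

The point is less the code than the discipline: every application has a named metric, a baseline, a target, and a single accountable owner before it ships.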

Security and Compliance

You mentioned cyber can't inspect 200+ agents. That's a scaling challenge that won't solve itself. One approach I've seen work: risk-tiered governance where high-stakes AI gets deep scrutiny while internal tools get lighter touch, with security embedded in CI/CD rather than reviewing at the end.

Is there appetite for risk-tiered governance? What would it take to get security involved earlier in the development process?
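
To make the risk-tiering idea concrete, here is a minimal sketch of the kind of gate that could sit in a CI/CD pipeline, so that low-risk internal tools flow through automated checks while customer-facing or safety-relevant systems are routed to deep review. The tiers, criteria, and gate behaviour are illustrative assumptions, not an existing IAG policy.

```python
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # internal tools, no personal or safety-relevant data
    MEDIUM = "medium"  # employee-facing, limited personal data
    HIGH = "high"      # customer-facing or operationally safety-relevant


def classify(uses_personal_data: bool, customer_facing: bool, safety_relevant: bool) -> RiskTier:
    """Assign a review tier from a few declared properties of the AI system."""
    if customer_facing or safety_relevant:
        return RiskTier.HIGH
    if uses_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW


def ci_gate(tier: RiskTier) -> str:
    """Decide what the pipeline does, instead of queueing everything for manual review."""
    if tier is RiskTier.LOW:
        return "pass: automated checks only"
    if tier is RiskTier.MEDIUM:
        return "pass with conditions: automated checks plus asynchronous security sign-off"
    return "hold: route to deep security and compliance review before deployment"


# Example: a hypothetical internal maintenance-planning assistant
tier = classify(uses_personal_data=False, customer_facing=False, safety_relevant=False)
print(tier.value, "->", ci_gate(tier))
```

The design choice that matters is that the tier is declared and checked automatically on every build, rather than every agent queueing for the same manual inspection.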

Change Management and Embedded Teams

IMS adoption currently depends on OpCo relationships. When a relationship is strong, things work. When it's not, AI products sit on the shelf. One model that's worked well in my experience: embedded teams at each operating company with dual reporting, accountable to OpCo leadership for local relevance while connected to the central lab for technical quality.

Would OpCo leadership welcome embedded teams as partners, or see them as central overreach?

You mentioned that getting AI from prototype to production takes longer than it should. That's usually friction accumulating across multiple stages of what I call the AI production line. I'm assuming this is where the AI Lab is strongest, so I'll be briefer here, but these stages matter for IMS because every week of deployment delay is value that doesn't reach the business.

Data Engineering

This is where you source and prepare the raw materials. Four airlines likely means four different data platforms, four different schemas, four different security postures. If every new initiative requires bespoke integration work, you're paying a tax on every project.

Is there a data mesh or federated layer that abstracts cross-OpCo access? What's the average time from 'we need this data' to 'we have it in a usable form'?
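
To illustrate what a federated layer buys you, here is a sketch of a thin shared contract that each OpCo would implement once behind its own security posture, so new initiatives code against one interface instead of four bespoke integrations. The interface, dataset name, and toy adapter are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Iterable


class OpCoDataSource(ABC):
    """Contract each operating company implements once, behind its own security posture."""

    @abstractmethod
    def datasets(self) -> Iterable[str]:
        """List the datasets this OpCo exposes to the AI Lab."""

    @abstractmethod
    def read(self, dataset: str) -> list[dict]:
        """Return records in the agreed shared schema."""


class FederatedCatalog:
    """Single entry point the AI Lab codes against, regardless of which OpCo owns the data."""

    def __init__(self) -> None:
        self._sources: dict[str, OpCoDataSource] = {}

    def register(self, opco: str, source: OpCoDataSource) -> None:
        self._sources[opco] = source

    def read(self, opco: str, dataset: str) -> list[dict]:
        return self._sources[opco].read(dataset)


class InMemorySource(OpCoDataSource):
    """Toy adapter standing in for one OpCo's real data platform."""

    def __init__(self, data: dict[str, list[dict]]) -> None:
        self._data = data

    def datasets(self) -> Iterable[str]:
        return self._data.keys()

    def read(self, dataset: str) -> list[dict]:
        return self._data[dataset]


catalog = FederatedCatalog()
catalog.register("BA", InMemorySource({"engine_sensor_summaries": [{"tail": "G-XXXX", "egt_margin": 42.0}]}))
print(catalog.read("BA", "engine_sensor_summaries"))
```

The real work is agreeing the shared schemas and access rules; the interface just makes that agreement reusable instead of renegotiated per project.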

Model Development

This is where data scientists build and refine prototypes. I'm assuming this is where the AI Lab is strongest. The business risk here isn't capability, it's coordination: as the team scales, shared standards for experiment tracking and versioning become essential.

Are there shared standards for model versioning and experiment tracking across squads? How do teams know what's already been tried?
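
As one example of what a shared standard could look like, the sketch below uses MLflow (purely as a familiar tool, not a statement about the AI Lab's current stack) to log every run under a common set of tags, so any squad can answer "what's already been tried?" from one place. The experiment name, tags, and values are hypothetical.

```python
import mlflow

# Hypothetical shared convention: every squad logs runs under a named experiment
# with the same minimal set of tags, so results are discoverable across teams.
mlflow.set_experiment("ims-engine-health")  # hypothetical experiment name

with mlflow.start_run(run_name="gbm-baseline-v1"):
    mlflow.set_tags({
        "squad": "ims-predictive-maintenance",  # which team ran it
        "opco_scope": "BA",                     # which OpCo's data was used
        "data_snapshot": "2025-11-01",          # which data version
        "intended_use": "prototype",            # prototype / candidate / production
    })
    mlflow.log_params({"model_type": "gradient_boosting", "max_depth": 6})
    mlflow.log_metric("val_auc", 0.87)          # placeholder metric value
```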

Model Validation

Before anything goes to users, it needs quality control: does it perform reliably, does it work fairly across different user groups, is it secure? This is where governance friction typically shows up. The business impact: work either queues behind security review, delaying value, or teams find workarounds, creating risk.

What's the current validation process? Is it blocking velocity or are teams finding workarounds?

Model Deployment

This is where the validated model actually reaches users. With four OpCos, this likely means coordinating with four different IT organisations, four different infrastructure environments. The business impact: handoff friction that multiplies as the portfolio grows.

Does the AI Lab own deployment end-to-end? Or is there a handoff point where coordination breaks down?

Model Monitoring

AI models degrade over time as the real world drifts from the data they were trained on. Without automated monitoring, models that worked six months ago may be underperforming now, and no one knows. The business impact: silent erosion of the value you thought you were capturing.

Is there automated drift detection? What's the retraining process when models degrade?
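
For concreteness, here is a minimal sketch of one common drift check, the population stability index, comparing current production inputs against the training distribution. The data, thresholds, and schedule are illustrative assumptions; in practice this would run automatically against production feature logs and feed the retraining process.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature distribution and current production data."""
    # Bin edges come from quantiles of the training (expected) distribution
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Clip production values into the training range so outliers land in the edge bins
    actual_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) in empty bins
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))


# Illustrative check on simulated data: the production distribution has shifted
rng = np.random.default_rng(0)
training = rng.normal(loc=0.0, scale=1.0, size=10_000)
production = rng.normal(loc=0.4, scale=1.2, size=5_000)

psi = population_stability_index(training, production)
# Common rule of thumb (an assumption, tune per use case): <0.1 stable, 0.1-0.25 monitor, >0.25 investigate
print(f"PSI = {psi:.2f}")
```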

Frontier AI Strategy

The field is moving fast. Beyond LLMs: world models that can simulate and predict, reasoning architectures, agentic systems. I'd want a deeper conversation with Ben Diaz to understand IAG's position. Are there opportunities to explore world models for operations simulation in parallel with current LLM work?

What's the current thinking on build vs. buy vs. partner for frontier capabilities? How is the team tracking developments in reasoning models and agentic architectures?

I tend to envision transformations like this as a multi-phase journey: delivering near-term value on existing commitments while building the capabilities that enable longer-term ambition. In this case, that might mean meaningful IMS EBITDA impact while gradually broadening the portfolio to support Commercial Domains starting with BA. No small feat.

This is a lot to take on. Generally I like to approach work like this with an agile mindset. Just like our AI products, our capabilities can develop as MVPs with a test-and-learn mentality. We won't get everything right the first time. The goal is to build, measure, learn, and iterate.

I'd typically take this on in program increments with thoughtful load-balancing to maximise impact while maintaining delivery confidence. There's art and science to sequencing work of this complexity. I have perspectives on how to approach it, but the specifics need to be developed collaboratively with the right stakeholders at the table.

The Transformation Journey

As part of overall governance, I typically work with an AI Product Ops function to help track sequencing, prioritisation, and cross-team dependencies. Together we develop a high-level roadmap that communicates the transformation arc to stakeholders while remaining adaptable as we learn.

To illustrate how that might look, I've included a completely notional sketch below. Please disregard specific sequencing, milestones, and timelines; this is simply an illustration of how I tend to structure capability-building alongside delivery commitments. It may mirror roadmaps you already have, but I thought it might be useful to share as a conversation starter.

Illustrative only: how I approach capability roadmaps; sequencing and details are notional.

Roadmap sketch across program increments M1 → M2 → M3:

AI Lab Execution
IMS: IMS Wave #1 → IMS Wave #2
Future Domains (Commercial): Commercial Wave #1 → Commercial Wave #2

AI Lab Capabilities
AI Product Management: Hiring Wave #1, AI Product Ops first hires → Hiring Wave #2, AI PDLC update → full function
Governance & Ops Enablers: portfolio management, value tracking → security & risk compliance
Embedded Teams: BA → additional OpCos → all OpCos
Platform & Infrastructure: data mesh / federated layer → cross-OpCo access → standardised schemas
Model Development: experiment tracking standards → model versioning → shared standards
Model Validation: quality control & reliability → fairness & security review
Model Deployment: coordination & handoff → end-to-end deployment
Model Monitoring: automated drift detection → retraining process
Frontier AI Strategy: proprietary / fine-tuning / API strategy → world models / reasoning agents

The specific sequencing, timing, and scope would need to be developed together based on what I learn in the first months, stakeholder alignment, operational feasibility, resource allocation, and other prioritisation factors.

Resource Implications

A rough resource sizing to support this journey, with significant caveats:

Central Platform (est. 60-70 people): AI/ML Engineering, Data Science, AI PM (new), Design (new)
Embedded OpCo Teams (est. 80-90 people): distributed across BA, Iberia, Vueling, Aer Lingus, Loyalty
Shared Services (est. 25-30 people): Portfolio Governance, Value Tracking, AI Lab Product Operations
Important caveats: These numbers assume the full target state vision, including commercial domain expansion. They're based on comparable transformations I've led, but would need to be refined based on IAG's budget constraints, hiring appetite, timeline preferences, and what we learn in the first 90 days. The specific numbers are a starting point for discussion, not a fixed plan.

Working backwards from that target state, here's how I'd likely approach the first 90 days. The temptation in a new role is to move fast and demonstrate impact. I've learned to resist that. Strong perspective, loosely held.

Days 1-30: Listen and Diagnose

Month one is about understanding: immerse in IMS, learn which applications are closest to delivering value and which are stuck, understand what's actually blocking adoption. Deep-dive with each AI Lab team. Meet OpCo stakeholders. Technical conversation with Ben Diaz on research roadmap.

Days 31-60: Design the Operating Model

Three workstreams run in parallel: AI Lab Operations and Governance (portfolio governance, value tracking, resource allocation, and the AI product development lifecycle), AI PM Function Design (role definition, competency requirements, career paths), and Embedded Team Model (conversations with OpCo leadership about partnership).

Days 61-90: First Hires and Foundation

Begin recruiting for the AI PM function. The first 3-4 hires set the culture. Establish the first embedded team at BA. Launch the governance operating model. Create a value tracking dashboard that connects what the AI Lab ships to business outcomes.

Days 1-30 (listen and diagnose): IMS immersion, stakeholder relationships. Key outputs: gap assessment validated, Ben Diaz conversation, path-to-production map.
Days 31-60 (design the operating model): AI Lab governance, PM function, embedded teams. Key outputs: governance framework, PM role definition, BA partnership alignment.
Days 61-90 (first hires and foundation): build the team, launch the operating model. Key outputs: 3-4 AI PMs hired, BA embedded team, value tracking live.

The IAG AI Lab has built something real. Strong AI/ML engineering, a platform that works, demonstrated value in IMS. Many organisations are still trying to get to where IAG already is.

What seems to be missing is the surrounding ecosystem: product management to translate technical capability into adopted products, governance that enables rather than blocks, change management to drive adoption, and a federated operating model that scales beyond individual relationships. These gaps appear solvable.

I've worked on similar challenges before, at National Grid and at LATAM, building product organisations that bridge technical capability and business impact. The context was different, but the pattern feels recognisable. I'm genuinely excited about the possibility of being part of building what comes next.

I look forward to continuing the conversation.

Disclaimer: This document reflects personal views developed during the interview process. It does not represent BCG, BCG X, or any client engagement.