Lessons from AI adoption in the product development lifecycle
In May 2026, Hyperact hosted a panel discussion at Lloyds Banking Group in Manchester, bringing together product and engineering leaders from startups, scaleups, and enterprises. The session explored what it takes to adopt AI in the product development lifecycle (PDLC), and how to move from individual usage to something that compounds across teams.
Below are the key insights from the discussion.
The panel
- Charlotte Nickels: Agentic & AI Product Director at Lloyds Banking Group
- Garrett Stettler: Director of Design Research at Visa
- Pippa Peasland: Head of Product at Vypr
- Richard Poole: Chief Technology Officer at RiskSmart
- Paul Francis: Director of Data Engineering at LexisNexis Intellectual Property
- Hosted by Sam Quayle: Co-founder of Hyperact
Questions
- How have your teams adopted AI in the product development lifecycle?
- What are the challenges of adopting AI in the PDLC?
- How are you measuring the success of AI adoption?
- What are the hidden risks?
- How must organisations change to support AI?
- Where are the biggest future opportunities?
1. How have your teams adopted AI in the product development lifecycle?
Whilst practitioners are speeding up, the challenge is getting teams to move at the same pace.
- The most visible change is in prototyping. Product teams no longer need to write briefs and wait for engineers to interpret them. Instead, they can build rough versions themselves, showing what they want a feature to look and feel like.
- Alongside prototyping, writing specifications and architecture documents is one of the first tasks teams have handed to AI, with the panel reporting time savings.
- Teams are still learning to treat AI as a shared capability rather than a personal one. Moving from one person's productivity going up to a whole team's output improving requires shared tooling, shared memory, shared decision-making, and changes in how work is passed between people.
- Organisations that are building shared infrastructure report fewer duplicated decisions, less lost context, and a clearer line between what individuals are doing and what the team is trying to achieve.
- One team described starting with a bare-bones idea from an informal conversation, and using a shared AI system to develop that idea across multiple sessions until it became a usable artefact. The AI was working in the background, surfacing and connecting things the team had already said in previous sessions.
- Another panellist has built their own AI skills on top of existing models, baking in brand guidelines and content artefacts to make outputs more consistent. They emphasised that you must be disciplined to keep the system as deterministic as possible; without that discipline, it starts making its own decisions about how to work. A minimal sketch of this approach follows the list.
- Shared memory matters because without it, two people can make the same product decision independently within thirty seconds of each other. With it, the system already knows a decision has been made.
- On AI adoption, we must recognise that it's still only a small number of people running ahead and building quickly. A larger group remains curious but cautious, and some are privately worried, particularly those whose professional identity is closely tied to a specialism that now feels exposed.
- One leader described a team member coming to them having lost sleep over whether fifteen years of experience was about to be made irrelevant. Whilst it is tempting to focus adoption efforts on the enthusiasts, the people who cannot easily voice their concern are the ones who also need encouragement and support.
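To make the discipline point concrete, here is a minimal sketch of what such a skill wrapper might look like, assuming the Anthropic Python SDK. The model name, the guidelines text, and the `branded_rewrite` helper are all illustrative, not anything a panellist described.

```python
# A minimal sketch of a "skill" wrapper, assuming the Anthropic Python SDK.
import anthropic

# Illustrative brand guidelines, baked into every call rather than improvised.
BRAND_GUIDELINES = """\
- Write in British English.
- Keep sentences short and concrete.
- Never promise features that have not shipped.
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def branded_rewrite(draft: str) -> str:
    """Rewrite a draft against fixed brand guidelines, as repeatably as the API allows."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=1024,
        temperature=0.0,  # pin sampling down; outputs are still not fully deterministic
        system=(
            "You are a copy editor. Apply these brand guidelines exactly, "
            "and do not invent content beyond the draft:\n" + BRAND_GUIDELINES
        ),
        messages=[{"role": "user", "content": draft}],
    )
    return response.content[0].text
```

Temperature 0 narrows variation but does not eliminate it; the discipline described on the panel is in keeping the instructions and reference material fixed, so the model is applying decisions rather than making them.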
2. What are the challenges of adopting AI in the PDLC?
Getting something working locally is not the same as getting it into production.
- Whilst a local integration looks clean, wider concerns remain: security, authentication, and role-based data access. What happens when the integration runs without anyone watching it? The distance between a working demo and a live product is large, and that is where problem-solving is happening.
- MCP, for example, was largely designed for user-level use cases. Taking a workflow that runs cleanly on one person's machine and replicating it in a production environment, with consistent permission models and enterprise-level authentication, is a different problem entirely, and most teams are still working through it; a sketch of the kind of permission gate involved follows this list. Read: Should I adopt MCP as part of my API product strategy?
- Remember AI is only as useful as what you feed it. Gathering the right documentation, checking it is actually accurate, and structuring it so a model can use it reliably takes significantly longer than teams expect. Long enough, in some cases, that it may still be faster to do the task manually. Deciding which work is genuinely worth the setup investment is itself a skill that teams are still developing.
- During the Gen AI boom, teams racing to build something first created a sprawl of disconnected agents doing similar tasks across different parts of the organisation. The customer, though, does not experience a company as a collection of internal teams; they expect a seamless experience. Joining agents up to deliver that requires infrastructure work that the initial wave of activity largely skipped.
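To make the demo-to-production gap concrete, below is a minimal sketch, in plain Python, of the kind of role-based permission gate an agent integration needs before executing a tool on a user's behalf. The roles, tool names, and `execute_tool` helper are hypothetical illustrations, not anything a panellist described; a real deployment would hang this off the organisation's existing identity provider.

```python
# A hypothetical permission gate for agent tool calls. On a laptop the agent
# runs with the developer's full access; in production every call must be
# checked against the calling user's role.
from dataclasses import dataclass

# Illustrative mapping of roles to the tools they may invoke.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "analyst": {"search_documents"},
    "underwriter": {"search_documents", "read_customer_record"},
    "admin": {"search_documents", "read_customer_record", "update_customer_record"},
}

@dataclass
class User:
    user_id: str
    role: str

def execute_tool(user: User, tool_name: str, arguments: dict) -> dict:
    """Run a tool on behalf of a user, refusing anything their role does not allow."""
    allowed = ROLE_PERMISSIONS.get(user.role, set())
    if tool_name not in allowed:
        # Fail closed; don't let the agent retry its way around the check.
        raise PermissionError(f"{user.user_id} ({user.role}) may not call {tool_name}")
    print(f"audit: {user.user_id} called {tool_name} with {arguments}")  # audit trail
    return {"tool": tool_name, "status": "ok"}  # stand-in for the real tool call

# Example: an analyst can search, but a read_customer_record call would raise.
execute_tool(User("u123", "analyst"), "search_documents", {"query": "renewals"})
```

The design point is that the gate fails closed and leaves an audit trail, which is exactly the behaviour a workflow running unattended on one person's machine never has to prove.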
3. How are you measuring the success of AI adoption?
The industry has yet to adopt a shared framework for measuring AI across the full product development lifecycle.
- Several panellists are actively trying to build their own framework, mapping everything from how a project brief starts to how long user research takes to how a feature gets into production. None of them have got there yet.
- Teams should not rely on usage dashboards and log-in counts of AI tooling as metrics because they measure access, not impact. The harder question: is our use of AI tooling actually delivering better outcomes for customers?
- Traditional delivery metrics like ticket counts become unreliable when AI is decomposing the work. The unit is no longer consistent. Teams are finding more stability in outcome-based measures that don't depend on how the work was divided up.
- Some teams are using established frameworks like Google's HEART (Happiness, Engagement, Adoption, Retention, Task success) to bring more structure to qualitative outcomes, but even those have limits. Large language models produce outputs that read well, and that surface quality makes it easy to miss when the underlying work is poor.
- Several panellists raised concerns about ownership of AI evaluation. It is not usually in anyone's job description, and it does not sit clearly within any one team. Features are being shipped, but whether they continue to perform well over time needs to be tracked.
- On cost: remember that model pricing will change, and organisations that are not monitoring token consumption now will have no baseline when it does. Match model size to task: use smaller, cheaper, specialised models for narrow, repeatable work, and reserve frontier models (Claude, ChatGPT, Gemini) for complex reasoning and code generation. For highly specific classification tasks, a small language model can outperform a large one, because it has not absorbed the noise that large models accumulate at scale. A minimal cost-tracking sketch follows the list.
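On the baseline point, here is a minimal sketch of per-model token tracking in Python. The model names and per-million-token prices are placeholders, not current rates; in practice the `record` calls would be fed from the usage fields your provider returns with each API response.

```python
# A minimal token-cost ledger. Prices are illustrative placeholders
# (USD per million tokens), not current rates; the point is to have a baseline.
from collections import defaultdict

PRICE_PER_MTOK = {
    # model name: (input price, output price) -- placeholder values
    "frontier-model": (3.00, 15.00),
    "small-model": (0.25, 1.25),
}

class TokenLedger:
    def __init__(self):
        # model -> [input_tokens, output_tokens]
        self.usage = defaultdict(lambda: [0, 0])

    def record(self, model: str, input_tokens: int, output_tokens: int) -> None:
        self.usage[model][0] += input_tokens
        self.usage[model][1] += output_tokens

    def cost(self) -> dict[str, float]:
        """Total spend per model in dollars, given the placeholder prices."""
        return {
            model: (tin * PRICE_PER_MTOK[model][0] + tout * PRICE_PER_MTOK[model][1]) / 1_000_000
            for model, (tin, tout) in self.usage.items()
        }

ledger = TokenLedger()
# In practice these counts come from the usage block of each API response.
ledger.record("frontier-model", input_tokens=12_000, output_tokens=3_500)
ledger.record("small-model", input_tokens=90_000, output_tokens=9_000)
print(ledger.cost())  # {'frontier-model': 0.0885, 'small-model': 0.03375}
```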
4. What are the hidden risks?
Not noticing when AI has done something wrong.
- Over-reliance is the risk that comes up first. Before reaching for a prompt, think clearly about what you are actually trying to achieve. An output that looks plausible and gets a human's approval is not necessarily right.
- The subtler risk is in what AI is structurally good and bad at. Large language models find patterns in large amounts of data. Patterns are things that repeat. The things that repeat most often are not usually the most interesting or valuable things.
- For example, there is considerable enthusiasm for synthetic users, but you should be cautious. Synthetic users work well early in ideation for generating hypotheses or exploring a problem space. They do not replace talking to real people, and the pressure to skip that part of research is not matched by evidence that you can safely do so.
- Teams using AI to synthesise qualitative data (customer research, open-text feedback) should be wary that the model surfaces what is common and can miss what is rare. In research, the rare things are often exactly what a skilled practitioner would have found.
- One panellist described exactly this: when they used AI to summarise large volumes of open-text customer data, the model surfaced what was already known whilst missing the small, specific details a skilled researcher would have caught. Those details are often the most useful things.
- AI adoption has served as a forcing function, surfacing access management gaps that organisations had tolerated for years. The discipline required to use AI safely in production is, in several cases, more rigorous than anything that existed before. That is not a reason to move slowly. It is a reason to build governance infrastructure in parallel, not after.
5. How must organisations change to support AI?
The scope of roles is changing.
- As AI handles more routine code generation, the friction between roles becomes the bottleneck. One response is to widen individual scope rather than multiply handoffs. One suggestion from the panel was the product engineer: a role that combines development, testing, and product thinking to take a feature from brief to deployment without the traditional handoffs between specialists.
- Commercial functions have moved faster than many expect. In one organisation, a new role was created specifically to join up data sources, pull out insights, and automate work that no longer needs a human. The sales team embraced this quickly and the function changed shape around what AI could already handle.
- Another effect is a change in hiring logic. Rather than replacing a departing team member like for like, some organisations are laying the foundations to use agents and skills, with the view to then hire someone whose expertise sits above what the agents are already handling.
- When discussing adoption interventions, panellists described finding success in one-to-one conversations. One approach: ask an individual contributor to imagine they had a new member of staff. What would they ask them to do, and how would they check the work? Another suggestion was not to start with work at all, but with a passion project: you'll understand a tool's limits faster when you can tell when it's wrong.
6. Where are the biggest future opportunities?
Nobody truly knows, and the people who say they do should be treated with some scepticism.
- Roadmaps that extend a year out feel uncertain. Two years feels speculative. What the panel was more confident about was the direction of travel.
- Panellists hoped that AI will create space for product and design teams to return to practices that delivery pressure has consistently deprioritised, such as fast prototyping, early hypothesis testing, and genuine engagement with the problem before committing to a solution. The tools now exist to do those things quickly enough that they feel less like a luxury.
- One panellist hoped that, as an industry, we will wear out the word AI as a catch-all phrase. Instead, we can return to calling things automation: just part of how work gets done. It'll become unremarkable in the way that email is unremarkable.
- The scope of roles will be wider in some ways and narrower in others. The menial tasks that used to consume hours will disappear. What will matter more is the thinking that sits above them.
- On the customer side, there are opportunities to deliver things at scale that weren't previously feasible, such as personalised guidance and services shaped around individual circumstances rather than average behaviour.
- The teams most likely to do well are the ones that stay curious, measure whether it is working, and create the conditions for people to find their footing in a world where the tools keep changing.
- Across the panel, Claude was the tool that came up most consistently, used not just for individual tasks but as the foundation for shared skills and custom workflows. That may not be true in a year. Not long ago, other frontier models like ChatGPT held the bulk of industry attention.
If you would like to attend our next event, you can keep up to date with everything we do via our newsletter or follow us on LinkedIn.
