
The project itself:
Project Overview
At both Invevo and Blue Prism, I worked on designing control centres: centralised spaces where users monitor, manage, and interact with automations at scale.
Blue Prism is one of the pioneers of Robotic Process Automation and has worked with organisations like the NHS. Invevo is a fintech automation company.
Problem:
As automation platforms scale, so does their complexity, yet users were expected to manage these complex systems without a clear, cohesive way of doing so. Utilising AI could help users explore data and make informed choices.
Goal:
The goal was to expand the user base beyond a small number of large enterprise clients, making the product more accessible and appealing to small and medium-sized businesses, while still supporting the needs of larger organisations.
My role:
Head of Design, working end-to-end across the product.
Responsibilities:
Leading research.
Leading design.
Product strategy.
All about the user:
Understanding the Product & Aligning Stakeholders
At the start of the project, I focused on building a comprehensive understanding of the product by working closely with stakeholders across the business. This included collaborating with marketing and sales teams to understand how the product is positioned, sold, and demonstrated to customers, as well as engaging with developers to identify technical constraints and existing system limitations.
As we weren’t starting from scratch, it was critical to understand the product’s history. What had already been built, where previous decisions had succeeded or fallen short, and how the product had evolved over time. I also reviewed existing research, data, and user insights to avoid duplicating conversations and to build on prior learnings.
This process helped establish a clear view of the current state and scope of the product, ensuring that design decisions were grounded in both business context and real-world usage, while setting a strong foundation for scalable, informed solutions.
Problems we know so far…
Frankenstein build:
Both products were built incrementally, with features added in isolation rather than as part of a cohesive user journey.
Understanding:
Users lacked a clear mental model of how the system worked.
Recovery:
When something went wrong, users couldn’t easily recover.
The legacy product & user journeys
Understanding where we are, how we got here, what we have tried, and what has failed.

Both platforms operated in technically demanding domains, robotic process automation and financial intelligence, where users needed to interpret large volumes of information and take action when something failed.
Rather than treating these as isolated UX problems, I approached them as system design challenges, focusing on how information, workflows, and feedback connect across the product.
Key challenges included:
Information spread across multiple areas, making it difficult to build a clear mental model.
A codebase in poor health, constraining what could be changed safely.
Limited visibility into system state, reducing confidence in actions and outcomes.
High cognitive load when interpreting data or automation logic.
Increasing configurability introducing friction, particularly for new users.
Ultimately, the experience reflected how the system was built rather than how users needed to work. Screens had become fragmented over many years of building on top of previous work, with little thought given to the end user experience. To understand this, I first looked at the users themselves, and surprisingly many didn't have much of a technical background.
Growth without System Thinking…
Building on the mapped journeys, I conducted research to validate assumptions and gain deeper insight into real user behaviour.
Both Blue Prism and Invevo had evolved over many years, resulting in interfaces that felt dated, inconsistent, and increasingly difficult to navigate. I found this by reviewing research we already had: sales meetings, previous interviews, and internal discussions on historical decisions and constraints.
The UI had effectively become a patchwork of decisions made over time and built by different developers, across different periods, often without a shared system or consistent design language. This led to:
Inconsistent components and patterns: multiple versions of similar actions (e.g. buttons) with slight variations
Siloed thinking: features designed in isolation rather than as part of an end-to-end experience
Fragmented user journeys: workflows spread across disconnected areas of the product
Legacy complexity: parts of the system built by teams no longer present, leading to assumptions and uncertainty around how things functioned
Over time, this created not just UX challenges but also underlying technical risk, where the system began to resemble a “house of cards”: layers of logic built on top of one another without a proper foundation.
Above is some of the legacy automation UI: fragmented screens with little consistency.
The project schematically:
Designing for AI with automation
My goal wasn’t just to introduce intelligence, but to make complex systems more useful and usable.
AI solves real problems, not a bolt-on feature
A key challenge was avoiding the trap of adding AI as a feature rather than solving a real problem.
There was often a push toward visible AI patterns, such as chat interfaces or assistants, but these didn’t always align with how users actually worked.
Through interviews (internal and external) and iteration, it became clear that AI is most effective when it supports workflows, not competes with them.
In many cases, the most valuable use of AI was:
Operating in the background to automate decisions or surface relevant insights
Reducing manual effort without introducing additional interaction layers
Enhancing existing workflows, rather than replacing them
This led to a more considered approach, where AI was applied with clear purpose and integrated into the system.
AI with purpose
The most effective use of AI wasn’t adding new interfaces. It was removing friction.
AI can be highly effective when it works in the background, supporting users rather than interrupting them. In this context, it was used to:
Highlight customers likely to default (simplify decision-making)
Flag late payments or risky accounts (improve confidence with actions)
Prioritise who to chase first based on value and risk (reduce manual effort)
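The prioritisation described above can be sketched as a simple scoring model. This is a hypothetical illustration only, not the product's actual logic: the `Account` fields, the value-times-risk score, and the overdue weighting are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Account:
    name: str
    outstanding_value: float  # amount owed
    risk: float               # estimated likelihood of default, 0-1
    days_overdue: int


def priority_score(account: Account) -> float:
    """Rank accounts by expected value at risk, boosted by how overdue they are."""
    overdue_boost = 1 + min(account.days_overdue, 90) / 90  # capped at 2x
    return account.outstanding_value * account.risk * overdue_boost


def prioritise(accounts: list[Account]) -> list[Account]:
    """Return accounts ordered by who to chase first, highest priority first."""
    return sorted(accounts, key=priority_score, reverse=True)
```

The point of a model like this is that the ranking surfaces automatically in the interface: users see who to chase first without having to "ask" the system.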
However, getting to this approach required pushing back on more visible, trend-driven ideas, particularly the introduction of chatbots and AI assistants. While these can be useful in some contexts, research showed they didn’t align with how users worked in financial workflows. Users weren’t looking to “ask” the system what to do; they needed clear prioritisation and immediate insight.
I worked closely with stakeholders to reframe how AI should be applied, shifting the focus from visible AI features to embedded intelligence that surfaces the right information at the right time.

Balancing hiring & strategy
Issues started surfacing due to the tension between time spent hiring and focusing on the bigger picture.
In a small, fast-moving team focused on AI direction, growing the team was pivotal to ensure we could deliver. The pace and complexity of the work required a high level of autonomy.
While building the team, I initially hired a more junior designer/developer with the intention of bridging the gap between design and engineering. However, due to their level of experience, they required a significant amount of support and training.
This was an important learning: while junior hires can bring value, there’s a time and place for them. In early-stage or high-complexity projects, having team members who can operate independently is critical.
Design that utilises AI
In this screenshot, you can see how AI was used for prompts and error-finding to help guide automations, rather than a chatbot, which felt inefficient and overloaded users with information.
AI in Products - What comes next?
AI is often framed as a feature, but I feel its real value is at a system level.
Its real impact is in shaping how products behave, adapt, and support users over time.
While many current approaches focus on visible interfaces like chat or assistants, these can introduce friction by forcing users to change how they work. The future of AI in products is more embedded and less visible: systems that surface what matters without being asked, adapt to context, and support decisions in the moment.
This shifts products from reactive tools to proactive environments, where AI anticipates needs rather than waits for input. For designers, the challenge is deciding when AI should intervene, how it maintains trust, and how it reduces complexity rather than adding to it.
