A framework for designers and researchers navigating the shift. Not a survival guide — a playbook for doing the best work of your career.


Start with the right question

Most conversations about AI and creative work start with "will AI replace me?" That's the wrong question — not because it's unreasonable, but because it frames you as passive. Something that gets replaced or doesn't.

The better question: what becomes possible for me now that wasn't before?

AI doesn't replace designers or researchers. It replaces tasks — the repetitive, time-consuming parts of your work that you tolerate but don't love. When those tasks shrink, something has to fill the space. That something is the work you actually got into this career to do: the thinking, the judgment, the creativity.

The honest truth about job security: AI won't replace you. But over time, someone who uses AI well will outperform someone who doesn't. This isn't a threat — it's a skill to acquire, like learning Figma or mastering qual research methods. The people who move first have an advantage.


The Leverage Model

Not every use of AI is equal. Three levels, from easiest wins to most transformative:

1. Automate — Remove the grind

Hand off repetitive tasks that eat your time but don't require your judgment. This is where most people should start.

Try this week:

  1. Take a set of interview transcripts and ask AI to pull out key quotes organised by theme
  2. Give AI a research brief and ask it to draft a discussion guide
  3. Hand over a competitive audit format you've done before — let AI do the first pass
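The first exercise is easy to make repeatable: wrap your transcripts and theme list in a single prompt you can reuse from study to study. A minimal sketch; the helper name, themes, and wording are all illustrative, and the output is just text to paste into whichever LLM you use:

```python
# Sketch: build a reusable prompt for pulling themed quotes out of
# interview transcripts. Themes and phrasing are illustrative; adapt
# them to your study.

def build_quote_extraction_prompt(transcripts: list[str], themes: list[str]) -> str:
    """Assemble one prompt that asks for key quotes organised by theme."""
    theme_lines = "\n".join(f"- {t}" for t in themes)
    transcript_blocks = "\n\n".join(
        f"--- Transcript {i + 1} ---\n{text}" for i, text in enumerate(transcripts)
    )
    return (
        "You are helping a UX researcher analyse interviews.\n"
        "Pull out verbatim quotes and organise them under these themes:\n"
        f"{theme_lines}\n\n"
        "For each quote, note the transcript number. "
        "Put any quote that fits no theme under 'Other'.\n\n"
        f"{transcript_blocks}"
    )

prompt = build_quote_extraction_prompt(
    transcripts=["P1: I never check the dashboard, I just wait for the email."],
    themes=["Notification habits", "Trust in automation"],
)
```

The point of writing it down is consistency: the same framing across studies makes outputs comparable, and the prompt itself becomes a team asset you can refine.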

2. Augment — Think with a partner

Use AI as a sparring partner that challenges your assumptions and helps you see blind spots. This doesn't replace your expertise — it sharpens it.

Try this week:

  1. Take a design decision you're confident in — ask AI "what are the three strongest arguments against this?"
  2. Share a research finding and ask "how would a sceptical portfolio manager challenge this?"
  3. Before your next stakeholder presentation, ask AI to poke holes in your narrative

3. Amplify — Do what wasn't possible

AI lets you operate at a scale and speed that was simply impossible before. Not doing the same work faster — doing fundamentally different, more ambitious work.

Try this week:

  1. Take a project where you interviewed 12 people — ask "what would I do differently if I could analyse 100 interviews?"
  2. Instead of 3 design directions, generate 30 and use AI to cluster them by approach
  3. Take a single research insight and ask AI to create tailored narratives for five different stakeholder audiences

The goal isn't to do the same work with less effort. It's to do better work with the same effort.


AI for Research & Discovery

This is where AI has the most immediate, tangible impact for research teams in financial services. The work involves large datasets, complex regulation, and the need to synthesise across many sources. AI is built for exactly this.

1. Ingest — Consume what was previously impossible

The volume problem in research is real. You can't read 200 interview transcripts. You can't manually process six months of customer feedback data. You can't personally review every competitor's quarterly report. AI can.

What changes:

  1. Interview analysis at scale — upload 50-100 transcripts and get thematic analysis in minutes, not weeks
  2. Survey data synthesis — ask questions of open-text responses across thousands of respondents
  3. Competitive intelligence — monitor and summarise competitor activity across multiple sources continuously

Your role: You define what to look for. AI finds it. You decide what it means.
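You can't throw 100 transcripts at a model in one request either; at that scale they usually need batching first. A minimal sketch, assuming a rough character budget per request (the figure is illustrative, not any provider's actual limit):

```python
# Sketch: pack a large transcript set into batches that each fit a
# rough context budget before sending them for thematic analysis.
# The 40,000-character budget is an assumption, not a real limit.

def batch_transcripts(transcripts: list[str], max_chars: int = 40_000) -> list[list[str]]:
    """Greedily pack transcripts into batches under max_chars each."""
    batches: list[list[str]] = []
    current: list[str] = []
    size = 0
    for t in transcripts:
        if current and size + len(t) > max_chars:
            batches.append(current)
            current, size = [], 0
        current.append(t)
        size += len(t)
    if current:
        batches.append(current)
    return batches
```

Analyse each batch, then run one final pass that merges the per-batch themes: the same map-then-reduce shape most large-corpus analysis takes.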

2. Parse — Navigate complex documentation

In banking, the documentation landscape is dense: regulatory frameworks, compliance requirements, technical specifications, internal policy documents. AI can read, cross-reference, and extract from these at speed.

What changes:

  1. Regulatory analysis — ask AI to summarise how a new regulation affects a specific product or service
  2. Policy mapping — upload internal policies and ask AI to identify gaps, contradictions, or areas that need updating
  3. Technical translation — take dense technical specs and have AI create plain-language summaries for design and research teams

Your role: You know which questions matter. AI handles the document trawl.

3. Synthesise — Find patterns humans miss

This is where the combination of human judgment and AI capability becomes most powerful. AI can hold far more information in its context than any person can in working memory, finding connections across sources that would take weeks of wall-mapping to surface.

What changes:

  1. Cross-source thematic analysis — find patterns across interviews, surveys, analytics, and support tickets simultaneously
  2. Contradiction surfacing — AI can flag where different data sources tell conflicting stories
  3. Evidence chain mapping — trace an insight back through multiple data points to build a robust evidence base

Your role: You evaluate whether the patterns are meaningful. AI does the heavy lifting of finding them.
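Contradiction surfacing reduces to a simple check once each source's finding on a topic has been labelled. A sketch with invented source names and hand-assigned labels; in practice, a model pass would assign them:

```python
# Sketch: flag topics where different data sources tell conflicting
# stories. Sources, topics, and positive/negative labels are all
# illustrative; an LLM pass would normally assign the labels.

def surface_contradictions(findings: dict[str, dict[str, str]]) -> list[str]:
    """findings maps topic -> {source: 'positive' | 'negative'}.
    Returns a description of each topic where sources disagree."""
    flagged = []
    for topic, by_source in findings.items():
        if len(set(by_source.values())) > 1:
            detail = ", ".join(f"{s}={v}" for s, v in sorted(by_source.items()))
            flagged.append(f"{topic}: {detail}")
    return flagged

conflicts = surface_contradictions({
    "onboarding": {"interviews": "negative", "nps_survey": "positive"},
    "statements": {"interviews": "positive", "support_tickets": "positive"},
})
```

The flagged topics are exactly where your judgment is needed: a contradiction between interviews and a survey is often the most interesting finding in the study, not an error to smooth over.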

4. Generate — Targeted outputs for every audience

One research study, multiple audiences. AI lets you tailor the output without starting from scratch each time.

What changes:

  1. Stakeholder-specific reports — executive summary for the C-suite, detailed findings for the product team, methodology appendix for the research community
  2. Insight activation — turn research findings into actionable design principles, product requirements, or strategy recommendations
  3. Living documents — update reports as new data comes in, rather than treating each study as a static deliverable

Your role: You shape the narrative and ensure accuracy. AI handles the formatting and adaptation.
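The fan-out to audiences can start as nothing more than one prompt per reader type, written once and reused. A sketch with invented audience briefs:

```python
# Sketch: fan one research insight out into audience-specific prompts.
# The audiences and briefs are illustrative; the point is the insight
# is written once and the framing is generated per reader.

AUDIENCE_BRIEFS = {
    "c_suite": "one paragraph, lead with commercial impact, no jargon",
    "product_team": "detailed findings with severity and suggested next steps",
    "research_community": "full methodology, sample, and limitations",
}

def tailored_prompts(insight: str) -> dict[str, str]:
    """One rewrite prompt per audience, all grounded in the same insight."""
    return {
        audience: (
            f"Rewrite this research insight for a {audience.replace('_', ' ')} "
            f"reader ({brief}):\n\n{insight}"
        )
        for audience, brief in AUDIENCE_BRIEFS.items()
    }
```

Because every version is generated from the same source insight, accuracy checking stays in one place: you verify the insight once, then review each framing for tone rather than substance.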


What You're Designing Is Changing

Here's the part most people aren't talking about yet: AI doesn't just change how you work — it changes what you're designing.

The shift from screens to systems

For two decades, design has been primarily about screens. Visual hierarchy, click paths, navigation structures, component libraries. That's not going away — but it's becoming a smaller part of the picture.

As users increasingly interact with services through LLMs and APIs rather than traditional UIs, three things change:

1. Conversations replace click paths

When a user can say "show me my portfolio performance this quarter compared to benchmark" instead of navigating through four screens, the design challenge shifts from information architecture to conversation design. How do you design for intent rather than navigation?

2. Interfaces become adaptive

Static layouts give way to interfaces that generate and adapt based on context. The user doesn't see a one-size-fits-all dashboard — they see what's relevant to their role, their current task, their history. The designer's job becomes defining the rules and constraints of that adaptation, not the fixed layout.

3. The feature finds the user

Instead of users hunting through menus for functionality, AI-powered services can proactively surface relevant actions. "You have three trades pending approval" appears without the user asking. This inverts the traditional design paradigm — from "how does the user find this?" to "when should this find the user?"
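The intent-over-navigation shift can be illustrated with a toy router: the design artefact becomes the mapping from user intents to service capabilities, not a click path. Everything here (intents, keywords, capability names) is invented, and a naive keyword match stands in for a real classifier or LLM:

```python
# Sketch: routing by intent rather than navigation. The designer's
# deliverable is the intent -> capability mapping and the fallback
# behaviour, not a screen flow. All names here are invented.

INTENT_CAPABILITIES = {
    "portfolio_performance": "render_performance_view",
    "pending_approvals": "list_pending_trades",
}

KEYWORDS = {
    "portfolio_performance": ["performance", "benchmark", "returns"],
    "pending_approvals": ["pending", "approval", "approve"],
}

def route(utterance: str) -> str:
    """Naive keyword router; a real system would use a classifier or LLM."""
    text = utterance.lower()
    for intent, words in KEYWORDS.items():
        if any(w in text for w in words):
            return INTENT_CAPABILITIES[intent]
    return "clarify_intent"  # design for ambiguity: ask, don't guess

capability = route("show me my portfolio performance this quarter vs benchmark")
```

Note the fallback: handling the utterance that matches nothing is a design decision, and "ask a clarifying question" is usually better than guessing.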

What this means for your practice

  1. Learn conversation design — how to structure dialogues, handle ambiguity, design for error and repair
  2. Think in systems, not screens — map the service as a system of capabilities, not a set of pages
  3. Design the rules, not the output — define constraints and principles that AI uses to generate appropriate responses
  4. Prototype with AI — build working prototypes that use actual LLMs, not just static mockups of conversational interfaces

The Ethics Imperative

"Technology ethics aren't 'a side hustle' or a problem that can be solved down the line." — Rachel Coldicutt

I wrote about this in 2017, and if anything the questions have become more urgent. As designers and researchers, we have immense influence over how AI manifests in people's daily lives. Our critical thinking will determine how much control we retain over it, and our knowledge will ensure it doesn't simply "happen" to us.

1. Understand what you're implementing

There is a lot of ignorance around AI. Marketing promises that systems "get smarter each time" and provide service "tailored precisely to each customer's needs." It's often meaningless rubbish that promises everything without explaining anything.

If you're working to implement any kind of AI within your service, it's your duty to understand it — even at a basic level. You don't need to understand the maths inside a neural network, but you need to understand what goes in, what comes out, and what happens in the gap between.

2. Decide how transparent to be

Not every AI application needs the same level of transparency. A simple framework helps: plot the complexity of the AI against the sensitivity of the service.

For simple AI in any context, there's an opportunity to educate users on how the service works. But for complex AI in sensitive services — and in banking, many of our services are sensitive — users need to understand how decisions are being made. There's a risk of upset and confusion otherwise.

The challenge: the most exciting breakthroughs in AI are often the hardest to explain. At best, a service can explain the input and the output, but the messy bit in the middle may be beyond simple human comprehension. We need to address this by being more transparent than is normally comfortable about the surrounding areas of the service.
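The complexity-versus-sensitivity framework can even be written down as a decision aid for design reviews. The quadrants follow the framework above, but the guidance wording is illustrative, not policy:

```python
# Sketch: the transparency framework as a lookup. Quadrants follow the
# complexity-vs-sensitivity matrix; the guidance strings are
# illustrative suggestions, not organisational policy.

def transparency_guidance(complexity: str, sensitivity: str) -> str:
    """complexity and sensitivity are each 'low' or 'high'."""
    if complexity == "low" and sensitivity == "low":
        return "Opportunity: light-touch education on how the service works"
    if complexity == "low" and sensitivity == "high":
        return "Explain the decision logic directly in the user journey"
    if complexity == "high" and sensitivity == "low":
        return "Explain inputs and outputs; be honest that the middle is opaque"
    return ("Highest bar: explain inputs, outputs, and surrounding "
            "safeguards; question whether to ship at all")
```

Plotting every AI feature through a check like this before build forces the transparency conversation to happen early, while it can still shape the design.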

3. Ask "to what cost?"

Before deploying AI, ask: does it reduce or increase cost, and does it fulfil a known customer need?

The win-win is obvious: AI that reduces cost AND fulfils a customer need. But the awkward truth is that the processes most ripe for automation often happen far from the customer — and would impact the very people tasked with the decision. We need to be honest about that trade-off.

As Ben Evans put it: "where an iPod was a better Walkman, a Kindle is not a better book." Not every human process should be automated just because it can be. The question is whether the AI genuinely serves the user, or whether it's placing technology before user need.

4. Questions we should keep wrestling with

These aren't solved problems. They're ongoing tensions your team should actively debate:

AI as colleague — When AI goes beyond our human comprehension, we need some kind of translation layer. How will already messy human dynamics like reward and collaboration cope when the team gains a new colleague who never sleeps, never demands payment, and never shows emotion?

Uncanny valley — User testing consistently shows that people prefer to know they're talking to a robot rather than a pretend human. Yet so much AI conversation fixates on personality and human characteristics. What if services focussed on function over form?

"Computer says no" — AI can become the inflexible partner in the relationship, forcing humans into ever-more rigid patterns. When do we call this out and shift it to "human says no"?

AI as crutch — With each new ability we outsource, we should ask what we lose as well as gain. Is the trade-off really worth it?

Three actions

  1. Use the two frameworks above in your next design review — plot every AI feature on the transparency matrix and the cost-vs-need matrix before building it
  2. Involve researchers in testing AI outputs for bias and fairness, not just usability
  3. Build transparency into the service itself — don't treat it as documentation. If you can't explain why the AI made a decision, design around that limitation rather than hiding it

Four Myths Holding You Back

1. "AI output is good enough to ship" → AI output is good enough to start from. Your job is to apply judgment, context, and craft. The first draft is AI's. The final product is yours.

2. "Using AI is cheating" → Using AI is a professional skill. Nobody calls a surgeon lazy for using a laparoscope. The tool changed; the expertise required to use it well didn't decrease — it shifted.

3. "I need to become technical" → You need to become a better communicator. The core skill of working with AI is articulating what you want clearly — which is already the core skill of design and research.

4. "We should wait until the tools mature" → The tools mature faster when you're using them. Six months of experimentation compounds. The teams that wait will spend a year catching up.


Start This Week — Five Actions

  1. Rewrite a recent deliverable with AI. Take a finished report or rationale. Ask AI to restructure it for a time-poor stakeholder. Compare outputs. Notice where AI is good and where it misses nuance — that gap is your value.

  2. Use AI to challenge your own thinking. Take a confident decision. Ask AI for the strongest counter-arguments. Use them to strengthen your reasoning or find blind spots.

  3. Automate your least favourite recurring task. The formatting, the comparison tables, the stakeholder updates. Try handing it to AI. Even a 60% usable first draft saves real time.

  4. Run a "10x" thought experiment. Take your current project. Ask: if I could do 10x the analysis at the same cost, what would I do differently? Then try one of those things with AI.

  5. Pair up and learn together. 30 minutes, one colleague, screen sharing. Show each other something you've tried. The fastest way to build confidence is seeing someone at your level use it in a context you understand.


Team Principles

1. Judgment over output. AI produces volume. You produce quality. Never ship AI output without applying your professional judgment.

2. Transparency by default. If AI meaningfully contributed to a deliverable, say so. "I used AI to accelerate the initial analysis, then validated and refined the findings."

3. Experimentation over perfection. Try things. Share what you learn. The goal right now is collective knowledge, not perfect workflows.

4. Protect what matters. Be thoughtful about data in a regulated environment. Anonymise. Follow organisational policy. Being AI-ready means being smart, not reckless.

5. Share the knowledge. When you find a prompt that works, a workflow that saves hours, or a use case that surprised you — share it. AI skill compounds faster when it's distributed.


Resources

Read: Ethan Mollick — Co-Intelligence (the best book on working alongside AI)

Read: Ethan Mollick — One Useful Thing (Substack, regular evidence-based writing on AI and knowledge work)

Watch: IDEO — AI & the Future of Design (how a leading design firm is rethinking practice)

Practice: Anthropic — Prompt Engineering Guide (treat prompt writing as a craft skill)

Listen: Lenny's Podcast — AI for Product Teams (interviews with design and research leaders)

Read: Nielsen Norman Group — AI & UX (research-backed guidance on AI's impact on UX practice)


Last updated May 2026. This is a living document.