Product Design
Framework
This is an internal operations playbook — not a manifesto. Every technique here has been validated against the specific constraints of service-design work: external clients, shifting budgets, confidentiality requirements, and the need to be explainable and defensible under real pressure.
Tool names are current examples of a capability. The landscape moves fast; capabilities are what matter. Review the Core AI Platforms section quarterly.
Contents
Core Principles
Before applying any technique, every designer using AI tools in client work should hold these four principles. They are not aspirational — they are operating constraints.
1. Capability-First, Not Tool-First
AI tools will change. Capabilities — synthesis, generation, iteration, documentation — are stable. This framework is built around what AI can do, not what any product is called today. When a tool is named, it is a current example of a capability, not a permanent endorsement.
2. Evidence-Led
Every recommendation here has a validation path. Where real teams have published or shared outcomes, those are cited directly. Where first-principles reasoning fills a gap, that is flagged explicitly. No company's approach dominates this framework.
3. Service-Company Aware
This was built for service work from the start. The specific constraints that shape every recommendation:
- Multiple clients simultaneously: AI workflows must be isolated per client.
- Defensibility over novelty: Every AI-assisted output must be explainable to a client who may distrust AI.
- Stewardship, not ownership: Designers here guide the client's vision. AI tools amplify your judgement — they do not substitute for it.
- Confidentiality is a hard constraint: Client data must never be fed into external AI tools without an explicit anonymisation step.
- Speed and quality must coexist: Time saved on generation must be invested in quality review.
4. The Deterministic Output Problem
AI produces confident-sounding output even for precise or factual tasks — accessibility standards, component specs, legal copy, numeric data — and that output requires human review before use. The time saved on generation does not offset the time lost to unchecked errors.
Evidence Tiers
| Label | What It Means |
|---|---|
| ✓ Verified Evidence (blue) | Published research, documented case studies, or named practitioner accounts that are publicly verifiable. |
| ◈ First-Principles Reasoning (amber) | Directional logic drawn from how AI tools actually behave. Specific numbers are illustrative, not measured. |
AI Use Cases
Discovery is where AI provides the highest-leverage assistance with the lowest risk of confidentiality exposure — because good discovery AI work operates on public signals and anonymised research.
- Secondary research synthesis: AI processes competitor analyses, sector reports, and analogous company reviews rapidly.
- Interview guide generation: AI drafts topic-structured discussion guides from a brief.
- Transcript processing: AI identifies themes, quotes, and patterns from anonymised interview transcripts at scale.
- Affinity clustering: AI pre-sorts observations into thematic clusters for team review.
- Stakeholder mapping: AI drafts stakeholder maps and influence matrices from a project brief.
- Long document analysis: Use Gemini's long-context capability or NotebookLM to process 100+ page documents in a single pass with cited summaries.
- Persistent project context: Use Claude Projects or ChatGPT custom GPTs to maintain research context across sessions. Anonymise all inputs before loading.
Primary: Claude or Gemini for synthesis. Gemini / NotebookLM for long documents and Google Drive research packs. ChatGPT for multimodal research inputs. Specialist tools: Dovetail for research repositories, Perplexity for sourced secondary research.
Prompt Template — Transcript Synthesis
ROLE: You are a senior UX researcher.
TASK: Analyse the following interview transcript and produce:
1. Top 5 themes with supporting quotes (anonymised)
2. Key tensions or contradictions observed
3. Unanswered questions that warrant follow-up
4. One surprising finding if any
TRANSCRIPT: [paste anonymised transcript here]
FORMAT: Structured output with quotes in block format.
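To apply the template consistently across a batch of interviews, a small helper can fill it in and refuse input that contains obvious identifiers. This is a minimal sketch — the function name and the crude email/URL check are illustrative, not any tool's API, and the check is a backstop for the anonymisation protocol, not a replacement for it:

```python
import re

SYNTHESIS_TEMPLATE = """ROLE: You are a senior UX researcher.
TASK: Analyse the following interview transcript and produce:
1. Top 5 themes with supporting quotes (anonymised)
2. Key tensions or contradictions observed
3. Unanswered questions that warrant follow-up
4. One surprising finding if any
TRANSCRIPT:
{transcript}
FORMAT: Structured output with quotes in block format."""

# Crude signals that anonymisation was skipped: emails and URLs.
PII_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.\w+|https?://\S+")

def build_synthesis_prompt(transcript: str) -> str:
    """Fill the synthesis template, refusing obviously non-anonymised input."""
    if PII_PATTERN.search(transcript):
        raise ValueError("Transcript appears to contain an email or URL — anonymise first.")
    return SYNTHESIS_TEMPLATE.format(transcript=transcript.strip())
```

Because the template text is fixed in one place, every transcript in a project gets an identical prompt, which makes the AI's outputs comparable across interviews.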
Prompt Template — Competitive Research
ROLE: You are a competitive research analyst.
TASK: Review this [industry] and produce a landscape summary:
1. Key player positioning (max 6 companies)
2. Feature gap analysis
3. Underserved user needs visible from public signals
4. UX and design patterns across the category
CONTEXT: [paste anonymised sector description]
Output as a structured brief shareable with a design team.
Proof of Results
Nielsen Norman Group (2024): AI-assisted affinity diagramming reduced synthesis time by 30–40% in controlled study conditions, but increased divergence risk when outputs were not anchored to direct researcher review.
Guardrails
- Never feed raw client transcripts into external AI tools — anonymise first using the 5-minute protocol in the Confidentiality section.
- Always human-review AI-generated interview guides for leading questions and cultural insensitivity.
- Never treat AI-synthesised themes as findings — they are hypotheses.
- Never use AI to generate research citations without verifying the source exists.
Take the transcript or notes from your current discovery phase. Paste an anonymised version into Claude with the synthesis prompt. Use the output as a starting point for your next team synthesis session — not as the session's findings.
AI Use Cases
- How Might We generation: Given a problem space, AI generates 20–40 HMW questions in under a minute.
- Problem statement drafting: AI drafts multiple framings of a design brief from research inputs.
- Persona synthesis: AI combines interview themes into proto-personas for team alignment. Always validate against real user data.
- Design principles generation: AI drafts candidate design principles from a brief or research summary.
- Opportunity prioritisation: AI scores and ranks opportunities against business goals and user needs when given explicit criteria.
- Persistent brief context: Load the anonymised project brief into Claude Projects at the start of the engagement.
Primary: Claude or ChatGPT for framing, HMW generation, and principles drafting. Claude Projects or ChatGPT custom GPTs for persistent project context. Gemini if research material lives in Google Drive.
Prompt Template — Define Phase Starter
ROLE: You are a design strategist.
CONTEXT: [paste anonymised research summary — max 500 words]
TASK:
1. Generate 15 'How Might We' questions across 3 themes
2. Draft 3 alternative problem statement framings
3. Propose 4–5 candidate design principles
(name, one-line description, design implication for each)
4. Rank the top 5 opportunities by user impact and business fit
CONSTRAINT: Plain language — defensible to a non-design client.
Guardrails
- Never let AI-generated personas substitute for direct user research — they are placeholder hypotheses only.
- Always have a human own the final problem statement — it is the most consequential design decision of the project.
- Never present AI-drafted design principles to a client without team review.
Before your next client kick-off, use the define phase prompt on an anonymised version of the brief. Print the outputs. Use them as provocation material in the opening session rather than building from a blank whiteboard.
AI Use Cases
- Concept generation: AI generates 20–40 rough concept directions from a brief. Volume and speed matter here.
- Analogous inspiration: AI identifies how analogous industries have solved similar problems.
- Anti-pattern exploration: Prompting AI to generate the worst possible solution often surfaces useful boundary conditions.
- Multi-modal ideation: Sketch a rough concept on paper or iPad. Photograph it. Upload to Claude or ChatGPT with the prompt 'Interpret this sketch as a UI concept and generate 5 variations.'
- Motion concept exploration: Use Rive for interaction and animation direction exploration.
Primary: Claude or ChatGPT for concept generation and sketch interpretation. Rive for motion concept exploration. Adobe Firefly or Midjourney for visual concept imagery.
Prompt Template — Divergent Concept Generation
ROLE: You are a lateral-thinking design strategist.
PROBLEM: [one-sentence problem statement]
USER: [one-line user description — anonymised]
TASK: Generate 20 distinct concept directions. For each:
- Concept name (2–4 words)
- Core idea (1 sentence)
- What makes it different from obvious solutions
RULES:
- Prioritise novelty over feasibility at this stage
- Include at least 3 counterintuitive concepts
- No concept should resemble another in this list
Prompt Template — Sketch to Concept Variations
I have sketched a rough UI concept [attach image].
Interpret this sketch and generate 5 distinct variations:
1. A version that prioritises simplicity
2. A version that prioritises data density
3. A version optimised for mobile-first
4. A version that challenges the layout convention
5. A version the designer would least expect
For each: describe the layout, key interaction, and what design principle it prioritises.
Guardrails
- Never present AI-generated concepts directly to a client — they require designer curation, development, and narrative.
- AI homogenises ideation output towards the most common design patterns. Always force-include unexpected directions.
- The anti-pattern exercise is one of the most reliable ways to counteract AI's tendency toward generic output.
Run the anti-pattern prompt on your current brief before your next ideation session. Share the worst-solutions list with the team at the start. It is a reliable divergent warm-up that surfaces useful design constraints in under 10 minutes.
Three distinct modes of AI assistance now exist in parallel: AI as writing assistant, AI as UI generator, and AI as code generator. Each requires different skills and different guardrails.
Writing & Copy Assistance
- UI copy drafting: Microcopy, button labels, empty states, error messages, and onboarding text at speed.
- Accessibility copy: ARIA labels, alt text, and screen reader descriptions for components.
- User flow diagramming: AI generates Mermaid-compatible flow diagrams from a text description of a process.
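In practice the AI writes the Mermaid itself from a prose description, but the mechanics are simple enough to sketch. The helper below (names illustrative) converts an ordered step list into Mermaid flowchart text — and makes visible what the guardrails warn about: the generated flow is only the linear happy path, with error paths and empty states still to be added by hand:

```python
def steps_to_mermaid(steps: list[str]) -> str:
    """Convert an ordered list of flow steps into Mermaid flowchart text."""
    lines = ["flowchart TD"]
    for i, step in enumerate(steps):
        lines.append(f'    S{i}["{step}"]')     # one node per step
    for i in range(len(steps) - 1):
        lines.append(f"    S{i} --> S{i + 1}")  # linear happy path only
    return "\n".join(lines)

diagram = steps_to_mermaid(["Sign up", "Verify email", "Onboarding", "Dashboard"])
print(diagram)
```

The output pastes directly into any Mermaid-compatible renderer, including Markdown previews and documentation tools.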
UI Generation
- Screen layout from prompt: Tools like Google Stitch, Uizard, and v0 produce Figma-importable or code-based screen designs from text. Google Stitch (formerly Galileo AI, now a free Google Labs tool running on Gemini) is purpose-built for this.
- Wireframe to high-fidelity: Upload a rough wireframe sketch. AI generates a styled, high-fidelity version applying a design system or described visual style.
- Design system-constrained generation: Provide the system's tokens, component names, and style rules as context.
- Component generation in v0: Use v0 by Vercel to generate individual React/Tailwind components from a text description.
Interactive Prototyping
- Vibe-coded interactive prototype: Figma Make, Lovable, and Bolt.new produce working interactive prototypes from a Figma frame or text prompt.
- Screenshot to working component: Take a screenshot of an existing UI element. Upload to Claude or ChatGPT. Prompt: generate a working React component that matches this design.
- Ask Claude to generate a working component: In any Claude conversation, describe a UI component or screen and ask Claude to generate it as working HTML or React. Claude renders a live preview directly in the conversation — no separate tool, no setup.
Writing: Claude, ChatGPT, Figma AI. UI Generation: Figma AI, v0 (Vercel), Google Stitch, ChatGPT. Interactive prototyping: Figma AI (Make), Lovable, Bolt.new, Replit. Screenshot-to-code: Claude, ChatGPT (GPT-4o), Cursor. Accessibility: Stark, axe DevTools.
Prompt Template — UI Generation with Design System Constraints
Generate a [screen name] screen for a [product type].
Design system constraints:
Colours: [list primary, secondary, surface, error tokens]
Typography: [heading and body type scales]
Spacing scale: [e.g. 4, 8, 12, 16, 24, 32, 48px]
Components available: [list: Button, Card, Input, Nav, etc.]
Border radius: [value]
Screen requirements:
Primary action: [what the user is trying to do]
Key content: [what must appear on this screen]
Constraints: [mobile-first / desktop / accessibility priority]
Output as: React with Tailwind / HTML with CSS / Figma description
Prompt Template — Screenshot to Component
I am attaching a screenshot of a UI component. [attach image]
Generate a working React component that matches this design. Use Tailwind CSS for styling.
Requirements:
- Match the visual design as closely as possible
- Include all visible states (default, hover, active, disabled)
- Add appropriate ARIA attributes for accessibility
- Use semantic HTML elements
- Add TypeScript props interface
Flag any design decisions you had to infer from the screenshot.
Honest Limitations of AI UI Generation
| Limitation | How to work around it |
|---|---|
| Defaults to generic SaaS patterns — dashboard layouts, card grids, sidebar navigation | Provide explicit design principles and reference unusual layouts in your prompt. |
| Cannot understand business context or user mental models | You must provide this as prompt context. |
| Design system compliance degrades at scale | Generate at component level, not page level. Validate each component. |
| Accessibility is inconsistently handled | Always run Stark or axe DevTools on AI-generated UI. |
| Copy is always placeholder-quality | Treat all AI-generated copy as a first draft requiring brand voice review. |
| Visual hierarchy is often flat | Apply typographic scale and visual weight manually after generation. |
Guardrails
- Never ship AI-generated UI copy without brand voice review.
- Never treat AI accessibility flagging as a full audit — run Stark or axe DevTools on all AI-generated UI.
- Never assume design system compliance in AI-generated components — validate each one.
- AI-generated user flows skip edge cases by default. Always add error paths and empty states manually.
AI Use Cases
- Concept mockup imagery: AI generates placeholder illustrations and mood board imagery for early presentations.
- Generative Fill for photo editing: Adobe Firefly's Generative Fill edits existing photography in context — removes objects, extends backgrounds, replaces elements.
- Vector asset generation: Adobe Firefly generates editable vector assets. Recraft is purpose-built for vector and UI illustration.
- Motion and video assets: Adobe Firefly Video and Runway generate short motion assets for UI demos and concept animations.
Commercial delivery: Adobe Firefly only. Exploration and mood boarding: ChatGPT (DALL·E 3), Gemini. Vector and UI illustration: Adobe Firefly, Recraft. Photo editing in context: Adobe Firefly (Photoshop Generative Fill). Motion assets: Adobe Firefly Video, Runway.
Adobe Firefly is currently the only major platform with clear commercial licensing for client asset delivery. All other tools require individual IP policy verification before use on client work. Disclose AI asset use to clients where contractually required. This legal area is evolving — review tool policies at the start of every major project.
Prompt Template — Style Anchor
Style: [flat vector / line art / isometric / hand-drawn]
Stroke weight: [e.g. 2px consistent / none]
Colour palette: [max 5 hex codes]
Mood: [professional / playful / minimal / warm / bold]
Avoid: [gradients / shadows / photorealism / faces / etc.]
SUBJECT: [specific scene or concept for this asset]
NOTE: AI has no memory between sessions. Paste the full STYLE ANCHOR unchanged at the start of every generation prompt for this project to maintain visual consistency.
Guardrails
- Never deliver AI-generated images to a client without verifying the tool's commercial licensing terms.
- Always maintain the style anchor document for a project — AI has no memory between sessions.
- Never present AI-generated assets as original commissioned illustration without disclosure.
AI Use Cases
- Deck structure and narrative: AI drafts a presentation flow from a brief.
- AI-native deck creation: Gamma generates a full slide deck from a text brief or document. Best for internal presentations and early concept decks — not polished final deliverables.
- Deck narrative critique: Upload your current deck to Claude or ChatGPT. Ask it to critique the narrative structure, identify gaps in the argument, and flag slides where a client stakeholder might disengage.
- Objection preparation: AI generates likely client objections to a design recommendation and drafts responses.
- Interactive prototype for sign-off: A working prototype the client can interact with produces faster, higher-quality sign-off than a static deck.
Deck creation: Gamma (AI-native). Deck critique and narrative: Claude, ChatGPT. Executive summaries and speaker notes: Claude, ChatGPT. Interactive prototypes for sign-off: Figma Make, Lovable, Replit.
Prompt Template — Design Decision Summary
ROLE: Senior design consultant presenting to a non-design client stakeholder.
Write a defensible decision summary for this recommendation:
Decision: [describe the design decision]
Rationale: [why this approach]
Alternatives considered: [what was rejected and why]
User evidence: [anonymised research insight]
Business implication: [connection to client goals]
Tone: Professional, confident, no jargon.
Length: One paragraph for email or slide note.
End with: 'We recommend [X] because [Y]. Next step is [Z].'
Prompt Template — Objection Preparation
I am presenting this design direction to a client stakeholder:
[brief description of the recommendation]
Stakeholder profile: [conservative / technically minded /
brand-focused / budget-focused — describe]
Generate 8 likely objections. For each:
1. The objection as the stakeholder might phrase it
2. A response that addresses the concern while holding
the design position
3. A fallback compromise if the main response fails
Guardrails
- Never use AI to generate client-specific strategic recommendations — AI does not know your client's business.
- Always human-review AI-drafted executive summaries — they often miss the nuance that makes a recommendation land.
- Gamma-generated decks are first drafts only — never share with a client without significant designer review.
Before your next client presentation, run the objection preparation prompt. Brief a colleague to challenge you using the AI-generated objections. A 20-minute rehearsal with this material produces a measurably more confident presentation.
AI Use Cases
- Spec documentation generation: AI takes a design description and produces structured component specifications.
- Acceptance criteria drafting: AI drafts acceptance criteria for design tickets from a component or interaction description.
- CSS and design token generation: AI translates design system values into CSS custom properties, Tailwind config, or design token JSON.
- Handoff documentation: AI structures design decisions, component states, accessibility requirements, and interaction notes into a complete handoff document.
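The token-translation use case is mechanical enough to sketch directly. Assuming a flat token dictionary as the source of truth (the token names and values below are hypothetical), the same data can emit CSS custom properties without hand-copying values:

```python
def tokens_to_css(tokens: dict[str, str], selector: str = ":root") -> str:
    """Emit design tokens as CSS custom properties under one selector."""
    props = "\n".join(f"  --{name}: {value};" for name, value in tokens.items())
    return f"{selector} {{\n{props}\n}}"

tokens = {  # hypothetical values, for illustration only
    "color-primary": "#0B5FFF",
    "color-surface": "#FFFFFF",
    "space-md": "16px",
    "radius-card": "8px",
}
print(tokens_to_css(tokens))
```

Keeping one canonical token source and generating the CSS (or Tailwind config, or token JSON) from it is what makes AI-assisted token work safe: the review target is a small dictionary, not scattered stylesheet values.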
Spec and documentation: Claude, ChatGPT. Figma handoff: Dev Mode AI, Code Connect, MCP Server, Git Integration — all Figma-native. Canvas-to-code pipeline: see Specialist Tools and Canvas ↔ Code section. AI in client codebase: see decision framework below.
Tools — Developer Handoff
| Capability | Tools & Notes |
|---|---|
| Figma Dev Mode with AI | Native Figma handoff environment. Generates code hints, CSS values, and component notes directly from design structure. Best starting point for all handoffs — no plugin required. |
| Figma Code Connect | Links Figma components to their actual production code in the codebase. When a developer inspects a component in Dev Mode, they see the real import statement and correct prop names — not AI-generated approximations. The Code Connect UI supports GitHub connection with AI suggestions to find the right code file — no coding required from the designer. |
| Figma MCP Server | Brings live Figma design context directly into AI coding tools — Cursor, VS Code, Windsurf, and Claude. The AI reads the actual design file structure during code generation rather than working from a screenshot. This is the infrastructure that makes Claude + Figma integration work. |
| Figma Git Integration | Designers can branch, commit, and merge Figma files directly to GitHub or GitLab repositories. Design files become living branches in the same repository as production code. Pull requests show visual diffs alongside code diffs. Requires Git fluency from the designer. |
| Canvas → code pipeline | Anima, Locofy, Builder.io, Pencil.dev — see Specialist Tools section and Canvas ↔ Code special section. |
| Design system documentation | Zeroheight with AI-generated component docs from Figma specs. Supernova for active token management. |
| Ticket and spec ingestion | Linear AI and Jira AI accept structured component specs and generate tickets. |
AI Coding Tool Decision Framework
When a designer is working in a client's codebase, the choice of AI coding tool is determined by three questions:
| Question | Why it determines tool choice |
|---|---|
| 1. What has the client's IT policy approved? | Enterprise IT policies often have an approved vendor list. Ask the development lead before installing any AI coding tool on a client project. |
| 2. Which tool's data retention policy is compatible with confidentiality requirements? | All three major AI coding tools send code to external servers for processing. Confirm which policy the client's legal or IT team accepts. |
| 3. Does the client's dev team already use a particular tool? | Working in the same AI coding environment as the development team reduces friction and improves code review. |
Prompt Template — Component Specification
Generate a developer-ready component specification for:
Component: [component name]
Purpose: [what it does in one sentence]
Include:
1. Visual states (default, hover, active, disabled, error, loading)
2. Responsive behaviour (mobile/tablet/desktop)
3. Accessibility (ARIA labels, keyboard nav, focus states)
4. Content requirements (character limits, edge cases)
5. Animation/transition specs if applicable
6. Integration dependencies (APIs, data requirements)
Format as a structured spec a developer can work from directly.
Guardrails
- Never send AI-generated specifications to developers without designer review — AI invents plausible-sounding technical details.
- Never choose an AI coding tool for a client codebase without answering the three-question framework first.
- Always work in a branch when making code changes — never directly on main.
- Never use AI coding tools to touch backend files, API routes, authentication, or database logic.
AI Use Cases
- Analytics narrative: AI translates raw analytics data into plain-language narrative for client reporting.
- Heatmap and session interpretation: Hotjar AI and Microsoft Clarity AI surface patterns without manual review of every recording.
- User feedback synthesis: App store reviews, NPS comments, support tickets — AI identifies themes and priority issues from high-volume qualitative feedback.
- Sentiment analysis at scale: AI sentiment analysis produces a prioritised issue list from large volumes of user feedback.
- Automated usability test analysis: Maze AI generates insight summaries from unmoderated usability test sessions.
- Iteration brief generation: AI drafts an iteration brief from post-launch findings, structuring issues into priority tiers.
Analytics narrative: Claude, ChatGPT. Heatmap interpretation: Hotjar AI, Microsoft Clarity AI. User feedback synthesis: Dovetail AI, Notably. Usability test analysis: Maze AI. Documentation: Claude, ChatGPT, Zeroheight.
Prompt Template — Analytics to Design Insight
I have the following post-launch analytics data:
[paste anonymised metrics: drop-off rates, task completion, error rates, time on task, NPS scores, etc.]
Translate this data into:
1. Plain-language summary for a non-technical client
2. Top 3 design problems implied by the data
3. Prioritised iteration recommendations with design rationale
4. Metrics to track in the next iteration cycle
Flag any data gaps that make these conclusions uncertain.
Do not state causation — state correlation and hypothesis only.
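The arithmetic behind a drop-off narrative is worth making explicit, because it is the part you must verify before any AI-generated summary reaches a client. A minimal sketch (field names and the example funnel are illustrative), framed deliberately as correlation, not cause:

```python
def funnel_dropoffs(counts: list[tuple[str, int]]) -> list[tuple[str, float]]:
    """Return (step, drop-off rate from the previous step), largest first.

    Drop-off is a correlation only — why users leave at a step is a
    hypothesis to test in the next iteration, not a conclusion.
    """
    drops = []
    for (_, prev), (step, curr) in zip(counts, counts[1:]):
        rate = 0.0 if prev == 0 else (prev - curr) / prev
        drops.append((step, round(rate, 3)))
    return sorted(drops, key=lambda d: d[1], reverse=True)

funnel = [("Landing", 1000), ("Sign-up", 420), ("Verify", 390), ("First task", 180)]
print(funnel_dropoffs(funnel))
```

Checking the cited numbers against a calculation like this is the concrete form of the guardrail below: verify before the narrative ships.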
Guardrails
- Never allow AI to state causal conclusions from correlation in analytics data — always frame as hypotheses.
- Never present AI-generated analytics narratives to clients without verifying the numbers cited are accurate.
- Always have a human frame the strategic implications — AI produces analysis, not judgement.
No client-identifying information may be entered into an external AI tool without completing the 5-minute anonymisation protocol below. This includes: client names, product names, URLs, employee names, financial data, strategic plans, user research with identifying information, brand guidelines, or any document a client would consider confidential.
5-Minute Anonymisation Protocol
- Replace client name with a generic descriptor: 'a mid-market SaaS company'
- Replace product names with functional descriptions: 'the customer dashboard'
- Remove all employee names, titles, and identifying quotes
- Remove all URLs, app store links, and brand identifiers
- Replace specific financial or commercial figures with ranges or remove entirely
- Replace participant names in research with 'User A', 'User B' etc.
- Read the anonymised version — if you could identify the client from it, anonymise further
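Most of the protocol is judgement, but the mechanical steps can be scripted as a first pass. A hedged sketch — the patterns and replacement strings are illustrative and deliberately blunt (over-redaction is cheaper than a leak), and the final read-through in the last step remains mandatory:

```python
import re

# First-pass redactions for the mechanical protocol steps.
RULES = [
    (re.compile(r"https?://\S+"), "[url removed]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.\w+"), "[email removed]"),
    (re.compile(r"[£$€]\s?\d[\d,.]*\s*(?:k|m|bn)?", re.I), "[figure removed]"),
]

def first_pass_anonymise(text: str, client_terms: list[str]) -> str:
    """Apply mechanical redactions; client and product names are passed explicitly."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    for term in client_terms:
        text = re.sub(re.escape(term), "[client term]", text, flags=re.I)
    return text
```

A script like this cannot catch indirect identifiers ("the only fintech in Leeds doing X"), which is exactly why the protocol ends with a human read-through rather than a tool.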
What Is Safe to Input
- Safe: Anonymised problem statements
- Safe: Generic industry context without client-specific details
- Safe: Your own design thinking and hypotheses
- Safe: Public information about competitors or market trends
- NOT safe: Raw interview transcripts with participant names
- NOT safe: Client briefs, contracts, or strategy documents
- NOT safe: Unpublished product roadmaps or feature plans
- NOT safe: Any data covered by NDA
Platform Data Policies
| Platform | Key data policy note |
|---|---|
| Claude (Anthropic) | Claude for Teams and Enterprise: conversations not used for model training. Free and Pro tiers: review current policy. Claude Projects maintain context across sessions within the same workspace. |
| ChatGPT (OpenAI) | Team and Enterprise plans: data isolation available. Free and Plus tiers may use conversations for training. Verify your organisation's account tier before use with client material. |
| Gemini (Google) | Gemini for Workspace: governed by Google Workspace enterprise terms. Personal Gemini Advanced: separate policy. NotebookLM: separate policy — review independently. |
| Figma AI | Governed by Figma's enterprise terms for paid plans. Review before using AI features with confidential client files. |
| Adobe Firefly | Adobe's enterprise terms apply for Creative Cloud for Enterprise. Consumer Creative Cloud: review separately. |
| Cursor | Sends code to Anthropic/OpenAI for processing. Privacy mode available — enable by default for client codebase work. |
| GitHub Copilot | Business/Enterprise: code not used for model training. Individual plan: different policy. Always use Business or Enterprise on client projects. |
Four distinct directions of translation now exist between design files and code. Before exploring each direction, three Figma-native capabilities underpin how modern canvas-to-code works:
| Figma Infrastructure | What It Does |
|---|---|
| Figma MCP Server | Connects live Figma files directly to AI coding tools (Cursor, VS Code, Windsurf, Claude). The AI reads actual design structure, component properties, and token values during code generation — not screenshots. Requires developer setup. |
| Figma Code Connect | Maps Figma components to their real production code. Developers see the actual import statement and correct props from the codebase — not AI-generated approximations. Requires developer setup via CLI or the Code Connect UI with GitHub. |
| Figma Git Integration | Design files as branches in the same Git repository as production code. Designers branch, commit, and merge in Figma. Pull requests show visual diffs alongside code diffs. Eliminates design file drift. |
These three capabilities each require developer setup — the designer's role is to request them from the development lead at the start of a project, and design in a way that takes advantage of them.
A. Claude + Figma — The Native Integration
Claude's Figma integration reads live Figma files through the Figma API — understanding component relationships, auto-layout logic, design system connections, and layer structure. Unlike screenshot-based tools, it reads actual design structure.
What This Enables
- Design-aware code generation: Claude reads your actual Figma file and generates component code that reflects the real design structure.
- Design system audit: Ask Claude to identify components that are inconsistent with the design system.
- System-aware new component generation: Claude generates new components that match the established system's patterns, spacing, and naming conventions.
- Handoff documentation from live files: Claude reads the Figma file and produces component specifications and accessibility notes without a documentation sprint.
The Role of the Figma MCP Server
When the MCP server is connected, Claude reads exact component properties, spacing values, and token references rather than inferring them from a screenshot. The difference in code output quality is significant. Ask the development lead to configure the Figma MCP server alongside Cursor or VS Code at the start of a project.
B. Canvas → Code — Pipeline Tools
| Tool | Approach & Best For |
|---|---|
| Anima | Figma plugin → React/Vue/HTML. Best for isolated component work where a developer reviews and integrates. |
| Locofy | Figma → production-oriented components with framework selection. More developer-focused output. |
| Builder.io | Figma → CMS-editable pages. Best for content-heavy marketing or landing page work. |
| Pencil.dev | Design canvas inside VS Code/Cursor. Design as code in the same repo. MCP + Claude Code for pixel-perfect output. Best for teams sharing a codebase — eliminates handoff entirely. |
C. Screenshot → Code
The most accessible direction — no Figma integration or plugin required. A screenshot of any UI element can be converted to working code using Claude, ChatGPT (GPT-4o), or AI coding tools with vision capability.
- Regenerate an existing component in a new design system: Screenshot the current component. Prompt: generate this component in React using our new design system tokens.
- Reverse-engineer a reference UI: Screenshot a UI pattern from a competitor. Starting point for adaptation, not copying.
- Quick Figma-to-code without the integration: Export a Figma frame as PNG. Upload to Claude with the screenshot-to-component prompt.
D. Code → Canvas
When a client's product has evolved faster than its design files, code-to-canvas closes the gap.
- Figma Code Connect for reverse context: When Code Connect is configured, Dev Mode shows the real production code snippet for any component — giving exact component names, props, and import paths to work from when rebuilding design files.
- Figma Dev Mode inspection: Surfaces what CSS values, components, and tokens are actually in use versus what is in the design file.
- AI-assisted component inventory: Use Claude or Cursor to scan a client's component directory and generate a plain-language inventory.
Guardrails
- Never present Claude + Figma integration output as production-ready code without developer review.
- Never connect Claude to a client's Figma file without confirming this is within the terms of the engagement.
- Figma MCP Server, Code Connect, and Git Integration all require developer setup — request these at project start, not mid-project.
- Figma Git Integration requires Git fluency from the designer — do not enable it without understanding branching and merging.
- Canvas-to-code pipeline tools require developer review before any output enters a client codebase.
What It Means Practically
Vibe coding is using AI to generate working interactive prototypes from natural language or design input — without writing code yourself. You describe what you want; the AI produces working HTML, React, or similar code that you can immediately share and test.
When to Use It / When Not To
| Use | Do Not Use |
|---|---|
| Early concept validation with clients | Production handoff — not production-ready code |
| User testing (interactions, flows, copy) | Complex application logic |
| Stakeholder sign-off on interactions | Unsupervised client builds — prototypes will break |
| Iteration between user test sessions | Accessibility-sensitive builds |
| Internal design team alignment | Security-sensitive or data-sensitive flows |
Step-by-Step: Concept to Shareable Prototype in Under One Hour
- Start in Figma (15 min): Design or sketch your key screen at any fidelity. Enough to communicate layout and intent.
- Export or describe (2 min): Export the frame as PNG, or write a plain-language description of what the screen does.
- Generate (20 min): Use Figma Make, Lovable, or Bolt.new. Paste the image or description. Describe the interaction: 'When the user clicks [X], show [Y].' Review the output.
- Iterate (15 min): Fix issues by describing them. 'The button is too small.' 'Add a loading state after submission.' You are directing, not coding.
- Share (2 min): All tools generate a shareable URL. Send directly to a client or test participant.
Choosing Your Tool
| Tool | Best For |
|---|---|
| Claude (in conversation) | Ask Claude to generate a working HTML or React component directly in the conversation. Claude renders a live interactive preview — no account, no separate tool. Best for single-screen or single-component demos. Start here. |
| Figma Make | Best for designers already in Figma. Generates interactive code directly from Figma frames. Stays closest to the existing design. |
| Bolt.new | Fast browser-based prototype generation. Strong for quickly demonstrating a concept with basic data interactions. Good for same-session iteration. |
| Lovable | Stronger for more complex interactions with multiple states and flows. Generates full React apps. Best for functional prototypes with more moving parts. |
| Replit | Browser-based coding and preview environment — no local installation required. Useful in client workshops or on machines where installing tools is not possible. |
Guardrails
- Never present a vibe-coded prototype as production-ready — set client expectations explicitly before sharing.
- Always test the prototype yourself before sharing with a client or test participant.
- Never build security-sensitive or data-sensitive flows in a vibe-coded prototype.
The Spectrum
| Level | Description |
|---|---|
| 1 — Read | Can read HTML/CSS and understand what it does. Communicates precisely with developers using technical terms. |
| 2 — Edit | Makes targeted changes to CSS, copy, and simple layout properties in an existing codebase with AI assistance. Clears P3 fixes before they ever reach the bug backlog. |
| 3 — Build (UI layer) | Builds static UI components from scratch using AI tools. Prototypes full screens in code. |
| 4 — Full build | Builds complete front-end interfaces. Out of scope for most service designers — developer territory. |
Target for service designers: Levels 1–2 as baseline. Level 3 as an advanced capability for designers who want it and have the aptitude.
Agent Mode vs. Chat Mode
- Chat mode: You paste a snippet of code, describe the change, AI edits it. You apply the edit manually. Best for targeted, single-file changes. Most designer UI-layer work sits here.
- Agent mode: AI is given a task and autonomy to make changes across multiple files. Powerful for systematic changes — 'update all instances of this deprecated component.' Use with caution in a client codebase. Always review all changes before committing.
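For mechanical renames like the deprecated-component example, a plain script whose diff you review is often safer than giving an agent free rein. A minimal sketch, with hypothetical component names; run it on a branch and review the diff before committing.

```python
from pathlib import Path

def replace_deprecated_component(root: str, old: str, new: str) -> list[str]:
    """Swap one component name for another across a UI directory.

    A reviewable alternative to agent mode for mechanical renames.
    Component names like 'OldBadge'/'StatusBadge' are hypothetical.
    """
    changed = []
    for path in sorted(Path(root).rglob("*.tsx")):
        text = path.read_text(encoding="utf-8")
        if old in text:
            path.write_text(text.replace(old, new), encoding="utf-8")
            changed.append(str(path.relative_to(root)))
    return changed
```

Note the simplification: a bare substring replace can over-match (an `OldBadgeLarge` would also be renamed), so a real rename should match word boundaries, for example with a `\b` regex.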
Working in a Client Codebase — Rules
- Work through the three-question framework first — see Phase 7
- Always work in a branch — never directly on main or production
- Confirm with the development lead which files are in scope for UI edits
- Make small, targeted changes with clear commit messages
- Request a developer review before merging — even P3 CSS fixes
- Never touch files outside the designated UI layer scope
Guardrails
- Never make code changes in a client codebase without developer knowledge and branch isolation.
- Never use agent mode in a client codebase for changes you have not fully reviewed.
- Never touch backend files, API routes, authentication logic, or database-adjacent code.
When AI Assets Are Appropriate
| Client / Project Type | AI Asset Suitability |
|---|---|
| Early-stage startup, no brand system | High — establishes visual direction quickly |
| Established brand, mature guidelines | Low-Medium — consistency requires strong style reference inputs |
| Consumer product, strong brand equity | Low — brand risk too high for generic AI output |
| Concept deck or mood board | High — placeholder assets explicitly appropriate |
| Final client deliverable | Conditional — Adobe Firefly only, with IP disclosure |
By Asset Type
UI Illustration
Adobe Firefly for client deliverables. ChatGPT (DALL·E 3), Gemini, or Midjourney for exploration. Recraft for vector-native output. Known limitation: AI illustration defaults to a recognisable 'AI illustration style' — strong style prompting with a reference image counteracts this.
Icons
AI icon generation is currently unreliable for production-quality, consistent icon sets. Use established icon libraries (Lucide, Phosphor, Material Symbols) for production. AI is useful for one-off custom icons or concept exploration only.
Photography
Adobe Firefly for client-deliverable photography. Generative Fill (Adobe Firefly in Photoshop) for editing existing photography in context. Known limitation: AI-generated photographs of people have ethical and legal complexity. Avoid depicting specific real people or brands.
Motion & Video
Adobe Firefly Video for IP-safe motion assets. Runway for higher-quality or longer motion generation. Rive for production-ready UI animations. AI video generation is rapidly maturing — review policies for commercial use on each project.
The most consistently underused application of AI in design practice is the removal of non-design overhead. Every hour spent on ticket writing, documentation formatting, or meeting summarisation is an hour not spent designing.
Ticket Generation from Briefs
Convert this design brief into development-ready tickets. For each ticket:
- Title (action-verb: 'Build', 'Update', 'Fix')
- User story (As a [user], I want [action], so that [outcome])
- Acceptance criteria (bullet list, testable)
- Design reference (Figma link placeholder)
- Priority: P1 / P2 / P3

BRIEF: [paste anonymised brief]
TECH STACK: [list if known]
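The ticket prompt expects an anonymised brief. A minimal first-pass redaction sketch: the term list is supplied per project, and regex redaction is only a first pass, so always review the output by eye, since patterns miss indirect identifiers.

```python
import re

def anonymise(text: str, client_terms: list[str]) -> str:
    """Redact client-identifying strings before pasting into an AI tool.

    client_terms is the project's own list (company names, product names,
    people). This is a first pass only; review the result manually.
    """
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    for term in client_terms:
        redacted = re.sub(re.escape(term), "[CLIENT]", redacted,
                          flags=re.IGNORECASE)
    return redacted
```

List longer terms first ('Acme Pay' before 'Acme') so compound names are redacted whole rather than piecemeal.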
Handoff Documentation Without a Sprint
Generate a design handoff document for this phase:

Phase: [phase name]
Scope: [brief description of what was designed]

Include:
1. Summary of design decisions and rationale
2. Component inventory with states
3. Accessibility requirements
4. Open questions and known issues
5. Next steps for the development team

Input: [paste design notes, Figma structure, decision log]
Shared Prompt Library
The single highest-leverage admin investment: building and maintaining a shared prompt library for the team. Structure: Capability / Phase / Prompt / Example output / Last refined / Owner.
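A prompt library can start as structured records in one shared file. A minimal sketch mirroring the structure above; all field values shown are placeholders.

```python
# One record per prompt, mirroring the structure above:
# Capability / Phase / Prompt / Example output / Last refined / Owner.
PROMPT_LIBRARY = [
    {
        "capability": "Research synthesis",
        "phase": "Discovery",
        "prompt": "Synthesise these anonymised transcripts into themes...",
        "example_output": "link-to-example",  # placeholder
        "last_refined": "2025-06-01",         # placeholder date
        "owner": "designer-name",             # placeholder
    },
]

def find_prompts(library: list[dict], phase: str) -> list[dict]:
    """Return the library entries for one project phase."""
    return [p for p in library if p["phase"].lower() == phase.lower()]
```

The point is not the storage format (a shared doc works equally well) but the fields: an owner and a last-refined date are what keep a library maintained rather than abandoned.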
Designer AI Skill Stack 2025/2026
Ten skills ranked from foundational to advanced. The first five are a professional baseline by mid-2025.
| # | Skill | How to Start This Week |
|---|---|---|
| 1 | Client-Safe AI Workflow Design | Read the Confidentiality Protocol today. Apply the 5-minute anonymisation protocol on your next project. |
| 2 | Prompt Engineering for Design | Rewrite one current prompt using the role + task + format structure. Compare the output. |
| 3 | AI Research Synthesis | Use the transcript synthesis prompt on a past project's research notes. Compare AI output to your own. |
| 4 | Interactive Prototyping — Vibe Coding | Ask Claude to generate a working version of a current screen in conversation. Spend one hour on this. |
| 5 | AI-Assisted UI Copy | Use the UI copy prompt on all five microcopy types for a current component. |
| 6 | Canvas ↔ Code Literacy | Use the Claude + Figma integration on one component from a current project. Review the output with your developer. |
| 7 | UI Generation with Design System Constraints | Run the UI generation prompt with your current project's design system tokens. |
| 8 | Multi-Modal Prompting | Photograph a rough sketch. Upload to Claude with the sketch-to-variations prompt. |
| 9 | AI Asset Generation with IP Literacy | Read Adobe Firefly's commercial licensing terms. Generate a mood board for a fictional project. |
| 10 | AI Workflow Design & Prompt Library Maintenance | Create a shared document with three prompt templates from this playbook. Assign ownership to a colleague. |
30-Day Adoption Plan
Built for adoption under real client pressure. Each action fits into live work.
Week 1
Days 1–2: Apply the Confidentiality Protocol to your current project.
Days 3–4: Use the research synthesis prompt on your most recent discovery materials.
Day 5: Use the UI copy prompt template on all microcopy in a current component.
Week 2
Days 1–2: Run the discovery synthesis, HMW generation, and problem statement prompts on an active project.
Days 3–4: Build one interactive prototype using Claude (in conversation), Bolt.new, or Figma Make. Show it to a client or test participant.
Day 5: Use the component specification prompt for your next handoff. Ask the developer to rate the spec quality.
Week 3
Day 1: Hold a 60-minute team retrospective on AI usage.
Days 2–3: Refine the prompt templates that produced the best outputs. Document refined versions with example outputs.
Days 4–5: Build the team's shared prompt library. Target: 10 prompts minimum.
Week 4
Days 1–2: Create the team's formal AI workflow policy — which tools are approved, which require data review, which are restricted.
Day 3: Assign phase ownership — one designer responsible for maintaining the AI workflow for each phase.
Days 4–5: Present the framework to the wider team. Use outputs from the previous three weeks as evidence.
Quick Start
One action. One day. Immediately visible result.
Open Claude in a new conversation. Describe a screen from your current project in plain language — what it does, what the user is trying to accomplish, what elements it needs. Ask Claude to generate it as a working HTML prototype. Review the result. Send the link to one colleague.
No Figma export. No separate tool. No setup. Ask Claude to generate a working component directly in the conversation — it is the fastest path from description to something interactive that exists today.
Core AI Platforms
This section has the shortest shelf life in the document. Capabilities across all five platforms are evolving rapidly — some monthly. Review this section quarterly and update before onboarding any new designer to the framework.
Claude — Anthropic
A reasoning-first AI assistant with strong capabilities in structured writing, code generation, and design-context analysis. The only major AI assistant with a native Figma integration, allowing it to read live design files directly through the Figma API.
- Research synthesis — structured output from long briefs, transcripts, and multi-source analysis
- UI code generation — React, HTML/CSS, Tailwind with design system awareness via Figma integration
- Figma file reading — reads layer structure, component relationships, and design system values directly
- Vision analysis — upload a screenshot of a design for critique, accessibility flagging, or component identification
Claude's enterprise tier offers data isolation — conversations are not used for model training. Claude Projects maintain persistent context across sessions, useful for long client engagements.
ChatGPT / GPT-4o — OpenAI
A highly versatile multimodal AI assistant combining text, image, voice, and file analysis in a single interface.
- Multimodal input — voice-to-brief, image analysis, file uploads, all in one conversation
- Image generation (DALL·E 3) — directly in conversation without switching tools
- Canvas feature — iterative document and code editing in a persistent side panel
- Broad research and market analysis starting points
Gemini — Google
Google's AI platform with the largest context window of the major AI assistants, deep integration with Google Workspace, and NotebookLM as a dedicated research synthesis tool.
- Long-context document analysis — processes very large documents in a single pass
- Google Workspace integration — reads Google Docs, Sheets, Slides, and Drive files directly
- NotebookLM — purpose-built for research synthesis from multiple uploaded sources with citations
- Real-time web grounding — search-integrated responses for current market and competitor research
Figma AI — Figma
Native AI capabilities embedded directly in the design environment. Operates within the context of your actual design files, components, and design system.
- Design system-aware component suggestions
- Dev Mode AI annotations — generates handoff notes and code hints from design structure
- Code Connect — links Figma components to real production code snippets
- MCP Server — exposes live design file data to AI coding tools
- Git Integration — design files as branches with visual diffs in pull requests
- Figma Make — generates interactive prototypes from frames
Adobe Firefly — Adobe
Adobe's AI platform trained on licensed Adobe Stock content, providing commercial use rights that other AI image generation platforms cannot match with the same clarity.
- Commercial IP-safe image and asset generation — the only major platform with clear commercial licensing
- Generative Fill — editing existing photography in context in Photoshop
- Vector generation — generates editable vector assets
- Creative Cloud integration — generated assets land directly in Photoshop, Illustrator, and InDesign
Capability Map
| Capability | Platforms |
|---|---|
| Research synthesis & document analysis | Claude, ChatGPT, Gemini, NotebookLM |
| Long document / large context analysis | Gemini, Claude |
| UI code generation | Claude, ChatGPT, Figma AI, Pencil.dev |
| Screen layout / UI generation from prompt | Figma AI, v0 (Vercel), Google Stitch, Uizard |
| Figma file reading & design-aware code | Claude (native Figma integration), Figma AI |
| Image generation | ChatGPT (DALL·E 3), Gemini, Adobe Firefly |
| Commercial IP-safe asset generation | Adobe Firefly |
| Production code in Dev Mode (not AI-generated) | Figma Code Connect |
| Live design context in AI coding tools | Figma MCP Server |
| Design files as Git branches with visual diffs | Figma Git Integration |
| Design system-aware suggestions | Figma AI, Claude (via Figma integration) |
| Google Workspace integration | Gemini |
| Copy & content writing | Claude, ChatGPT, Gemini, Figma AI |
| Persistent project context across sessions | Claude (Projects), ChatGPT (custom GPTs) |
Specialist Tools
These tools deliver specific capabilities that the core AI platforms do not cover. Each is a current example of a capability — when a tool is superseded, find the new best example and update this section.
Research Synthesis & Repository
| Tool | Capability & Service Context |
|---|---|
| Dovetail | Dedicated research repository with AI tagging, theme extraction, and pattern surfacing. Best for teams running repeated research across client projects. Anonymise client research before uploading. |
| Maze | Automated usability testing with AI analysis. Generates test reports including task completion rates, heatmaps, and open response themes. |
| Notably | AI-powered research analysis focused on qualitative data. Sentiment analysis, theme clustering, and highlight reels from interviews. |
| Perplexity | Search-grounded AI for secondary research. Returns sourced answers with citations. Use for public market and competitor research only. |
Interactive Prototyping
| Tool | Capability & Service Context |
|---|---|
| Lovable | Generates full React applications from text prompts or design descriptions. Strong for complex interactions with multiple states and flows. Produces shareable URLs. |
| Replit | Browser-based coding and preview environment — no local installation required. Run, edit, and share prototypes from the browser. Useful in client workshops or on restricted machines. Free tier available. |
| v0 by Vercel | Generates React and Tailwind components from text prompts. Component-level output — faster and more precise than full-screen generation. Produces copyable code. Free tier available. |
| Bolt.new | Fast, browser-based full-stack prototype generation from a prompt. Strong for quickly demonstrating a concept with basic data interactions. |
| Claude (in conversation) | Ask Claude to generate a working HTML or React component in any conversation. Claude renders a live interactive preview — no account, no separate tool. Fastest path from description to something interactive. |
Canvas → Code Pipeline
| Tool | Capability & Service Context |
|---|---|
| Anima | Generates React, Vue, or HTML from Figma frames. Best for component-level output. Reads Figma's layer structure — cleaner than screenshot-based approaches. |
| Locofy | Production-oriented component generation with framework selection (React, Next.js, Vue, React Native). More developer-focused output. |
| Builder.io | Visual CMS with Figma-to-code capability. Best for marketing or content-heavy pages where the client needs to edit content post-build without developer involvement. |
| Pencil.dev | Design canvas inside VS Code and Cursor. Design files are Git-versioned .pen files stored in the repository. Uses MCP and Claude Code to convert vector designs to pixel-perfect React/HTML/CSS. Figma import via copy/paste. Currently free. |
AI in Client Codebases
| Tool | Capability & Service Context |
|---|---|
| Cursor | VS Code fork with deeply integrated AI. Agent mode handles multi-file, scoped tasks. Sends code to AI providers — verify data retention policy against client confidentiality requirements. |
| Windsurf (Codeium) | AI coding tool with strong agent capabilities for contained, scoped tasks. Enterprise data policy available — verify before client codebase use. |
| GitHub Copilot | Strong choice when the client's development team already uses it — meets existing IT policy approvals. GitHub Enterprise data terms apply to Enterprise accounts. |
UI-Specific Image Generation
| Tool | Capability & Service Context |
|---|---|
| Google Stitch | AI-native UI screen generator, formerly Galileo AI, now a free Google Labs tool running on Gemini models. Generates high-fidelity UI mockups from text prompts with Figma export. Best for rapid visual direction exploration. |
| Recraft | Vector-native AI image generation. Produces SVG-compatible output suited for UI illustration and icon-adjacent work. Commercial licensing — verify current terms. |
Design System Documentation
| Tool | Capability & Service Context |
|---|---|
| Zeroheight | Design system documentation platform. AI capabilities assist in generating component documentation from Figma specs. Living docs update when designs change. |
| Supernova | Design token pipeline with AI-assisted documentation. Syncs tokens from Figma to code. Best for design systems with active token management. |
| Tokens Studio | Design token management within Figma. Exports tokens to JSON for developer handoff. Essential infrastructure for AI-assisted token-to-code workflows. |
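The token-to-code step in the last row can be sketched directly. A minimal flattening pass, assuming a simplified `{group: {name: {"value": ...}}}` shape; the real Tokens Studio export carries more metadata (types, themes) that a production pipeline would preserve.

```python
def tokens_to_css(tokens: dict, prefix: str = "") -> list[str]:
    """Flatten a nested token dict into CSS custom-property lines.

    Assumes a simplified {group: {name: {"value": ...}}} shape; the real
    Tokens Studio export includes additional metadata per token.
    """
    lines = []
    for key, node in tokens.items():
        name = f"{prefix}-{key}" if prefix else key
        if isinstance(node, dict) and "value" in node:
            lines.append(f"--{name}: {node['value']};")
        elif isinstance(node, dict):
            lines.extend(tokens_to_css(node, name))
    return lines
```

For example, `tokens_to_css({"color": {"primary": {"value": "#0055ff"}}})` yields `["--color-primary: #0055ff;"]`, ready to paste into a `:root` block.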
Accessibility
| Tool | Capability & Service Context |
|---|---|
| Stark (Figma plugin) | WCAG contrast checking, colour blindness simulation, and focus order annotation within Figma. Essential for client deliverables. |
| axe DevTools | Browser extension for accessibility auditing of live pages. Use after prototype or staging deployment to check AI-generated code. |
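The contrast check these tools run comes straight from the WCAG 2.x formula, which is simple enough to reproduce for spot-checking AI-generated palettes before a full audit. A sketch of that formula:

```python
def relative_luminance(hex_colour: str) -> float:
    """WCAG 2.x relative luminance of an sRGB hex colour like '#1a2b3c'."""
    def channel(c: int) -> float:
        s = c / 255
        # Linearise the sRGB channel per the WCAG definition.
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    h = hex_colour.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio; AA requires >= 4.5 for normal text."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Black on white gives the maximum ratio of 21.0; mid-grey `#777777` on white lands just under the 4.5:1 AA threshold, which is exactly the kind of near-miss AI-generated colour choices tend to produce.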
Post-Launch Analytics
| Tool | Capability & Service Context |
|---|---|
| Hotjar AI | AI-assisted heatmap interpretation and session replay analysis. Surfaces patterns without manual review of every recording. |
| Microsoft Clarity AI | Free session recording and heatmap tool with AI-generated insights. Good for budget-constrained client projects. |
| Maze AI | Automated usability test analysis with AI-generated insight summaries. Used post-launch for lightweight unmoderated testing. |
Presentation Creation
| Tool | Capability & Service Context |
|---|---|
| Gamma | AI-native presentation builder. Generates full slide decks from a text brief or document. Best for internal presentations and early concept decks — not for polished final client deliverables. Review all content before sharing. |