frontend/critique/reference/cognitive-load.md (new file, 106 lines)
# Cognitive Load Assessment

Cognitive load is the total mental effort required to use an interface. Overloaded users make mistakes, get frustrated, and leave. This reference helps identify and fix cognitive overload.

---

## Three Types of Cognitive Load

### Intrinsic Load — The Task Itself

Complexity inherent to what the user is trying to do. You can't eliminate this, but you can structure it.

**Manage it by**:

- Breaking complex tasks into discrete steps
- Providing scaffolding (templates, defaults, examples)
- Progressive disclosure — show what's needed now, hide the rest
- Grouping related decisions together

### Extraneous Load — Bad Design

Mental effort caused by poor design choices. **Eliminate this ruthlessly** — it's pure waste.

**Common sources**:

- Confusing navigation that requires mental mapping
- Unclear labels that force users to guess meaning
- Visual clutter competing for attention
- Inconsistent patterns that prevent learning
- Unnecessary steps between user intent and result

### Germane Load — Learning Effort

Mental effort spent building understanding. This is *good* cognitive load — it leads to mastery.

**Support it by**:

- Progressive disclosure that reveals complexity gradually
- Consistent patterns that reward learning
- Feedback that confirms correct understanding
- Onboarding that teaches through action, not walls of text

---

## Cognitive Load Checklist

Evaluate the interface against these 8 items:

- [ ] **Single focus**: Can the user complete their primary task without distraction from competing elements?
- [ ] **Chunking**: Is information presented in digestible groups (≤4 items per group)?
- [ ] **Grouping**: Are related items visually grouped together (proximity, borders, shared background)?
- [ ] **Visual hierarchy**: Is it immediately clear what's most important on the screen?
- [ ] **One thing at a time**: Can the user focus on a single decision before moving to the next?
- [ ] **Minimal choices**: Are decisions simplified (≤4 visible options at any decision point)?
- [ ] **Working memory**: Can the user act on the current screen without having to remember information from a previous one?
- [ ] **Progressive disclosure**: Is complexity revealed only when the user needs it?

**Scoring**: Count the failed items. 0–1 failures = low cognitive load (good). 2–3 = moderate (address soon). 4+ = high cognitive load (critical fix needed).
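The scoring rule can be sketched as a small helper. This is an illustrative sketch, not an existing API; the type and function names are our assumptions:

```typescript
// Hypothetical helper: map the number of failed checklist items
// to a cognitive-load rating, per the scoring rule above.
type LoadRating = "low" | "moderate" | "high";

function rateCognitiveLoad(failedItems: number): LoadRating {
  if (failedItems <= 1) return "low";      // 0-1 failures: good
  if (failedItems <= 3) return "moderate"; // 2-3: address soon
  return "high";                           // 4+: critical fix needed
}
```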
---

## The Working Memory Rule

**Humans can hold ≤4 items in working memory at once** (Miller's Law, revised by Cowan, 2001).

At any decision point, count the number of distinct options, actions, or pieces of information a user must simultaneously consider:

- **≤4 items**: Within working memory limits — manageable
- **5–7 items**: Pushing the boundary — consider grouping or progressive disclosure
- **8+ items**: Overloaded — users will skip, misclick, or abandon

**Practical applications**:

- Navigation menus: ≤5 top-level items (group the rest under clear categories)
- Form sections: ≤4 fields visible per group before a visual break
- Action buttons: 1 primary, 1–2 secondary, group the rest in a menu
- Dashboard widgets: ≤4 key metrics visible without scrolling
- Pricing tiers: ≤3 options (more causes analysis paralysis)
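The navigation guideline can be enforced mechanically. A hedged sketch, assuming a minimal `NavItem` shape and function name of our own invention: cap the visible top-level items and group the overflow under one catch-all entry.

```typescript
// Illustrative sketch: keep at most `max` top-level nav items by
// replacing the overflow with a single grouped entry (e.g. "More").
interface NavItem {
  label: string;
}

function limitTopLevel(
  items: NavItem[],
  max = 5,
  overflowLabel = "More"
): NavItem[] {
  if (items.length <= max) return items; // already within the guideline
  // Show the first max - 1 items and tuck the rest under one entry.
  return [...items.slice(0, max - 1), { label: overflowLabel }];
}
```

In a real menu the overflow entry would open a submenu containing the hidden items; this sketch only shows the counting rule.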
---

## Common Cognitive Load Violations

### 1. The Wall of Options

**Problem**: Presenting 10+ choices at once with no hierarchy.
**Fix**: Group into categories, highlight a recommended option, use progressive disclosure.

### 2. The Memory Bridge

**Problem**: The user must remember info from step 1 to complete step 3.
**Fix**: Keep relevant context visible, or repeat it where it's needed.

### 3. The Hidden Navigation

**Problem**: The user must build a mental map of where things are.
**Fix**: Always show current location (breadcrumbs, active states, progress indicators).

### 4. The Jargon Barrier

**Problem**: Technical or domain language forces translation effort.
**Fix**: Use plain language. If domain terms are unavoidable, define them inline.

### 5. The Visual Noise Floor

**Problem**: Every element has the same visual weight — nothing stands out.
**Fix**: Establish a clear hierarchy: one primary element, 2–3 secondary, everything else muted.

### 6. The Inconsistent Pattern

**Problem**: Similar actions work differently in different places.
**Fix**: Standardize interaction patterns. Same type of action = same type of UI.

### 7. The Multi-Task Demand

**Problem**: The interface requires processing multiple simultaneous inputs (reading + deciding + navigating).
**Fix**: Sequence the steps. Let the user do one thing at a time.

### 8. The Context Switch

**Problem**: The user must jump between screens/tabs/modals to gather info for a single decision.
**Fix**: Co-locate the information needed for each decision. Reduce back-and-forth.
frontend/critique/reference/heuristics-scoring.md (new file, 234 lines)
# Heuristics Scoring Guide

Score each of Nielsen's 10 Usability Heuristics on a 0–4 scale. Be honest — a 4 means genuinely excellent, not "good enough."

## Nielsen's 10 Heuristics

### 1. Visibility of System Status

Keep users informed about what's happening through timely, appropriate feedback.

**Check for**:

- Loading indicators during async operations
- Confirmation of user actions (save, submit, delete)
- Progress indicators for multi-step processes
- Current location in navigation (breadcrumbs, active states)
- Form validation feedback (inline, not just on submit)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | No feedback — user is guessing what happened |
| 1 | Rare feedback — most actions produce no visible response |
| 2 | Partial — some states communicated, major gaps remain |
| 3 | Good — most operations give clear feedback, minor gaps |
| 4 | Excellent — every action confirms, progress is always visible |

### 2. Match Between System and Real World

Speak the user's language. Follow real-world conventions. Information appears in natural, logical order.

**Check for**:

- Familiar terminology (no unexplained jargon)
- Logical information order matching user expectations
- Recognizable icons and metaphors
- Domain-appropriate language for the target audience
- Natural reading flow (left-to-right, top-to-bottom priority)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Pure tech jargon, alien to users |
| 1 | Mostly confusing — requires domain expertise to navigate |
| 2 | Mixed — some plain language, some jargon leaks through |
| 3 | Mostly natural — occasional term needs context |
| 4 | Speaks the user's language fluently throughout |

### 3. User Control and Freedom

Users need a clear "emergency exit" from unwanted states without extended dialogue.

**Check for**:

- Undo/redo functionality
- Cancel buttons on forms and modals
- Clear navigation back to safety (home, previous)
- Easy way to clear filters, search, selections
- Escape from long or multi-step processes

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Users get trapped — no way out without refreshing |
| 1 | Difficult exits — must find obscure paths to escape |
| 2 | Some exits — main flows have escape, edge cases don't |
| 3 | Good control — users can exit and undo most actions |
| 4 | Full control — undo, cancel, back, and escape everywhere |

### 4. Consistency and Standards

Users shouldn't wonder whether different words, situations, or actions mean the same thing.

**Check for**:

- Consistent terminology throughout the interface
- Same actions produce same results everywhere
- Platform conventions followed (standard UI patterns)
- Visual consistency (colors, typography, spacing, components)
- Consistent interaction patterns (same gesture = same behavior)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Inconsistent everywhere — feels like different products stitched together |
| 1 | Many inconsistencies — similar things look/behave differently |
| 2 | Partially consistent — main flows match, details diverge |
| 3 | Mostly consistent — occasional deviation, nothing confusing |
| 4 | Fully consistent — cohesive system, predictable behavior |

### 5. Error Prevention

Better than good error messages is a design that prevents problems in the first place.

**Check for**:

- Confirmation before destructive actions (delete, overwrite)
- Constraints preventing invalid input (date pickers, dropdowns)
- Smart defaults that reduce errors
- Clear labels that prevent misunderstanding
- Autosave and draft recovery

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Errors easy to make — no guardrails anywhere |
| 1 | Few safeguards — some inputs validated, most aren't |
| 2 | Partial prevention — common errors caught, edge cases slip |
| 3 | Good prevention — most error paths blocked proactively |
| 4 | Excellent — errors nearly impossible through smart constraints |

### 6. Recognition Rather Than Recall

Minimize memory load. Make objects, actions, and options visible or easily retrievable.

**Check for**:

- Visible options (not buried in hidden menus)
- Contextual help when needed (tooltips, inline hints)
- Recent items and history
- Autocomplete and suggestions
- Labels on icons (not icon-only navigation)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Heavy memorization — users must remember paths and commands |
| 1 | Mostly recall — many hidden features, few visible cues |
| 2 | Some aids — main actions visible, secondary features hidden |
| 3 | Good recognition — most things discoverable, few memory demands |
| 4 | Everything discoverable — users never need to memorize |

### 7. Flexibility and Efficiency of Use

Accelerators — invisible to novices — speed up expert interaction.

**Check for**:

- Keyboard shortcuts for common actions
- Customizable interface elements
- Recent items and favorites
- Bulk/batch actions
- Power user features that don't complicate the basics

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | One rigid path — no shortcuts or alternatives |
| 1 | Limited flexibility — few alternatives to the main path |
| 2 | Some shortcuts — basic keyboard support, limited bulk actions |
| 3 | Good accelerators — keyboard nav, some customization |
| 4 | Highly flexible — multiple paths, power features, customizable |

### 8. Aesthetic and Minimalist Design

Interfaces should not contain irrelevant or rarely needed information. Every element should serve a purpose.

**Check for**:

- Only necessary information visible at each step
- Clear visual hierarchy directing attention
- Purposeful use of color and emphasis
- No decorative clutter competing for attention
- Focused, uncluttered layouts

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Overwhelming — everything competes for attention equally |
| 1 | Cluttered — too much noise, hard to find what matters |
| 2 | Some clutter — main content clear, periphery noisy |
| 3 | Mostly clean — focused design, minor visual noise |
| 4 | Perfectly minimal — every element earns its pixel |

### 9. Help Users Recognize, Diagnose, and Recover from Errors

Error messages should use plain language, precisely indicate the problem, and constructively suggest a solution.

**Check for**:

- Plain language error messages (no error codes for users)
- Specific problem identification ("Email is missing @" not "Invalid input")
- Actionable recovery suggestions
- Errors displayed near the source of the problem
- Non-blocking error handling (don't wipe the form)

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | Cryptic errors — codes, jargon, or no message at all |
| 1 | Vague errors — "Something went wrong" with no guidance |
| 2 | Clear but unhelpful — names the problem but not the fix |
| 3 | Clear with suggestions — identifies problem and offers next steps |
| 4 | Perfect recovery — pinpoints issue, suggests fix, preserves user work |

### 10. Help and Documentation

Even if the system is usable without docs, help should be easy to find, task-focused, and concise.

**Check for**:

- Searchable help or documentation
- Contextual help (tooltips, inline hints, guided tours)
- Task-focused organization (not feature-organized)
- Concise, scannable content
- Easy access without leaving current context

**Scoring**:

| Score | Criteria |
|-------|----------|
| 0 | No help available anywhere |
| 1 | Help exists but hard to find or irrelevant |
| 2 | Basic help — FAQ or docs exist, not contextual |
| 3 | Good documentation — searchable, mostly task-focused |
| 4 | Excellent contextual help — right info at the right moment |

---

## Score Summary

**Total possible**: 40 points (10 heuristics × 4 max)

| Score Range | Rating | What It Means |
|-------------|--------|---------------|
| 36–40 | Excellent | Minor polish only — ship it |
| 28–35 | Good | Address weak areas, solid foundation |
| 20–27 | Acceptable | Significant improvements needed before users are happy |
| 12–19 | Poor | Major UX overhaul required — core experience broken |
| 0–11 | Critical | Redesign needed — unusable in current state |
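These score bands reduce to a simple threshold lookup. A minimal sketch, with a function name of our own choosing:

```typescript
// Hypothetical helper: map a 0-40 heuristics total to its rating band.
function rateHeuristicsTotal(total: number): string {
  if (total >= 36) return "Excellent";  // minor polish only
  if (total >= 28) return "Good";       // address weak areas
  if (total >= 20) return "Acceptable"; // significant improvements needed
  if (total >= 12) return "Poor";       // major UX overhaul required
  return "Critical";                    // redesign needed
}
```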
---

## Issue Severity (P0–P3)

Tag each individual issue found during scoring with a priority level:

| Priority | Name | Description | Action |
|----------|------|-------------|--------|
| **P0** | Blocking | Prevents task completion entirely | Fix immediately — this is a showstopper |
| **P1** | Major | Causes significant difficulty or confusion | Fix before release |
| **P2** | Minor | Annoyance, but workaround exists | Fix in next pass |
| **P3** | Polish | Nice-to-fix, no real user impact | Fix if time permits |

**Tip**: If you're unsure between two levels, ask: "Would a user contact support about this?" If yes, it's at least P1.
frontend/critique/reference/personas.md (new file, 178 lines)
# Persona-Based Design Testing

Test the interface through the eyes of 5 distinct user archetypes. Each persona exposes different failure modes that a single "design director" perspective would miss.

**How to use**: Select 2–3 personas most relevant to the interface being critiqued. Walk through the primary user action as each persona. Report specific red flags — not generic concerns.

---

## 1. Impatient Power User — "Alex"

**Profile**: Expert with similar products. Expects efficiency, hates hand-holding. Will find shortcuts or leave.

**Behaviors**:

- Skips all onboarding and instructions
- Looks for keyboard shortcuts immediately
- Tries to bulk-select, batch-edit, and automate
- Gets frustrated by required steps that feel unnecessary
- Abandons if anything feels slow or patronizing

**Test Questions**:

- Can Alex complete the core task in under 60 seconds?
- Are there keyboard shortcuts for common actions?
- Can onboarding be skipped entirely?
- Do modals have keyboard dismiss (Esc)?
- Is there a "power user" path (shortcuts, bulk actions)?

**Red Flags** (report these specifically):

- Forced tutorials or unskippable onboarding
- No keyboard navigation for primary actions
- Slow animations that can't be skipped
- One-item-at-a-time workflows where batch would be natural
- Redundant confirmation steps for low-risk actions
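The keyboard expectations above are cheap to support with a small dispatcher. A hedged sketch, with names that are our assumptions rather than an existing API:

```typescript
// Illustrative sketch: route a key press to a registered shortcut handler.
// Returns true when a shortcut handled the key, false otherwise.
function dispatchShortcut(
  key: string,
  shortcuts: Record<string, () => void>
): boolean {
  const handler = shortcuts[key];
  if (!handler) return false;
  handler();
  return true;
}
```

In the browser this would be wired to a keydown listener, e.g. `document.addEventListener("keydown", (e) => dispatchShortcut(e.key, { Escape: closeModal }))`, where `closeModal` is an assumed app function.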
---

## 2. Confused First-Timer — "Jordan"

**Profile**: Never used this type of product. Needs guidance at every step. Will abandon rather than figure it out.

**Behaviors**:

- Reads all instructions carefully
- Hesitates before clicking anything unfamiliar
- Looks for help or support constantly
- Misunderstands jargon and abbreviations
- Takes the most literal interpretation of any label

**Test Questions**:

- Is the first action obviously clear within 5 seconds?
- Are all icons labeled with text?
- Is there contextual help at decision points?
- Does terminology assume prior knowledge?
- Is there a clear "back" or "undo" at every step?

**Red Flags** (report these specifically):

- Icon-only navigation with no labels
- Technical jargon without explanation
- No visible help option or guidance
- Ambiguous next steps after completing an action
- No confirmation that an action succeeded

---

## 3. Accessibility-Dependent User — "Sam"

**Profile**: Uses a screen reader (VoiceOver/NVDA) and keyboard-only navigation. May have low vision, motor impairment, or cognitive differences.

**Behaviors**:

- Tabs through the interface linearly
- Relies on ARIA labels and heading structure
- Cannot see hover states or visual-only indicators
- Needs adequate color contrast (4.5:1 minimum)
- May use browser zoom up to 200%

**Test Questions**:

- Can the entire primary flow be completed keyboard-only?
- Are all interactive elements focusable with visible focus indicators?
- Do images have meaningful alt text?
- Is color contrast WCAG AA compliant (4.5:1 for text)?
- Does the screen reader announce state changes (loading, success, errors)?

**Red Flags** (report these specifically):

- Click-only interactions with no keyboard alternative
- Missing or invisible focus indicators
- Meaning conveyed by color alone (red = error, green = success)
- Unlabeled form fields or buttons
- Time-limited actions without extension option
- Custom components that break screen reader flow
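Sam's 4.5:1 minimum is the WCAG AA threshold for normal-size text, and it is mechanically checkable. The sketch below follows the WCAG 2.x relative-luminance and contrast-ratio definitions; the function names are ours:

```typescript
// WCAG 2.x relative luminance of an sRGB color (channels 0-255).
function relativeLuminance(r: number, g: number, b: number): number {
  // Linearize each channel per the WCAG definition.
  const lin = (c: number): number => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio (lighter + 0.05) / (darker + 0.05); ranges 1 to 21.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}
```

The ratio is symmetric in foreground and background; black on white (or white on black) scores the maximum 21:1.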
---

## 4. Deliberate Stress Tester — "Riley"

**Profile**: Methodical user who pushes interfaces beyond the happy path. Tests edge cases, tries unexpected inputs, and probes for gaps in the experience.

**Behaviors**:

- Tests edge cases intentionally (empty states, long strings, special characters)
- Submits forms with unexpected data (emoji, RTL text, very long values)
- Tries to break workflows by navigating backwards, refreshing mid-flow, or opening in multiple tabs
- Looks for inconsistencies between what the UI promises and what actually happens
- Documents problems methodically

**Test Questions**:

- What happens at the edges (0 items, 1000 items, very long text)?
- Do error states recover gracefully or leave the UI in a broken state?
- What happens on refresh mid-workflow? Is state preserved?
- Are there features that appear to work but produce broken results?
- How does the UI handle unexpected input (emoji, special chars, paste from Excel)?

**Red Flags** (report these specifically):

- Features that appear to work but silently fail or produce wrong results
- Error handling that exposes technical details or leaves the UI in a broken state
- Empty states that show nothing useful ("No results" with no guidance)
- Workflows that lose user data on refresh or navigation
- Inconsistent behavior between similar interactions in different parts of the UI

---

## 5. Distracted Mobile User — "Casey"

**Profile**: Using a phone one-handed on the go. Frequently interrupted. Possibly on a slow connection.

**Behaviors**:

- Uses thumb only — prefers bottom-of-screen actions
- Gets interrupted mid-flow and returns later
- Switches between apps frequently
- Has limited attention span and low patience
- Types as little as possible, prefers taps and selections

**Test Questions**:

- Are primary actions in the thumb zone (bottom half of screen)?
- Is state preserved if the user leaves and returns?
- Does it work on slow connections (3G)?
- Can forms leverage autocomplete and smart defaults?
- Are touch targets at least 44×44pt?

**Red Flags** (report these specifically):

- Important actions positioned at the top of the screen (unreachable by thumb)
- No state persistence — progress lost on tab switch or interruption
- Large text inputs required where selection would work
- Heavy assets loading on every page (no lazy loading)
- Tiny tap targets or targets too close together

---

## Selecting Personas

Choose personas based on the interface type:

| Interface Type | Primary Personas | Why |
|----------------|------------------|-----|
| Landing page / marketing | Jordan, Riley, Casey | First impressions, trust, mobile |
| Dashboard / admin | Alex, Sam | Power users, accessibility |
| E-commerce / checkout | Casey, Riley, Jordan | Mobile, edge cases, clarity |
| Onboarding flow | Jordan, Casey | Confusion, interruption |
| Data-heavy / analytics | Alex, Sam | Efficiency, keyboard nav |
| Form-heavy / wizard | Jordan, Sam, Casey | Clarity, accessibility, mobile |
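For tooling that automates persona selection, the mapping lends itself to a simple lookup. The keys and structure below are an illustrative sketch, not an existing config:

```typescript
// Illustrative data structure mirroring the persona-selection table.
const personasByInterface: Record<string, string[]> = {
  "landing page / marketing": ["Jordan", "Riley", "Casey"],
  "dashboard / admin": ["Alex", "Sam"],
  "e-commerce / checkout": ["Casey", "Riley", "Jordan"],
  "onboarding flow": ["Jordan", "Casey"],
  "data-heavy / analytics": ["Alex", "Sam"],
  "form-heavy / wizard": ["Jordan", "Sam", "Casey"],
};
```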
---

## Project-Specific Personas

If `AGENTS.md` contains a `## Design Context` section (generated by `teach-impeccable`), derive 1–2 additional personas from the audience and brand information:

1. Read the target audience description
2. Identify the primary user archetype not covered by the 5 predefined personas
3. Create a persona following this template:

```
### [Role] — "[Name]"

**Profile**: [2-3 key characteristics derived from Design Context]

**Behaviors**: [3-4 specific behaviors based on the described audience]

**Red Flags**: [3-4 things that would alienate this specific user type]
```

Only generate project-specific personas when real Design Context data is available. Don't invent audience details — use the 5 predefined personas when no context exists.