199
.pi/agent/skills/frontend/critique/SKILL.md
Normal file
@@ -0,0 +1,199 @@
---
name: critique
description: Evaluate design from a UX perspective, assessing visual hierarchy, information architecture, emotional resonance, cognitive load, and overall quality with quantitative scoring, persona-based testing, and actionable feedback. Use when the user asks to review, critique, evaluate, or give feedback on a design or component.
---

## MANDATORY PREPARATION

Invoke /frontend-design — it contains design principles, anti-patterns, and the **Context Gathering Protocol**. Follow the protocol before proceeding — if no design context exists yet, you MUST run /teach-impeccable first. Additionally gather: what the interface is trying to accomplish.

---

Conduct a holistic design critique, evaluating whether the interface actually works — not just technically, but as a designed experience. Think like a design director giving feedback.

## Phase 1: Design Critique

Evaluate the interface across these dimensions:

### 1. AI Slop Detection (CRITICAL)

**This is the most important check.** Does this look like every other AI-generated interface from 2024-2025?

Review the design against ALL the **DON'T** guidelines in the frontend-design skill — they are the fingerprints of AI-generated work. Check for the AI color palette, gradient text, dark mode with glowing accents, glassmorphism, hero metric layouts, identical card grids, generic fonts, and all other tells.

**The test**: If you showed this to someone and said "AI made this," would they believe you immediately? If yes, that's the problem.

### 2. Visual Hierarchy
- Does the eye flow to the most important element first?
- Is there a clear primary action? Can you spot it in 2 seconds?
- Do size, color, and position communicate importance correctly?
- Is there visual competition between elements that should have different weights?

### 3. Information Architecture & Cognitive Load
> *Consult [cognitive-load](reference/cognitive-load.md) for the working memory rule and 8-item checklist*
- Is the structure intuitive? Would a new user understand the organization?
- Is related content grouped logically?
- Are there too many choices at once? Count visible options at each decision point — if >4, flag it
- Is the navigation clear and predictable?
- **Progressive disclosure**: Is complexity revealed only when needed, or dumped on the user upfront?
- **Run the 8-item cognitive load checklist** from the reference. Report failure count: 0–1 = low (good), 2–3 = moderate, 4+ = critical.

### 4. Emotional Journey
- What emotion does this interface evoke? Is that intentional?
- Does it match the brand personality?
- Does it feel trustworthy, approachable, premium, playful — whatever it should feel?
- Would the target user feel "this is for me"?
- **Peak-end rule**: Is the most intense moment positive? Does the experience end well (confirmation, celebration, clear next step)?
- **Emotional valleys**: Check for onboarding frustration, error cliffs, feature discovery gaps, or anxiety spikes at high-stakes moments (payment, delete, commit)
- **Interventions at negative moments**: Are there design interventions where users are likely to feel frustrated or anxious? (progress indicators, reassurance copy, undo options, social proof)

### 5. Discoverability & Affordance
- Are interactive elements obviously interactive?
- Would a user know what to do without instructions?
- Are hover/focus states providing useful feedback?
- Are there hidden features that should be more visible?

### 6. Composition & Balance
- Does the layout feel balanced or uncomfortably weighted?
- Is whitespace used intentionally or just leftover?
- Is there visual rhythm in spacing and repetition?
- Does asymmetry feel designed or accidental?

### 7. Typography as Communication
- Does the type hierarchy clearly signal what to read first, second, third?
- Is body text comfortable to read? (line length, spacing, size)
- Do font choices reinforce the brand/tone?
- Is there enough contrast between heading levels?

### 8. Color with Purpose
- Is color used to communicate, not just decorate?
- Does the palette feel cohesive?
- Are accent colors drawing attention to the right things?
- Does it work for colorblind users? (not just technically — does meaning still come through?)

### 9. States & Edge Cases
- Empty states: Do they guide users toward action, or just say "nothing here"?
- Loading states: Do they reduce perceived wait time?
- Error states: Are they helpful and non-blaming?
- Success states: Do they confirm and guide next steps?

### 10. Microcopy & Voice
- Is the writing clear and concise?
- Does it sound like a human (the right human for this brand)?
- Are labels and buttons unambiguous?
- Does error copy help users fix the problem?

## Phase 2: Present Findings

Structure your feedback as a design director would:

### Design Health Score
> *Consult [heuristics-scoring](reference/heuristics-scoring.md)*

Score each of Nielsen's 10 heuristics 0–4. Present as a table:

| # | Heuristic | Score | Key Issue |
|---|-----------|-------|-----------|
| 1 | Visibility of System Status | ? | [specific finding or "—" if solid] |
| 2 | Match System / Real World | ? | |
| 3 | User Control and Freedom | ? | |
| 4 | Consistency and Standards | ? | |
| 5 | Error Prevention | ? | |
| 6 | Recognition Rather Than Recall | ? | |
| 7 | Flexibility and Efficiency | ? | |
| 8 | Aesthetic and Minimalist Design | ? | |
| 9 | Error Recovery | ? | |
| 10 | Help and Documentation | ? | |
| **Total** | | **??/40** | **[Rating band]** |

Be honest with scores. A 4 means genuinely excellent. Most real interfaces score 20–32.

### Anti-Patterns Verdict
**Start here.** Pass/fail: Does this look AI-generated? List specific tells from the skill's Anti-Patterns section. Be brutally honest.

### Overall Impression
A brief gut reaction — what works, what doesn't, and the single biggest opportunity.

### What's Working
Highlight 2–3 things done well. Be specific about why they work.

### Priority Issues
The 3–5 most impactful design problems, ordered by importance.

For each issue, tag with **P0–P3 severity** (consult [heuristics-scoring](reference/heuristics-scoring.md) for severity definitions):
- **[P?] What**: Name the problem clearly
- **Why it matters**: How this hurts users or undermines goals
- **Fix**: What to do about it (be concrete)
- **Suggested command**: Which command could address this (from: /animate, /quieter, /optimize, /adapt, /clarify, /distill, /delight, /onboard, /normalize, /audit, /harden, /polish, /extract, /bolder, /arrange, /typeset, /critique, /colorize, /overdrive)

### Persona Red Flags
> *Consult [personas](reference/personas.md)*

Auto-select 2–3 personas most relevant to this interface type (use the selection table in the reference). If `AGENTS.md` contains a `## Design Context` section from `teach-impeccable`, also generate 1–2 project-specific personas from the audience/brand info.

For each selected persona, walk through the primary user action and list specific red flags found:

**Alex (Power User)**: No keyboard shortcuts detected. Form requires 8 clicks for primary action. Forced modal onboarding. ⚠️ High abandonment risk.

**Jordan (First-Timer)**: Icon-only nav in sidebar. Technical jargon in error messages ("404 Not Found"). No visible help. ⚠️ Will abandon at step 2.

Be specific — name the exact elements and interactions that fail each persona. Don't write generic persona descriptions; write what broke for them.

### Minor Observations
Quick notes on smaller issues worth addressing.

**Remember**:
- Be direct — vague feedback wastes everyone's time
- Be specific — "the submit button" not "some elements"
- Say what's wrong AND why it matters to users
- Give concrete suggestions, not just "consider exploring..."
- Prioritize ruthlessly — if everything is important, nothing is
- Don't soften criticism — developers need honest feedback to ship great design

## Phase 3: Ask the User

**After presenting findings**, use targeted questions based on what was actually found, asking the user directly to clarify what you cannot infer. These answers will shape the action plan.

Ask questions along these lines (adapt to the specific findings — do NOT ask generic questions):

1. **Priority direction**: Based on the issues found, ask which category matters most to the user right now. For example: "I found problems with visual hierarchy, color usage, and information overload. Which area should we tackle first?" Offer the top 2–3 issue categories as options.

2. **Design intent**: If the critique found a tonal mismatch, ask whether it was intentional. For example: "The interface feels clinical and corporate. Is that the intended tone, or should it feel warmer/bolder/more playful?" Offer 2–3 tonal directions as options based on what would fix the issues found.

3. **Scope**: Ask how much the user wants to take on. For example: "I found N issues. Want to address everything, or focus on the top 3?" Offer scope options like "Top 3 only", "All issues", "Critical issues only".

4. **Constraints** (optional — only ask if relevant): If the findings touch many areas, ask if anything is off-limits. For example: "Should any sections stay as-is?" This prevents the plan from touching things the user considers done.

**Rules for questions**:
- Every question must reference specific findings from Phase 2 — never ask generic "who is your audience?" questions
- Keep it to 2–4 questions maximum — respect the user's time
- Offer concrete options, not open-ended prompts
- If findings are straightforward (e.g., only 1–2 clear issues), skip questions and go directly to Phase 4

## Phase 4: Recommended Actions

**After receiving the user's answers**, present a prioritized action summary reflecting the user's priorities and scope from Phase 3.

### Action Summary

List recommended commands in priority order, based on the user's answers:

1. **`/command-name`** — Brief description of what to fix (specific context from critique findings)
2. **`/command-name`** — Brief description (specific context)
...

**Rules for recommendations**:
- Only recommend commands from: /animate, /quieter, /optimize, /adapt, /clarify, /distill, /delight, /onboard, /normalize, /audit, /harden, /polish, /extract, /bolder, /arrange, /typeset, /critique, /colorize, /overdrive
- Order by the user's stated priorities first, then by impact
- Each item's description should carry enough context that the command knows what to focus on
- Map each Priority Issue to the appropriate command
- Skip commands that would address zero issues
- If the user chose a limited scope, only include items within that scope
- If the user marked areas as off-limits, exclude commands that would touch those areas
- End with `/polish` as the final step if any fixes were recommended

After presenting the summary, tell the user:

> You can ask me to run these one at a time, all at once, or in any order you prefer.
>
> Re-run `/critique` after fixes to see your score improve.
106
.pi/agent/skills/frontend/critique/reference/cognitive-load.md
Normal file
@@ -0,0 +1,106 @@
# Cognitive Load Assessment

Cognitive load is the total mental effort required to use an interface. Overloaded users make mistakes, get frustrated, and leave. This reference helps identify and fix cognitive overload.

---

## Three Types of Cognitive Load

### Intrinsic Load — The Task Itself
Complexity inherent to what the user is trying to do. You can't eliminate this, but you can structure it.

**Manage it by**:
- Breaking complex tasks into discrete steps
- Providing scaffolding (templates, defaults, examples)
- Progressive disclosure — show what's needed now, hide the rest
- Grouping related decisions together

### Extraneous Load — Bad Design
Mental effort caused by poor design choices. **Eliminate this ruthlessly** — it's pure waste.

**Common sources**:
- Confusing navigation that requires mental mapping
- Unclear labels that force users to guess meaning
- Visual clutter competing for attention
- Inconsistent patterns that prevent learning
- Unnecessary steps between user intent and result

### Germane Load — Learning Effort
Mental effort spent building understanding. This is *good* cognitive load — it leads to mastery.

**Support it by**:
- Progressive disclosure that reveals complexity gradually
- Consistent patterns that reward learning
- Feedback that confirms correct understanding
- Onboarding that teaches through action, not walls of text

---

## Cognitive Load Checklist

Evaluate the interface against these 8 items:

- [ ] **Single focus**: Can the user complete their primary task without distraction from competing elements?
- [ ] **Chunking**: Is information presented in digestible groups (≤4 items per group)?
- [ ] **Grouping**: Are related items visually grouped together (proximity, borders, shared background)?
- [ ] **Visual hierarchy**: Is it immediately clear what's most important on the screen?
- [ ] **One thing at a time**: Can the user focus on a single decision before moving to the next?
- [ ] **Minimal choices**: Are decisions simplified (≤4 visible options at any decision point)?
- [ ] **Working memory**: Can the user act on the current screen without remembering information from a previous one?
- [ ] **Progressive disclosure**: Is complexity revealed only when the user needs it?

**Scoring**: Count the failed items. 0–1 failures = low cognitive load (good). 2–3 = moderate (address soon). 4+ = high cognitive load (critical fix needed).
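
The failure-count thresholds above are a simple lookup; a minimal sketch (the function name is illustrative, not part of any existing tooling):

```python
# Map the number of failed checklist items (out of 8) to a load level.
def cognitive_load_level(failed_items: int) -> str:
    if not 0 <= failed_items <= 8:
        raise ValueError("checklist has 8 items")
    if failed_items <= 1:
        return "low"        # good — ship-ready on this dimension
    if failed_items <= 3:
        return "moderate"   # address soon
    return "critical"       # high cognitive load — fix needed
```

A critique report can then state the level alongside the raw count, e.g. "3/8 failed — moderate".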

---

## The Working Memory Rule

**Humans can hold ≤4 items in working memory at once** (Miller's Law revised by Cowan, 2001).

At any decision point, count the number of distinct options, actions, or pieces of information a user must simultaneously consider:
- **≤4 items**: Within working memory limits — manageable
- **5–7 items**: Pushing the boundary — consider grouping or progressive disclosure
- **8+ items**: Overloaded — users will skip, misclick, or abandon

**Practical applications**:
- Navigation menus: ≤5 top-level items (group the rest under clear categories)
- Form sections: ≤4 fields visible per group before a visual break
- Action buttons: 1 primary, 1–2 secondary, group the rest in a menu
- Dashboard widgets: ≤4 key metrics visible without scrolling
- Pricing tiers: ≤3 options (more causes analysis paralysis)

---

## Common Cognitive Load Violations

### 1. The Wall of Options
**Problem**: Presenting 10+ choices at once with no hierarchy.
**Fix**: Group into categories, highlight recommended, use progressive disclosure.

### 2. The Memory Bridge
**Problem**: User must remember info from step 1 to complete step 3.
**Fix**: Keep relevant context visible, or repeat it where it's needed.

### 3. The Hidden Navigation
**Problem**: User must build a mental map of where things are.
**Fix**: Always show current location (breadcrumbs, active states, progress indicators).

### 4. The Jargon Barrier
**Problem**: Technical or domain language forces translation effort.
**Fix**: Use plain language. If domain terms are unavoidable, define them inline.

### 5. The Visual Noise Floor
**Problem**: Every element has the same visual weight — nothing stands out.
**Fix**: Establish clear hierarchy: one primary element, 2–3 secondary, everything else muted.

### 6. The Inconsistent Pattern
**Problem**: Similar actions work differently in different places.
**Fix**: Standardize interaction patterns. Same type of action = same type of UI.

### 7. The Multi-Task Demand
**Problem**: Interface requires processing multiple simultaneous inputs (reading + deciding + navigating).
**Fix**: Sequence the steps. Let the user do one thing at a time.

### 8. The Context Switch
**Problem**: User must jump between screens/tabs/modals to gather info for a single decision.
**Fix**: Co-locate the information needed for each decision. Reduce back-and-forth.
234
.pi/agent/skills/frontend/critique/reference/heuristics-scoring.md
Normal file
@@ -0,0 +1,234 @@
# Heuristics Scoring Guide

Score each of Nielsen's 10 Usability Heuristics on a 0–4 scale. Be honest — a 4 means genuinely excellent, not "good enough."

## Nielsen's 10 Heuristics

### 1. Visibility of System Status

Keep users informed about what's happening through timely, appropriate feedback.

**Check for**:
- Loading indicators during async operations
- Confirmation of user actions (save, submit, delete)
- Progress indicators for multi-step processes
- Current location in navigation (breadcrumbs, active states)
- Form validation feedback (inline, not just on submit)

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | No feedback — user is guessing what happened |
| 1 | Rare feedback — most actions produce no visible response |
| 2 | Partial — some states communicated, major gaps remain |
| 3 | Good — most operations give clear feedback, minor gaps |
| 4 | Excellent — every action confirms, progress is always visible |

### 2. Match Between System and Real World

Speak the user's language. Follow real-world conventions. Information appears in natural, logical order.

**Check for**:
- Familiar terminology (no unexplained jargon)
- Logical information order matching user expectations
- Recognizable icons and metaphors
- Domain-appropriate language for the target audience
- Natural reading flow (left-to-right, top-to-bottom priority)

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Pure tech jargon, alien to users |
| 1 | Mostly confusing — requires domain expertise to navigate |
| 2 | Mixed — some plain language, some jargon leaks through |
| 3 | Mostly natural — occasional term needs context |
| 4 | Speaks the user's language fluently throughout |

### 3. User Control and Freedom

Users need a clear "emergency exit" from unwanted states without extended dialogue.

**Check for**:
- Undo/redo functionality
- Cancel buttons on forms and modals
- Clear navigation back to safety (home, previous)
- Easy way to clear filters, search, selections
- Escape from long or multi-step processes

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Users get trapped — no way out without refreshing |
| 1 | Difficult exits — must find obscure paths to escape |
| 2 | Some exits — main flows have escape, edge cases don't |
| 3 | Good control — users can exit and undo most actions |
| 4 | Full control — undo, cancel, back, and escape everywhere |

### 4. Consistency and Standards

Users shouldn't wonder whether different words, situations, or actions mean the same thing.

**Check for**:
- Consistent terminology throughout the interface
- Same actions produce same results everywhere
- Platform conventions followed (standard UI patterns)
- Visual consistency (colors, typography, spacing, components)
- Consistent interaction patterns (same gesture = same behavior)

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Inconsistent everywhere — feels like different products stitched together |
| 1 | Many inconsistencies — similar things look/behave differently |
| 2 | Partially consistent — main flows match, details diverge |
| 3 | Mostly consistent — occasional deviation, nothing confusing |
| 4 | Fully consistent — cohesive system, predictable behavior |

### 5. Error Prevention

Better than good error messages is a design that prevents problems in the first place.

**Check for**:
- Confirmation before destructive actions (delete, overwrite)
- Constraints preventing invalid input (date pickers, dropdowns)
- Smart defaults that reduce errors
- Clear labels that prevent misunderstanding
- Autosave and draft recovery

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Errors easy to make — no guardrails anywhere |
| 1 | Few safeguards — some inputs validated, most aren't |
| 2 | Partial prevention — common errors caught, edge cases slip |
| 3 | Good prevention — most error paths blocked proactively |
| 4 | Excellent — errors nearly impossible through smart constraints |

### 6. Recognition Rather Than Recall

Minimize memory load. Make objects, actions, and options visible or easily retrievable.

**Check for**:
- Visible options (not buried in hidden menus)
- Contextual help when needed (tooltips, inline hints)
- Recent items and history
- Autocomplete and suggestions
- Labels on icons (not icon-only navigation)

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Heavy memorization — users must remember paths and commands |
| 1 | Mostly recall — many hidden features, few visible cues |
| 2 | Some aids — main actions visible, secondary features hidden |
| 3 | Good recognition — most things discoverable, few memory demands |
| 4 | Everything discoverable — users never need to memorize |

### 7. Flexibility and Efficiency of Use

Accelerators — invisible to novices — speed up expert interaction.

**Check for**:
- Keyboard shortcuts for common actions
- Customizable interface elements
- Recent items and favorites
- Bulk/batch actions
- Power user features that don't complicate the basics

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | One rigid path — no shortcuts or alternatives |
| 1 | Limited flexibility — few alternatives to the main path |
| 2 | Some shortcuts — basic keyboard support, limited bulk actions |
| 3 | Good accelerators — keyboard nav, some customization |
| 4 | Highly flexible — multiple paths, power features, customizable |

### 8. Aesthetic and Minimalist Design

Interfaces should not contain irrelevant or rarely needed information. Every element should serve a purpose.

**Check for**:
- Only necessary information visible at each step
- Clear visual hierarchy directing attention
- Purposeful use of color and emphasis
- No decorative clutter competing for attention
- Focused, uncluttered layouts

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Overwhelming — everything competes for attention equally |
| 1 | Cluttered — too much noise, hard to find what matters |
| 2 | Some clutter — main content clear, periphery noisy |
| 3 | Mostly clean — focused design, minor visual noise |
| 4 | Perfectly minimal — every element earns its pixel |

### 9. Help Users Recognize, Diagnose, and Recover from Errors

Error messages should use plain language, precisely indicate the problem, and constructively suggest a solution.

**Check for**:
- Plain language error messages (no error codes for users)
- Specific problem identification ("Email is missing @" not "Invalid input")
- Actionable recovery suggestions
- Errors displayed near the source of the problem
- Non-blocking error handling (don't wipe the form)

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | Cryptic errors — codes, jargon, or no message at all |
| 1 | Vague errors — "Something went wrong" with no guidance |
| 2 | Clear but unhelpful — names the problem but not the fix |
| 3 | Clear with suggestions — identifies problem and offers next steps |
| 4 | Perfect recovery — pinpoints issue, suggests fix, preserves user work |

### 10. Help and Documentation

Even if the system is usable without docs, help should be easy to find, task-focused, and concise.

**Check for**:
- Searchable help or documentation
- Contextual help (tooltips, inline hints, guided tours)
- Task-focused organization (not feature-organized)
- Concise, scannable content
- Easy access without leaving current context

**Scoring**:
| Score | Criteria |
|-------|----------|
| 0 | No help available anywhere |
| 1 | Help exists but hard to find or irrelevant |
| 2 | Basic help — FAQ or docs exist, not contextual |
| 3 | Good documentation — searchable, mostly task-focused |
| 4 | Excellent contextual help — right info at the right moment |

---

## Score Summary

**Total possible**: 40 points (10 heuristics × 4 max)

| Score Range | Rating | What It Means |
|-------------|--------|---------------|
| 36–40 | Excellent | Minor polish only — ship it |
| 28–35 | Good | Address weak areas, solid foundation |
| 20–27 | Acceptable | Significant improvements needed before users are happy |
| 12–19 | Poor | Major UX overhaul required — core experience broken |
| 0–11 | Critical | Redesign needed — unusable in current state |
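
The total-to-band mapping is a straightforward threshold lookup; a minimal sketch mirroring the table (the function name is illustrative):

```python
# Map a 0–40 heuristics total to its rating band.
def rating_band(total: int) -> str:
    if not 0 <= total <= 40:
        raise ValueError("total must be 0–40 (10 heuristics × 4 max)")
    for floor, band in ((36, "Excellent"), (28, "Good"),
                        (20, "Acceptable"), (12, "Poor")):
        if total >= floor:
            return band
    return "Critical"
```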

---

## Issue Severity (P0–P3)

Tag each individual issue found during scoring with a priority level:

| Priority | Name | Description | Action |
|----------|------|-------------|--------|
| **P0** | Blocking | Prevents task completion entirely | Fix immediately — this is a showstopper |
| **P1** | Major | Causes significant difficulty or confusion | Fix before release |
| **P2** | Minor | Annoyance, but workaround exists | Fix in next pass |
| **P3** | Polish | Nice-to-fix, no real user impact | Fix if time permits |

**Tip**: If you're unsure between two levels, ask: "Would a user contact support about this?" If yes, it's at least P1.
178
.pi/agent/skills/frontend/critique/reference/personas.md
Normal file
@@ -0,0 +1,178 @@
# Persona-Based Design Testing

Test the interface through the eyes of 5 distinct user archetypes. Each persona exposes different failure modes that a single "design director" perspective would miss.

**How to use**: Select 2–3 personas most relevant to the interface being critiqued. Walk through the primary user action as each persona. Report specific red flags — not generic concerns.

---

## 1. Impatient Power User — "Alex"

**Profile**: Expert with similar products. Expects efficiency, hates hand-holding. Will find shortcuts or leave.

**Behaviors**:
- Skips all onboarding and instructions
- Looks for keyboard shortcuts immediately
- Tries to bulk-select, batch-edit, and automate
- Gets frustrated by required steps that feel unnecessary
- Abandons if anything feels slow or patronizing

**Test Questions**:
- Can Alex complete the core task in under 60 seconds?
- Are there keyboard shortcuts for common actions?
- Can onboarding be skipped entirely?
- Do modals have keyboard dismiss (Esc)?
- Is there a "power user" path (shortcuts, bulk actions)?

**Red Flags** (report these specifically):
- Forced tutorials or unskippable onboarding
- No keyboard navigation for primary actions
- Slow animations that can't be skipped
- One-item-at-a-time workflows where batch would be natural
- Redundant confirmation steps for low-risk actions

---
|
||||
|
||||
## 2. Confused First-Timer — "Jordan"

**Profile**: Never used this type of product. Needs guidance at every step. Will abandon rather than figure it out.

**Behaviors**:
- Reads all instructions carefully
- Hesitates before clicking anything unfamiliar
- Looks for help or support constantly
- Misunderstands jargon and abbreviations
- Takes the most literal interpretation of any label

**Test Questions**:
- Is the first action obviously clear within 5 seconds?
- Are all icons labeled with text?
- Is there contextual help at decision points?
- Does terminology assume prior knowledge?
- Is there a clear "back" or "undo" at every step?

**Red Flags** (report these specifically):
- Icon-only navigation with no labels
- Technical jargon without explanation
- No visible help option or guidance
- Ambiguous next steps after completing an action
- No confirmation that an action succeeded

---

## 3. Accessibility-Dependent User — "Sam"

**Profile**: Uses a screen reader (VoiceOver/NVDA) and keyboard-only navigation. May have low vision, motor impairment, or cognitive differences.

**Behaviors**:
- Tabs through the interface linearly
- Relies on ARIA labels and heading structure
- Cannot see hover states or visual-only indicators
- Needs adequate color contrast (4.5:1 minimum)
- May use browser zoom up to 200%

**Test Questions**:
- Can the entire primary flow be completed keyboard-only?
- Are all interactive elements focusable with visible focus indicators?
- Do images have meaningful alt text?
- Is color contrast WCAG AA compliant (4.5:1 for text)?
- Does the screen reader announce state changes (loading, success, errors)?

**Red Flags** (report these specifically):
- Click-only interactions with no keyboard alternative
- Missing or invisible focus indicators
- Meaning conveyed by color alone (red = error, green = success)
- Unlabeled form fields or buttons
- Time-limited actions without an extension option
- Custom components that break screen reader flow
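The 4.5:1 contrast requirement above can be verified mechanically rather than eyeballed. A minimal sketch of the WCAG 2.x contrast-ratio formula (relative luminance per the spec; the function names are illustrative, not from any particular library):

```typescript
// Relative luminance of an sRGB color, per the WCAG 2.x definition.
function luminance(r: number, g: number, b: number): number {
  const channel = (c: number): number => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colors, from 1:1 (identical) to 21:1 (black on white).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white: maximum contrast, passes AA easily.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
// Light gray (#aaa) on white: fails the 4.5:1 AA threshold for body text.
console.log(contrastRatio([170, 170, 170], [255, 255, 255]) >= 4.5); // false
```

Running every text/background pair in the design through a check like this turns Sam's contrast question into a pass/fail list instead of a judgment call.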

---
## 4. Deliberate Stress Tester — "Riley"

**Profile**: Methodical user who pushes interfaces beyond the happy path. Tests edge cases, tries unexpected inputs, and probes for gaps in the experience.

**Behaviors**:
- Tests edge cases intentionally (empty states, long strings, special characters)
- Submits forms with unexpected data (emoji, RTL text, very long values)
- Tries to break workflows by navigating backwards, refreshing mid-flow, or opening in multiple tabs
- Looks for inconsistencies between what the UI promises and what actually happens
- Documents problems methodically

**Test Questions**:
- What happens at the edges (0 items, 1000 items, very long text)?
- Do error states recover gracefully, or leave the UI in a broken state?
- What happens on refresh mid-workflow? Is state preserved?
- Are there features that appear to work but produce broken results?
- How does the UI handle unexpected input (emoji, special chars, paste from Excel)?

**Red Flags** (report these specifically):
- Features that appear to work but silently fail or produce wrong results
- Error handling that exposes technical details or leaves the UI in a broken state
- Empty states that show nothing useful ("No results" with no guidance)
- Workflows that lose user data on refresh or navigation
- Inconsistent behavior between similar interactions in different parts of the UI
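A fixed corpus of hostile inputs makes Riley's probing repeatable across reviews instead of ad hoc. One possible sketch (the corpus entries and helper names are illustrative, not a standard):

```typescript
// Inputs that routinely expose edge-case bugs in form fields and renderers.
const stressInputs: Record<string, string> = {
  empty: "",
  whitespace: "   ",
  veryLong: "A".repeat(10_000),
  emoji: "👩‍👩‍👧‍👧 🚀 ☃️",
  rtl: "مرحبا بالعالم",                        // right-to-left script
  htmlish: "<script>alert(1)</script>",        // must render inert, never execute
  sqlish: "Robert'); DROP TABLE users;--",
  combining: "caf\u00e9 e\u0301\u0335",        // precomposed vs combining marks
  excelPaste: "col1\tcol2\nval1\tval2",        // tab/newline structure from a spreadsheet
};

// Feed every corpus entry to a field handler; return the names it failed on.
// `handles` should return true when the field copes gracefully with the input.
function probe(handles: (input: string) => boolean): string[] {
  return Object.entries(stressInputs)
    .filter(([, value]) => !handles(value))
    .map(([name]) => name);
}

// Example: a naive validator that requires 1-100 characters.
const failures = probe((s) => s.length > 0 && s.length <= 100);
console.log(failures); // ["empty", "veryLong"]
```

Reporting the failing corpus entries by name ("breaks on `rtl` and `excelPaste`") gives the team a reproducible bug list rather than a vague "handles input badly".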

---
## 5. Distracted Mobile User — "Casey"

**Profile**: Using a phone one-handed on the go. Frequently interrupted. Possibly on a slow connection.

**Behaviors**:
- Uses thumb only — prefers bottom-of-screen actions
- Gets interrupted mid-flow and returns later
- Switches between apps frequently
- Has limited attention span and low patience
- Types as little as possible, prefers taps and selections

**Test Questions**:
- Are primary actions in the thumb zone (bottom half of screen)?
- Is state preserved if the user leaves and returns?
- Does it work on slow connections (3G)?
- Can forms leverage autocomplete and smart defaults?
- Are touch targets at least 44×44pt?

**Red Flags** (report these specifically):
- Important actions positioned at the top of the screen (unreachable by thumb)
- No state persistence — progress lost on tab switch or interruption
- Large text inputs required where selection would work
- Heavy assets loading on every page (no lazy loading)
- Tiny tap targets or targets too close together
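The 44×44pt minimum and the "targets too close together" flag can both be checked against element bounding boxes. A framework-agnostic sketch using plain rectangles (in a browser you would feed it `getBoundingClientRect()` results; the names and the 8px gap default are illustrative assumptions):

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

const MIN_TARGET = 44; // Apple HIG minimum in points; WCAG 2.2 SC 2.5.8 uses 24 CSS px

// A target passes when both dimensions meet the minimum size.
function isLargeEnough(r: Rect): boolean {
  return r.width >= MIN_TARGET && r.height >= MIN_TARGET;
}

// Two targets are too close when the largest edge-to-edge gap is under `minGap`
// (negative gaps on both axes mean the rectangles overlap outright).
function tooClose(a: Rect, b: Rect, minGap = 8): boolean {
  const gapX = Math.max(a.x - (b.x + b.width), b.x - (a.x + a.width));
  const gapY = Math.max(a.y - (b.y + b.height), b.y - (a.y + a.height));
  return Math.max(gapX, gapY) < minGap;
}

const save   = { x: 0,  y: 0, width: 48, height: 48 };
const cancel = { x: 50, y: 0, width: 48, height: 48 }; // only a 2px gap from "save"

console.log(isLargeEnough(save));    // true
console.log(tooClose(save, cancel)); // true — flag this pair for review
```

Sweeping every interactive element through these two checks catches Casey's "tiny tap targets" red flag before a human ever taps the screen.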

---
## Selecting Personas

Choose personas based on the interface type:

| Interface Type | Primary Personas | Why |
|---------------|-----------------|-----|
| Landing page / marketing | Jordan, Riley, Casey | First impressions, trust, mobile |
| Dashboard / admin | Alex, Sam | Power users, accessibility |
| E-commerce / checkout | Casey, Riley, Jordan | Mobile, edge cases, clarity |
| Onboarding flow | Jordan, Casey | Confusion, interruption |
| Data-heavy / analytics | Alex, Sam | Efficiency, keyboard nav |
| Form-heavy / wizard | Jordan, Sam, Casey | Clarity, accessibility, mobile |

---

## Project-Specific Personas

If `AGENTS.md` contains a `## Design Context` section (generated by `teach-impeccable`), derive 1–2 additional personas from the audience and brand information:

1. Read the target audience description
2. Identify the primary user archetype not covered by the 5 predefined personas
3. Create a persona following this template:

```
### [Role] — "[Name]"

**Profile**: [2-3 key characteristics derived from Design Context]

**Behaviors**: [3-4 specific behaviors based on the described audience]

**Red Flags**: [3-4 things that would alienate this specific user type]
```

Only generate project-specific personas when real Design Context data is available. Don't invent audience details — use the 5 predefined personas when no context exists.