We used persona-driven testing to build and refine this documentation site. Three personas, three perspectives, completely different findings.

Pixel Perfect Pat

UI Perfectionist

Senior UI designer with 12 years at design-forward companies (Stripe, Linear, Vercel). Known for catching spacing issues that are 2px off. Carries a mental checklist of design principles.

Goals

  • Verify consistent spacing rhythm across all elements
  • Check typography hierarchy and consistency
  • Look for alignment issues between elements
  • Evaluate whitespace balance and visual breathing room
  • Test responsive behavior at key breakpoints
  • Assess visual polish and attention to detail

Behaviors

  • Zooms to 150%+ to catch subpixel issues
  • Resizes browser constantly between breakpoints
  • Compares similar components side-by-side
  • Notices orphaned text, widows, and ragged line lengths
Pages Visited: 4
Viewports Tested: 3
Screenshots: 14
Duration: ~25 min

Initial Test: B+ (1 moderate, 7 minor)
After Fixes: A (0 moderate, 2 minor)

Issues Found & Fixed

  • moderate (fixed)
    Issue: Step indicators remained horizontal at tablet width while the content stacked vertically, creating a visual disconnect
    Location: Homepage "How It Works" section at 768px
    Fix: Step indicators now stack vertically at 768px along with the content

  • minor (fixed)
    Issue: Three Pillars cards had unequal heights due to varying content lengths
    Location: Homepage pillars section
    Fix: All three pillar cards now have equal heights, with flex-grow applied to the descriptions

  • minor (fixed)
    Issue: Comparison boxes had unequal heights; the Traditional E2E box was shorter than the Persona-Driven box
    Location: Homepage problem section
    Fix: Both comparison boxes now stretch to equal height with a flex layout

  • minor (fixed)
    Issue: Sidebar navigation on the methodology page lacked an active-state indicator
    Location: Methodology page sidebar
    Fix: The sidebar now highlights the current section with a purple background as the user scrolls

What Worked Well

  • success: Aurora gradient background effect in hero is subtle and polished
  • success: Typography hierarchy is clear: eyebrow to headline to subtitle to CTA
  • success: Observation badges use semantic colors consistently (green/purple/amber/pink)
  • success: Persona avatar gradients are distinct and visually appealing
  • success: Mobile layout at 375px properly stacks all content into a single column
  • success: Hero section maintains excellent visual hierarchy at all breakpoints
  • success: Code blocks have proper syntax highlighting and consistent styling

Dave the Dev

Developer Evaluating

Senior frontend developer at a Series B startup. Skeptical of new testing approaches after being burned by over-hyped tools. Has 15 minutes between meetings to evaluate if this is worth adopting.

Goals

  • Understand the core concept quickly (<60 seconds)
  • Find runnable code examples
  • Assess implementation complexity
  • Determine Playwright integration compatibility
  • Estimate time to first working test

Behaviors

  • Skims headings, jumps directly to code blocks
  • Looks for "Getting Started" or "Quick Start" immediately
  • Evaluates code quality and patterns in examples
  • Checks for TypeScript support
  • Looks for GitHub repo to star/clone
Pages Visited: 4
Code Blocks Examined: 6
Duration: 8 min

Tasks Completed

| Task | Result | Duration | Notes |
| --- | --- | --- | --- |
| Understand core concept | Success | 45s | Hero section clearly explains value prop |
| Find code example | Success | 28s | Getting Started page, Step 3 |
| Assess complexity | Success | 3min | Looks manageable, ~2hr to implement |
| Check Playwright integration | Success | 1min | Uses standard Playwright patterns |

Findings

  • success: Code examples have copy buttons - reduces friction
  • success: TypeScript used throughout - matches our stack
  • success: ObservationCollector is a simple, portable class
  • note: Could use more inline comments in code examples
  • note: Would like to see a complete working repo to clone
  • confusion: Unclear if this works with Playwright's parallel test mode
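Dave's note that ObservationCollector is "a simple, portable class" suggests a shape like the following. This is a hedged sketch based on the observation types (success/note/confusion) and the JSON export shown later on this page; the method names here are assumptions, not the site's actual API.

```typescript
// Hypothetical sketch of an ObservationCollector. Field names mirror the
// sample JSON report on this page; method names are illustrative guesses.
type ObservationType = "success" | "note" | "confusion";

interface Observation {
  type: ObservationType;
  description: string;
  location: string;
  severity?: "minor" | "moderate" | "critical";
}

class ObservationCollector {
  private observations: Observation[] = [];

  // Record one observation as the persona explores the site.
  record(obs: Observation): void {
    this.observations.push(obs);
  }

  // Filter observations by badge type, e.g. all "confusion" entries.
  byType(type: ObservationType): Observation[] {
    return this.observations.filter((o) => o.type === type);
  }

  // Export a copy for serialization into the session report.
  toJSON(): Observation[] {
    return [...this.observations];
  }
}
```

Because it holds plain data and has no Playwright dependency, a class like this would be trivial to drop into any existing test suite.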

Manager Morgan

Engineering Manager Assessing Adoption

Engineering manager at a fintech company. Team of 8 developers struggling with flaky E2E tests that don't catch real UX issues. Looking for ways to improve test quality without major process changes.

Goals

  • Understand ROI - what do we get vs. time invested?
  • Assess team adoption friction
  • Determine training/documentation needs
  • Evaluate if this catches issues current tests miss
  • Find evidence this works (case studies, examples)

Behaviors

  • Reads executive summaries and key points
  • Looks for comparison to existing approaches
  • Seeks evidence of real-world usage
  • Evaluates documentation completeness
  • Considers team skill requirements
Pages Visited: 3
Sections Read: 4
Duration: 5 min

Tasks Completed

| Task | Result | Duration | Notes |
| --- | --- | --- | --- |
| Understand value prop | Success | 30s | "Traditional E2E vs Persona-Driven" comparison is perfect |
| Assess team adoption | Success | 2min | Low barrier - builds on Playwright knowledge |
| Find evidence | Partial | 1min | This example page helps! Need more case studies |
| Evaluate documentation | Success | 1.5min | Methodology page is comprehensive |

Findings

  • success: Clear comparison to traditional E2E testing on homepage
  • success: Methodology documentation is thorough and well-organized
  • success: Sidebar navigation makes long docs scannable
  • success: Example personas give the team a starting point
  • note: Would like to see ROI metrics (bugs caught, time saved)
  • note: A team adoption guide would be helpful
  • confusion: How do we prioritize which personas to create first?

Same site. Three personas. Completely different findings.

Pixel Perfect Pat found responsive layout bugs. Dave the Dev evaluated code quality and developer experience. Manager Morgan assessed adoption feasibility. Each persona brought a unique lens that traditional E2E tests would never consider.

That's the power of persona-driven testing: you don't just verify features work, you verify users can succeed.
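One way to make a persona actionable is to capture it as plain data that drives test generation. The sketch below is illustrative only, assuming a hypothetical `Persona` shape built from Pat's profile above; the field names are not taken from the site's actual code.

```typescript
// A persona expressed as data. The interface is a hypothetical sketch;
// goals and viewports are taken from Pixel Perfect Pat's profile above.
interface Persona {
  name: string;
  role: string;
  goals: string[];
  viewports: { width: number; height: number }[];
}

const pat: Persona = {
  name: "Pixel Perfect Pat",
  role: "UI Perfectionist",
  goals: [
    "Verify consistent spacing rhythm across all elements",
    "Test responsive behavior at key breakpoints",
  ],
  viewports: [
    { width: 1440, height: 900 },
    { width: 768, height: 1024 },
    { width: 375, height: 812 },
  ],
};

// Each (persona, viewport) pair becomes one test title, so a runner like
// Playwright can loop over them with one describe block per persona.
function testTitles(p: Persona): string[] {
  return p.viewports.map((v) => `${p.name} @ ${v.width}x${v.height}`);
}
```

Keeping personas as data means adding a new lens is a matter of adding an object, not writing a new suite from scratch.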

pixel-perfect-pat-observations.json:
{
  "persona": "Pixel Perfect Pat - UI Perfectionist",
  "session": {
    "pagesVisited": 4,
    "viewportsTested": ["1440x900", "768x1024", "375x812"],
    "screenshotsCaptured": 14
  },
  "tasks": [
    {
      "name": "Evaluate homepage visual hierarchy",
      "success": true,
      "duration": 180000
    }
  ],
  "observations": [
    {
      "type": "success",
      "description": "Aurora gradient background is subtle and polished",
      "location": "Homepage hero section"
    },
    {
      "type": "note",
      "description": "Pillar cards have unequal heights",
      "location": "Homepage pillars section",
      "severity": "minor"
    }
  ],
  "summary": {
    "overallScore": "B+",
    "criticalIssues": 0,
    "moderateIssues": 1,
    "minorIssues": 7
  }
}
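The report format above maps naturally onto TypeScript types. The interface below is a reading of the sample JSON's keys, not an official schema; treat it as a starting point if you want typed access to session reports.

```typescript
// Types inferred from the pixel-perfect-pat-observations.json sample above.
// Key names mirror the JSON; optionality of `severity` is an assumption
// based on it appearing only on the "note" observation.
interface PersonaReport {
  persona: string;
  session: {
    pagesVisited: number;
    viewportsTested: string[];
    screenshotsCaptured: number;
  };
  tasks: { name: string; success: boolean; duration: number }[];
  observations: {
    type: "success" | "note" | "confusion";
    description: string;
    location: string;
    severity?: "minor" | "moderate" | "critical";
  }[];
  summary: {
    overallScore: string;
    criticalIssues: number;
    moderateIssues: number;
    minorIssues: number;
  };
}
```

With a type like this, `JSON.parse` output can be validated and summarized across personas in a few lines.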

Ready to test like your users think?