mbg@portfolio:~/personas · case study 02 · read 7 min
$ cat case_06/synthetic-personas.md
// 25 personas · 5 verticals · grounded in public research through April 2026

Synthetic personas, built to scale hypothesis work and to guardrail AI design.

A research-grounded synthetic-persona system. Built in weeks instead of quarters, used two ways inside Dell: as a hypothesis shortcut in front of real customer research, and as a guardrail for AI-generated design on an unreleased systems management product shipping later this year.

Role
Program lead
Schema, ops, validation, rollout
Timeline
March 2026 → Present
March v1 → April v1 → April v2
Coverage
25 personas
5 verticals · 10-field schema · 8 sourced metrics each
Provenance
Public data
No Dell IP; externally shareable
Synthetic EDG Personas landing page - 25 personas organized across 5 business verticals
FIG. 00 The human knowledge-building output of the system below: personas scaled to any vertical, on demand, as business need dictates.
01 Architecture Extension · Guardrail Layer proof of concept · in socialization

If personas are governance, they have to sit in the AI's path, not next to it.

The version of synthetic personas inside Dell is one half of a larger pattern. The other half is the guardrail layer that enforces persona use across the entire AI-driven design and engineering process. Three input lanes (PdM, Engineering, UX) feed the guardrail, and nothing reaches the model without going through it.

This is a proof of concept, currently in socialization and rationalization. Below: the architecture at a glance, then the UX lane fully unpacked, since this case page is about the design side.

Guardrail Architecture · System View
L.PdM · Product Management 3 branches
Requirements & Scope
  • P.01 Check task scope against PRD
  • P.02 Look for product policy / OKR alignment
  • P.03 Validate against business constraints
L.Eng · Engineering 3 branches
Constraints & Tech Decisions
  • E.01 Check task against architecture rules
  • E.02 Look for tech-debt context
  • E.03 Validate output against constraints
L.UX · User Experience 5 branches
Persona Governance
  • B.01 Persona fetch & fallback chain
  • B.02 Clarifying questions
  • B.03 Output validation against persona
  • B.04 Drift detection
  • B.05 Multi-persona conflict resolution
★ Guardrail
AI Model Interception Layer
Mechanism: system-prompt injection
+ post-generation output validation
◆ Review
Human in the Loop
Checkpoint: designer or PM
approves before downstream output
Primary Output
Validated, Research-Backed Response
Returned to developer or designer. Tagged with persona used, branch traversed, validation result.
↳ also writes to
Auxiliary Output
Internal Persona Library
Live record of which personas are getting referenced. Open to any employee for learning and knowledge-building.
FIG. 04 Guardrail architecture at a glance. All three lanes flow into the same interception layer, with a human checkpoint before downstream output. The persona library is the auxiliary output that turns governance into shared knowledge-building.
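The interception mechanism in the diagram reads cleanly as code. A minimal sketch, assuming illustrative names throughout — `interceptCall`, the persona fields, and both stubs are mine, not the production layer:

```javascript
// Sketch of the guardrail interception layer: persona context is injected
// into the system prompt, and the model's reply is validated before it is
// returned. All names here are illustrative, not the production API.
function interceptCall(task, persona, callModel, validate) {
  // 1. System-prompt injection: the persona becomes hard context.
  const systemPrompt =
    `You are designing for: ${persona.name} (${persona.role}).\n` +
    `Constraints: ${persona.constraints.join("; ")}`;

  // 2. The model call goes through the guardrail, never around it.
  const output = callModel(systemPrompt, task);

  // 3. Post-generation validation against the same persona.
  const result = validate(output, persona);

  // The reply is tagged for the trace stream: persona used, validation result.
  return { output, persona: persona.id, validation: result };
}

// Tiny demo with a stubbed model and validator.
const persona = {
  id: "edg-01",
  name: "Enterprise IT Pro",
  role: "systems administrator",
  constraints: ["security-first", "fleet scale > 500 nodes"],
};
const stubModel = (sys, task) => `[${task}] grounded in: ${sys.split("\n")[0]}`;
const stubValidate = (out, p) => (out.includes(p.name) ? "pass" : "flag");

const reply = interceptCall("draft alert settings UI", persona, stubModel, stubValidate);
```

The shape matters more than the stubs: nothing reaches the model without the injected persona, and nothing leaves without a validation tag.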
▼ L.UX · User Experience five-branch decision tree, exploded view
B.01
Persona Fetch & Fallback Chain
  • check task scope
  • component / feature / UI
  • look for persona.md
  • human-authored: use as-is
  • synthetic: use, log version
  • if none found
  • generate from internal repo
  • else: external research + rules
  • no persona = block call
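The B.01 fallback chain above can be sketched as a single resolution function. A hedged sketch — the source names, version logging, and return shape are assumptions, not the production implementation:

```javascript
// B.01 sketch: prefer a human-authored persona.md, fall back to a synthetic
// persona (logging its version), then to generation from the internal repo,
// and block the call outright when nothing resolves.
function resolvePersona(taskScope, sources) {
  if (sources.humanAuthored) {
    return { persona: sources.humanAuthored, origin: "human-authored" };
  }
  if (sources.synthetic) {
    // Synthetic personas are used as-is, but the version is logged for the trace.
    return { persona: sources.synthetic, origin: `synthetic@${sources.synthetic.version}` };
  }
  if (sources.internalRepo) {
    return { persona: sources.internalRepo(taskScope), origin: "generated-internal" };
  }
  // No persona resolvable: the guardrail blocks the model call.
  return { persona: null, origin: "blocked" };
}

const hit = resolvePersona("dashboard", { synthetic: { id: "edg-07", version: "v2" } });
const miss = resolvePersona("dashboard", {});
```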
B.02
Clarifying Questions
  • before output, prompt user
  • role / seniority context
  • primary task right now
  • tools already in use
  • org / team scale
  • persona seeds the question set
  • enterprise IT pro = security-first
  • SMB admin = cost-first
  • skip = degraded output flag
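B.02 is persona-seeded question ordering plus a soft failure mode. A sketch under stated assumptions — the segment keys, question text, and flag name are illustrative:

```javascript
// B.02 sketch: the resolved persona seeds which clarifying question leads
// (enterprise IT pro = security-first, SMB admin = cost-first), and skipping
// the questions flags the output as degraded rather than blocking it.
const baseQuestions = [
  "role / seniority context?",
  "primary task right now?",
  "tools already in use?",
  "org / team scale?",
];

function seedQuestions(persona) {
  // Persona segment picks the lead question, per the tree above.
  const lead = {
    "enterprise-it-pro": "What is your security review process?",
    "smb-admin": "What is your budget ceiling?",
  }[persona.segment];
  return lead ? [lead, ...baseQuestions] : baseQuestions;
}

// Skipping the question set does not block the call; it flags the output.
function answerState(answers) {
  return answers.length === 0 ? "degraded-output-flag" : "ok";
}

const smb = seedQuestions({ segment: "smb-admin" });
```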
B.03
Output Validation Against Persona
  • validate before returning
  • claims match persona context
  • vocabulary fits user level
  • workflow plausible in real env
  • score against rubric
  • pass: return
  • borderline: flag, return
  • fail: regenerate w/ stronger system prompt
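The B.03 routing — pass, flag, or regenerate — is a rubric score with thresholds. A minimal sketch; the three check names and the 2/3 borderline threshold are assumptions for illustration:

```javascript
// B.03 sketch: score a draft against a persona rubric, then route it.
function validateOutput(checks) {
  // checks: booleans for claims-match, vocabulary-fit, workflow-plausibility
  const values = Object.values(checks);
  const score = values.filter(Boolean).length / values.length;
  if (score === 1) return "pass";               // return as-is
  if (score >= 2 / 3) return "flag-and-return"; // borderline: flag, return
  return "regenerate";                          // fail: re-call w/ stronger system prompt
}
```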
B.04
Drift Detection
  • monitor output across session
  • vocabulary drift
  • workflow assumption drift
  • tooling fictional / hallucinated
  • if drift > threshold
  • re-inject persona
  • notify designer
  • repeat drift = pause session
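B.04's escalation ladder can be sketched as a stateful monitor. The threshold value and return labels are illustrative, not production values:

```javascript
// B.04 sketch: count drift signals per output across a session; the first
// excess re-injects the persona, repeat drift pauses the session.
function driftMonitor(threshold) {
  let strikes = 0; // times drift exceeded the threshold this session
  return function check(signalCount) {
    // signalCount: vocabulary / workflow / hallucinated-tooling indicators seen
    if (signalCount <= threshold) return "ok";
    strikes += 1;
    return strikes === 1 ? "re-inject-persona" : "pause-session";
  };
}

const check = driftMonitor(2);
```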
B.05
Multi-Persona Conflict Resolution
  • check for overlapping personas
  • primary user vs admin user
  • end-user vs decision-maker
  • resolution order
  • task scope (whose goal?)
  • access level required
  • explicit override (PM/PdM)
  • unresolvable = clarifying Q
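B.05's resolution order — task scope, then access level, then explicit override, else escalate — is a short cascade. A sketch with illustrative persona and task shapes:

```javascript
// B.05 sketch: resolve overlapping personas in the stated order, falling
// back to a clarifying question when nothing resolves automatically.
function resolveConflict(personas, task) {
  const byScope = personas.filter((p) => p.goals.includes(task.goal));
  if (byScope.length === 1) return byScope[0].id; // task scope: whose goal?
  const byAccess = byScope.filter((p) => p.accessLevel === task.accessRequired);
  if (byAccess.length === 1) return byAccess[0].id; // access level required
  if (task.override) return task.override;          // explicit PM/PdM override
  return "clarifying-question";                     // unresolvable automatically
}

const candidates = [
  { id: "admin-user", goals: ["configure-fleet"], accessLevel: "admin" },
  { id: "end-user", goals: ["monitor-alerts"], accessLevel: "user" },
];
```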
FIG. 05 UX lane unpacked. Five decision trees that the AI must traverse before output. Amber items represent failure conditions where the guardrail blocks, regenerates, or escalates.

The output of the guardrail isn't just a validated response. It's a traceable response. Every reply records which persona was used, which branch fired, whether validation passed, and what got flagged. That trace stream feeds the auxiliary persona library so any employee can see which personas are getting exercised in production AI work, and which are being asked to do too much.
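The trace-to-library flow described above can be sketched in a few lines. The field names and the Map-based store are assumptions; the point is that every reply both returns and records:

```javascript
// Sketch: each guardrail reply carries a trace (persona used, branch
// traversed, validation result, flags) and writes to the shared library.
const personaLibrary = new Map(); // personaId -> times referenced

function recordTrace(trace) {
  const count = (personaLibrary.get(trace.personaId) ?? 0) + 1;
  personaLibrary.set(trace.personaId, count);
  return { ...trace, libraryCount: count };
}

recordTrace({ personaId: "edg-01", branch: "B.03", validation: "pass", flags: [] });
const latest = recordTrace({
  personaId: "edg-01",
  branch: "B.04",
  validation: "flag",
  flags: ["vocab-drift"],
});
```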

It's the difference between a guardrail that hides its work and a guardrail that teaches while it governs.

02 The Problem context

Persona work at enterprise scale has an honest tension in it.

Real customer research is slow and expensive. Twelve interviews across five verticals take six months and six figures, and still cover only a landscape that moves faster than your calendar. Meanwhile, every new product team starts every quarter needing some representation of the user before it can make its first real decision. The question that interested me: is there a form of synthesis that sits honestly between those two failure modes?

03 The Approach 5 rules

A research-grounded companion to first-party research. Not a replacement.

If every claim has to cite a real, recent, named source, and the hierarchy is enforced by tooling rather than willpower, the output stops being "AI-generated content" and becomes "AI-generated synthesis of real research." Different thing. Five rules, in order of how much they matter.

R.01Source hierarchy

A five-tier source hierarchy.

Regulatory and standards bodies first (FDA, FERC, NERC, NIST, ISA, ECB, SEC, FINRA, CISA), then peer-reviewed academic work, then industry analysts, then vendor technical material, then trade press. When two sources disagree, the higher tier wins.
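The tie-break rule reads directly as code. Tier order follows the list above (0 = regulatory/standards, 4 = trade press); the function name and claim shapes are illustrative:

```javascript
// R.01 sketch: when two sources disagree, the higher tier (lower index) wins.
const TIERS = [
  "regulatory-standards", // FDA, NIST, SEC, CISA, ...
  "peer-reviewed",
  "industry-analyst",
  "vendor-technical",
  "trade-press",
];

function resolveDisagreement(a, b) {
  // Each claim carries the index of its source's tier in TIERS.
  return a.tier <= b.tier ? a : b;
}

const standardsClaim = { value: "range 40-60%", tier: 0 };
const pressClaim = { value: "72%", tier: 4 };
const winner = resolveDisagreement(standardsClaim, pressClaim);
```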

R.02Research freshness

Nothing stale, nothing invented.

Nothing published before April 2024 unless it's a still-authoritative foundational standard (SR 11-7, ISA-95, IEEE 1547-2018). The rule has teeth: claims that couldn't be sourced to recent, named research got cut rather than fabricated. Where exact numbers weren't available, ranges or qualitative directions were used instead of invented precision.

R.03Fixed schema

Every persona is the same object.

Every persona is a JavaScript object with ten top-level fields: cover, profile, goals, day-in-the-life, tools-and-ecosystem, frustrations, skills map, opportunities, metrics, summary. Same shape every time, which means the rendering layer doesn't care which persona is loaded and the authoring process can't quietly skip a section.
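The ten-field contract can be sketched as a shape check. Field keys follow the text; the check itself is an illustrative sketch, not the production code:

```javascript
// R.03 sketch: the same ten-field object shape for every persona.
const PERSONA_FIELDS = [
  "cover", "profile", "goals", "day-in-the-life", "tools-and-ecosystem",
  "frustrations", "skills-map", "opportunities", "metrics", "summary",
];

// Rendering can stay persona-agnostic only if every field is always present,
// so a skipped section is detectable rather than silent.
function hasFullShape(persona) {
  return PERSONA_FIELDS.every((field) => field in persona);
}

const complete = Object.fromEntries(PERSONA_FIELDS.map((f) => [f, {}]));
const skipped = { ...complete };
delete skipped["metrics"];
```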

R.04Automated validation

The validator is the point.

A Node-based validator runs as a permanent build step. Every metric card needs a value, a label, and a source. Every persona needs a needs array. Every nav category has to match one of the five verticals exactly. No duplicate IDs, no duplicate names across versions. When content drifts, the validator fails the build and nothing ships. Without it, the system is a pile of plausible text. With it, it's a data product.
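The validator's rules, as described, sketch into one pass over the persona set. This is a minimal illustration — the real validator, and the vertical names, are not shown here:

```javascript
// R.04 sketch: complete metric cards, a non-empty needs array, an exact
// vertical match, unique IDs. A non-empty error list fails the build.
function validatePersonas(personas, verticals) {
  const errors = [];
  const seen = new Set();
  for (const p of personas) {
    if (seen.has(p.id)) errors.push(`duplicate id: ${p.id}`);
    seen.add(p.id);
    if (!Array.isArray(p.needs) || p.needs.length === 0) {
      errors.push(`${p.id}: needs array missing`);
    }
    if (!verticals.includes(p.vertical)) {
      errors.push(`${p.id}: vertical mismatch`);
    }
    for (const m of p.metrics ?? []) {
      // Every metric card needs a value, a label, and a source.
      if (!m.value || !m.label || !m.source) {
        errors.push(`${p.id}: incomplete metric card`);
      }
    }
  }
  return errors; // non-empty = fail the build, nothing ships
}

// Hypothetical data: one clean persona, one that breaks three rules.
const ok = validatePersonas(
  [{ id: "p1", vertical: "energy", needs: ["x"],
     metrics: [{ value: "40%", label: "sample metric", source: "NIST 2025" }] }],
  ["energy"]
);
const bad = validatePersonas(
  [{ id: "p2", vertical: "retail", needs: [], metrics: [{ value: "9" }] }],
  ["energy"]
);
```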

R.05Product alignment

Dell SKUs, concretely.

Every persona's opportunities slide maps real needs to specific current-gen Dell SKUs. PowerEdge XE9712, XE9785L, IR7000 racks, PowerScale F710, PowerCool RCDU, CloudIQ, NativeEdge. No abstractions. If a persona would benefit from Dell's portfolio, the mapping has to be specific enough to act on.

Metrics slide showing 8 stat cards each with a named and dated source citation
FIG. 01 Every persona's metrics slide carries eight stat cards, each with a named and dated source. Twenty-five personas × eight sourced metrics = two hundred auditable data points.
04 What It Looks Like 250 slides

Ten slides per persona. Twenty-five personas. Two hundred and fifty slides of decision-grade detail.

Cluster-Scale AI Infrastructure Architect persona profile slide showing bio, statistics, primary responsibility, and an attributed pull quote
FIG. 02 One persona's profile slide. Bio, role, budget influence, team shape, and an attributed pull quote. The Architect shown here signs off on two to four billion dollars of annual GPU procurement.

The depth is deliberate. A persona that gets three bullets gets used once in a kickoff and forgotten. A persona with twelve months of regulatory context, a realistic day-in-the-life, named vendor ecosystem, rated frustrations, skills map, product opportunities, eight sourced metrics, and a three-card summary becomes reference material teams come back to during detailed design.

Design and product opportunities slide mapping persona needs to specific Dell SKUs including IR7000, XE9712, PowerScale, PowerCool, and CloudIQ
FIG. 03 Every persona's opportunities slide maps needs to specific Dell SKUs. PowerEdge XE9712. PowerCool. PowerScale F710. CloudIQ. No abstractions.
05 In Use 2 production uses

Two uses in production. The second is why this matters for AI design.

First use: a hypothesis shortcut in front of real research. When a new product area needs persona coverage we don't have, we start synthetic to scope the space. What used to take four to six weeks of literature review and expert interviews now takes a sitting. The real research that follows is sharper because it's testing specific claims instead of discovering them.

Second use: a guardrail for AI-generated design. On an unreleased Dell systems management product shipping later this year, the personas sit alongside the design system as a second input the AI layer has to satisfy. The design system governs visual and interaction grammar. The personas govern whether the output makes sense for anyone real.

06 Boundaries caveats

Synthesis is not a substitute for first-party research.

The value is in compressing time to a defensible starting point, and in making AI-generated design answer to something other than its own plausibility. The system is not a substitute for talking to customers. Anyone using it as a stand-in for first-party research is misusing the tool.

→ NEXT_CASE

Dell Research Library · shipped research platform.

Continue → 07/research-lib