Bench(k) for Prime Capital · The Evolution

From Debi's spreadsheet
to 869,889 plans
in zero keystrokes.

This is the story of how Prime Capital's plan benchmarking workflow — one of the most reliable client-facing tools in the practice — evolved from a manual Excel comparison to a zero-entry, filing-data-driven benchmark with AI expert analysis.
April 2026 · Prepared for Prime Capital leadership · Powered by the (k) Suite
Data Entry
Manual → Zero
Enter a company name. Everything else comes from the filing. No manual input needed for Section 1.
Plan Universe
869K plans
Every Form 5500 filing for plan year 2024. 48 fields per plan. Loaded client-side in seconds.
Benchmark Charts
15 charts
All computed from filing data at runtime. Peer comparisons by industry, size band, and state.
Time per Review
30m → 30s
Search by name → 15 charts render instantly → AI expert analysis generates in ~15 seconds.
The Evolution · Three Chapters

How we got here.

Each chapter preserved what worked and upgraded what was limiting. Chapter 3 came from an unexpected place — a data integrity audit that led to a fundamentally better architecture.

Chapter 1 · The Origin

Debi's Excel Spreadsheet

Plan_Industry_Review.xlsx · The foundation
  • 12 industries digitized from PLANSPONSOR's 2025 DC Plan Industry Report.
  • 5 KPIs + 15 plan design categories per industry — participation, deferral, match, vesting, auto-features, loans, Roth, QDIA.
  • Manual data entry: Debi types each client plan's figures into comparison columns.
  • Visual output: a clean benchmark grid the Prime advisor could walk a sponsor through.
  • Single source: PLANSPONSOR survey data only.
  • Manual synthesis: the advisor drew their own conclusions from the comparison.
~30–45 minutes per plan review, start to finish.
Chapter 2 · The Digitization

The v1 SPA

prime-plan-review.html · Prime navy & gold branded
  • Prime Capital branded report with navy & gold visual identity preserved.
  • 15 SVG donut charts rendered live as the advisor types.
  • Key-metric stack with your-plan-vs-peer side-by-side comparison.
  • Segmented buttons replaced Debi's spreadsheet dropdowns — faster input.
  • One-click PDF export for the plan sponsor meeting.
  • Still single-source: PLANSPONSOR only.
  • Still manual: every plan design field entered by hand.
~10–15 minutes — the interactive UI was faster, but entry was still manual.
Chapter 3 · The Pivot

Bench(k) v3

bench-k.pages.dev · Zero entry · 869,889 plans · AI analysis
  • Zero entry for Section 1 — search by company name, everything else comes from Form 5500 filings.
  • 869,889 plans loaded client-side. 48 fields per plan. Peer stats computed at runtime.
  • 15 filing benchmark charts: Plan Score, ER/EE contributions, account balances, plan age, net cash flow, contribution balance, auto-enrollment, QDIA, brokerage, compliance, match, asset growth, controlled group, pre-approved docs.
  • 3-layer peer groups: NAICS sector + size band (primary), industry + state (when n ≥ 50), national industry.
  • AI expert analysis: Claude Sonnet 4 writes a structured narrative — strengths, concerns, peer comparisons, recommendations. Every number from the actual data.
  • Name-first search with live results table — no one knows an EIN from memory.
~30 seconds — search, render, done. AI adds ~15s.

The pivot that made v3 better than v2 would have been

During v2 development, a data integrity audit revealed that some benchmark values had been pattern-matched rather than verified from the source spreadsheet. Rather than patch the problem, we made a fundamental architecture decision: stop asking the advisor to manually enter data that already exists in public filings. The Form 5500 dataset — 869,889 plans with 48 verified fields each — became the primary data source for Section 1. This made the product faster (zero entry), more defensible (DOL-attested data), and more scalable (peer groups computed from real filing populations, not survey samples). The PLANSPONSOR survey data remains available in Section 2 for plan design comparisons, using only values verified against the original spreadsheet.

Side-by-Side

What changed, axis by axis.

The "Before" column captures the state under Excel and v1. The "Now" column captures v3.

Axis | Before (Excel & v1) | Now (Bench(k) v3)
Data entry | Manual — every field typed by hand | Zero entry — search by name, filing data auto-loads
Primary data source | PLANSPONSOR survey (12 industries) | Form 5500 filings — 869,889 plans, DOL-attested
Plan lookup | Manual typing, every field | Name search → instant auto-fill from filing record
Peer group size | "Industry averages" (survey sample) | Actual filing populations — "10,656 Manufacturing plans with 11–25 participants"
Peer dimensions | Industry only | Industry + size band + state (3-layer, n ≥ 50 threshold)
Benchmark charts | 15 donut charts (PLANSPONSOR categories) | 15 filing benchmark charts — scores, contributions, growth, compliance, features
Scoring | No scoring system | Plan Rank(k) 1–10 with 4 decile components
AI analysis | Rule-based templates (if/else logic) | Claude Sonnet 4 — structured expert narrative from actual data
Compliance visibility | Not available | 6 compliance flags — late deposits, corrective distributions, bond coverage, prohibited transactions
State-level comparison | Not available | Automatic when ≥ 50 plans in NAICS + state combination
Data freshness | Annual manual update from new survey | Automatic — loads the current scored CSV on every session
Time per review | 30–45 min (Excel) / 10–15 min (v1) | ~30 seconds + ~15s AI (60× faster than Excel)
Enhancements · What Each One Does

Six material upgrades.

The jump from "digitized Excel" to "filing benchmark engine" happens along six specific axes. Each one was chosen because it either eliminates advisor effort or makes the benchmark more defensible.

01 · Zero Entry

Search by name, not EIN

Type a company name. A live results table shows matching plans from 869,889 filings — plan name, sponsor, location, assets, score. Click one. The benchmark renders instantly. No EIN lookup, no manual typing: under 3 seconds from first keystroke to a rendered benchmark.
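The name-first lookup can be sketched as a case-insensitive filter over the loaded plan records. This is an illustrative reconstruction, not the production code; the field names (`sponsor_name`, `plan_name`, `assets`) are assumptions.

```python
# Illustrative sketch of name-first plan search. Field names are
# hypothetical, not the actual Form 5500 schema.

def search_plans(plans, query, limit=25):
    """Case-insensitive substring match over sponsor and plan names."""
    q = query.strip().lower()
    if not q:
        return []
    hits = [
        p for p in plans
        if q in p["sponsor_name"].lower() or q in p["plan_name"].lower()
    ]
    # Largest plans first so the most recognizable match tops the list.
    hits.sort(key=lambda p: p["assets"], reverse=True)
    return hits[:limit]

plans = [
    {"sponsor_name": "Anello Corp", "plan_name": "Anello 401(k) Plan",
     "assets": 2_400_000},
    {"sponsor_name": "Summit Mfg", "plan_name": "Summit Retirement Plan",
     "assets": 9_100_000},
]
print([p["sponsor_name"] for p in search_plans(plans, "anello")])  # ['Anello Corp']
```

A linear scan over ~870K in-memory records is usually fast enough for as-you-type search; a prefix index is an option only if typing latency becomes noticeable.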

02 · Filing Data

DOL-attested, not survey-sampled

Every Form 5500 filing is legally attested by the plan administrator. This is census data, not sample data. When the report says "among 10,656 Manufacturing plans with 11–25 participants," that's the actual population — not an estimate from a survey of ~1,000 plan sponsors.

03 · 3-Layer Peers

Industry × Size × State

Primary peers: same NAICS sector + size band. State peers: same industry + state (shown when n ≥ 50 for statistical credibility). National industry: all plans in the sector regardless of size. The advisor sees which layer is driving each comparison and exactly how many plans are in the peer group.
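The three-layer fallback can be sketched as follows; the n ≥ 50 threshold is from the text, while the field names (`naics`, `size_band`, `state`) are hypothetical:

```python
# Sketch of the 3-layer peer grouping described above. The n >= 50
# state threshold comes from the text; field names are illustrative.

MIN_STATE_PEERS = 50

def peer_groups(plan, universe):
    """Return the peer layers for a plan: primary, optional state, national."""
    sector, band, state = plan["naics"], plan["size_band"], plan["state"]
    layers = {
        # Primary: same NAICS sector + same size band.
        "primary": [p for p in universe
                    if p["naics"] == sector and p["size_band"] == band],
        # National: all plans in the sector, any size.
        "national": [p for p in universe if p["naics"] == sector],
    }
    state_peers = [p for p in universe
                   if p["naics"] == sector and p["state"] == state]
    if len(state_peers) >= MIN_STATE_PEERS:  # shown only when credible
        layers["state"] = state_peers
    return layers
```

Each layer carries its own plan list, so the UI can label every comparison with the exact peer population behind it.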

04 · Plan Rank(k)

A composite score that means something

Every plan gets a 1–10 score from four weighted decile components: employer contributions (40%), participant contributions (30%), account balances (20%), plan age (10%). Computed within the plan's size band so a 15-person plan isn't penalized for having lower assets than a 5,000-person plan.
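The composite can be reproduced roughly as follows. The 40/30/20/10 weights and the within-size-band ranking are from the text; the exact decile formula is an assumption:

```python
# Rough reconstruction of the Plan Rank(k) composite: four decile ranks
# (1-10), computed against the plan's own size band, blended with the
# stated 40/30/20/10 weights. The decile formula itself is an assumption.

WEIGHTS = {"er_contrib": 0.40, "ee_contrib": 0.30,
           "balances": 0.20, "plan_age": 0.10}

def decile(value, band_values):
    """Rank a value 1-10 against its size-band peer distribution."""
    below = sum(1 for v in band_values if v <= value)
    return max(1, min(10, round(10 * below / len(band_values))))

def plan_rank_k(plan, band_peers):
    """Weighted blend of the four decile components."""
    return round(sum(w * decile(plan[f], [p[f] for p in band_peers])
                     for f, w in WEIGHTS.items()), 1)
```

Because each decile is taken within the size band, a 15-person plan is ranked against other small plans, which is what keeps it from being penalized for lower absolute assets.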

05 · AI Expert Analysis

A senior advisor's narrative, generated

Claude Sonnet 4 receives the full plan data + peer stats and writes a structured analysis: executive summary, strengths, areas of concern, peer comparison highlights, actionable recommendations. Every number in the narrative comes from the actual filing data — the model is explicitly instructed never to invent statistics.
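A hypothetical sketch of the prompt assembly: the section list mirrors the narrative structure described above, and the no-invented-statistics instruction is stated explicitly. The actual prompt used by Bench(k) is not shown in this document.

```python
# Hypothetical prompt builder. Section names mirror the narrative
# structure described in the text; everything else is illustrative.

SECTIONS = ["Executive summary", "Strengths", "Areas of concern",
            "Peer comparison highlights", "Recommendations"]

def build_analysis_prompt(plan, peer_stats):
    return "\n".join([
        "You are a senior retirement-plan advisor.",
        "Write a structured analysis with these sections: "
        + ", ".join(SECTIONS) + ".",
        "Use ONLY the figures provided below. Never invent statistics.",
        f"PLAN DATA: {plan}",
        f"PEER STATS: {peer_stats}",
    ])
```

Serializing the plan and peer figures inline like this is what makes every number in the generated narrative traceable back to input data.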

06 · Scale

From one plan to a portfolio

The Excel workflow topped out at 1 plan / 30 min. Bench(k) v3 runs at 1 plan / 30 sec — a 60× throughput improvement. For a Prime advisor with a book of 100 plans, that's the difference between a quarter-long project and a 50-minute afternoon session.

The New Workflow

What 30 seconds looks like.

The complete advisor flow from opening the app to reading the AI analysis. No manual data entry anywhere in the chain.

1

Open the app — CSV loads in background

Visit bench-k.pages.dev. The 299 MB Form 5500 dataset streams with live progress. NOX constellation spinner + animated status messages keep the advisor informed: "Loading 869,889 plans…" → "Preparing 15 benchmark charts…" → "AI analysis engine ready." ~25s · once per session
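The background load can be approximated as chunked streaming with a progress callback. A minimal sketch, with an in-memory stream standing in for the 299 MB HTTP download:

```python
# Minimal sketch of a chunked load with progress reporting. The real app
# streams the CSV over HTTP; an in-memory stream stands in here.
import io

def load_with_progress(stream, total_bytes, on_progress, chunk=64 * 1024):
    """Read a byte stream in chunks, reporting percent complete."""
    parts, read = [], 0
    while True:
        piece = stream.read(chunk)
        if not piece:
            break
        parts.append(piece)
        read += len(piece)
        on_progress(100 * read // total_bytes)
    return b"".join(parts)

data = b"x" * 200_000  # stand-in for the dataset
updates = []
load_with_progress(io.BytesIO(data), len(data), updates.append)
print(updates[-1])  # 100
```

The same callback can drive the rotating status messages, swapping text at fixed progress thresholds.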

2

Search by company name

Type "anello" or "summit" or any part of a company name. Live results appear in the sidebar with score badges, location, and assets. Click a plan. The identity card, 15 benchmark charts, and peer stats render instantly. <3s

3

AI analysis generates

Claude Sonnet 4 receives the full plan data and peer statistics. It writes a structured expert narrative — executive summary, ranked strengths and concerns, peer comparison highlights, actionable recommendations. ~15s

4

Review & export

The advisor scrolls through 15 benchmark charts and the AI narrative. The Help Center in the sidebar explains scoring methodology, data sources, and plan feature codes. Export produces a print-ready report. ~10s

What This Enables

Beyond speed.

Faster benchmarks are the visible improvement. The deeper change is what an advisor can now do that wasn't possible before.

Proactive Benchmarking

Every plan, every quarter.

At 30 seconds per plan, a Prime advisor can run their entire book in an afternoon. Quarterly benchmarking becomes a calendar event, not a project.

Prospecting Leverage

"Here's what your filing says."

Before a prospect meeting, search for their company name — public filing data — and walk in with a benchmark of their current plan vs. peers. Zero sponsor effort required. Prime advisors show up with data, not brochures.

Defensible Data

DOL-attested, not estimated.

Every peer comparison shows the exact peer count. Every number comes from legally attested Form 5500 filings. Plan committees and ERISA counsel can trace every data point to its source.