James Chang / Work / The Judge Tool / Under the hood

Static snapshot of the in-app /tech build teardown as of April 2026. Live version at thejudgetool.com/tech.


A full-stack KCBS judging platform, shipped in 20 days.

16,424 lines of code, 113 passing tests, 12 Prisma models, 62 auth-guarded server actions. Built methodically across 13 sessions with a transparent AI-assisted workflow.

By the numbers

  • 16,424 lines of code (8,280 client / 8,144 server)
  • 136 source files (.ts, .tsx)
  • 39 React components
  • 62 server actions, all auth-guarded
  • 12 Prisma models
  • 113 unit tests across 25 suites, all passing
  • 5 feature modules (domain-driven)
  • 1,561 lines of documentation (Diataxis framework)

AI development workflow

How it actually works

  • Terminal-only development. Every line goes through Claude Code (CLI). No IDE inline suggestions.
  • CLAUDE.md as context bridge. A 103-line file gives each new session the full project context — stack constraints, business rules, auth patterns, seed data. Eliminates "starting from zero every conversation."
  • Session = 1 conversation = 1 PR. Each session produces exactly one PR, ranging from focused fixes (PR #3: 15 files) to large feature ships (PR #1: 130 files).
  • Human directs, AI implements. The human made the architecture decisions (feature modules, security model, UX flows); the AI generated the implementation, tests, and documentation. Code review happened before merge, not after.

Architecture

Browser (Next.js 14 App Router)
│
├──▶ Server components ──▶ Prisma ──▶ Supabase PostgreSQL
│
├──▶ Server actions (62) ──▶ auth-guards.ts ──▶ requireRole(x)
│                                │
│                                └──▶ bcrypt (PIN), Zod (validation), transactions
│
├──▶ NextAuth.js v5 (JWT)
│    ├── Credentials: organizer password
│    ├── Credentials: judge PIN
│    └── Credentials: captain
│
├──▶ Client components (39)
│    ├── 11 shared common + 10 UI primitives (shadcn)
│    ├── next-themes dark/light
│    └── Mermaid.js for diagrams
│
└──▶ Rate limiter (in-memory sliding window, 5/15min)

Tests
├── Vitest (113 unit, 25 suites) ──▶ Pure-function tested
│                                     (scoring, distribution, validation)
└── E2E simulation script ──▶ 2,000+ assertions
                              seed → distribute → score 4 cats → tabulate → validate
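The guard-then-validate shape in the diagram can be sketched as below. The function names, session type, and the inline validator (standing in for the project's Zod schemas so the snippet has no dependencies) are illustrative assumptions, not the repo's actual API; only the flow — guard first, validate second, then touch data — comes from the diagram.

```typescript
type Role = "organizer" | "judge" | "captain";
interface Session { userId: string; role: Role }

// Hypothetical guard: throws unless the session holds the required role.
function requireRole(session: Session | null, role: Role): Session {
  if (!session || session.role !== role) throw new Error("Unauthorized");
  return session;
}

interface ScoreInput { entryId: string; appearance: number }

// Stand-in for a Zod schema: reject anything outside a 1-9 score range.
function parseScoreInput(raw: unknown): ScoreInput {
  const r = raw as Partial<ScoreInput>;
  if (typeof r?.entryId !== "string") throw new Error("entryId required");
  if (typeof r?.appearance !== "number" || r.appearance < 1 || r.appearance > 9) {
    throw new Error("appearance must be 1-9");
  }
  return { entryId: r.entryId, appearance: r.appearance };
}

// Every server action guards first, validates second, only then persists.
export function submitScore(session: Session | null, raw: unknown) {
  const s = requireRole(session, "judge");
  const input = parseScoreInput(raw);
  return { judgeId: s.userId, ...input };
}
```

Putting the guard on the first line of every action (rather than in middleware alone) is what makes "62 server actions, all auth-guarded" checkable in review.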

Tech stack

Frontend

Next.js 14.2 (App Router), React 18, TypeScript 5 (strict), Tailwind v3, shadcn/ui, Zustand, React Hook Form, Zod, next-themes, Lucide, Mermaid.js

Backend

Prisma 5, Supabase PostgreSQL, NextAuth.js v5 (JWT + 3 Credentials providers), bcryptjs, custom sliding-window rate limiter
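A sliding-window limiter of the kind listed above can be sketched in a few lines. Only the policy numbers (5 attempts per 15 minutes, in-memory) come from the writeup; the function name, key shape, and injectable clock are assumptions for illustration.

```typescript
const WINDOW_MS = 15 * 60 * 1000; // 15-minute sliding window
const MAX_ATTEMPTS = 5;

// key (e.g. IP or user id) -> timestamps of recent attempts
const attempts = new Map<string, number[]>();

export function isAllowed(key: string, now = Date.now()): boolean {
  // Keep only timestamps still inside the sliding window.
  const recent = (attempts.get(key) ?? []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_ATTEMPTS) {
    attempts.set(key, recent);
    return false; // over the limit: reject without recording the attempt
  }
  recent.push(now);
  attempts.set(key, recent);
  return true;
}
```

Unlike a fixed-window counter, the filter step means the limit always covers the trailing 15 minutes, so a burst straddling a window boundary can't double the budget. In-memory state also means limits reset on redeploy, an acceptable trade-off on a single-instance free tier.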

Testing

Vitest (113 tests across 25 suites), E2E simulation script (2,000+ assertions, found 3 bugs unit tests missed)

Infrastructure

Vercel (free tier, auto-deploy), GitHub Pages (static marketing), Supabase, tsx

Feature modules

5 domain modules following a consistent pattern: actions/, components/, types/, schemas/, store/, index.ts barrel.
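A hypothetical index.ts barrel for one such module (judging/ here) would mirror that directory list; the comments are illustrative, not taken from the repo.

```typescript
// judging/index.ts — single public entry point for the module
export * from "./actions";    // auth-guarded server actions
export * from "./components"; // React UI for this domain
export * from "./types";      // shared TypeScript types
export * from "./schemas";    // Zod validation schemas
export * from "./store";      // Zustand client state
```

The barrel means the rest of the app imports from "@/features/judging" only, so a module's internals can be reshuffled without touching call sites.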

competition

Lifecycle, category advancement, box distribution, judge/table management. 27 actions, 12 components.

judging

Judge setup flow (4 phases), score submission, comment cards. 16 actions, 13 components.

scoring

Table captain dashboard, score review, correction requests. 9 actions, 6 components.

tabulation

KCBS scoring engine, tiebreaking, winner declaration, audit. 7 actions, 7 components.
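The core tabulation step can be sketched as a pure function. The weighting multipliers below are the commonly published KCBS values (appearance 0.5600, taste 2.2972, tenderness 1.1428, summing to 4.0) and the low-score toss is the standard KCBS rule; the function names and data shapes are assumptions, not the engine's actual API.

```typescript
interface JudgeScore { appearance: number; taste: number; tenderness: number }

// Commonly published KCBS criterion weights.
const WEIGHTS = { appearance: 0.56, taste: 2.2972, tenderness: 1.1428 };

function weighted(s: JudgeScore): number {
  return s.appearance * WEIGHTS.appearance +
         s.taste * WEIGHTS.taste +
         s.tenderness * WEIGHTS.tenderness;
}

// Drop the single lowest weighted judge total, then sum the rest.
export function tabulateEntry(scores: JudgeScore[]): number {
  const totals = scores.map(weighted).sort((a, b) => a - b);
  return totals.slice(1).reduce((sum, t) => sum + t, 0);
}
```

Under these weights, six perfect 9-9-9 cards tabulate to 180 — the well-known KCBS perfect score — which makes a handy sanity check in tests.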

users

Judge import (single + bulk), search. 3 actions, 1 component.

Lessons learned

What worked

CLAUDE.md is critical. A 103-line cross-session context file eliminates re-briefing at the start of every session.

Pure functions for business logic. KCBS scoring, box distribution, validation all extracted and tested (113 passing tests).
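As a flavour of the extraction idea, box distribution reduces to a pure function like the round-robin deal below. The name and rules are illustrative (the real engine enforces more constraints), but the testing benefit is the same: no database, no mocks, just inputs and outputs.

```typescript
// Deal box IDs across tables round-robin so each table gets an even share.
export function distributeBoxes(boxIds: string[], tableCount: number): string[][] {
  const tables: string[][] = Array.from({ length: tableCount }, () => []);
  boxIds.forEach((id, i) => tables[i % tableCount].push(id));
  return tables;
}
```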

Feature module pattern. 5 modules with identical structure keeps the code navigable as it scales.

E2E simulation catches what unit tests miss. Found 3 scoring bugs that passed code review.

What was hard

Security is easier to build in than to bolt on. Auth guards should be in commit 1, not commit 3.

AI generates code faster than you can review it. Added automated verification (tests, simulation) instead of rushing reviews.

Don't build the whole app in one commit. PR #1 (130 files) made debugging nearly impossible. Incremental PRs are better.

Sketch navigation before coding. Avoided dead routes and rework.

Live screenshots

The public marketing site home page at thejudgetool.com.

Organizer dashboard — category advancement and live competition control.

Judge scoring interface — KCBS-compliant score entry (1, 2, 5–9) across the 4-phase flow.

Table captain view — score review and correction-request approvals.
