James Chang / Work / The Fantastic Leagues / Product metrics

Static snapshot of the in-app /analytics page (admin-only inside the app) as of April 2026. Public marketing site is at thefantasticleagues.com.

Fantastic Leagues · Product metrics · Last Updated

How I measure what I ship, and how the AI actually works.

Product analytics powered by PostHog. Eight AI-powered features built on Google Gemini (primary) and Anthropic Claude Sonnet (fallback). A velocity chart of every session from scaffold to production.

AI infrastructure

Two models, one purpose: produce league-context-aware fantasy baseball analysis that reads the owner's league, not a generic playbook.

8 AI-powered features
2 models (Gemini primary + Claude fallback)
60–90 s request timeout
~8 KB max output per call
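The primary/fallback arrangement can be sketched as a generic wrapper: try the primary model, and if it errors or exceeds the timeout, retry once against the fallback. This is a minimal sketch, not the app's actual code; the function names and the 75 s default are assumptions (chosen to sit inside the 60–90 s window above).

```typescript
// Hypothetical sketch of the Gemini-primary / Claude-fallback call path.
// Client names and the timeout default are assumptions, not the app's real API.

type ModelCall = (prompt: string) => Promise<string>;

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

async function generateInsight(
  prompt: string,
  primary: ModelCall,   // e.g. a Gemini client
  fallback: ModelCall,  // e.g. a Claude Sonnet client
  timeoutMs = 75_000,   // within the 60–90 s window described above
): Promise<string> {
  try {
    return await withTimeout(primary(prompt), timeoutMs);
  } catch {
    // Primary failed or timed out: one retry against the fallback model.
    return await withTimeout(fallback(prompt), timeoutMs);
  }
}
```

Any real client (Gemini or Claude SDK) would slot in as a `ModelCall`; the wrapper stays model-agnostic.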

The eight AI features

  • Draft grades — Post-auction team grade (A–F) with rationale across category strength, risk, upside, depth
  • Trade analyzer — Evaluates proposed trades for both sides, flags category gaps, surfaces fair-value read
  • Keeper recommender — Given salary inflation, ages, position scarcity, which keepers hold value?
  • Waiver advisor — Ranks available players against your roster holes, not against raw projection
  • Weekly insights — Per-team narrative of the week's moves and standings movement
  • Bid advisor — During live auction: suggested max bid based on remaining budget + positional needs
  • League digest — Weekly AI-generated recap with team grades and trade polls
  • Insights cache — Shared cross-league cache to avoid duplicate inference for identical prompts

How the cache works

League-context prompts are large, and regenerating them on every page view would be expensive and slow. Instead, the MCP MLB proxy caches MLB data in SQLite (shared across all leagues), and AI responses are cached per (team_id, feature_key, data_hash) tuple, so owners see near-instant responses on repeat views.

Development velocity

33 tracked sessions since January 2026, 292 items completed. The most productive block was Sessions 21–23 at 30 items (the code-review + auth push).

  • S1–2 — 3 items · Scaffolding
  • S3–6 — 8 items · Core features
  • S7–10 — 14 items · Auction, trades, security
  • S11–14 — 12 items · Archive, design
  • S15–17 — 16 items · Scripts, maintenance, franchise
  • S18–20 — 22 items · Season gating, testing
  • S21–23 — 30 items · Code review, auth, MCP — peak
  • S24–25 — 26 items · Live data, auction prep
  • S26–27 — 24 items · Stats, bid tracking, roadmap
  • S28–30 — 28 items · Auction UX, proxy, chat, sounds
  • S31 — 31 items · My Val, guides, code review
  • S32 — 25 items · AI, reliability, mobile, PWA
  • S33 — 13 items · Deploy, CSP, HSTS, retrospective

Average: 8.8 items per session · Peak: 31 items (S31) · Growth: +1,033% (Session 1–2 to Session 32)

Product metrics

Five tracking metrics live, two planned. Event schema is deliberately narrow so it stays useful instead of devolving into noise.

Tracking now

  • Pageviews — Page-level tracking with SPA-aware navigation events
  • User identity — Authenticated users identified by email for session continuity
  • Feature adoption — Auth flows, trade proposals, waiver claims, keeper saves, watchlist actions
  • Auction engagement — Nominate, bid, proxy bid, chat, init, finish, force assign, watchlist toggle, WebSocket reconnects
  • Error tracking — React error boundaries report crashes with component name and stack trace
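A deliberately narrow schema like the one above can be enforced at the type level, so a new event has to be added to a union rather than sprinkled in ad hoc. The event names and capture wrapper below are hypothetical, loosely matching the categories tracked above; PostHog's real client call is `posthog.capture(name, properties)`.

```typescript
// Hypothetical typed wrapper over analytics capture. Event names are
// illustrative; they are not the app's actual schema.

type AppEvent =
  | { name: "pageview"; path: string }
  | { name: "trade_proposed"; teamId: string }
  | { name: "waiver_claimed"; teamId: string; playerId: string }
  | { name: "auction_bid"; lotId: string; amount: number }
  | { name: "error_boundary"; component: string; stack: string };

type CaptureFn = (name: string, properties: Record<string, unknown>) => void;

// Only events in the AppEvent union compile; anything else is a type error,
// which keeps the schema narrow by construction.
function track(capture: CaptureFn, event: AppEvent): void {
  const { name, ...properties } = event;
  capture(name, properties);
}
```

In the app itself, `capture` would be bound to the PostHog client (e.g. `(n, p) => posthog.capture(n, p)`), while tests can pass a recording stub.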

Planned

  • Page performance — Load times, slow API calls, rendering bottlenecks
  • Click tracking — Granular interaction events: tab switches, modal opens, sort changes

Questions I'm trying to answer

The point of analytics is to answer specific product questions, not to accumulate dashboards. These four are the active ones.

Q1

What pages do owners visit most?

Hypothesis: Home (standings), Team roster, Auction, Activity feed.
Source: PostHog pageviews · Status: tracking live.

Q2

How engaged are owners during auction?

Measurement: WebSocket bid events, nomination rate, session duration. Draft Board log already tracks per-lot bid history.
Source: auction bid history + PostHog · Status: tracking live.

Q3

Are trades and waivers being used?

Measurement: proposal rate, vote response time, waiver claim frequency. Low adoption may indicate UX friction.
Source: transaction events + PostHog · Status: tracking live.

Q4

Which features need mobile optimization?

Measurement: PostHog viewport data shows device breakdown. Pages with high mobile traffic but poor engagement need responsive work.
Source: PostHog device properties · Status: collecting data.
