bad1dea / Audit Framework

Six pillars.
Zero compromise.

How bad1dea turns your idea into a forensic finding — and why the score means something.

Deterministic scoring
Live web data
Constraint-matched
5 validated frameworks
How the audit works

Evidence in. Verdict out.

Your answers drive a deterministic algorithm. Live data fills in what your answers can't.

STEP 01
You describe your idea + constraints
Name, one-liner, then the inputs that actually matter: your budget, available hours, technical skills, and team. These aren't optional — they're the foundation of the gap analysis. The audit scores your reality, not an ideal team's.
Constraint-matched scoring
STEP 02
You answer evidence questions across six pillars
Each pillar has a set of evidence-tiered questions. You pick the option that honestly reflects what you actually know — not what you hope is true. Unanswered pillars default to a conservative AI baseline so nothing is hidden.
Evidence-weighted
STEP 03
Live data fills what you can't know yet
For market and competition pillars, we run live web searches at audit time — pulling real competitor counts, funding signals, and market benchmarks from 2026. Not training data. Not estimates. Fetched live, every audit.
Live web intelligence
STEP 04
TypeScript calculates. Claude narrates.
A deterministic algorithm computes your composite score mathematically from your answers and live data. Claude writes the narrative only — it never decides a number. The score is reproducible: same inputs, same output, every time.
Deterministic algorithm
The six pillars

What we actually measure.

Six dimensions. Each one a common startup failure vector. Each one scored from your evidence — not your confidence.

01
Problem Clarity
Is the pain real, frequent, and articulated by customers unprompted — or only after you explain it?
Evidence gates
02
Market Size
Bottom-up segment sizing with live market data — not top-down TAM from a Statista report.
Live data augmented
03
Solution Fit
Does your solution directly destroy the pain — or is it a feature that found a problem to attach to?
Prototype required
04
Business Model
Real unit economics and pricing evidence — not your optimistic spreadsheet assumptions.
Economics first
05
Competition
Live competitor scan. Your moat is verified against funded alternatives — not assumed away.
Live scan
06
Execution
Your specific budget, timeline, and team scored against what this idea actually requires to ship.
Constraint-matched
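In TypeScript (the language the scoring engine is described as using), the six pillars could be sketched as a simple typed list. The field names here are illustrative assumptions, not the audit's actual schema:

```typescript
// Illustrative only: field names and summaries are assumptions, not the audit's schema.
interface Pillar {
  name: string;
  probes: string;     // the failure vector this pillar measures
  liveData: boolean;  // augmented by live web fetches at audit time
}

const PILLARS: Pillar[] = [
  { name: "Problem Clarity", probes: "Is the pain real, frequent, unprompted?",      liveData: false },
  { name: "Market Size",     probes: "Bottom-up segment sizing",                     liveData: true  },
  { name: "Solution Fit",    probes: "Does the solution directly destroy the pain?", liveData: false },
  { name: "Business Model",  probes: "Real unit economics and pricing evidence",     liveData: false },
  { name: "Competition",     probes: "Verified moat vs funded alternatives",         liveData: true  },
  { name: "Execution",       probes: "Your constraints vs what shipping requires",   liveData: false },
];
```

Per the copy above, only Market Size and Competition are live-data augmented; the other four score purely from your evidence answers.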
The questions are inside the audit.
Each pillar has a structured set of evidence-tiered questions. You'll see them when you run your first audit — free, no card.
Start free →
The scoring engine

How your score is calculated.

Deterministic math. Not AI opinion. The same inputs always produce the same score.

How scores are weighted
Six pillars. Equal weight.
No pillar can be hidden or skipped. Unanswered pillars use a conservative AI baseline — they don't disappear from the score.
What live data adds
Up to ±15 points per relevant pillar
Real-time web fetches adjust market and competition scores based on what actually exists in 2026 — not what existed in training data.
Why it's reproducible
TypeScript algorithm, not LLM judgment
Claude narrates the finding after the number is computed. It cannot change the score — only explain it.
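Taken together, the rules above (equal weight, a conservative baseline for unanswered pillars, live adjustments capped at ±15) could look something like this TypeScript sketch. The names and the baseline value are assumptions for illustration, not the product's code:

```typescript
// Hypothetical sketch: shapes and the baseline constant are assumed, not the real engine.
type PillarId =
  | "problemClarity" | "marketSize" | "solutionFit"
  | "businessModel" | "competition" | "execution";

interface PillarInput {
  score?: number;          // 0-100 from evidence answers; undefined = unanswered
  liveAdjustment?: number; // raw live-data signal, clamped to ±15 below
}

const AI_BASELINE = 40; // conservative default for unanswered pillars (assumed value)
const LIVE_CAP = 15;    // "up to ±15 points per relevant pillar"

const clamp = (n: number, lo: number, hi: number) =>
  Math.min(hi, Math.max(lo, n));

// Pure function: no randomness, no LLM call, so same inputs always
// yield the same composite score.
function compositeScore(inputs: Record<PillarId, PillarInput>): number {
  const pillars = Object.values(inputs);
  const total = pillars.reduce((sum, p) => {
    const base = p.score ?? AI_BASELINE; // unanswered pillars never disappear
    const live = clamp(p.liveAdjustment ?? 0, -LIVE_CAP, LIVE_CAP);
    return sum + clamp(base + live, 0, 100);
  }, 0);
  return Math.round(total / pillars.length); // six pillars, equal weight
}
```

The key property is that `compositeScore` is a pure function of its inputs: the narrative layer can describe the number, but nothing downstream can move it.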
Verdict thresholds
<30
Hard Stop
30–49
Do Not Build Yet
50–69
Conditional Go
70–84
Go
85+
Strong Go
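A minimal TypeScript sketch of the verdict mapping, assuming half-open boundaries between tiers (an assumption; the product's exact boundary handling isn't published):

```typescript
// Hypothetical mapping from composite score to verdict tier.
type Verdict =
  | "Hard Stop" | "Do Not Build Yet" | "Conditional Go" | "Go" | "Strong Go";

function verdict(score: number): Verdict {
  if (score < 30) return "Hard Stop";
  if (score < 50) return "Do Not Build Yet";
  if (score < 70) return "Conditional Go";
  if (score < 85) return "Go";
  return "Strong Go";
}
```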
Built on evidence

Methodology, not magic.

Five frameworks used by the world's best founders and investors — not invented in a prompt.

01 · 2013
The Mom Test
Rob Fitzpatrick
The definitive framework for customer discovery. Drives our Problem Clarity pillar — how to know if pain is real when everyone is too polite to tell you it isn't.
02 · 2016
Competing Against Luck
Clayton Christensen
Jobs to Be Done theory. The foundation of our Solution Fit scoring — does your product get hired to do a specific job, or does it solve a problem nobody actually has?
03 · 2016
7 Powers
Hamilton Helmer
The moat framework used by top-tier VCs. Directly informs our Competition pillar — what type of structural advantage your business actually has, if any.
04 · 2011
The Lean Startup
Eric Ries
Evidence reduces score uncertainty. Assumptions increase it. The build–measure–learn loop is baked into how we weight answered versus unanswered pillars.
05 · Ongoing
YC Evaluation Criteria
Y Combinator
The questions YC asks every batch, every year. Our Business Model and Execution pillars map directly to what gets ideas funded — or rejected — at the world's top accelerator.
Design principles

Why we built it this way.

Four decisions that make the audit different from every other validation tool.

Deterministic, not generative
The score is computed by an algorithm, not decided by AI. Claude writes narrative around a number that has already been calculated — and cannot change it. Same inputs, same output, every time.
Live data, not training memory
Market and competition scores are augmented by web fetches at audit time. We pull real competitor counts and funding signals — not patterns from 2022 training data that may no longer reflect reality.
Your constraints, not the ideal case
Your budget, hours, and team are loaded into the execution scoring. An idea that works for a funded team of five may be a hard stop for a solo founder with €3k/month. We score your situation, not the best-case scenario.
No cheerleading. Ever.
The tool is a forensic analyst, not a coach. It does not soften hard stops. It does not tell you your idea is exciting. The gap analysis names your specific gaps and says exactly what would need to change for the verdict to shift.

Put your idea
through the test.

Free. No card. No encouragement. Just the verdict your idea deserves.