// AI PROMPT GRADER

Stop writing weak prompts.
Score yours in 10 seconds.

Paste your ChatGPT, Claude, or Gemini prompt. Get a 0-100 score across 6 dimensions plus 5 concrete upgrades. Heuristics-based, no black-box AI lag, runs client-side.

Paste your prompt
The full prompt you'd send to ChatGPT, Claude, or Gemini
Use case
Target model
// Paste your prompt above to grade it
Pro · $7 one-time
AI-rewritten upgraded prompt + 3 expert variants
Grader tells you what's broken. Pro gives you the fix — your prompt rebuilt by Claude with all 6 dimensions maxed, plus 3 stylistic variants (concise / verbose / chain-of-thought). One-click copy. No subscription.
// $7 one-time · early-access list opens this week
// HOW IT WORKS
01 / PASTE
Drop in your prompt
Whatever you'd send to ChatGPT, Claude, or Gemini — the full thing, system prompt or user prompt.
02 / SCORE
Grade across 6 dimensions
Specificity, Context, Role, Output Format, Constraints, Iteration Hooks. Each is scored 0-10, then combined into a 0-100 total with a grade label.
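The combination step above can be sketched as follows. This is an illustrative assumption of how six 0-10 dimension scores roll up into a 0-100 grade, not the grader's actual implementation; the function and label names are hypothetical.

```typescript
// Hypothetical sketch: six 0-10 dimension scores -> 0-100 total + grade label.
type DimensionScores = {
  specificity: number;
  context: number;
  role: number;
  outputFormat: number;
  constraints: number;
  iterationHooks: number;
};

type Grade = { score: number; label: string };

function combineScores(dims: DimensionScores): Grade {
  // Clamp each dimension to 0-10, sum to 0-60, then scale to 0-100.
  const raw = Object.values(dims).reduce(
    (sum, v) => sum + Math.min(10, Math.max(0, v)),
    0
  );
  const score = Math.round((raw / 60) * 100);
  // Assumed grade bands; the real cutoffs are not published here.
  const label =
    score >= 85 ? "Excellent" :
    score >= 70 ? "Strong" :
    score >= 50 ? "Workable" :
    "Weak";
  return { score, label };
}
```

For example, scores of 8/7/6/9/5/4 sum to 39 of 60, which scales to 65 out of 100.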
03 / UPGRADE
Get 5 concrete edits
No vague advice. Each upgrade names the missing element and gives you copy-pasteable example text to add.
// FAQ
Why 6 dimensions?
These are the load-bearing variables that recur across published prompt-engineering guidance (Anthropic, OpenAI, DeepMind). Specificity and Output Format alone account for roughly 60% of the variance in response quality; the other four catch the long tail.
Is my prompt sent to a server?
No. The grader runs 100% in your browser — your prompt never leaves the page. The only network call is if you opt in to the email cheatsheet.
How accurate is the score?
It's a heuristic, not an oracle. It reliably catches the high-frequency mistakes (no role frame, no output format, no constraints) — which is where 80% of weak prompts fail. It will not catch domain nuance ("you used the wrong example for legal writing"). Pair with judgment.
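The high-frequency checks named above (no role frame, no output format, no constraints) can be sketched as simple pattern matches. The patterns and fix strings below are illustrative assumptions, not the grader's actual heuristics.

```typescript
// Hypothetical sketch of pattern-based checks for three common gaps.
type Check = { name: string; passed: boolean; fix: string };

function runChecks(prompt: string): Check[] {
  const lower = prompt.toLowerCase();
  return [
    {
      name: "Role frame",
      // Assumed signal phrases for a role assignment.
      passed: /\b(you are|act as|as an?)\b/.test(lower),
      fix: 'Add a role, e.g. "You are a senior technical editor."',
    },
    {
      name: "Output format",
      // Assumed keywords that indicate an explicit output shape.
      passed: /\b(format|json|markdown|bullet|table|list|numbered)\b/.test(lower),
      fix: 'Specify the output shape, e.g. "Answer as a numbered list."',
    },
    {
      name: "Constraints",
      // Assumed keywords that indicate limits on the response.
      passed: /\b(must|only|never|avoid|limit|at most|no more than|under)\b/.test(lower),
      fix: 'Add limits, e.g. "Keep it under 200 words; avoid jargon."',
    },
  ];
}
```

A bare prompt like "Summarize this article." fails all three checks, while one that names a role, an output shape, and a length limit passes them; this is exactly the class of mistake a keyword heuristic can catch, and the domain-nuance errors it cannot.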
Does it differ by model?
The core 6 dimensions are model-agnostic. The target-model selector adjusts the upgrade examples — Claude responds better to detailed XML-tagged context, ChatGPT to numbered steps, Gemini to explicit length targets. Same skeleton, different scaffolding.
What does Pro do that the grader doesn't?
The grader tells you what's missing. Pro takes your prompt and rewrites it — full version optimized across all 6 dimensions, plus 3 alternate styles (concise, verbose, chain-of-thought). $7 one-time, no subscription.
Can I score system prompts?
Yes. Paste either user prompts or system prompts — the grader treats both as "the instructions you're giving the model." For agent system prompts, expect lower Iteration Hooks scores unless you've coded explicit branching.

100% client-side — your prompt never leaves your browser. Heuristics encode published prompt-engineering patterns; not a substitute for testing on real outputs.
Built by MoneySmith · Powered by OPAI.