BoardSnap is an iOS app that converts whiteboard photos into clean summaries and action items in about ten seconds. This pricing experiment template structures the thinking — hypothesis, variants, metrics, and decision criteria — before a single price tag changes.
Use this template whenever you're considering a price change, a new tier, a bundle, or a promotional strategy. It's also useful when you have qualitative signal that pricing is the conversion bottleneck but need to turn that into a testable hypothesis before touching the checkout flow.
Run it as a 45–60 minute whiteboard session with the people who own product, revenue, and data. The session's job is to leave with one test to run — not a committee decision about the 'right' price.
Write the single-sentence hypothesis: 'If we [change], we expect [metric] to [direction] by [magnitude] because [reason].' If you can't write this in one sentence, you're not ready to run the experiment.
Control (current state) on the left, each variant on the right. For each variant: the price point or packaging change, the expected conversion impact, and the expected revenue per user impact. Keep it to three variants maximum — more splits your sample and extends your timeline.
Primary metric: the one number that decides the winner. Secondary metrics: signals you'll watch but won't let override the primary. Guard rails: metrics that, if they move badly, kill the test regardless of the primary result.
Write the minimum detectable effect you care about and the sample size required to detect it at 95% confidence (pick a power level too; 80% is the common default). Then divide the total required sample across all variants by your daily traffic to get the duration in days. If the duration is longer than 30 days, consider whether the test is worth running.
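The arithmetic in this section can be sketched in a few lines. This is a minimal sketch using the standard two-proportion normal approximation at 95% confidence and 80% power; the baseline rate, MDE, and daily traffic below are hypothetical placeholders, not BoardSnap numbers.

```python
from math import sqrt, ceil

def sample_size_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Two-proportion sample size, normal approximation.
    z_alpha=1.96 -> 95% confidence; z_beta=0.84 -> 80% power."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: 4% baseline conversion, MDE of +1 point (to 5%)
n = sample_size_per_arm(0.04, 0.05)   # visitors needed per arm
arms = 2                              # control + one variant
daily_visitors = 800                  # hypothetical pricing-page traffic
duration_days = ceil(n * arms / daily_visitors)
```

With these placeholder numbers the test needs roughly 6,700 visitors per arm and finishes in under three weeks, comfortably inside the 30-day cutoff. Swap in your own baseline and traffic before trusting the result.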
Decide now: what result ships the variant? What result reverts? What result triggers a new experiment? Write the thresholds explicitly. Post-hoc rationalization of ambiguous results is how companies accidentally cut revenue.
Start with the observed problem — conversion rate at a specific step, churn tied to billing events, or pricing objections in sales calls. The hypothesis should follow from evidence, not instinct.
One sentence, on the board, at the top. Everyone in the room must agree it's testable. If someone says 'we'll know it worked if people feel better about the price,' rewrite it with a measurable outcome.
Draw three columns: Control, Variant A, Variant B. Fill in the pricing change for each variant. Write the expected conversion rate and expected ARPU next to each. Make the math visible.
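Making the math visible means computing revenue per visitor for each column, and for a price increase, the break-even conversion rate the variant must hold. A minimal sketch with hypothetical prices and rates (none of these are real BoardSnap figures):

```python
def revenue_per_visitor(price, conversion):
    """Expected revenue per pricing-page visitor for one variant."""
    return price * conversion

def breakeven_conversion(old_price, old_conv, new_price):
    """Conversion rate a higher-priced variant must hold to match control."""
    return old_price * old_conv / new_price

# Hypothetical board: control at $9.99 converting 4%, variant at $12.99
rpv_control = revenue_per_visitor(9.99, 0.04)
floor = breakeven_conversion(9.99, 0.04, 12.99)  # ~3.08% break-even
```

Writing the break-even floor next to each variant makes the trade-off concrete: in this sketch, a $12.99 price wins as long as conversion stays above roughly 3.1%.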
Circle the primary metric. Draw a red box around your guard rails — refund rate, churn in the first 30 days, support ticket volume. If any guard rail moves more than X%, the test stops.
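The stop rule can be scripted once thresholds are chosen, so nobody relitigates it mid-test. The metric names and limits below are illustrative assumptions, not recommended values:

```python
# Hypothetical guard-rail thresholds: max allowed relative increase, in %.
# Set your own on the board before the test starts.
GUARD_RAILS = {
    "refund_rate_delta_pct": 20.0,
    "day30_churn_delta_pct": 10.0,
    "support_tickets_delta_pct": 25.0,
}

def should_stop(observed):
    """True if any observed guard-rail metric moved past its threshold."""
    return any(observed.get(k, 0.0) > limit for k, limit in GUARD_RAILS.items())
```

For example, `should_stop({"refund_rate_delta_pct": 32.0})` returns True because the refund-rate move exceeds the 20% limit, while a 5% move would let the test continue.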
Write out the sample-size formula, or plug your MDE into a standard calculator. Divide the total required sample by daily unique visitors to the pricing page. Write the end date on the board. Shake on it.
BoardSnap reads every section — hypothesis, variants, metrics, duration, decision criteria — and produces a clean summary you can share with the team. The action items include the setup tasks: instrumentation, segmentation, and review date.
Pricing experiments live and die by alignment. When everyone involved sees the hypothesis and decision criteria written on the same board at the same time, there's no ambiguity about what you decided. The whiteboard is the contract.
BoardSnap turns that contract into a shareable document before anyone leaves the room. The summary lands in Slack or Notion, attributed to the session — not to whoever wrote the Confluence doc two days later and quietly changed the decision criteria.
Yes. The hypothesis section handles any pricing-related change — free trial length, paywall placement, tier limit, price point. The variants and metrics sections stay the same regardless of the specific change you're testing.
Write that constraint on the board — it's part of the experiment design. Low-traffic alternatives include: qualitative price testing (user interviews with price anchors), willingness-to-pay surveys, or a sequential test rather than a concurrent split. BoardSnap will capture whichever approach you decide on.
Yes. BoardSnap AI reads tabular layouts and preserves the row/column structure in the summary. A variants grid with three columns will be summarized as three distinct items with their associated attributes.
The free tier covers one project and 30 boards. Pro is $9.99/month or $69.99/year for unlimited boards and AI chat that lets you drill into specific sections of any board you've snapped.
Generalize beyond pricing — structure any experiment with hypothesis, variants, and success metrics.
Anchor your pricing decisions to the metric that actually drives long-term value.
Benchmark your pricing against the market before deciding what to test.
No exporting, no transcription. Snap the board, get the action plan.