How BoardSnap works: Snap. Analyze. Execute.
BoardSnap converts a whiteboard photo into a structured summary and action plan in three stages: on-device VisionKit scanning, AI analysis via BoardSnap AI, and tri-state task execution — all in under ten seconds.
Step 1: Snap
Open BoardSnap and point your iPhone at the whiteboard. Apple VisionKit — the same framework Apple uses in the native document scanner — detects the whiteboard's edges in real time. A yellow overlay appears around the board.
Hold steady. VisionKit computes the perspective transformation needed to produce a straight-on, flat image of the board — correcting for the angle you're standing at, the distance from the wall, and the tilt of the camera.
Tap the shutter. BoardSnap captures the corrected image in the same instant, with no post-processing wait. The board appears flat and clear, as if the camera were directly in front of it.
This step runs entirely on-device. The live camera feed never leaves your iPhone.
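BoardSnap's implementation isn't public, but the on-device capture described above maps directly onto VisionKit's document camera, which handles edge detection, the live overlay, and the perspective correction internally. A minimal sketch:

```swift
import UIKit
import VisionKit

// Minimal sketch: present VisionKit's document camera and receive the
// perspective-corrected image. Edge detection, the live overlay, and the
// flattening transform all run on-device inside the framework.
final class SnapViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    func startSnap() {
        let camera = VNDocumentCameraViewController()
        camera.delegate = self
        present(camera, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        // The scan already contains the flattened, straight-on image.
        let corrected: UIImage = scan.imageOfPage(at: 0)
        controller.dismiss(animated: true)
        handleCorrectedImage(corrected)   // hand off to OCR (Step 2)
    }

    func handleCorrectedImage(_ image: UIImage) { /* Step 2 entry point */ }
}
```

Because the framework returns the corrected image in the delegate callback, there is no separate post-processing step for the app to run.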
Step 2: Analyze
After the snap, two things happen in parallel:
On-device OCR. Apple's Vision framework runs optical character recognition on the corrected image, accelerated by the Neural Engine. It identifies every character and word, along with positional data: where each word sits on the board in relation to every other word.
AI analysis. The recognized text with positional metadata goes to BoardSnap AI. The model interprets structure: it identifies headings, lists, tasks, decisions, relationships, and open questions. It reads arrows as relationships, circles as emphasis, and crossed-out text as rejected ideas.
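The on-device OCR step above can be sketched with Apple's Vision framework; BoardSnap's actual pipeline isn't public, so treat this as an illustration of text recognition with positional data:

```swift
import Vision
import UIKit

// Minimal sketch: on-device text recognition with positional data.
// Each observation carries a normalized boundingBox — the "where each
// word sits" signal the analysis step relies on.
func recognizeText(in image: UIImage, completion: @escaping ([(String, CGRect)]) -> Void) {
    guard let cgImage = image.cgImage else { return completion([]) }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        let words = observations.compactMap { obs -> (String, CGRect)? in
            guard let top = obs.topCandidates(1).first else { return nil }
            return (top.string, obs.boundingBox)  // boundingBox is normalized (0...1)
        }
        completion(words)
    }
    request.recognitionLevel = .accurate  // favor accuracy over speed

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The bounding boxes are what let the AI distinguish a heading at the top of the board from a list item indented beneath it.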
The analysis produces:
- Summary paragraph — what the session was about and what was decided
- Key decisions — explicit calls made during the session
- Action items — extracted tasks with tri-state status and subtasks
- Open questions — unresolved threads
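The four outputs above map naturally onto a small data model. A hypothetical Swift sketch (the type and field names are illustrative, not BoardSnap's actual schema):

```swift
import Foundation

// Hypothetical model of the analysis output — names are illustrative,
// not BoardSnap's actual schema.
enum TaskState: String, Codable {
    case open
    case inProgress = "in-progress"
    case done
}

struct ActionItem: Codable {
    var title: String
    var state: TaskState
    var subtasks: [ActionItem]   // discrete steps expanded from board context
}

struct BoardAnalysis: Codable {
    var summary: String          // what the session was about, what was decided
    var keyDecisions: [String]   // explicit calls made during the session
    var actionItems: [ActionItem]
    var openQuestions: [String]  // unresolved threads
}
```

Making subtasks the same type as action items is one way to let each step carry its own tri-state status.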
This step takes a few seconds with a reliable connection.
Step 3: Execute
The summary and action items appear in your BoardSnap Project.
Action items have three states: open, in-progress, and done. Boards often capture work that's already in flight — tri-state tasks reflect that reality instead of resetting everything to "not done."
Subtasks expand each action item into discrete steps. "Finalize the proposal" might generate subtasks: draft, review, get sign-off, send. These subtasks are generated from the board's surrounding context.
Everything is editable inline: rename tasks, add subtasks, change states, reorder the list. The AI generates a starting point — you own the output.
Share the summary by copying it as clean text or Markdown. Paste into Slack, Notion, Linear, Confluence, or anywhere your team works.
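As an illustration of what a Markdown export of tri-state tasks could look like (the types and function are hypothetical, not BoardSnap's code):

```swift
import Foundation

// Illustrative sketch of a Markdown export for tri-state tasks.
struct ExportTask {
    var title: String
    var state: String   // "open" | "in-progress" | "done"
}

func exportMarkdown(summary: String, tasks: [ExportTask]) -> String {
    var lines = ["## Summary", summary, "", "## Action items"]
    for task in tasks {
        let box = task.state == "done" ? "[x]" : "[ ]"
        let suffix = task.state == "in-progress" ? " _(in progress)_" : ""
        lines.append("- \(box) \(task.title)\(suffix)")
    }
    return lines.joined(separator: "\n")
}
```

Plain checkbox lists like this paste cleanly into Slack, Notion, and Linear, which all render task-list Markdown.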
Projects: what makes the three steps better over time
Each board you snap joins a Project. Projects are the memory layer that makes BoardSnap more useful than a one-off snapshot tool.
Brand voice. Paste your website URL when you create a Project. BoardSnap AI reads the site and applies your terminology, product language, and tone to every summary in that Project.
Pinned context. Add standing notes to a Project — sprint goals, key constraints, team agreements — and they persist in every AI chat session and inform every summary. No re-explaining.
Board history. Every board in a Project is searchable. The AI chat lets you ask questions across all boards: "what did we decide about the auth strategy" or "which action items from last week are still open."
What if there's no signal?
The snap and VisionKit processing run on-device — no network required. The corrected image stores locally.
When BoardSnap detects no connection, it queues the board for upload. The moment the device connects — Wi-Fi, cellular, or hotspot — the queue flushes. The board uploads, analyzes, and appears in the Project automatically.
The offline queue is transparent: you snap the board the same way regardless of signal status. No settings to toggle. No manual upload. It works.
- Snap without signal — VisionKit and image capture are fully on-device
- Offline queue flushes automatically when connection restores
- No manual upload required — the sync is invisible
- No lost boards — the queue persists across app restarts
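The queue-and-flush behavior described above is the standard pattern Apple's Network framework enables with `NWPathMonitor`. A minimal sketch, with illustrative names (BoardSnap's internals aren't public):

```swift
import Foundation
import Network

// Minimal sketch of an offline upload queue: boards snapped without signal
// are stored locally and flushed when connectivity returns.
final class UploadQueue {
    private let monitor = NWPathMonitor()
    private var pendingURLs: [URL] = []   // corrected images stored on-device

    init() {
        monitor.pathUpdateHandler = { [weak self] path in
            // Fires for any usable connection: Wi-Fi, cellular, or hotspot.
            if path.status == .satisfied { self?.flush() }
        }
        monitor.start(queue: DispatchQueue(label: "boardsnap.uploadqueue"))
    }

    func enqueue(_ localImageURL: URL) {
        pendingURLs.append(localImageURL)
        // Persisting this list to disk is what keeps the queue
        // across app restarts.
    }

    private func flush() {
        while let url = pendingURLs.first {
            upload(url)                    // send to the analysis backend
            pendingURLs.removeFirst()
        }
    }

    private func upload(_ url: URL) { /* URLSession upload task */ }
}
```

Because the path monitor fires the moment the network path becomes satisfied, no user action or setting is needed to trigger the sync.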
Frequently asked
How long does the full snap-to-summary process take?
Under ten seconds for most boards with a reliable Wi-Fi or cellular connection. The on-device step (VisionKit) is effectively instantaneous. The AI analysis step takes three to seven seconds depending on board complexity and network speed.
Do I need an account to use BoardSnap?
Yes. An account is required to store boards and summaries. You can sign up within the app using Apple Sign-In, Google, or email. The free tier activates immediately after sign-up.
Can I re-analyze a board after adding more context to a Project?
Yes. After adding brand context or pinned notes to a Project, you can regenerate the summary for any board within that Project. The new analysis incorporates the updated context.
What if the AI misidentifies something as a task?
Delete it from the action items list. Everything is editable. The AI generates a first pass — you curate the final result.
Does the three-step flow work the same for every type of board?
Yes. The snap and analysis steps are the same for standups, retros, brainstorms, architecture diagrams, and workshop boards. The AI adapts its output structure to the content — a retro board produces categorized reflection items; an architecture sketch produces a system description plus open questions.
See it in under a minute.
Download BoardSnap. Snap any board in your office or home and the full flow runs automatically.