Field Notes · 2026-04-26 · 6 min read

ChatGPT isn't built for whiteboards

You can send a whiteboard photo to ChatGPT. I did it 20 times with real boards. The results were consistently worse than BoardSnap's in specific, predictable ways. Here's the comparison.

I want to be fair in this comparison. ChatGPT is an excellent general-purpose AI. It's genuinely impressive at an enormous range of tasks. This isn't a hit piece.

But "general-purpose" is the key word. For the specific task of turning a whiteboard into an action plan, general-purpose is not good enough. Here's why.

### The test

I ran 20 whiteboard photos through both ChatGPT (Vision) and BoardSnap and compared the outputs on four dimensions:

  1. OCR accuracy — did it read the text correctly?
  2. Action item extraction — did it identify the right items as action items?
  3. Structure — was the output in a usable format?
  4. Context — did it understand the organizational context of the board?

### OCR accuracy

ChatGPT's OCR on whiteboard photos was surprisingly variable. On high-quality, well-lit photos of printed-looking text: excellent. On photos with handwriting, multiple marker colors, arrows, or spatial layouts: noticeably worse.

BoardSnap has two advantages here. First, VisionKit preprocesses the image before the AI sees it — perspective correction and normalization happen before any vision model runs. The cleaned-up image is easier to read. Second, BoardSnap's prompting is specifically tuned for whiteboard content, not for photographs in general.
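VisionKit's implementation isn't public, but the perspective-correction step it performs can be illustrated with the underlying math: fit a homography that maps the board's four detected corners onto an upright rectangle, then warp every pixel through it. A minimal pure-Python sketch (the corner coordinates are invented for illustration):

```python
# Sketch of perspective correction: fit a 3x3 homography H mapping four
# detected board corners to an upright rectangle, then warp points.

def solve_linear(a, b):
    """Solve a·x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_homography(src, dst):
    """Return H (flattened, with h8 fixed to 1) mapping src corners to dst."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u·(h6x + h7y + 1) = h0x + h1y + h2, and the same pattern for v
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve_linear(a, b) + [1.0]

def warp(h, point):
    """Apply the homography to one (x, y) point."""
    x, y = point
    d = h[6] * x + h[7] * y + h[8]
    return ((h[0] * x + h[1] * y + h[2]) / d,
            (h[3] * x + h[4] * y + h[5]) / d)

# Invented example: a skewed board photo straightened to 400x300.
corners = [(42, 55), (610, 80), (590, 430), (30, 400)]  # detected in photo
target = [(0, 0), (400, 0), (400, 300), (0, 300)]       # upright rectangle
H = fit_homography(corners, target)
```

After this step every point on the board sits where it would in a straight-on photo, which is why downstream OCR gets easier.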

On my 20-board test set: BoardSnap correctly read an item that ChatGPT missed in 9 of 20 cases. ChatGPT never read an item that BoardSnap missed.

### Action item extraction

This is where the gap is most significant.

ChatGPT produces a summary of the whiteboard. Sometimes it identifies action items. But it doesn't have a consistent definition of what makes something an action item. It might identify the key decision points, the open questions, the tasks, and the process items all as "action items" — or it might identify none of them explicitly.

When I asked ChatGPT explicitly to extract action items, it did — but in a flat list with no ownership, no state model, no distinction between explicit tasks and implied next steps.

BoardSnap's Action Item Extractor has a specific, consistent definition: action items are tasks with a responsible party (or flagged as unowned), a verb-first formulation, and an initial state. The output is immediately usable as a task list.
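That definition pins down what each extracted record has to carry. A minimal sketch in Python (the field names and state labels are my own, not BoardSnap's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical action-item record matching the definition in the text:
# verb-first text, a responsible party (or an explicit unowned flag),
# and an initial state. Names are illustrative, not BoardSnap's schema.
STATES = ("open", "in_progress", "done")  # assumed tri-state model

@dataclass
class ActionItem:
    text: str                    # verb-first, e.g. "Draft the Q3 brief"
    owner: Optional[str] = None  # None means flagged as unowned
    state: str = "open"          # every extracted item starts here

    @property
    def unowned(self) -> bool:
        return self.owner is None

item = ActionItem(text="Draft the Q3 brief", owner="Sam")
orphan = ActionItem(text="Review vendor contract")  # flagged, not dropped
```

The point of the explicit `unowned` flag is that an item without an owner is surfaced as a gap rather than silently merged into prose, which is what makes the list immediately usable as tasks.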

On my test set, BoardSnap's action item extraction was rated "immediately usable" by an independent reviewer in 18 of 20 cases. ChatGPT's output was rated "immediately usable" in 7 of 20 — meaning most cases required manual editing before the list was useful.

### Structure

ChatGPT's default output is prose with optional markdown formatting. If you ask it to produce structured output, it does. But each prompt is a fresh conversation — there's no consistent format across sessions.

BoardSnap always produces the same structure: summary (prose), action items (tri-state list with owners where marked), flagged ambiguities. This consistency is a feature for teams that are building a workflow around the tool.
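In practice, "the same structure every time" means a fixed set of top-level fields. Here is what one result could look like, sketched as JSON-shaped Python (the keys and sample content are my guess at the shape, not BoardSnap's published format):

```python
import json

# Hypothetical fixed-shape result: the three sections named in the text,
# always present, always in the same form. Keys and data are illustrative.
result = {
    "summary": "Retro covered Q2 launch delays; team agreed on two fixes.",
    "action_items": [
        {"text": "Draft the Q3 brief", "owner": "Sam", "state": "open"},
        {"text": "Review vendor contract", "owner": None, "state": "open"},
    ],
    "ambiguities": ["Arrow from 'budget' box to 'hiring' box is unclear"],
}

print(json.dumps(result, indent=2))
```

A fixed shape like this is what lets a team pipe every board into the same downstream tooling without per-session prompt wrangling.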

### Context

ChatGPT knows nothing about the organization that held the meeting. Every whiteboard is assessed in a vacuum. The summary will refer to "the team" in a generic way, and the action items won't reflect any organizational vocabulary or priority framework.

BoardSnap's brand-aware context and pinned notes address this directly. A board from a consulting firm's client workshop gets summarized in the consultant's voice, using the client's terminology. A board from a product team's retro gets summarized in the context of whatever OKRs are pinned in the project.

### What ChatGPT is better at

Honesty requires this section.

ChatGPT is better at answering questions about the whiteboard content in a conversational way — following up, exploring implications, asking clarifying questions. The chat layer in BoardSnap is scoped to a Project, which is good for persistent context but limits the range of exploration.

For one-off boards where you need quick exploration rather than structured output, ChatGPT's flexibility is genuinely useful.

ChatGPT is also better at boards with dense text — documents, slides, walls of notes — where the output doesn't need to be an action plan.

### When to use which

Use BoardSnap when you need: structured action items, a consistent format across multiple boards, brand-aware summaries, persistent memory across sessions.

Use ChatGPT when you need: flexible exploration of a one-off document, conversational follow-up on a complex topic, analysis of non-whiteboard content.

For the specific workflow of "meeting just ended, whiteboard full of notes, need action plan": BoardSnap is built for this. ChatGPT is not.

Snap your first board today.

See the workflow this post talks about — free on the App Store.

Free · 1 project, 30 boards
Pro · $9.99/mo · everything unlimited
Pro · $69.99/yr · save 42%