Now on the App Store

Whiteboard OCR that actually understands the board.

BoardSnap uses on-device OCR via Apple VisionKit to read a whiteboard's text and handwriting, then passes the result to BoardSnap AI to interpret structure — diagrams, arrows, annotations — and produce a useful summary with action items, not a raw text dump.

Download on the App Store. Free to start; Pro from $9.99/mo or $69.99/yr.

OCR isn't the hard part

Basic OCR — character recognition from an image — has been solved for decades. The hard part is what you do with the text once you have it.

A whiteboard isn't a document. Text on a whiteboard has spatial meaning. A word inside a circle means something different from the same word written in a list. An arrow between two items asserts a relationship. A crossed-out line records a rejected idea.

Plain OCR flattens all of that. You get words in approximate reading order — and no sense of what the board actually said.

How BoardSnap's OCR layer works

Apple VisionKit handles the character and word recognition layer. It runs on-device, uses the iPhone's Neural Engine, and handles handwriting well — including mixed case, varied pen widths, and the slightly-hurried script of a real meeting.

VisionKit returns recognized text with position and confidence data. BoardSnap uses the positional data to understand spatial relationships — not just "what words are on the board" but "where are they and what does their position mean."

That spatial awareness feeds directly into the AI summarization step.

From characters to structure: what BoardSnap AI adds

OCR gives you characters. BoardSnap AI gives you comprehension.

The AI model receives the recognized text with spatial metadata and interprets it as a document with structure: headings, lists, tasks, decisions, relationships. It identifies which items are action items, which are context, and which are decisions already made.

The output isn't raw OCR text with newlines. It's a structured summary — titled, sectioned, and followed by a task list with states and subtasks. Something you can actually use.

• Handwriting recognition via Apple's Neural Engine — runs on-device
• Spatial relationship parsing — arrows, boxes, clusters all carry meaning
• Task extraction distinguishes action items from context and decisions
• Subtask generation expands compound tasks into discrete steps
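As a rough sketch of the structured shape described above (a titled, sectioned summary plus a task list carrying states and subtasks), here is a hypothetical Python model. The field names and sample content are invented for illustration and are not BoardSnap's actual schema.

```python
# Illustrative sketch only: a structured-summary data model of the kind
# the text describes. Names are invented, not BoardSnap's schema.
from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    state: str = "open"                 # e.g. "open", "done", "rejected"
    subtasks: list = field(default_factory=list)

@dataclass
class Summary:
    title: str
    sections: dict = field(default_factory=dict)  # heading -> bullet points
    tasks: list = field(default_factory=list)

def render(s: Summary) -> str:
    """Render the structured summary as plain, readable text."""
    lines = [f"# {s.title}"]
    for heading, points in s.sections.items():
        lines.append(f"## {heading}")
        lines += [f"- {p}" for p in points]
    lines.append("## Tasks")
    for t in s.tasks:
        box = "x" if t.state == "done" else " "
        lines.append(f"- [{box}] {t.title}")
        lines += [f"  - {st}" for st in t.subtasks]
    return "\n".join(lines)

summary = Summary(
    title="Q3 planning",
    sections={"Decisions": ["Beta ships before the QA hire"]},
    tasks=[
        Task("Ship beta", subtasks=["Freeze features", "Cut release branch"]),
        Task("Revisit pricing"),
    ],
)

print(render(summary))
```

The point of the shape is that tasks, decisions, and context live in different places, which is exactly the distinction a flat OCR transcript cannot express.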

What kinds of whiteboards does it handle?

Printed text and typed notes: Easy. VisionKit reads clean, print-style writing reliably.

Cursive and fast handwriting: Works well on modern iPhones. Accuracy improves when letters are connected but legible. Very rushed handwriting may have a word or two misread — the summary context usually self-corrects.

Diagrams with text labels: BoardSnap AI reads the labels and infers their structural role from position.

Mixed content — sticky notes, printed text, handwriting together: All processed together. The spatial metadata helps distinguish regions.

Dry-erase boards under fluorescent lights: Common scenario. VisionKit handles glare better than a manual photograph because it uses edge and contrast data rather than relying purely on color.

What OCR alone won't do for you

Every serious OCR app — Microsoft Lens, Adobe Scan, even Google Translate's camera mode — will extract text from a whiteboard. They stop there.

The text extraction is thirty seconds of work. The interpretation — deciding what's a task, what's a decision, what's context, and what the board was actually trying to say — is the thirty minutes that usually doesn't happen.

BoardSnap compresses that thirty minutes to ten seconds.

Frequently asked

How accurate is BoardSnap's whiteboard OCR?

Accuracy depends heavily on handwriting legibility. For clear handwriting in good lighting, VisionKit recognition is very high. For fast, cramped, or stylized handwriting, individual words may be misread, but the AI summarization step uses context to fill gaps. You can always view the raw recognized text and correct it.

Does BoardSnap read diagrams or just text?

BoardSnap AI interprets both. It uses the recognized text labels and their spatial positions to understand diagrammatic structure — boxes, arrows, clusters. It won't reproduce the diagram as a visual, but the summary describes the relationships it inferred.

Does the OCR run on-device or in the cloud?

The OCR step — text recognition — runs on-device using Apple VisionKit and the iPhone's Neural Engine. The AI summarization step uses cloud APIs (Claude/OpenAI). Your raw camera feed never leaves the device.

Can BoardSnap read non-English text on a whiteboard?

VisionKit supports multiple scripts. BoardSnap AI summarization currently works best with English content. Multilingual support is on the roadmap.

What if some words on the board are illegible?

BoardSnap AI uses surrounding context to infer meaning where recognition is uncertain. If a key term is truly unreadable, the summary may note uncertainty or use a placeholder. The raw OCR output is available if you need to review specifics.

OCR that reads the board, not just the words.

Download BoardSnap and get your first whiteboard summary — diagrams, tasks, and all — in under a minute.

Free · 1 project, 30 boards
Pro · $9.99/mo · everything unlimited
Pro · $69.99/yr · save 42%