The day Claude rewrote our action items
Early in testing, I gave BoardSnap AI a photo of a whiteboard from a real planning session and compared its action item output to the list I'd written by hand after the same meeting. The AI's version was better. That was a problem — and an opportunity.
I was running a planning session for a separate project — not BoardSnap — and I took notes. Old-school: hand-written, in a notebook. After the session, I wrote up the action items from memory, the way I always do. I had nine items.
Then I snapped the whiteboard from the session into an early build of BoardSnap. BoardSnap AI extracted twelve action items.
Three of the twelve were items I hadn't written down. Not because I forgot them — because I'd unconsciously deprioritized them. The board had them. The AI read them. My filter had removed them because I was already doing the mental triage of "what actually matters" as I wrote.
Here's the thing: my filter was wrong. Two of those three items were important.
### What the AI reads that we skip
The AI reads the board literally. It doesn't apply the filter of "Jack knows this one will be deprioritized anyway" or "this is implied by the other item" or "this will be done before the meeting notes are ready."
This literal reading is actually valuable for whiteboard content because whiteboards often have implicit items — things written in shorthand, things in the corner, things that are obviously connected to another item but not obviously an independent action. My hand-written notes collapsed those into the items I'd mentally prioritized. The AI kept all of them.
The second thing the AI does is restructure. A whiteboard often has action items scattered around the board — in the corner where someone wrote a quick "!!! Follow up with Sarah" during the main discussion, in a box with an arrow, in a list that branched from another list. My hand-written notes put them in the order I remembered them (roughly chronological, with recency bias). BoardSnap AI groups them by theme and formats them consistently.
Thematic grouping sounds small. In practice, it makes the list scannable in a way that chronological lists aren't.
### The prompt engineering problem this created
Once I saw that the AI was producing good action item output, I started testing the prompts more aggressively. That's when the failure modes appeared.
Over-expansion. Left to its own devices, the AI wanted to expand every action item into multiple subtasks, even when the item was already clear. "Review pricing with Sarah" became three subtasks. The board had written "Review pricing w/Sarah." One item. The expansion was the AI's inference — and it was often wrong or unwanted.
Fix: I added an instruction to keep action items at the grain of the original board. Expand into subtasks only when the board had explicit sub-bullets or when the item was demonstrably ambiguous.
Authority assignment. The AI wanted to assign owners to every item. Sometimes this was right — if the board had "→ Marcus" or "[Design]" next to an item, that's a clear ownership signal. But when the board had no ownership markings, the AI was guessing, and the guesses were often plausible but wrong. "Redesign the onboarding flow" went to "Product" by default, but in our team, that's a design-led project.
Fix: extract ownership only when explicitly marked. Otherwise leave it unassigned. An unassigned item is honest about the gap; a wrong assignment creates false confidence.
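The explicit-marking rule can also be approximated in post-processing. Here's a minimal sketch, assuming two marker conventions from the examples above (an arrow to a name, a bracketed team tag); the function name and patterns are hypothetical, not BoardSnap's actual code:

```python
import re
from typing import Optional

# Hypothetical patterns for explicit ownership marks on a board:
#   "Review pricing → Marcus"            (arrow to a name)
#   "[Design] Redesign the onboarding flow"  (bracketed team tag)
ARROW_OWNER = re.compile(r"(?:→|->)\s*([A-Z][a-z]+)\s*$")
TEAM_TAG = re.compile(r"^\[([^\]]+)\]\s*")

def extract_owner(item: str) -> tuple[str, Optional[str]]:
    """Return (cleaned item text, owner or None).

    Assigns an owner only when the board text carries an explicit
    marking; otherwise the item stays honestly unassigned.
    """
    m = ARROW_OWNER.search(item)
    if m:
        return ARROW_OWNER.sub("", item).strip(), m.group(1)
    m = TEAM_TAG.match(item)
    if m:
        return TEAM_TAG.sub("", item).strip(), m.group(1)
    return item.strip(), None
```

So `extract_owner("Review pricing -> Marcus")` yields an owner, while `extract_owner("Redesign the onboarding flow")` returns `None` for the owner rather than a guess.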
The verb problem. The AI tended to start action items with nouns ("Pricing model review") rather than verbs ("Review the pricing model"). Verb-first action items are more actionable — the verb is the instruction, the noun is the subject. Noun-first reads like a topic, not a task.
Fix: a system prompt instruction to start every action item with a verb in the imperative form. The output improved immediately and measurably.
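The verb-first rule is also easy to spot-check on the output side. An illustrative heuristic follows; the tiny verb allowlist is a stand-in (real coverage would need part-of-speech tagging or a much larger lexicon), not anything BoardSnap ships:

```python
# A deliberately small allowlist of imperative verbs — a stand-in for
# real linguistic checking, just enough to flag obvious noun-first items.
IMPERATIVE_VERBS = {
    "review", "follow", "redesign", "schedule", "draft",
    "send", "update", "write", "confirm", "ship",
}

def is_verb_first(item: str) -> bool:
    """Heuristic: does the action item start with a known imperative verb?"""
    words = item.split()
    return bool(words) and words[0].lower() in IMPERATIVE_VERBS
```

Under this check, "Review the pricing model" passes and "Pricing model review" gets flagged.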
### What the final prompts look like
I won't share the full prompts — competitive sensitivity — but the principles are public knowledge:
- Read the board literally before inferring. Extract what's there, not what should be there.
- Group by theme, not by order of appearance.
- Start every action item with a verb.
- Assign owners only when explicitly marked on the board.
- Flag ambiguous items (broken arrows, unclear abbreviations) rather than guessing.
- Use the brand voice if the project has it set.
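Condensed into a single instruction block, those principles might look something like the sketch below. The wording is illustrative and in the spirit of the list above, not BoardSnap's production prompt:

```python
# A sketch of an extraction system prompt built from the public
# principles; illustrative wording, not BoardSnap's actual prompt.
SYSTEM_PROMPT = """You extract action items from a whiteboard photo.

Rules:
1. Read the board literally. Extract what is written, not what you
   infer should be there.
2. Keep each item at the grain of the board: expand into subtasks
   only when the board has explicit sub-bullets.
3. Group items by theme, not by position or order of appearance.
4. Start every action item with a verb in the imperative form.
5. Assign an owner only when the board marks one explicitly
   (e.g. an arrow to a name, or a bracketed team tag). Otherwise
   leave the item unassigned.
6. If an item is ambiguous (a broken arrow, an unclear
   abbreviation), flag it as ambiguous instead of guessing.
7. Write in the project's brand voice when one is set.
"""
```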
The AI that runs BoardSnap today produces consistently better action items than I would write by hand from the same board. That's still a little unsettling. Mostly I've decided it's a good thing.
Snap your first board today.
See the workflow this post talks about — free on the App Store.