The three-week AI sprint
From concept to TestFlight in three weeks. Here's the exact structure of that sprint — the tools, the sequencing, the decisions I made on day one that I'm still living with.
I want to tell the honest version of how BoardSnap got built, not the heroic version.
The heroic version: I had a vision, I shipped in three weeks, it worked perfectly. The honest version: I had a hypothesis, the first two weeks were messy, the third week was clean because the first two weeks did their job.
### Week one: the pipeline
The first week was all about proving the core pipeline worked. Not the app — the pipeline:
- Take a photo of a whiteboard
- Get a usable action plan out the other side
I did this mostly in Python scripts, not a Swift app. No UI, no app, no polish. I took photos of actual whiteboards (my own office board, plus permission-granted photos from two friends' offices) and ran them through a sequence (sketched after the list):
- VisionKit via a minimal UIKit wrapper → corrected image
- Vision framework OCR → raw text
- Claude API with prompt → summary + action items
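For concreteness, here is roughly what the OCR and summarization steps look like once expressed in Swift. The week-one versions lived in throwaway scripts, so treat this as a sketch under my own assumptions rather than the actual code: the prompt, the model name, and the function names are placeholders, and error handling is reduced to `throws`.

```swift
import Foundation
import Vision

// Step 2: Vision framework OCR on the perspective-corrected scan.
func recognizeText(in image: CGImage) throws -> String {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    return (request.results ?? [])
        .compactMap { $0.topCandidates(1).first?.string }
        .joined(separator: "\n")
}

// Step 3: send the raw OCR text to the Claude API. The model name and
// prompt are placeholders; the real response is JSON that gets decoded
// into a summary and action items.
func analyze(whiteboardText: String, apiKey: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.anthropic.com/v1/messages")!)
    request.httpMethod = "POST"
    request.setValue(apiKey, forHTTPHeaderField: "x-api-key")
    request.setValue("2023-06-01", forHTTPHeaderField: "anthropic-version")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "model": "claude-sonnet-4-5",   // placeholder model name
        "max_tokens": 1024,
        "messages": [[
            "role": "user",
            "content": "Summarize this whiteboard and extract action items:\n\n\(whiteboardText)"
        ]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    return String(decoding: data, as: UTF8.self)
}
```

The point is how little sits between "raw OCR text" and "something an LLM can turn into an action plan": one request, one prompt.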
By day 5, the pipeline worked. The action plans were rough — the prompts needed tuning — but the core loop was proven. You could take a whiteboard photo and get something useful out the other side.
The decision from week one that I'm still living with: the tri-state action item model. I added it to the prompt in week one as an experiment, and the output immediately felt more useful. I committed to it as a core feature before I'd written a single line of Swift for the app.
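I won't name the three states here, so the names below are guesses; what matters is the shape. The prompt asks for every action item tagged with one of three states, and the same enum later drives the tri-state controls in the week-two UI.

```swift
import Foundation

// Illustrative tri-state action item model. The three state names are
// guesses; only the three-valued shape matters.
enum ActionItemState: String, Codable, CaseIterable {
    case notStarted   // captured from the board, nobody has picked it up
    case inProgress   // someone is on it
    case done         // already finished when the board was snapped, or since

    /// Next state when the user taps the tri-state control.
    var next: ActionItemState {
        switch self {
        case .notStarted: return .inProgress
        case .inProgress: return .done
        case .done:       return .notStarted
        }
    }
}

struct ActionItem: Codable, Identifiable {
    let id: UUID
    var title: String
    var state: ActionItemState
}
```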
### Week two: the Swift app skeleton
Week two was the iOS app around the pipeline. The goal: a minimally navigable app where a real person could use the pipeline without a Python script.
I built in this sequence:
- Day 8: Camera flow with VisionKit scanner (the SwiftUI wrapper is sketched after this list)
- Day 9: Basic project model + Core Data
- Day 10-11: API integration, streaming summary output (streaming loop sketched after this list)
- Day 12-13: Action item list UI with tri-state controls
- Day 14: End-to-end flow, no crashes (hopefully)
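Day 8 is mostly glue: VisionKit's scanner is a UIKit view controller, so showing it from SwiftUI means wrapping it in `UIViewControllerRepresentable`. A minimal version of that wrapper looks something like this; the type and callback names are mine, not BoardSnap's.

```swift
import SwiftUI
import UIKit
import VisionKit

// Minimal SwiftUI wrapper around VisionKit's document scanner.
struct WhiteboardScanner: UIViewControllerRepresentable {
    var onScan: (UIImage) -> Void

    func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
        let controller = VNDocumentCameraViewController()
        controller.delegate = context.coordinator
        return controller
    }

    func updateUIViewController(_ controller: VNDocumentCameraViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(onScan: onScan) }

    final class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
        let onScan: (UIImage) -> Void
        init(onScan: @escaping (UIImage) -> Void) { self.onScan = onScan }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                          didFinishWith scan: VNDocumentCameraScan) {
            // One board per snap, so only the first (perspective-corrected) page matters.
            if scan.pageCount > 0 {
                onScan(scan.imageOfPage(at: 0))
            }
            controller.dismiss(animated: true)
        }

        func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
            controller.dismiss(animated: true)
        }
    }
}
```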
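The day 10-11 streaming work is the other piece worth a sketch. The Claude API can stream its response as server-sent events; assuming a request built like the week-one one with `"stream": true` added to the body, a simplified reading loop looks roughly like this. It pulls out text deltas and ignores every other event type.

```swift
import Foundation

// Read a streaming Claude response as server-sent events and surface
// text deltas as they arrive. Simplified: no retries, no error events.
func streamSummary(request: URLRequest,
                   onDelta: @escaping (String) -> Void) async throws {
    struct DeltaEvent: Decodable {
        struct Delta: Decodable { let text: String? }
        let type: String
        let delta: Delta?
    }

    let (bytes, _) = try await URLSession.shared.bytes(for: request)
    for try await line in bytes.lines {
        // SSE data lines look like: data: {"type":"content_block_delta", ...}
        guard line.hasPrefix("data: ") else { continue }
        let payload = Data(line.dropFirst(6).utf8)
        if let event = try? JSONDecoder().decode(DeltaEvent.self, from: payload),
           event.type == "content_block_delta",
           let text = event.delta?.text {
            onDelta(text)   // append to the on-screen summary as it streams in
        }
    }
}
```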
What I skipped in week two: persistence across sessions, error handling for bad snaps, loading states that weren't just spinners, anything that looked polished. This was deliberate. I needed the skeleton to be navigable, not beautiful.
What I didn't skip: the optimistic UI card. I added the optimistic "Analyzing..." card on day 11 when I noticed I was staring at a spinner for 8 seconds during testing. The spinner was tolerable for me as the builder. It would have been terminal for a real user. The optimistic card was 3 hours of work that entirely changed how the app felt.
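For what those 3 hours bought: the pattern is just to put a placeholder card on screen the moment the scan callback fires, before any network round trip, and then fill it in as the streamed summary arrives. A stripped-down sketch of that card, not the real view code:

```swift
import SwiftUI

// Optimistic card: shown immediately after capture with an empty summary,
// then filled in as streamed text arrives.
struct SnapCard: View {
    var summary: String   // empty string means "still analyzing"

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            if summary.isEmpty {
                Label("Analyzing…", systemImage: "sparkles")
                    .foregroundStyle(.secondary)
                ProgressView()
            } else {
                Text(summary)
            }
        }
        .padding()
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(.thinMaterial, in: RoundedRectangle(cornerRadius: 12))
    }
}
```

The capture flow appends a card with an empty summary the instant the scanner hands back an image, then updates that summary from the streaming deltas.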
### Week three: first real users
Week three was TestFlight distribution to 12 real people, which kicked off roughly three weeks of feedback-driven iteration across that first month.
The week-three focus:
- Crash fixing (there were crashes)
- Error handling for the actual failure modes (glare, low contrast, network timeouts), sketched after this list
- The project creation flow (which was completely broken in week two)
- Onboarding (which I'd left as "just open the app and figure it out")
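To make the error-handling item concrete, here is one way to model those failure modes: a single error type with user-facing recovery text. The names and copy are mine, not the app's.

```swift
import Foundation

// Illustrative modeling of the week-three failure modes.
enum SnapError: LocalizedError {
    case unreadableBoard   // glare or low contrast: OCR came back nearly empty
    case networkTimeout    // the analysis request never returned

    var errorDescription: String? {
        switch self {
        case .unreadableBoard:
            return "Couldn't read that board. Try reducing glare or getting closer."
        case .networkTimeout:
            return "The analysis timed out. Check your connection and try again."
        }
    }
}
```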
The first TestFlight build shipped on day 19. It was rough. Three testers hit crashes in the first session. I had fixes out within 24 hours.
What week three taught me: the gap between "I can navigate this app" and "someone who has never seen this app can navigate it" is enormous. I'd been using the app for two weeks. I knew exactly what to tap. New users had none of that context. The onboarding was the most important thing I hadn't built.
### The three-week structure in retrospect
Week one: prove the pipeline, not the app. Build in whatever language is fastest. Don't touch the interface. Just get the core data transformation working.
Week two: build the skeleton, not the product. The navigation has to work. The core loop has to be completable. Nothing has to be beautiful. Don't build anything a real user won't see in the first five minutes.
Week three: get real users immediately. The longer you wait for real users, the more time you spend building for the imaginary user who shares your mental model. Real users don't share your mental model.
### Tools that made three weeks possible
- Swift + SwiftUI: the UI came together faster with SwiftUI than it would have with UIKit. Declarative UI with Xcode previews accelerated the iteration cycle significantly.
- Claude API: the AI capability was available via API on day one. No training, no fine-tuning, no ML infrastructure. I wrote prompts, not model code.
- VisionKit: a document scanner I didn't have to build. If I'd had to build my own scanner, three weeks would have been six.
- Core Data with CloudKit: free sync across devices with no backend infrastructure.
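"Free sync" is close to literal here: the main code-level difference from a plain Core Data stack is using NSPersistentCloudKitContainer, with the iCloud capability configured in the target settings rather than in code. A minimal sketch, with the model name as a guess:

```swift
import CoreData

// Minimal Core Data + CloudKit stack. Swapping NSPersistentContainer for
// NSPersistentCloudKitContainer is most of the code change; the iCloud
// capability and CloudKit container live in Signing & Capabilities.
final class PersistenceController {
    static let shared = PersistenceController()

    let container: NSPersistentCloudKitContainer

    init() {
        container = NSPersistentCloudKitContainer(name: "BoardSnap") // model name is a guess
        container.viewContext.automaticallyMergesChangesFromParent = true
        container.loadPersistentStores { _, error in
            if let error {
                fatalError("Failed to load persistent store: \(error)")
            }
        }
    }
}
```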
The three-week timeline was only possible because of the depth of Apple's platform and the availability of AI via API. In 2020, this product would have taken 6–12 months. In 2026, three weeks is achievable.
Snap your first board today.
See the workflow this post talks about — free on the App Store.