
Mar 10, 2026
I Used Claude Code to Migrate My Full-Stack App from Serverless to Monolith — Here's What Actually Happened
How AI-assisted development turned a dead project and 8 months of serverless infrastructure pain into a 3-week FastAPI migration — with real git data to prove it.
TL;DR: I spent 8 months fighting infrastructure on an SST (Serverless Stack) deployment on AWS, watched my project flatline for 5 months with zero commits, then used Claude Code — Anthropic's AI-powered coding assistant — to execute a full backend rewrite from SST/TypeScript to FastAPI/Python in 3 weeks. The migration touched 432 files with 24,422 insertions and 11,569 deletions. It wasn't magic — but it changed how I build software, and the numbers back that up.
This post assumes you've built or maintained a full-stack app and are curious about AI-assisted development workflows. No Claude Code experience needed.
On March 10, 2025, I made 6 Docker-related commits in a single day. The messages read like a developer losing a fight in slow motion: added docker, added docker, reverted docker file, fixed docker, fixed docker, fixed docker. That wasn't even the worst day that month — March 2025 had 196 commits total, and most of them were infrastructure fixes, not features. I was building a photo mosaic app and spending my days debugging container deployments.
Fast-forward to February 16, 2026. I shipped a 4-phase data model migration — splitting a monolithic Media table into context-specific EventMedia and MosaicMedia tables, complete with backward compatibility, design docs, and comprehensive tests. All in a single day. Not because I suddenly became a better developer. Because the architecture underneath me had changed, and I had a different kind of tool in my hands.
If you've ever made an architectural decision early in a project that you regretted later — and the rewrite cost felt too high to justify — this post is for you. That gap between "I know this is wrong" and "I can afford to fix it" is where projects go to die. Mine almost did.
I want to be upfront: this is not a Claude Code advertisement. It's an honest account of what happened when I bet on AI-assisted development to dig myself out of a hole I'd spent months digging. The wins were real. So were the rough edges. I'll share both.
Why Serverless (SST) Felt Right — And When It Stopped
TessellAI is a photo mosaic generator built for live events. The idea is straightforward: attendees at a wedding, conference, or party upload their photos through a shared link, and the app arranges those photos into mosaic art in real time — think a portrait of the bride and groom assembled from hundreds of guest photos, updating live as new uploads come in. The backend does compute-heavy image processing: fragmenting a target image into tiles, calculating cost matrices to match uploaded photos to tile positions, solving the assignment problem, and rendering the final composite.
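To make the matching step concrete: placing uploaded photos into tile positions is an instance of the assignment problem. Here's a toy sketch under stated assumptions — a simple mean-color cost and a brute-force solver, with all names mine (the real pipeline's cost estimators and solvers are more sophisticated, and real implementations use a Hungarian/LAP solver rather than brute force):

```python
from itertools import permutations

def color_cost(a, b):
    """Squared distance between two mean RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def assign(tile_colors, photo_colors):
    """Exact minimum-cost assignment by brute force; fine for a toy n.
    Real pipelines solve the cost matrix with a Hungarian/LAP solver."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(photo_colors))):
        c = sum(color_cost(tile_colors[i], photo_colors[j])
                for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return list(best)

# Three target-image tiles vs. three uploaded photos, each
# already reduced to its mean color.
tiles  = [(250, 10, 10), (10, 250, 10), (10, 10, 250)]   # red, green, blue
photos = [(0, 0, 240), (240, 0, 0), (0, 240, 0)]         # blue, red, green
print(assign(tiles, photos))  # -> [1, 2, 0]: each tile gets its closest photo
```

The interesting part is that the cost matrix and the solver are separable: you can swap in better cost estimators (texture, luminance, multi-feature) without touching the assignment step, which is exactly the kind of seam the "new mosaic styles" work later exploits.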
In December 2024, I chose SST (Serverless Stack) on AWS as the infrastructure layer. The stack looked modern and promising: Neon for serverless Postgres, Drizzle ORM for type-safe database access, a TypeScript backend with Hono routes, AWS IoT MQTT for real-time updates to the frontend, SQS queues for mosaic generation jobs, and S3 for media storage. SST promised infrastructure-as-code with minimal boilerplate. For a solo developer shipping fast, it seemed like the right call.
And for a while, it was. Auth, event management, and mosaic request handling shipped in about two weeks. The initial velocity felt great.
Then came March 2025.
That month produced 196 commits — more than any other month in the project's history. But almost none of them were features. The commit log tells a different story: fixed docker for deployment, reverted docker, revert docker, added changes to docker, fixed docker, fixed docker, fixed docker. On March 10 alone, I pushed 27 commits, nearly all Docker and deploy fixes. The mosaic generator is a Python pipeline doing heavy image processing — matrix operations, cost estimation, assignment solving, high-resolution rendering — and cramming that into a serverless container that cold-starts, scales unpredictably, and times out at the wrong moments was a losing battle.
The scaling commits were worse. Over 30 commits toggling capacity up and down like a light switch: set min scaling to 0, set min scaling back to 1, remove scaling, set min scaling to 0 again, revert changes to scaling. On April 28, I merged a scaling PR, immediately reverted it, then fixed and re-merged it — all in the same day. May brought increased memory, changed resolution to 400 not 8k, upgraded server to 8gb ram — the kind of commits where you're not building anymore, you're negotiating with your infrastructure.
By June, the energy was gone. One commit from June 20 reads: vibe vibe vibe the code, gently down the pipeline. That's not a developer shipping features. That's a developer who's checked out. July had 4 commits. August had 5. One of them was git ignore updated.
The decision to use SST wasn't wrong when I made it. SST is a good tool — for the right workload. But for my use case — a Python mosaic generation pipeline doing compute-heavy image processing with unpredictable memory requirements — serverless was the wrong fit. And the cost of living with that decision kept compounding, while the cost of changing it felt impossibly high.
The Decision to Rewrite — Why the Cost Felt Impossible
August 17, 2025 was my last commit. The message was "specify the TODOs that have no owner" — a housekeeping commit, the kind you make when you're organizing a project you've already stopped working on. Then silence. Five months of it. Zero commits from August 18 to January 24, 2026.
The project wasn't abandoned in the dramatic sense. I thought about it constantly. I knew what was wrong. The problem was that "what was wrong" touched every layer of the stack. Let me list what a rewrite would require:
The SST infrastructure files — api.ts, storage.ts, queue.ts, frontend.ts, generate_mosaic_task.ts — all needed to go. The entire TypeScript Hono backend in packages/functions/ had to be rewritten in Python to match the mosaic generator's language. The Drizzle ORM and Neon database layer had to be replaced with SQLAlchemy and standard PostgreSQL. Every frontend API call had to be rewired. The real-time system — AWS IoT MQTT — had to become something self-hosted. The SQS queue for mosaic generation jobs had to be replaced with a Python-native task queue. And all of this had to happen without losing the features that already worked.
The mental math was brutal: rewrite the backend in a different language. Migrate the database layer. Rewire every frontend integration. Replace the real-time transport. Build a new task queue. Any one of those would be a significant project. Together, they felt permanent.
Most developers have that project. The one where you know the architecture is wrong but the rewrite cost feels like it would consume you. You're too deep to start over, too frustrated to continue. So the project sits. You tell yourself you'll get back to it when you have more time, or more energy, or when the right approach becomes clear. Months pass.
Claude Code entered the picture not as a savior, but as leverage. I'd been following AI-assisted development tools, and the premise was compelling: if an AI pair programmer could handle the boilerplate — the repetitive migration work, the test scaffolding, the type generation — then maybe the rewrite math changes. Maybe a 3-month rewrite becomes a 3-week sprint. I wasn't certain it would work. The tooling was new, the workflow was unproven for a project this tangled. But the alternative was letting TessellAI die on a feature branch. So I opened my terminal on January 25, 2026, and started typing.
The AI-Assisted Migration — What Actually Happened
Day 1: January 25, 2026
Twenty commits in a single day. Here's what that looked like:
The first commit was added claude.md — literally setting up the AI assistant's context file. From there, it was a systematic march through a task list: database models and migrations, authentication with test-driven development, media upload system, mosaic request management, Celery task queue, mosaic pipeline integration, frontend type generation, and the first two page migrations. Every major commit references a task number because I'd broken the entire migration into a numbered plan before writing a line of code.
I need to be honest about what made this pace possible. Claude Code generates boilerplate and tests simultaneously — the scaffolding work that usually eats a full day became something that happened in minutes. But I still directed every architectural decision. Python over TypeScript for the backend, because the mosaic generator was already Python and I was done bridging languages across a container boundary. FastAPI over Django, because I wanted explicit route definitions and Pydantic validation without the overhead of an admin panel I'd never use. Celery over background threads, because mosaic rendering can run for minutes and I needed proper job management. WebSocket over polling, because users expect to see their photos appear in the mosaic in real time. Claude Code didn't make these decisions. It made implementing them fast enough that I could make all of them in a single day.
Weeks 1-2: Wiring It Together
The frontend migration happened page by page over the next ten days. Events management, upload pages, mosaic configuration, the render display — each page got rewired from the old SST/Hono endpoints to the new FastAPI backend. The typed openapi-fetch client meant that every API change was caught at compile time; when a FastAPI endpoint changed its response shape, TypeScript flagged every caller that needed updating.
WebSocket replaced AWS IoT MQTT for real-time tile updates. Instead of provisioning IoT endpoints and managing MQTT connections through AWS, I had a FastAPI WebSocket route that broadcast tile placement events directly. The frontend connected with a standard WebSocket — no AWS SDK, no IoT policy configuration, no region-specific endpoints. Guest session support came together with JWT authentication, letting event attendees upload photos without creating an account.
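The broadcast logic a FastAPI WebSocket route needs is genuinely small, which is part of why dropping AWS IoT was such a relief. Below is a minimal, hypothetical connection manager in plain Python — a fake socket stands in for the browser connection, the real route would hold `fastapi.WebSocket` objects, and none of these names come from the TessellAI codebase:

```python
import asyncio

class TileBroadcaster:
    """Minimal connection manager of the kind a FastAPI WebSocket
    route would use; a 'connection' is anything with send_json."""
    def __init__(self):
        self.connections = []

    def connect(self, ws):
        self.connections.append(ws)

    def disconnect(self, ws):
        self.connections.remove(ws)

    async def broadcast_tile(self, event_id, tile_index, photo_url):
        """Push one tile-placement event to every connected client."""
        message = {"event": event_id, "tile": tile_index, "photo": photo_url}
        for ws in list(self.connections):
            await ws.send_json(message)

class FakeSocket:
    """Stand-in for a browser WebSocket connection."""
    def __init__(self):
        self.received = []
    async def send_json(self, msg):
        self.received.append(msg)

async def demo():
    hub = TileBroadcaster()
    client = FakeSocket()
    hub.connect(client)
    await hub.broadcast_tile("wedding-42", 7, "https://example.com/p7.jpg")
    return client.received

print(asyncio.run(demo()))
```

Compare that to provisioning IoT endpoints, attaching MQTT policies, and shipping the AWS SDK to the browser — the self-hosted version is one class and one route.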
Then came the squash. Commit 4e837d9 — refactor: migrate from SST/serverless to FastAPI monolith backend — landed on the migration branch: 432 files changed, 24,422 insertions, 11,569 deletions. The entire SST-to-monolith migration compressed into a single diff. The SST infrastructure files, the Hono routes, the Drizzle schemas, the AWS IoT integration — all of it replaced in one merge.
Week 3 and Beyond: Building What I Couldn't Before
Once the monolith was in place, features started flowing at a pace I hadn't experienced on this project. The infrastructure wasn't fighting me anymore. Deploying meant pushing to a server, not debugging cold starts. The mosaic generator ran in the same Python process as the API, not across a container boundary with serialization overhead. Memory was predictable. Timeouts were mine to set.
In the first three weeks after migration, I shipped:
- New mosaic styles (sharp, incremental) with multi-feature cost estimators and two-phase solvers
- A PixiJS WebGL renderer replacing the framer-motion animation system
- A generic wizard framework for multi-step flows, reusable across event creation and standalone mosaic generation
- Per-tile cropping with aspect-ratio-aware compression, luminance blending, and an enhancement preview system
- Gallery ZIP downloads with password-protected sharing links
- i18n multi-language support across 11 locales
- A high-resolution render-only pipeline for print-quality output
- Tile upload pipeline optimization handling 1,000+ images
On February 16, 2026, I executed what I started calling the album pivot — a complete data model restructuring that split the monolithic Media table into EventMedia and MosaicMedia. This wasn't a quick rename. It was a 4-phase migration with backward compatibility at each step: Phase 1 created the new tables and migrated existing data. Phase 2 rewired the API endpoints. Phase 3 updated every frontend component. Phase 4 dropped the legacy model. Each phase had its own design doc, its own tests, and its own commit. All four phases shipped in a single day.
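The backward-compatibility trick in a phased table split is the dual read: new callers hit the context-specific table, old callers fall through to the legacy one until the final phase drops it. A toy sketch with dicts standing in for SQLAlchemy tables — illustrative names only, not the real models:

```python
# Legacy monolithic table: every media row carries a 'context' column.
legacy_media = {1: {"url": "a.jpg", "context": "event"},
                2: {"url": "b.jpg", "context": "mosaic"}}
event_media, mosaic_media = {}, {}

def migrate_phase_1():
    """Phase 1: copy rows into context-specific tables. The legacy
    table stays in place until the final phase drops it."""
    for media_id, row in legacy_media.items():
        target = event_media if row["context"] == "event" else mosaic_media
        target[media_id] = {"url": row["url"]}

def get_event_media(media_id):
    """Dual read: prefer the new table, fall back to legacy so
    not-yet-migrated callers keep working mid-migration."""
    if media_id in event_media:
        return event_media[media_id]
    row = legacy_media.get(media_id)
    if row and row["context"] == "event":
        return {"url": row["url"]}
    return None

migrate_phase_1()
print(get_event_media(1))  # {'url': 'a.jpg'}
```

Each phase leaves the system fully working, which is what lets four phases ship in one day without a big-bang cutover: if Phase 3 breaks, Phases 1 and 2 are still safe to leave deployed.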
The number that matters most: more features shipped in 3 weeks post-migration than in the previous 8 months on SST. Not because I was working harder — the February commit frequency was actually lower than the frantic March 2025 peak. Because the work was going toward features instead of infrastructure. Every commit moved the product forward instead of keeping the lights on.
What Went Well
The commit velocity tells a counterintuitive story. Monthly commit frequency actually increased — from ~74 commits/month during the SST era to ~105 commits/month with Claude Code, a 42% jump. But the numbers alone don't capture what changed. The most active day in the SST era was March 26, 2025: 41 commits, nearly all infrastructure configuration changes — scaling toggles, memory adjustments, CPU allocation. The most active day in the Claude Code era was February 5, 2026: 22 commits that included the migration squash landing, i18n support across 11 locales, SEO improvements, and mosaic rendering fixes. Fewer commits, more product. That's the shift.
Test-driven development was baked in from the first day. Look at the Day 1 commit messages: "Complete authentication system with TDD," "Complete media upload system with TDD," "Complete mosaic request management with TDD." Every major system was built with tests alongside the implementation. The monolith ended up with better test coverage than the SST codebase ever had — and I didn't have to fight for it. Claude Code writes tests as part of its natural workflow. You set the expectation once in the project context, and it follows through.
That discipline extended to planning. During the Claude Code era, I created 16 design documents totaling roughly 237 KB. During the entire SST era: zero. The workflow was brainstorm, then plan, then build — and paradoxically, the planning made things faster. Spending 20 minutes writing a design doc before a feature saved hours of rework. The album pivot is the clearest example: 4 phases, each with backward compatibility, each building on the previous one. That level of structured migration worked because Claude Code could hold the full context of a multi-phase plan and execute each step without losing track of the bigger picture.
Here's the meta-takeaway. The rewrite cost equation changed. Architectural decisions that felt permanent — locked in by the sheer cost of undoing them — became reversible. The SST-to-monolith migration wasn't a hypothetical exercise. It was a project that had been dead for 5 months coming back to life. The gap between "I know this architecture is wrong" and "I can afford to fix it" got smaller. That changes how you think about every architectural decision going forward.
What Didn't Go Perfectly
Claude Code sometimes produced more code than needed. The spotlight animation system is the clearest example — it was built, shipped, and removed within days, then rebuilt from scratch using PixiJS. That's churn. It happened partly because AI-assisted coding makes it easy to build things faster than you can evaluate whether you should. When generating code costs minutes instead of hours, the temptation is to build first and evaluate later. That works until you're deleting what you built yesterday.
Git hygiene suffered during the sprint. That 432-file squash commit — refactor: migrate from SST/serverless to FastAPI monolith backend — isn't great practice. I prioritized speed over clean commit history during the initial migration. In retrospect, more granular commits would have made debugging easier and the history more useful as documentation. When you're moving fast with an AI assistant, it's easy to let a branch accumulate changes that should have been broken into smaller, reviewable pieces.
This next point matters more than the others: you still need to know what you're doing. Claude Code executes decisions — it doesn't make them. When I gave it a clear architectural direction — FastAPI for the API layer, Celery for task processing, WebSocket for real-time updates — the results were excellent. When the direction was vague, the output was too. Wrong direction at high speed is still wrong direction. The AI amplifies your judgment, good and bad. Every strong result in this migration traces back to a deliberate architectural choice I made before Claude Code wrote a line of code.
And one more loose end: as of writing, the migration work still lives on fix/improved-tessellai, not main. The migration happened. The features shipped. But the final merge and production deployment are still pending. Real projects have loose ends, and this post wouldn't be honest if I pretended otherwise.
What I'd Tell Someone Sitting on Technical Debt
The rewrite cost isn't what it used to be. AI-assisted coding tools have changed the calculus of "should I rewrite this." The work that made rewrites prohibitive — boilerplate, test scaffolding, API migration, type generation — is exactly the work these tools handle best. A migration that would have taken three months of grinding now takes three weeks of focused sprints. That doesn't make it free. But it makes it possible in a way it wasn't before.
You still make the decisions, though. Claude Code didn't tell me to switch from serverless to monolith. It didn't choose FastAPI or Celery. It didn't design the 4-phase album pivot. Those were judgment calls based on years of building software and understanding the tradeoffs. You need to know what architecture you want before the tool can help you get there.
If you're considering it, here's what worked for me. Start with the backend — get your API running on the new stack before touching the frontend. Migrate the frontend page by page; don't try to rewrite everything at once. Keep backward compatibility during the transition so you can roll back if something breaks. Write design docs before you write code. And write tests from day one — they're your safety net when you're moving fast.
The best time to fix your architecture was before you shipped. The second best time is now — and the tools to do it are better than they've ever been.
TessellAI is a photo mosaic generator for live events — guests upload photos, and the app assembles them into mosaic art in real time. It's still being built. The migration described here got the project off life support and back into active development, which is more than I expected when I opened my terminal on January 25.
If you've done your own migration — serverless to monolith, monolith to microservices, or any flavor of architectural rewrite — I'd like to hear how it went. What broke, what worked, what you'd do differently. The best lessons in this industry come from the projects that almost didn't make it.
— Khalit Hartmann, freelance software engineer at khal.it
Migration by the Numbers
| Metric | SST Era (Serverless) | Claude Code Era (Monolith) |
|---|---|---|
| Timeline | Dec 2024 – Aug 2025 (8 months) | Jan 25 – Feb 16, 2026 (3 weeks) |
| Monthly commit velocity | ~74 commits/month | ~105 commits/month (+42%) |
| Most active day | 41 commits (infra fixes) | 22 commits (features) |
| Docker/infra fix commits | 70+ | 0 |
| Design docs written | 0 | 16 (~237 KB) |
| Project stall | 5 months (zero commits) | N/A |
| The big migration | N/A | 432 files, +24,422 / −11,569 lines |
Frequently Asked Questions
Why did you migrate from serverless to monolith instead of the other way around?
Serverless works well for stateless, short-lived workloads. TessellAI's mosaic generator does heavy image processing — matrix operations, cost estimation, high-resolution rendering — with unpredictable memory requirements and multi-minute execution times. Serverless containers cold-start, scale unpredictably, and time out at the wrong moments for this workload. A monolith with Celery for background tasks gave predictable memory, no cold starts, and the Python mosaic generator running in the same process as the API. The architecture decision depends on your workload, not on what's trendy.
How much of the code did Claude Code write vs. you?
Claude Code generated the boilerplate, test scaffolding, and repetitive migration work — the implementation details. I made every architectural decision: choosing FastAPI over Django, Celery over background threads, WebSocket over polling, and designing the 4-phase album pivot. The AI amplifies developer judgment; it doesn't replace it. Clear direction produced excellent results. Vague direction produced mediocre code.
Can Claude Code handle a migration of this size?
The migration touched 432 files with 24,422 insertions and 11,569 deletions. Claude Code handled the individual tasks well when they were broken into a numbered plan with clear scope. It worked best on well-defined tasks (rewrite this API endpoint, add tests for this service, migrate this frontend page). It struggled more with ambiguous or open-ended requests. Planning the migration into discrete tasks was essential.
How long would this migration have taken without AI assistance?
Based on the scope — rewriting the backend in a different language, migrating the database layer, rewiring every frontend API call, replacing the real-time transport, and building a new task queue — I estimate 2-3 months of full-time solo work. Claude Code compressed this to approximately 3 weeks. The biggest time savings came from boilerplate generation, test writing, and type-safe API migration, not from architectural decisions.
What would you do differently?
Three things: (1) More granular commits during the initial migration sprint — the 432-file squash commit made debugging harder. (2) More upfront evaluation before building features — the spotlight animation was built and removed within days. (3) Start with design docs from day one, not after discovering their value partway through.
Further Reading
- Accelerating Code Migrations with AI — Google's research on AI-assisted migrations
- How AI-Driven Refactoring Cut Legacy Code Migration to 4 Months — Salesforce's migration experience
- Claude Code Documentation — Official guide to the tool used in this migration
- SST (Serverless Stack) — The framework I migrated away from (which is still a good tool for the right workload)