An infographic-style blog cover image titled "SST Era vs. Claude Code Era," illustrating a software migration.  The left side, labeled **"SST Era,"** has a dark blue background showing a chaotic, tangled web of icons including AWS Lambda, Docker, SQS, and TypeScript (TS) logos, accompanied by red warning arrows and text like "Infrastructure Pain" and "Zero commits for 5 months."  A large central arrow labeled **"MIGRATION"** points to the right side, labeled **"Claude Code Era."** This side has a clean, light teal background featuring organized icons for FastAPI, Python, Celery, and a friendly AI robot mascot. Stats in the center highlight the scale of the change: "432 Files Changed, 24,422 Insertions, 11,569 Deletions." The overall composition depicts a transition from serverless complexity to a streamlined, AI-assisted monolith architecture.

I Used Claude Code to Migrate My Full-Stack App from Serverless to Monolith

An 8-month serverless infrastructure fight, 5 months of silence, and a 3-week rewrite with git logs to show for it.

I spent 8 months fighting infrastructure on SST (Serverless Stack) on AWS, watched my project flatline for 5 months with zero commits, then used Claude Code to rewrite the entire backend from SST/TypeScript to FastAPI/Python in 3 weeks. 432 files changed. 24,422 insertions. 11,569 deletions. It wasn't magic, but it changed how I build software.

This post assumes you've built or maintained a full-stack app and are curious about AI-assisted development. No Claude Code experience needed.

On March 10, 2025, I made 6 Docker-related commits in a single day. The messages read like a developer losing a fight in slow motion: added docker, added docker, reverted docker file, fixed docker, fixed docker, fixed docker. That wasn't even the worst day that month. March 2025 had 196 commits total, and most were infrastructure fixes. I was building a photo mosaic app and spending my days debugging container deployments.

Fast-forward to February 16, 2026. I shipped a 4-phase data model migration, splitting a monolithic Media table into context-specific EventMedia and MosaicMedia tables, with backward compatibility, design docs, and tests. All in one day. Not because I became a better developer overnight. The architecture underneath me had changed, and I had a different kind of tool in my hands.

If you've ever made an architectural decision early in a project that you regretted later, and the rewrite cost felt too high to justify, this is that story. That gap between "I know this is wrong" and "I can afford to fix it" is where projects go to die. Mine almost did.

I should say upfront: this is not a Claude Code ad. It's an honest account of what happened when I bet on AI-assisted development to dig myself out of a hole I'd spent months digging. The wins were real. So were the rough edges.

Why serverless (SST) felt right, and when it stopped

TessellAI is a photo mosaic generator for live events. Attendees at a wedding, conference, or party upload their photos through a shared link, and the app arranges them into mosaic art in real time. Think a portrait of the bride and groom assembled from hundreds of guest photos, updating live as new uploads come in. The backend does compute-heavy image processing: fragmenting a target image into tiles, calculating cost matrices to match uploaded photos to tile positions, solving the assignment problem, and rendering the final composite.
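The cost-matrix-plus-assignment step described above can be sketched with SciPy's Hungarian-algorithm solver. This is a minimal illustration, not TessellAI's actual pipeline; the mean-RGB feature and the function names are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_photos_to_tiles(tile_colors: np.ndarray, photo_colors: np.ndarray) -> np.ndarray:
    """Assign each tile position the best-matching uploaded photo.

    tile_colors:  (n_tiles, 3)  mean RGB of each tile of the target image
    photo_colors: (n_photos, 3) mean RGB of each uploaded photo, n_photos >= n_tiles
    Returns an array mapping tile index -> photo index.
    """
    # Cost matrix: Euclidean distance in RGB space between every tile/photo pair.
    cost = np.linalg.norm(tile_colors[:, None, :] - photo_colors[None, :, :], axis=-1)
    # Solve the assignment problem: minimal total cost, each photo used at most once.
    tile_idx, photo_idx = linear_sum_assignment(cost)
    return photo_idx

# Example: 2 tiles, 3 candidate photos
tiles = np.array([[255, 0, 0], [0, 0, 255]], dtype=float)                  # red, blue
photos = np.array([[0, 10, 250], [250, 5, 5], [0, 255, 0]], dtype=float)  # blue-ish, red-ish, green
print(match_photos_to_tiles(tiles, photos))  # red tile -> photo 1, blue tile -> photo 0
```

The real pipeline uses richer cost estimators than mean color (the post later mentions multi-feature estimators and two-phase solvers), but the matrix-then-solve shape is the same.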

In December 2024, I chose SST on AWS. Neon for serverless Postgres, Drizzle ORM, TypeScript backend with Hono routes, AWS IoT MQTT for real-time updates, SQS for mosaic jobs, S3 for media. SST promised infrastructure-as-code with minimal boilerplate. For a solo developer shipping fast, it seemed right.

For a while, it was. Auth, event management, and mosaic request handling shipped in about two weeks.

Then came March 2025.

196 commits. Almost none were features. The commit log: fixed docker for deployment, reverted docker, revert docker, added changes to docker, fixed docker, fixed docker, fixed docker. On March 10 alone, 27 commits, nearly all Docker and deploy fixes. The mosaic generator is a Python pipeline doing heavy image processing, and cramming that into a serverless container that cold-starts, scales unpredictably, and times out at the wrong moments was a losing battle.

The scaling commits were worse. Over 30 commits toggling capacity: set min scaling to 0, set min scaling back to 1, remove scaling, set min scaling to 0 again, revert changes to scaling. On April 28, I merged a scaling PR, immediately reverted it, then fixed and re-merged it. Same day. May brought increased memory, changed resolution to 400 not 8k, upgraded server to 8gb ram. I wasn't building anymore. I was negotiating with my infrastructure.

By June, the energy was gone. One commit from June 20. July had 4 commits. August had 5. One was git ignore updated.

SST wasn't the wrong choice when I made it. It's a good tool for the right workload. But for a Python mosaic pipeline with unpredictable memory requirements, serverless was the wrong fit. And the cost of that decision kept compounding.

The decision to rewrite

August 17, 2025. Last commit. "Specify the TODOs that have no owner." A housekeeping commit, the kind you make when you're organizing a project you've already stopped working on. Then silence. Five months. Zero commits from August 18 to January 24, 2026.

The project wasn't abandoned exactly. I thought about it all the time. I knew what was wrong. The problem was that "what was wrong" touched every layer.

A rewrite would mean: ripping out the SST infrastructure files (api.ts, storage.ts, queue.ts, frontend.ts, generate_mosaic_task.ts). Rewriting the entire TypeScript Hono backend in Python. Replacing Drizzle ORM and Neon with SQLAlchemy and standard PostgreSQL. Rewiring every frontend API call. Replacing AWS IoT MQTT with something self-hosted. Swapping SQS for a Python task queue. All without losing features that already worked.

Any one of those is a big project. Together, they felt permanent.

Most developers have that project. You know the architecture is wrong but the rewrite cost feels like it would consume you. Too deep to start over, too frustrated to continue. So the project sits. You tell yourself you'll come back when you have more time. Months pass.

Claude Code entered the picture as leverage, not a savior. I'd been following AI-assisted development tools, and the idea was straightforward: if an AI pair programmer could handle the boilerplate, the repetitive migration work, the test scaffolding, the type generation, then maybe the rewrite math changes. Maybe a 3-month rewrite becomes a 3-week sprint. I wasn't certain it would work. But the alternative was letting TessellAI die on a feature branch. So on January 25, 2026, I opened my terminal.

What actually happened

Day 1: January 25, 2026

Twenty commits.
The first commit was added claude.md, setting up the AI assistant's context file. Then a systematic march through a numbered task list: database models, auth with TDD, media uploads, mosaic request management, Celery, mosaic pipeline integration, frontend type generation, first two page migrations. Every commit references a task number because I'd broken the entire migration into a plan before writing any code.

What made this pace possible: Claude Code generates boilerplate and tests at the same time. Scaffolding work that eats a full day happened in minutes. But I still directed every architectural decision. Python over TypeScript because the mosaic generator was already Python and I was done bridging languages across a container boundary. FastAPI over Django because I wanted explicit route definitions and Pydantic validation without an admin panel I'd never use. Celery over background threads because mosaic rendering runs for minutes. WebSocket over polling because users expect to see photos appear live. Claude Code didn't make these decisions. It made implementing them fast enough that I could make all of them in one day.

Weeks 1-2: wiring it together

Frontend migration happened page by page over the next ten days. Events management, upload pages, mosaic configuration, the render display. Each page got rewired from SST/Hono endpoints to the new FastAPI backend. The typed openapi-fetch client caught API changes at compile time; when a FastAPI endpoint changed its response shape, TypeScript flagged every caller that needed updating.

WebSocket replaced AWS IoT MQTT. Instead of provisioning IoT endpoints and managing MQTT connections through AWS, I had a FastAPI WebSocket route broadcasting tile placement events. Standard WebSocket on the frontend. No AWS SDK, no IoT policy configuration, no region-specific endpoints. Guest sessions worked through JWT auth, letting event attendees upload without an account.

Then the squash. Commit 4e837d9, refactor: migrate from SST/serverless to FastAPI monolith backend, landed on main: 432 files changed, 24,422 insertions, 11,569 deletions. The whole migration in one diff.

Week 3 and beyond

With the monolith in place, features started flowing. The infrastructure wasn't fighting me. Deploying meant pushing to a server, not debugging cold starts. The mosaic generator ran in the same Python process as the API. Memory was predictable. Timeouts were mine to set.

In the first three weeks after migration:

  • New mosaic styles (sharp, incremental) with multi-feature cost estimators and two-phase solvers

  • PixiJS WebGL renderer replacing framer-motion

  • Generic wizard framework for multi-step flows

  • Per-tile cropping with aspect-ratio-aware compression, luminance blending, and enhancement previews

  • Gallery ZIP downloads with password-protected sharing links

  • i18n across 11 locales

  • High-resolution render-only pipeline for print-quality output

  • Tile upload pipeline optimization for 1,000+ images

On February 16, 2026, I did the album pivot: splitting the Media table into EventMedia and MosaicMedia. 4-phase migration with backward compatibility at each step. Phase 1 created new tables and migrated data. Phase 2 rewired endpoints. Phase 3 updated frontend components. Phase 4 dropped the legacy model. Each phase had a design doc, tests, and its own commit. All four shipped in one day.
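Phase 1's shape — create the new table, backfill it from the legacy one, leave the legacy table untouched so old code paths still work — can be shown end-to-end with SQLAlchemy Core against an in-memory SQLite database. Column names here are illustrative, not TessellAI's actual schema:

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()

media = sa.Table(  # legacy monolithic table, kept alive through phases 1-3
    "media", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("context", sa.String),
    sa.Column("url", sa.String),
)
event_media = sa.Table(  # new context-specific table
    "event_media", meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("url", sa.String),
)

meta.create_all(engine)
with engine.begin() as conn:
    conn.execute(media.insert(), [
        {"id": 1, "context": "event", "url": "a.jpg"},
        {"id": 2, "context": "mosaic", "url": "b.jpg"},
    ])
    # Phase 1 backfill: copy only event-context rows; `media` is not modified,
    # so every existing endpoint keeps working until phase 2 rewires it.
    sel = sa.select(media.c.id, media.c.url).where(media.c.context == "event")
    conn.execute(event_media.insert().from_select(["id", "url"], sel))
    rows = conn.execute(sa.select(event_media)).fetchall()

print(rows)  # only the event-context row was copied
```

Only in phase 4, after endpoints and frontend have moved over, does dropping `media` become safe, which is what keeps each intermediate commit deployable.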

More features shipped in 3 weeks post-migration than in the previous 8 months on SST. Not because I worked harder. The February commit frequency was actually lower than the frantic March 2025 peak. The difference was that every commit moved the product forward instead of keeping the lights on.

What went well

Monthly commits went from ~74 during the SST era to ~105 with Claude Code, a 42% jump. But the character changed more than the count. Busiest SST day: March 26, 2025, 41 commits, nearly all infrastructure configuration. Busiest Claude Code day: February 5, 2026, 22 commits including the migration landing on main, i18n across 11 locales, SEO work, and mosaic rendering fixes. Fewer commits, more product.

TDD was baked in from day one. Commit messages tell the story: "Complete authentication system with TDD," "Complete media upload system with TDD." The monolith ended up with better test coverage than the SST codebase ever had, and I didn't have to fight for it. Set the expectation once in the project context and Claude Code follows through.

Planning, too. 16 design documents (~237 KB) during the Claude Code era. Zero during SST. Brainstorm, plan, build. Counterintuitively, planning made things faster: 20 minutes on a design doc saved hours of rework. The album pivot is the clearest example: 4 phases, each with backward compatibility, each building on the last.

The bigger takeaway for me personally: architectural decisions that felt permanent became reversible. This project was dead for 5 months and came back. That changes how I think about architecture going forward.

What didn't go well

Claude Code sometimes produced more code than I needed. The spotlight animation is the clearest example. Built, shipped, removed within days, rebuilt from scratch with PixiJS. When generating code costs minutes instead of hours, the temptation is to build first and evaluate later. That works until you're deleting yesterday's work.

Git hygiene suffered. That 432-file squash commit isn't great practice. I chose speed over clean history during the migration. More granular commits would have made debugging easier and the history more useful. When you move fast with an AI assistant, branches accumulate changes that should have been broken up.

This one matters most: you still need to know what you're doing. Claude Code executes decisions. It doesn't make them. Clear direction produced good results. Vague direction produced mediocre code. Wrong direction at high speed is still wrong direction. Every good result in this migration traces back to a deliberate choice I made before Claude Code wrote anything.

One more thing: as of writing, the migration work still lives on fix/improved-tessellai, not main. The migration happened. The features shipped. But the final merge and production deployment are pending. Real projects have loose ends, and I'd rather mention it than pretend otherwise.

What I'd tell someone sitting on technical debt

The rewrite cost isn't what it used to be. The work that made rewrites prohibitive (boilerplate, test scaffolding, API migration, type generation) is exactly what AI coding tools handle best. A 3-month rewrite can become 3 weeks. Not free, but possible.

You still make the decisions. Claude Code didn't tell me to switch from serverless to monolith. It didn't choose FastAPI or Celery. It didn't design the album pivot. Those were my calls, from years of building software. You need to know what you want before the tool can help you get there.

What worked for me: start with the backend, get your API running before touching the frontend. Migrate the frontend page by page. Keep backward compatibility so you can roll back. Write design docs before code. Tests from day one.

TessellAI is a photo mosaic generator for live events. Guests upload photos, the app assembles them into mosaics in real time. Still being built. The migration got it off life support and back into active development.

If you've done your own migration, I'd like to hear how it went. What broke, what worked, what you'd do differently.

Migration by the numbers

| Metric | SST era (serverless) | Claude Code era (monolith) |
| --- | --- | --- |
| Timeline | Dec 2024 - Aug 2025 (8 months) | Jan 25 - Feb 16, 2026 (3 weeks) |
| Monthly commit velocity | ~74 commits/month | ~105 commits/month (+42%) |
| Most active day | 41 commits (infra fixes) | 22 commits (features) |
| Docker/infra fix commits | 70+ | 0 |
| Design docs written | 0 | 16 (~237 KB) |
| Project stall | 5 months (zero commits) | N/A |
| The big migration | N/A | 432 files, +24,422 / -11,569 lines |

FAQ

Why serverless to monolith instead of the other way around?

TessellAI's mosaic generator does heavy image processing with unpredictable memory and multi-minute execution times. Serverless containers cold-start, scale unpredictably, and time out. A monolith with Celery gave predictable memory, no cold starts, and the generator running in the same process as the API. Right architecture depends on the workload.

How much did Claude Code write vs. you?

Claude Code generated boilerplate, test scaffolding, and repetitive migration work. I made the architectural decisions: FastAPI over Django, Celery over background threads, WebSocket over polling, the 4-phase album pivot. Clear direction produced good results. Vague direction didn't.

Can Claude Code handle a migration this big?

432 files, 24,422 insertions, 11,569 deletions. It handled individual tasks well when broken into a numbered plan with clear scope. Defined tasks (rewrite this endpoint, add tests for this service, migrate this page) went smoothly. Ambiguous requests, less so.

How long without AI assistance?

I estimate 2-3 months full-time. Claude Code compressed it to about 3 weeks. The time savings came from boilerplate, tests, and type-safe API migration, not from architecture.

What would you do differently?

More granular commits during the sprint. The 432-file squash made debugging harder. More evaluation before building. The spotlight animation was built and trashed within days. Design docs from day one.

Further reading

  • Accelerating Code Migrations with AI -- Google's research on AI-assisted migrations

  • How AI-Driven Refactoring Cut Legacy Code Migration to 4 Months -- Salesforce's migration experience

  • Claude Code Documentation -- Official guide to the tool used in this migration

  • SST (Serverless Stack) -- The framework I migrated from (still a good tool for the right workload)