🏗️ Blog · 2026-04-27
Building 100+ Micro-SaaS Tools — Operational Lessons (2026 Retrospective)
A solo operator running 119 micro-SaaS tools shares lessons on infra, SEO, AEO, and operations — including the most expensive mistakes and the highest-leverage moves.
119 tools, 1 year — the key numbers
- 119 LIVE tools (104 KR + 34 global, some overlap)
- Solo operator
- Avg deploy cycle: 4–6 hours per tool
- Infra cost: ~₩30K–50K/month (S3 + CloudFront + ACM)
- Traffic distribution: top 5 tools = 70% of all traffic
5 most expensive mistakes
1. Per-tool CloudFront OAC + Function
Each early tool spun up its own Origin Access Control (OAC) and CloudFront Function, and the AWS bill grew 6×. Consolidating onto a shared bal-pe-kr-rewrite Function plus the single static-shorts-oac brought it back to normal.
Lesson: same domain = share the OAC and Function. For 90% of the infra there is no per-site reason to differ.
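A quick way to catch this drift is to audit which OAC each distribution actually references. A hedged sketch, assuming AWS CLI v2 and jq are available; the field names follow the `list-distributions` response shape:

```shell
# Audit: every distribution on the domain should share one OAC.
audit_oac() {
  # Reads `aws cloudfront list-distributions` JSON on stdin;
  # prints "<distribution id> <OAC id of first origin>" per distribution.
  jq -r '.DistributionList.Items[]
    | [.Id, (.Origins.Items[0].OriginAccessControlId // "none")]
    | @tsv'
}
# Usage: aws cloudfront list-distributions | audit_oac | awk '{print $2}' | sort -u
# More than one unique OAC id means a per-site OAC slipped through.
```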
2. Hitting the GitHub Actions free quota
Frequent deploys (5+ tools shipping each month) blew through the free quota. Switching to local deploy scripts (microsaas-infra/scripts/local-deploy.sh) solved it permanently.
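The core of a local deploy for a static site is small. A minimal sketch assuming the shape of microsaas-infra/scripts/local-deploy.sh (the real script is not shown here); the bucket naming and distribution id are hypothetical, and AWS CLI v2 credentials are assumed:

```shell
# Sync the build output to S3, then invalidate the CDN cache.
# Set DRY_RUN=1 to print the commands instead of executing them.
deploy_site() {
  local site="$1" dist_id="$2" run
  if [ "${DRY_RUN:-0}" = 1 ]; then run=echo; else run=; fi
  $run aws s3 sync ./dist "s3://${site}" --delete
  $run aws cloudfront create-invalidation --distribution-id "$dist_id" --paths '/*'
}
# Usage: deploy_site qr.bal.pe.kr E2EXAMPLE
```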
3. CDK BucketDeployment invalidation timeout
Bundling 5+ sites into one CDK stack made the BucketDeployment CloudFront invalidation time out, leaving stacks in UPDATE_ROLLBACK_FAILED. Standardized recovery: continue-update-rollback with the stuck resource skipped, plus a direct S3 sync as fallback.
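The recovery path can be scripted. A hedged sketch assuming AWS CLI v2; the stack, resource, and bucket names are hypothetical, and `--resources-to-skip` takes the logical id of the stuck custom resource:

```shell
# Recover a stack stuck in UPDATE_ROLLBACK_FAILED after a BucketDeployment
# invalidation timeout. Set DRY_RUN=1 to print commands instead of running them.
recover_stack() {
  local stack="$1" stuck_resource="$2" bucket="$3" run
  if [ "${DRY_RUN:-0}" = 1 ]; then run=echo; else run=; fi
  # Finish the rollback, skipping the timed-out custom resource.
  $run aws cloudformation continue-update-rollback \
    --stack-name "$stack" --resources-to-skip "$stuck_resource"
  # Fallback: push the built assets directly, bypassing BucketDeployment.
  $run aws s3 sync ./dist "s3://${bucket}" --delete
}
```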
4. Missing i18n metadata
Forgetting English title/description forfeited 6 months of global traffic. The fix: a lint script that fails the build on missing locale metadata.
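Such a lint gate is a few lines of shell. A sketch assuming a hypothetical locales/{ko,en}/meta.json layout (not necessarily the fleet's actual structure); requires jq:

```shell
# Fail the build when any locale's metadata lacks a non-empty title/description.
lint_i18n() {
  local fail=0 f key
  for f in locales/*/meta.json; do
    [ -f "$f" ] || continue
    for key in title description; do
      # jq -e exits nonzero when the key is missing, null, or empty.
      if ! jq -e --arg k "$key" '.[$k] // "" | length > 0' "$f" >/dev/null; then
        echo "FAIL: $f is missing a non-empty '$key'" >&2
        fail=1
      fi
    done
  done
  return "$fail"
}
# Wire into the build: `lint_i18n || exit 1` before the bundler runs.
```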
5. Copy-paste tool code across 8 sites
A legacy KR pool duplicated the same tool code across 8 sites, so every fix had to land 8 times. The cure: refactor common components into microsaas-shared and inject per-site differences via props.
5 biggest leverage moves
1. Subdomain + repo-per-tool
Switched from bal.pe.kr/{tool} paths to {tool}.bal.pe.kr subdomains with a separate repo per tool. Zero build collisions and fully isolated deploys.
2. Shared infra (CloudFront / ACM / S3)
One OAC + Function + ACM serves all 119 sites. New-site cycle: 6 hr → 30 min.
3. AEO-first (llms.txt)
Standardized llms.txt across the fleet. Citation rate on ChatGPT/Perplexity rose 27%. Built traffic resilience independent of SEO.
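Standardizing llms.txt is easy to automate once each site already emits a sitemap. A sketch of the sitemap → markdown step; the real generator is assumed to also add per-page descriptions, and here we only extract `<loc>` URLs:

```shell
# Generate a minimal llms.txt skeleton from a sitemap.xml.
gen_llms_txt() {
  local name="$1" sitemap="$2"
  echo "# ${name}"
  echo
  echo "## Pages"
  grep -o '<loc>[^<]*</loc>' "$sitemap" \
    | sed -e 's/<loc>//' -e 's#</loc>##' \
    | while read -r url; do echo "- ${url}"; done
}
# Usage: gen_llms_txt "QR Generator" dist/sitemap.xml > dist/llms.txt
```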
4. 7-person work-pool team
For big campaigns: architect + 5 workers + reviewer in parallel. The 2026 Q2 fleet-wide improvement campaign closed in 2 weeks.
5. Data-driven retros
GA4 + Search Console + per-tool usage logs reviewed weekly. 80% of effort goes to the top 5; the rest stays on automated maintenance.
Operator checklist (10)
- [ ] No new per-site OAC / Function on new launches
- [ ] llms.txt auto-built (sitemap → markdown)
- [ ] Build fails on missing locale metadata
- [ ] Local-deploy as standard
- [ ] One CDK stack per site
- [ ] Shared components in shared repo
- [ ] Weekly GA4 + GSC report
- [ ] Top-5 SLA at 99.9% with monitoring
- [ ] Monthly AEO citation tracking
- [ ] Quarterly retrospective
Advice for would-be solo SaaS builders
- First tool = something you use weekly — you win on both motivation and attention to detail
- Standardize infra early — copyable from tool #5
- SEO + AEO from day one — ship llms.txt at launch
- Local deploy scripts — GHA cost is a trap
- Score → prioritize → retro — turn every decision into data
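The score → prioritize loop can be a one-liner. A hedged sketch assuming a TSV of tool, monthly sessions, and monthly maintenance hours; the sessions-per-hour metric is illustrative, not the author's actual formula:

```shell
# Rank tools by sessions per maintenance hour, highest leverage first.
score_tools() {
  awk -F'\t' '$3 > 0 { printf "%s\t%.1f\n", $1, $2 / $3 }' \
    | sort -t"$(printf '\t')" -k2,2 -rn
}
# Usage: score_tools < tools.tsv   # then focus effort on the top rows
```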
Bottom line
After 119 tools the lesson is brutally short: dedupe + automate + measure. Shipping a small tool 100 times is faster than making one tool 100 times bigger.