How I Reduced AWS Costs by 67% Without Breaking Anything
A client came to me with an $800/month AWS bill for a marketing site that got maybe 50 visitors a day. They knew something was wrong, but their previous developer had vanished and nobody wanted to touch it.
Here's exactly what I did to cut that bill down to $264/month without changing a single line of application code.
The Problem: Over-Provisioned Everything
The first thing I did was actually look at what they were running:
- EC2 instance: t3.2xlarge (8 vCPUs, 32GB RAM) running 24/7
- RDS instance: db.m5.large running 24/7
- No caching whatsoever
- Images served directly from EC2 (no CDN)
- Daily full database backups kept forever
For a static marketing site. With 50 daily visitors.
Step 1: Right-Size the EC2 Instance
I checked CloudWatch metrics for the past 3 months. The instance was using:
- Average CPU: 3%
- Peak CPU: 12%
- Average memory: 2.1GB
- Peak memory: 3.8GB
They were paying for 32GB of RAM and using 3GB. Classic.
Solution: Moved to a t3.small (2 vCPUs, 2GB RAM). Instance cost went from $244/month to $15/month.
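The math behind that decision is simple: on-demand EC2 pricing is hourly, so a month of 24/7 runtime is roughly 730 hours. Here's a quick sketch of the comparison (the hourly rates are approximate us-east-1 on-demand prices; check the AWS pricing page for your region before making the call):

```javascript
// Rough monthly cost from an hourly on-demand rate (~730 hours/month, 24/7).
// Rates are approximate us-east-1 on-demand prices -- verify against the
// current AWS pricing page for your region.
const HOURS_PER_MONTH = 730;

const monthlyCost = (hourlyRate) => hourlyRate * HOURS_PER_MONTH;

const before = monthlyCost(0.3328); // t3.2xlarge
const after = monthlyCost(0.0208);  // t3.small

console.log(`t3.2xlarge: ~$${before.toFixed(0)}/mo`);
console.log(`t3.small:   ~$${after.toFixed(0)}/mo`);
console.log(`Savings:    ~$${(before - after).toFixed(0)}/mo`);
```

Run the same comparison against your own CloudWatch averages before downsizing anything.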
Step 2: Implement CloudFront CDN
All images and static assets were being served from EC2. Every. Single. Request.
Solution: Set up CloudFront with:
- Origin pointing to S3 bucket for images
- One-year cache TTL for images
- Gzip compression enabled
This alone reduced EC2 bandwidth costs by 80% and made the site noticeably faster. CloudFront cost: ~$5/month for their traffic level.
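For reference, the setup above maps to a distribution config along these lines. This is a trimmed-down sketch: the bucket name and caller reference are placeholders, and the real CloudFront API requires a few additional fields I've omitted for brevity.

```json
{
  "CallerReference": "marketing-site-cdn",
  "Comment": "Static assets for marketing site",
  "Enabled": true,
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "s3-assets",
        "DomainName": "example-assets.s3.amazonaws.com",
        "S3OriginConfig": { "OriginAccessIdentity": "" }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-assets",
    "ViewerProtocolPolicy": "redirect-to-https",
    "Compress": true,
    "MinTTL": 0,
    "DefaultTTL": 31536000,
    "MaxTTL": 31536000
  }
}
```

The one-year TTL (31,536,000 seconds) is safe here because assets are versioned by filename; if yours aren't, use a shorter TTL or invalidations.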
Step 3: Add Redis Caching
The database was getting hammered for data that rarely changed. Page content, menu items, that sort of thing.
Solution: Added a cache.t4g.micro ElastiCache Redis node ($11/month) with:
```javascript
// Simple caching layer -- assumes `redis` is a connected Redis client
// and `db` is the app's existing database client
const getPageContent = async (slug) => {
  const cacheKey = `page:${slug}`;

  // Try cache first
  let content = await redis.get(cacheKey);
  if (content) return JSON.parse(content);

  // Cache miss - hit the database
  content = await db.query('SELECT * FROM pages WHERE slug = ?', [slug]);

  // Cache for 1 hour
  await redis.setex(cacheKey, 3600, JSON.stringify(content));
  return content;
};
```

This reduced database queries by 95%.
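The one gotcha with TTL-based caching: edits can take up to an hour to appear. The fix is to delete the cache key whenever a page is updated. Here's a sketch of that pattern; the `updatePageContent` helper and the in-memory `Map` standing in for the Redis client are illustrative (with the real client, the delete would be `await redis.del(cacheKey)`):

```javascript
// On every page edit, drop the cache entry so the next read repopulates it
// from the database. A Map stands in for Redis here to keep this runnable.
const cache = new Map();

const updatePageContent = async (slug, newContent, db) => {
  await db.update(slug, newContent); // write through to the database
  cache.delete(`page:${slug}`);      // invalidate; don't update in place
};

// Minimal fake db, purely for demonstration
const fakeDb = {
  rows: { home: 'old copy' },
  async update(slug, content) { this.rows[slug] = content; },
};

cache.set('page:home', JSON.stringify('old copy'));
updatePageContent('home', 'new copy', fakeDb).then(() => {
  console.log(cache.has('page:home')); // false -- next read refills the cache
});
```

Deleting rather than rewriting the cache entry on update keeps the cache logic in one place: only `getPageContent` ever writes to the cache.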
Step 4: Right-Size the Database
With caching in place, the database was barely being touched. Metrics showed:
- Average connections: 2
- CPU usage: <5%
- IOPS: Practically nothing
Solution: Moved from db.m5.large ($146/month) to db.t4g.micro ($14/month).
Step 5: Fix Backup Retention
They were keeping daily database backups forever. For a 2GB database. They had 700+ backups stored.
Solution: Set retention to 7 days. Saved ~$80/month in backup storage.
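RDS handles the expiry automatically once you set the retention period, but it's worth being able to picture what the policy does. This sketch partitions a list of backup timestamps into kept and expired under a 7-day window (the function name and the fixed "now" are illustrative):

```javascript
// Illustrates a 7-day retention window: backups newer than the cutoff
// survive, everything older is expired. RDS does this for you once the
// retention period is set; this just makes the policy concrete.
const RETENTION_DAYS = 7;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

const partitionBackups = (backupDates, now = new Date()) => {
  const cutoff = now.getTime() - RETENTION_DAYS * MS_PER_DAY;
  return {
    keep: backupDates.filter((d) => d.getTime() > cutoff),
    expire: backupDates.filter((d) => d.getTime() <= cutoff),
  };
};

// 30 daily backups, one per day counting back from "now"
const now = new Date('2024-06-30T00:00:00Z');
const backups = Array.from({ length: 30 }, (_, i) =>
  new Date(now.getTime() - i * MS_PER_DAY));

const { keep, expire } = partitionBackups(backups, now);
console.log(keep.length, expire.length); // 7 23
```

Going from 700+ retained backups to 7 is where that ~$80/month came from.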
The Results
| Service | Before | After |
|---|---|---|
| EC2 Instance | $244 | $15 |
| RDS Database | $146 | $14 |
| ElastiCache Redis | $0 | $11 |
| CloudFront CDN | $0 | $5 |
| Backup Storage | $90 | $10 |
| Other (bandwidth, etc.) | $320 | $209 |
| Total Monthly | $800 | $264 |
67% reduction. $536/month saved. $6,432/year.
Bonus: The Site Got Faster
Page load times went from ~2.8 seconds to ~0.9 seconds. The CDN and caching actually improved the user experience while cutting costs.
Key Takeaways
- Check your metrics. CloudWatch tells you exactly what you're using. Look at it.
- Right-size everything. You probably don't need that t3.2xlarge.
- Cache aggressively. Redis is cheap. Database queries are not.
- Use a CDN. Serving static assets from EC2 is expensive and slow.
- Review backup retention. You don't need 700 backups of a 2GB database.
Timeline
The entire optimization took about four hours of hands-on work, spread across a week so each change could be monitored before making the next. Total cost to the client: $500 for the audit and implementation.
ROI: Paid for itself in the first month. Will save them $6,000+ per year going forward.
Want Help With Your AWS Bill?
I do AWS cost audits for $1,500 flat rate. Most clients see ROI within 2-3 months. If you're spending more than $500/month on AWS and haven't optimized in the last year, you're probably overpaying.