A Cloudflare Worker that provides automatic fallback from a main S3-compatible bucket to a read-only bucket for missing objects. Perfect for development and staging environments that need access to production media without the risk of accidental overwrites or expensive storage duplication.
Development and staging environments often need access to production media files (user avatars, uploaded documents, product images) to properly test features and UI. Current approaches have significant downsides:
- Direct production access - Risks accidental overwrites or deletes
- Full media copies - Expensive, slow, and requires constant synchronization
- Manual management - Error-prone and leads to broken UIs
- Custom application logic - Requires code changes and introduces complexity
S3 Fallback Worker acts as a transparent proxy that:
- Checks your environment-specific bucket first
- Automatically falls back to your production bucket if the file doesn't exist
- Serves the file with zero application changes required
- Optionally bypasses all fallback logic in production for zero overhead
- Zero Application Changes - Drop-in replacement for your CDN URL
- Production Bypass Mode - Disable fallback in production for zero performance overhead
- Edge Performance - Runs on Cloudflare's global network with <50ms overhead
- S3-Compatible - Works with AWS S3, Cloudflare R2, DigitalOcean Spaces, and more
- Optional Aggressive Caching - Add immutable cache headers to fallback content
- Framework Agnostic - Works with Laravel, Rails, Django, Next.js, or any framework
Your staging environment uses anonymized production database snapshots, but all the media file paths reference production storage.
Without S3 Fallback Worker:
- Broken images everywhere
- Manual copying of specific files
- Direct production access (risky)
With S3 Fallback Worker:
- UI works perfectly with production media
- Upload test files to staging bucket (won't affect production)
- Zero risk of modifying production storage
You're migrating from AWS S3 to Cloudflare R2 (or any other S3-compatible storage).
Without S3 Fallback Worker:
- Risky "big bang" migration
- Complex dual-write logic
- Potential downtime
With S3 Fallback Worker:
- Point new uploads to R2
- Old files automatically served from S3
- Gradual migration with zero downtime
- Disable fallback once migration complete
You have shared assets (templates, default images) in one bucket and tenant-specific media in separate buckets.
With S3 Fallback Worker:
- Check tenant bucket first
- Fall back to shared assets bucket
- Each tenant can override shared assets
Reduce storage costs by keeping only frequently-accessed files in your production bucket.
With S3 Fallback Worker:
- Main bucket: Hot storage (expensive, fast)
- Fallback bucket: Cold storage (cheap, slower)
- Transparent access to both tiers
- Node.js 18+ (for Wrangler CLI)
- A Cloudflare account (free tier works)
- Two S3-compatible buckets (main and fallback)
- Install Wrangler CLI

```sh
npm install -g wrangler
```

- Clone or Download

```sh
git clone https://github.com/yourusername/s3-fallback-worker.git
cd s3-fallback-worker
```

- Create Configuration

Copy the example configuration:

```sh
cp wrangler.toml.example wrangler.toml
```

- Configure Your Buckets

Edit wrangler.toml and set your bucket URLs and credentials:
```toml
[env.staging]
name = "s3-fallback-staging"

[env.staging.vars]
MAIN_BUCKET_URL = "https://staging-bucket.s3.us-east-1.amazonaws.com"
READONLY_BUCKET_URL = "https://production-bucket.s3.us-east-1.amazonaws.com"
SKIP_FALLBACK = "false"
CACHE_FALLBACK = "true"

[env.production]
name = "s3-fallback-production"

[env.production.vars]
MAIN_BUCKET_URL = "https://production-bucket.s3.us-east-1.amazonaws.com"
SKIP_FALLBACK = "true"
```

- Deploy

```sh
# Deploy to staging
wrangler deploy --env staging

# Deploy to production
wrangler deploy --env production
```

- Configure Custom Domain (Optional)
In the Cloudflare dashboard:
- Go to Workers & Pages
- Select your Worker
- Click "Custom Domains"
- Add `media.yourapp.com`
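Custom domains can also be declared directly in wrangler.toml instead of through the dashboard. A sketch (the `pattern` value is a placeholder for your own domain):

```toml
[env.staging]
routes = [
  { pattern = "media-staging.yourapp.com", custom_domain = true }
]
```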
| Variable | Required | Description |
|---|---|---|
| `MAIN_BUCKET_URL` | Yes | Your primary S3 bucket URL |
| `READONLY_BUCKET_URL` | No | Fallback bucket URL (omit if `SKIP_FALLBACK=true`) |
| `SKIP_FALLBACK` | No | Set to `"true"` to disable fallback (production mode) |
| `CACHE_FALLBACK` | No | Set to `"true"` to add aggressive caching for fallback content |
S3 Fallback Worker supports various S3-compatible services:
```text
# AWS S3
https://bucket-name.s3.region.amazonaws.com

# Cloudflare R2
https://pub-xxxxx.r2.dev

# DigitalOcean Spaces
https://bucket-name.region.digitaloceanspaces.com

# MinIO or self-hosted
https://minio.yourserver.com/bucket-name
```

The Worker passes through your bucket's authentication:
- Public buckets: No authentication needed
- Private buckets: Use signed URLs or configure bucket policies
- Custom authentication: Extend the Worker code to add Authorization headers
```text
Client Request: https://media.yourapp.com/avatars/user123.jpg
        ↓
S3 Fallback Worker
        ↓
Check: staging-bucket/avatars/user123.jpg
        ↓
Found? → Serve file
        ↓
Not found? → Check fallback
        ↓
Check: production-bucket/avatars/user123.jpg
        ↓
Found? → Serve file
        ↓
Not found? → Return 404
```
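The flow above can be sketched as a Worker fetch handler. This is a simplified illustration of the documented behavior, not the repository's actual source; the `tag` helper is invented for the sketch:

```javascript
// Simplified sketch of the fallback flow (illustration only).
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);

    // 1. Try the main bucket first.
    const main = await fetch(env.MAIN_BUCKET_URL + url.pathname);
    if (main.ok || env.SKIP_FALLBACK === "true") {
      return tag(main, "main", false);
    }

    // 2. Fall back to the read-only bucket for missing objects.
    const fallback = await fetch(env.READONLY_BUCKET_URL + url.pathname);
    if (fallback.ok) {
      return tag(fallback, "fallback", env.CACHE_FALLBACK === "true");
    }

    return new Response("Not Found", { status: 404 });
  },
};

// Re-wrap the upstream response, recording which bucket served it and
// optionally forcing immutable caching for fallback content.
function tag(response, source, aggressiveCache) {
  const headers = new Headers(response.headers);
  headers.set("X-Served-From", source);
  if (aggressiveCache) {
    headers.set("Cache-Control", "public, max-age=31536000, immutable");
  }
  return new Response(response.body, { status: response.status, headers });
}

export default worker;
```

Because `SKIP_FALLBACK` short-circuits before any fallback request, production bypass mode adds no extra upstream round trip.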
The Worker adds a header to indicate which bucket served the file:
```text
X-Served-From: main      # File from main bucket
X-Served-From: fallback  # File from fallback bucket
```
Without CACHE_FALLBACK:
- Passes through original bucket cache headers
- Respects your bucket's Cache-Control settings
With CACHE_FALLBACK=true:
- Adds `Cache-Control: public, max-age=31536000, immutable` to fallback content
- Main bucket cache headers are passed through unchanged
- Excellent for production media that never changes
Update your config/filesystems.php:
```php
'disks' => [
    's3_with_fallback' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
        'url' => env('AWS_URL'),
        'endpoint' => env('AWS_ENDPOINT'),
        'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
        'throw' => false,
    ],
],
```

Update your .env:

```ini
# Staging environment
AWS_BUCKET=staging-media
AWS_URL=https://media-staging.yourapp.com

# Production environment (bypass fallback)
AWS_BUCKET=production-media
AWS_URL=https://media.yourapp.com
```

Access files normally:

```php
// In Blade
<img src="{{ Storage::disk('s3_with_fallback')->url('avatars/user.jpg') }}" />

// In controllers
$url = Storage::disk('s3_with_fallback')->url($path);
```

See docs/laravel-integration.md for complete examples.
Update your config/storage.yml:
```yaml
staging:
  service: S3
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  region: us-east-1
  bucket: staging-media
  public: true
  endpoint: https://media-staging.yourapp.com

production:
  service: S3
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  region: us-east-1
  bucket: production-media
  public: true
  endpoint: https://media.yourapp.com
```

Update your settings.py:

```python
AWS_STORAGE_BUCKET_NAME = 'staging-media'
AWS_S3_CUSTOM_DOMAIN = 'media-staging.yourapp.com'
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
```

Run the Worker locally:

```sh
wrangler dev --env staging
```

This starts a local server at http://localhost:8787 that mimics the production Worker.
```sh
# File exists in main bucket
curl -I http://localhost:8787/test.jpg
# X-Served-From: main

# File only in fallback bucket
curl -I http://localhost:8787/production-only.jpg
# X-Served-From: fallback

# File doesn't exist anywhere
curl -I http://localhost:8787/missing.jpg
# HTTP 404
```

Tail logs to watch requests in real time:

```sh
# Tail production logs
wrangler tail --env production

# Tail staging logs
wrangler tail --env staging
```

Typical latency:

- Direct bucket access: ~20-50ms (baseline)
- S3 Fallback Worker (hit main bucket): ~25-70ms (+5-20ms overhead)
- S3 Fallback Worker (hit fallback): ~50-100ms (double request)
- Cached at edge: ~5-15ms (Cloudflare cache hit)
Cloudflare Workers pricing:
- Free tier: 100,000 requests/day
- Paid tier: $5/month + $0.50 per million requests
For most staging environments, the free tier is sufficient. A busy staging environment with 1 million requests/month would cost ~$5.50/month total.
Cloudflare Workers automatically scale to handle millions of requests with no configuration required. The Worker is stateless and runs on Cloudflare's global edge network.
- Use Production Bypass Mode

```toml
[env.production.vars]
MAIN_BUCKET_URL = "https://production-bucket.s3.us-east-1.amazonaws.com"
SKIP_FALLBACK = "true" # Zero overhead in production
```

- Configure Custom Domains

Use separate domains for each environment:

- `media.yourapp.com` - Production
- `media-staging.yourapp.com` - Staging
- `media-dev.yourapp.com` - Development

- Enable Aggressive Caching for Staging

```toml
[env.staging.vars]
CACHE_FALLBACK = "true" # Cache fallback content aggressively
```

- Monitor with Analytics

View Worker analytics in the Cloudflare dashboard:

- Request volume
- Error rates
- Latency percentiles
- Geographic distribution
- Verify deployment:

```sh
wrangler deployments list
```

- Check the environment name matches:

```sh
wrangler deploy --env staging
```

- Check `SKIP_FALLBACK` is not set to `"true"`
- Verify `READONLY_BUCKET_URL` is configured
- Check bucket URLs are accessible:

```sh
curl -I [BUCKET_URL]/path/to/file
```

- Check Worker logs:

```sh
wrangler tail --env staging
```
The Worker passes through CORS headers from your buckets. Ensure your buckets have CORS configured:
```json
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3600
  }
]
```

If responses are slow:

- Enable `CACHE_FALLBACK` for staging/dev environments
- Verify buckets are in optimal regions
- Check the Cloudflare cache hit rate in analytics
- Consider using Cloudflare R2 for both buckets (lower latency)
- Worker doesn't currently pass Authorization headers
- Use public bucket access or signed URLs
- For private buckets, extend Worker code to add auth headers
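As a hypothetical sketch of that extension, the upstream request could carry a static token. Note this only suits S3-compatible services or proxies that accept token auth; AWS S3 itself requires Signature V4. `SECRET_TOKEN` is an invented binding (e.g. created via `wrangler secret put SECRET_TOKEN`):

```javascript
// Build an authenticated upstream request for a private bucket.
// The Bearer scheme and SECRET_TOKEN binding are assumptions for this
// sketch, not part of the project's documented configuration.
function authenticatedRequest(bucketUrl, pathname, token) {
  return new Request(bucketUrl + pathname, {
    headers: { Authorization: `Bearer ${token}` },
  });
}
```

Inside the handler you would then call something like `fetch(authenticatedRequest(env.READONLY_BUCKET_URL, url.pathname, env.SECRET_TOKEN))`.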
Extend the Worker to check multiple fallback buckets:
```js
// Try each configured fallback bucket in order until one has the object.
const FALLBACK_BUCKETS = [
  env.READONLY_BUCKET_URL,
  env.ARCHIVE_BUCKET_URL,
  env.LEGACY_BUCKET_URL
];

for (const bucket of FALLBACK_BUCKETS) {
  const response = await fetch(bucket + url.pathname);
  if (response.ok) return response;
}
```

Add custom headers to responses:
```js
const headers = new Headers(response.headers);
headers.set('X-Custom-Header', 'value');

return new Response(response.body, {
  status: response.status,
  headers
});
```

Generate time-limited signed URLs for private content.
For example, implement AWS Signature V4 in the Worker, or rely on bucket policies.

Always use read-only credentials for the fallback bucket:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::production-bucket/*"
    }
  ]
}
```

- Never use the production bucket as the main bucket in staging
- Use separate AWS/R2 accounts for environments if possible
- Monitor Worker logs for suspicious activity
Consider adding rate limiting for public-facing Workers:
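A minimal in-memory sketch (`allowRequest` and its parameters are invented here; each Worker isolate keeps its own counters, so this is best-effort throttling only):

```javascript
// Naive fixed-window counter held in isolate memory. Isolates are
// ephemeral and regional, so this is not a globally enforced limit.
const hits = new Map();

function allowRequest(clientId, limit = 60, windowMs = 60_000, now = Date.now()) {
  const key = `${clientId}:${Math.floor(now / windowMs)}`;
  const count = (hits.get(key) ?? 0) + 1;
  hits.set(key, count);
  return count <= limit;
}
```

In the fetch handler, a natural key is `request.headers.get("CF-Connecting-IP")`.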
Cloudflare Rate Limiting rules or Workers KV can enforce limits across the edge.

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
```sh
git clone https://github.com/yourusername/s3-fallback-worker.git
cd s3-fallback-worker
npm install
wrangler dev
```

Run the tests:

```sh
npm test
```

MIT License - see LICENSE for details.
- Documentation: docs/
- Issues: GitHub Issues
- Discussions: GitHub Discussions
See .agent-os/product/roadmap.md for planned features and development phases.
Built with Cloudflare Workers and powered by Agent OS.