
Complete Cloudflare R2 Implementation Guide: S3-Compatible Storage with Zero Egress Fees
Save Thousands on Storage Costs
Cloudflare R2 offers zero egress fees and an S3-compatible API, potentially saving you 70-90% on storage costs compared to AWS S3. Perfect for high-traffic websites, media platforms, and data-heavy applications.
Tired of expensive egress fees from AWS S3? Cloudflare R2 offers S3-compatible object storage with zero egress fees, making it perfect for websites, applications, and media platforms. This comprehensive guide will show you exactly how to implement R2 in your projects with real code examples.
What is Cloudflare R2?
Cloudflare R2 is an object storage service that provides S3-compatible APIs without the typical egress fees that make other cloud storage services expensive for high-traffic applications.
Key Features
- S3-compatible API (drop-in replacement)
- Zero egress fees for data transfer
- Global edge network distribution
- Automatic compression and optimization
- Built-in CDN capabilities
- 99.9% uptime SLA
- Seamless integration with Cloudflare services

Perfect For
- Media and content delivery platforms
- High-traffic websites and applications
- Backup and archival storage
- Static website hosting
- Image and video streaming services
- Data analytics and processing pipelines
- Multi-tenant SaaS applications

Why Choose R2 Over AWS S3?
1. Zero Egress Fees = Massive Savings
The Problem with S3: AWS charges $0.09 per GB for data transfer out, which can cost thousands monthly for high-traffic sites.
R2 Solution: Zero egress fees means you only pay for storage ($0.015/GB/month), not for serving your content.
Real Example: A website serving 1TB of images monthly pays roughly $90 in S3 egress fees alone ($0.09 × 1,000GB), versus $0 with R2, saving about $90 every month before storage costs are even compared.
2. Performance Benefits
Global Edge Network: Content delivered from 250+ locations worldwide for faster load times.
Built-in CDN: No need for separate CDN service - R2 includes edge caching automatically.
Smart Compression: Automatic compression and optimization without additional configuration.
3. Developer Experience
S3 Compatible: Use existing AWS SDK code with minimal changes.
Simple Migration: Most applications can migrate in under 30 minutes.
Better APIs: Cleaner, more intuitive API design with better documentation.
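For most AWS SDK v3 code, the migration is essentially a change to the client constructor; the rest of the commands stay the same. A minimal sketch (the account ID and environment variable names below are placeholders, not values from this guide):

```javascript
import { S3Client } from '@aws-sdk/client-s3';

// Before: AWS S3
const s3 = new S3Client({ region: 'us-east-1' });

// After: Cloudflare R2 - same SDK, same commands, different endpoint
const r2 = new S3Client({
  region: 'auto', // R2 does not use regions; 'auto' is the conventional value
  endpoint: 'https://<ACCOUNT_ID>.r2.cloudflarestorage.com', // placeholder account ID
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});
```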
Cost Comparison & Savings
Here's a detailed cost comparison between AWS S3 and Cloudflare R2 for typical usage scenarios:
| Feature | AWS S3 | Cloudflare R2 | Savings |
|---|---|---|---|
| Storage (per GB/month) | $0.023 | $0.015 | 35% cheaper |
| Egress (per GB) | $0.09 | $0.00 | 100% FREE |
| Class A Operations (per 1,000) | $0.005 | $0.0045 | 10% cheaper |
| Class B Operations (per 1,000) | $0.0004 | $0.00036 | 10% cheaper |
| Monthly Bill (1TB storage, 500GB egress) | $68.50 | $15.36 | $53.14 saved! |
Real-World Savings Examples
Small Blog (100GB storage, 50GB egress/month):
- AWS S3: $6.80/month
- Cloudflare R2: $1.54/month
- Savings: $5.26/month ($63/year)

Media Platform (10TB storage, 5TB egress/month):
- AWS S3: $685/month
- Cloudflare R2: $153.60/month
- Savings: $531.40/month ($6,377/year)

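The figures above can be reproduced with a small calculator using the per-GB rates quoted in this guide (per-operation charges are ignored here, which is why the R2 totals above are a few cents higher):

```javascript
// Monthly cost estimate from storage and egress volume (GB) at per-GB prices.
// Rates are the ones quoted in this guide; operation charges are deliberately ignored.
const PRICES = {
  s3: { storagePerGB: 0.023, egressPerGB: 0.09 },
  r2: { storagePerGB: 0.015, egressPerGB: 0.0 },
};

function monthlyCost(storageGB, egressGB, { storagePerGB, egressPerGB }) {
  return storageGB * storagePerGB + egressGB * egressPerGB;
}

// Small blog example: 100GB stored, 50GB served per month
console.log(monthlyCost(100, 50, PRICES.s3).toFixed(2)); // "6.80"
console.log(monthlyCost(100, 50, PRICES.r2).toFixed(2)); // "1.50"
```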
Getting Started with R2
Let's set up Cloudflare R2 step by step. You'll need a Cloudflare account and access to the dashboard.
Enable R2 in Cloudflare Dashboard
Access R2 Object Storage from your Cloudflare dashboard and create your first bucket.
Bucket Naming Requirements
- Must be unique within your Cloudflare account
- 3-63 characters long
- Lowercase letters, numbers, and hyphens only
- Cannot start or end with a hyphen
- Cannot contain consecutive hyphens
- Cannot be formatted as an IP address

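A quick local validator for the naming rules above can catch mistakes before an API call. This is a sketch of the rules as listed in this guide, not Cloudflare's authoritative check (uniqueness, in particular, can only be verified server-side):

```javascript
// Validates a candidate bucket name against the rules listed above.
function isValidBucketName(name) {
  if (name.length < 3 || name.length > 63) return false;        // length
  if (!/^[a-z0-9-]+$/.test(name)) return false;                 // allowed characters
  if (name.startsWith('-') || name.endsWith('-')) return false; // hyphen at the edges
  if (name.includes('--')) return false;                        // consecutive hyphens
  if (/^\d{1,3}(\.\d{1,3}){3}$/.test(name)) return false;       // IP-address form
  return true;
}

console.log(isValidBucketName('my-app-assets')); // true
console.log(isValidBucketName('My_Bucket'));     // false (uppercase and underscore)
console.log(isValidBucketName('192.168.0.1'));   // false (IP-address form)
```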
API Keys & Authentication Setup
Generate R2 API Tokens
Create secure API tokens for programmatic access to your R2 buckets.
# .env file - NEVER commit to version control
CLOUDFLARE_R2_ACCOUNT_ID=your-account-id-here
CLOUDFLARE_R2_ACCESS_KEY_ID=your-access-key-id
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your-secret-access-key
CLOUDFLARE_R2_BUCKET_NAME=your-bucket-name
CLOUDFLARE_R2_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
JavaScript Implementation
Here's how to integrate Cloudflare R2 with JavaScript using the AWS SDK v3. This works in both browser and Node.js environments.
# Install AWS SDK v3 for S3-compatible operations
npm install @aws-sdk/client-s3
npm install @aws-sdk/s3-request-presigner
# Optional: For file upload utilities
npm install @aws-sdk/lib-storage
// R2Client.js - S3-compatible client for Cloudflare R2
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
  ListObjectsV2Command,
  HeadObjectCommand
} from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { Upload } from '@aws-sdk/lib-storage';
class CloudflareR2Client {
  constructor() {
    this.client = new S3Client({
      region: 'auto', // R2 uses 'auto' region
      endpoint: process.env.CLOUDFLARE_R2_ENDPOINT,
      credentials: {
        accessKeyId: process.env.CLOUDFLARE_R2_ACCESS_KEY_ID,
        secretAccessKey: process.env.CLOUDFLARE_R2_SECRET_ACCESS_KEY,
      },
      forcePathStyle: true, // Required for R2 compatibility
    });
    
    this.bucketName = process.env.CLOUDFLARE_R2_BUCKET_NAME;
  }
  /**
   * Upload a file to R2
   * @param {string} key - Object key (file path)
   * @param {Buffer|Uint8Array|string} body - File content
   * @param {Object} options - Additional options (ContentType, metadata, etc.)
   */
  async uploadFile(key, body, options = {}) {
    try {
      const command = new PutObjectCommand({
        Bucket: this.bucketName,
        Key: key,
        Body: body,
        ContentType: options.contentType || 'application/octet-stream',
        Metadata: options.metadata || {},
        CacheControl: options.cacheControl || 'public, max-age=31536000',
        ...options
      });
      const result = await this.client.send(command);
      
      return {
        success: true,
        key: key,
        etag: result.ETag,
        // Note: public access goes through the bucket's assigned r2.dev subdomain
        // or a custom domain; adjust this URL pattern to your bucket's public hostname
        url: `https://${this.bucketName}.r2.dev/${key}`,
        location: result.Location
      };
    } catch (error) {
      console.error('R2 Upload Error:', error);
      throw new Error(`Failed to upload file: ${error.message}`);
    }
  }
  /**
   * Upload large files with multipart upload
   * @param {string} key - Object key
   * @param {File|Buffer} body - Large file content
   * @param {Object} options - Upload options
   */
  async uploadLargeFile(key, body, options = {}) {
    try {
      const upload = new Upload({
        client: this.client,
        params: {
          Bucket: this.bucketName,
          Key: key,
          Body: body,
          ContentType: options.contentType || 'application/octet-stream',
          Metadata: options.metadata || {},
          ...options
        },
        // Configure multipart upload
        queueSize: 4, // Number of concurrent uploads
        partSize: 1024 * 1024 * 5, // 5MB per part
        leavePartsOnError: false,
      });
      // Optional: Track upload progress
      upload.on('httpUploadProgress', (progress) => {
        // progress.total can be undefined for streams of unknown length
        if (!progress.total) return;
        const percentage = Math.round((progress.loaded / progress.total) * 100);
        console.log(`Upload progress: ${percentage}%`);

        // Call progress callback if provided
        if (options.onProgress) {
          options.onProgress(percentage, progress);
        }
      });
      const result = await upload.done();
      
      return {
        success: true,
        key: key,
        etag: result.ETag,
        url: `https://${this.bucketName}.r2.dev/${key}`,
        location: result.Location
      };
    } catch (error) {
      console.error('R2 Large File Upload Error:', error);
      throw new Error(`Failed to upload large file: ${error.message}`);
    }
  }
  /**
   * Download a file from R2
   * @param {string} key - Object key
   */
  async downloadFile(key) {
    try {
      const command = new GetObjectCommand({
        Bucket: this.bucketName,
        Key: key,
      });
      const result = await this.client.send(command);
      
      return {
        success: true,
        body: result.Body,
        contentType: result.ContentType,
        contentLength: result.ContentLength,
        lastModified: result.LastModified,
        etag: result.ETag,
        metadata: result.Metadata
      };
    } catch (error) {
      console.error('R2 Download Error:', error);
      throw new Error(`Failed to download file: ${error.message}`);
    }
  }
  /**
   * Get file information without downloading content
   * @param {string} key - Object key
   */
  async getFileInfo(key) {
    try {
      const command = new HeadObjectCommand({
        Bucket: this.bucketName,
        Key: key,
      });
      const result = await this.client.send(command);
      
      return {
        success: true,
        contentType: result.ContentType,
        contentLength: result.ContentLength,
        lastModified: result.LastModified,
        etag: result.ETag,
        metadata: result.Metadata
      };
    } catch (error) {
      if (error.name === 'NotFound') {
        return { success: false, error: 'File not found' };
      }
      
      console.error('R2 File Info Error:', error);
      throw new Error(`Failed to get file info: ${error.message}`);
    }
  }
  /**
   * Delete a file from R2
   * @param {string} key - Object key
   */
  async deleteFile(key) {
    try {
      const command = new DeleteObjectCommand({
        Bucket: this.bucketName,
        Key: key,
      });
      await this.client.send(command);
      
      return {
        success: true,
        key: key,
        message: 'File deleted successfully'
      };
    } catch (error) {
      console.error('R2 Delete Error:', error);
      throw new Error(`Failed to delete file: ${error.message}`);
    }
  }
  /**
   * List files in bucket with optional prefix
   * @param {string} prefix - Filter by prefix (folder-like structure)
   * @param {number} maxKeys - Maximum number of keys to return
   */
  async listFiles(prefix = '', maxKeys = 1000) {
    try {
      const command = new ListObjectsV2Command({
        Bucket: this.bucketName,
        Prefix: prefix,
        MaxKeys: maxKeys,
      });
      const result = await this.client.send(command);
      
      return {
        success: true,
        files: result.Contents?.map(file => ({
          key: file.Key,
          size: file.Size,
          lastModified: file.LastModified,
          etag: file.ETag,
          url: `https://${this.bucketName}.r2.dev/${file.Key}`
        })) || [],
        isTruncated: result.IsTruncated,
        nextContinuationToken: result.NextContinuationToken
      };
    } catch (error) {
      console.error('R2 List Files Error:', error);
      throw new Error(`Failed to list files: ${error.message}`);
    }
  }
  /**
   * Generate a presigned URL for secure file access
   * @param {string} key - Object key
   * @param {number} expiresIn - URL expiration time in seconds (default: 1 hour)
   * @param {string} operation - Operation type ('getObject' or 'putObject')
   */
  async getPresignedUrl(key, expiresIn = 3600, operation = 'getObject') {
    try {
      let command;
      
      if (operation === 'putObject') {
        command = new PutObjectCommand({
          Bucket: this.bucketName,
          Key: key,
        });
      } else {
        command = new GetObjectCommand({
          Bucket: this.bucketName,
          Key: key,
        });
      }
      const url = await getSignedUrl(this.client, command, { expiresIn });
      
      return {
        success: true,
        url: url,
        expiresIn: expiresIn,
        operation: operation
      };
    } catch (error) {
      console.error('R2 Presigned URL Error:', error);
      throw new Error(`Failed to generate presigned URL: ${error.message}`);
    }
  }
  /**
   * Check if a file exists in R2
   * @param {string} key - Object key
   */
  async fileExists(key) {
    try {
      // getFileInfo returns { success: false } for a missing file instead of
      // throwing, so check the flag rather than relying on an exception
      const info = await this.getFileInfo(key);
      return info.success === true;
    } catch (error) {
      return false;
    }
  }
  /**
   * Copy a file within R2 or from another S3-compatible source
   * @param {string} sourceKey - Source object key
   * @param {string} destinationKey - Destination object key
   * @param {Object} options - Copy options
   */
  async copyFile(sourceKey, destinationKey, options = {}) {
    try {
      // Download and re-upload; CopyObjectCommand also works for server-side copies within R2
      const sourceFile = await this.downloadFile(sourceKey);
      
      const result = await this.uploadFile(
        destinationKey, 
        sourceFile.body, 
        {
          contentType: sourceFile.contentType,
          ...options
        }
      );
      
      return {
        success: true,
        sourceKey: sourceKey,
        destinationKey: destinationKey,
        ...result
      };
    } catch (error) {
      console.error('R2 Copy File Error:', error);
      throw new Error(`Failed to copy file: ${error.message}`);
    }
  }
}
// Export singleton instance
export const r2Client = new CloudflareR2Client();
export default CloudflareR2Client;
Node.js Backend Integration
Here's a complete Node.js/Express backend implementation for handling file uploads to Cloudflare R2:
// server.js - Complete Express server with R2 file handling
import express from 'express';
import multer from 'multer';
import cors from 'cors';
import path from 'path';
import { r2Client } from './R2Client.js';
import { v4 as uuidv4 } from 'uuid';
import sharp from 'sharp'; // For image optimization
import rateLimit from 'express-rate-limit';
const app = express();
const PORT = process.env.PORT || 3000;
// Middleware
app.use(cors());
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ extended: true, limit: '50mb' }));
// Rate limiting for uploads
const uploadLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 10, // Limit each IP to 10 uploads per windowMs
  message: 'Too many uploads from this IP, please try again later.',
  standardHeaders: true,
  legacyHeaders: false,
});
// Configure multer for memory storage
const upload = multer({
  storage: multer.memoryStorage(),
  limits: {
    fileSize: 100 * 1024 * 1024, // 100MB limit
    files: 5 // Maximum 5 files per upload
  },
  fileFilter: (req, file, cb) => {
    // Define allowed file types
    const allowedTypes = [
      'image/jpeg', 'image/jpg', 'image/png', 'image/gif', 'image/webp',
      'application/pdf', 'text/plain', 'application/json',
      'video/mp4', 'video/mpeg', 'video/quicktime',
      'audio/mpeg', 'audio/wav'
    ];
    
    if (allowedTypes.includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error(`File type not allowed: ${file.mimetype}`), false);
    }
  }
});
// Utility function to generate secure file paths
function generateSecureFilePath(originalName, userId = null) {
  const ext = path.extname(originalName);
  const timestamp = Date.now();
  const uuid = uuidv4();
  const userPath = userId ? `users/${userId}` : 'public';
  
  return `${userPath}/${timestamp}-${uuid}${ext}`;
}
// Utility function to optimize images
async function optimizeImage(buffer, mimetype) {
  if (!mimetype.startsWith('image/')) {
    return buffer;
  }
  try {
    let sharpInstance = sharp(buffer);
    
    // Get image metadata
    const metadata = await sharpInstance.metadata();
    
    // Resize if too large (max 2048px on longest side)
    if (metadata.width > 2048 || metadata.height > 2048) {
      sharpInstance = sharpInstance.resize(2048, 2048, {
        fit: 'inside',
        withoutEnlargement: true
      });
    }
    
    // Convert to optimal format and compress
    if (mimetype === 'image/png') {
      return await sharpInstance
        .png({ quality: 90, compressionLevel: 8 })
        .toBuffer();
    } else {
      return await sharpInstance
        .jpeg({ quality: 85, progressive: true })
        .toBuffer();
    }
  } catch (error) {
    console.warn('Image optimization failed, using original:', error.message);
    return buffer;
  }
}
// Single file upload endpoint
app.post('/api/upload/single', uploadLimiter, upload.single('file'), async (req, res) => {
  try {
    if (!req.file) {
      return res.status(400).json({
        success: false,
        error: 'No file provided'
      });
    }
    const { userId, folder, optimize = true } = req.body;
    
    // Generate secure file path
    const filePath = folder 
      ? `${folder}/${generateSecureFilePath(req.file.originalname, userId).split('/').pop()}`
      : generateSecureFilePath(req.file.originalname, userId);
    // Optimize image if requested
    let fileBuffer = req.file.buffer;
    if (optimize && req.file.mimetype.startsWith('image/')) {
      fileBuffer = await optimizeImage(req.file.buffer, req.file.mimetype);
    }
    // Upload to R2
    const uploadResult = await r2Client.uploadFile(filePath, fileBuffer, {
      contentType: req.file.mimetype,
      metadata: {
        'original-name': req.file.originalname,
        'upload-timestamp': Date.now().toString(),
        'user-id': userId || 'anonymous',
        'file-size': fileBuffer.length.toString(),
        'optimized': optimize.toString()
      },
      cacheControl: 'public, max-age=31536000' // 1 year cache
    });
    res.json({
      success: true,
      message: 'File uploaded successfully',
      file: {
        key: uploadResult.key,
        url: uploadResult.url,
        originalName: req.file.originalname,
        size: fileBuffer.length,
        type: req.file.mimetype,
        uploadedAt: new Date().toISOString()
      }
    });
  } catch (error) {
    console.error('Upload Error:', error);
    res.status(500).json({
      success: false,
      error: error.message || 'Upload failed'
    });
  }
});
// Multiple files upload endpoint
app.post('/api/upload/multiple', uploadLimiter, upload.array('files', 5), async (req, res) => {
  try {
    if (!req.files || req.files.length === 0) {
      return res.status(400).json({
        success: false,
        error: 'No files provided'
      });
    }
    const { userId, folder, optimize = true } = req.body;
    const uploadResults = [];
    const errors = [];
    // Process uploads in parallel with limited concurrency
    const uploadPromises = req.files.map(async (file) => {
      try {
        const filePath = folder 
          ? `${folder}/${generateSecureFilePath(file.originalname, userId).split('/').pop()}`
          : generateSecureFilePath(file.originalname, userId);
        // Optimize if image
        let fileBuffer = file.buffer;
        if (optimize && file.mimetype.startsWith('image/')) {
          fileBuffer = await optimizeImage(file.buffer, file.mimetype);
        }
        const result = await r2Client.uploadFile(filePath, fileBuffer, {
          contentType: file.mimetype,
          metadata: {
            'original-name': file.originalname,
            'upload-timestamp': Date.now().toString(),
            'user-id': userId || 'anonymous',
            'file-size': fileBuffer.length.toString(),
            'optimized': optimize.toString()
          },
          cacheControl: 'public, max-age=31536000'
        });
        return {
          success: true,
          key: result.key,
          url: result.url,
          originalName: file.originalname,
          size: fileBuffer.length,
          type: file.mimetype
        };
      } catch (error) {
        return {
          success: false,
          originalName: file.originalname,
          error: error.message
        };
      }
    });
    const results = await Promise.all(uploadPromises);
    
    // Separate successful uploads from errors
    results.forEach(result => {
      if (result.success) {
        uploadResults.push(result);
      } else {
        errors.push(result);
      }
    });
    res.json({
      success: uploadResults.length > 0,
      message: `${uploadResults.length} files uploaded successfully`,
      files: uploadResults,
      errors: errors.length > 0 ? errors : undefined,
      summary: {
        total: req.files.length,
        successful: uploadResults.length,
        failed: errors.length
      }
    });
  } catch (error) {
    console.error('Multiple Upload Error:', error);
    res.status(500).json({
      success: false,
      error: error.message || 'Upload failed'
    });
  }
});
// Get file info endpoint
app.get('/api/file/:key(*)', async (req, res) => {
  try {
    const key = req.params.key;
    
    const fileInfo = await r2Client.getFileInfo(key);
    
    if (!fileInfo.success) {
      return res.status(404).json({
        success: false,
        error: 'File not found'
      });
    }
    res.json({
      success: true,
      file: {
        key: key,
        url: `https://${process.env.CLOUDFLARE_R2_BUCKET_NAME}.r2.dev/${key}`,
        contentType: fileInfo.contentType,
        size: fileInfo.contentLength,
        lastModified: fileInfo.lastModified,
        metadata: fileInfo.metadata
      }
    });
  } catch (error) {
    console.error('Get File Info Error:', error);
    res.status(500).json({
      success: false,
      error: error.message
    });
  }
});
// Delete file endpoint
app.delete('/api/file/:key(*)', async (req, res) => {
  try {
    const key = req.params.key;
    const { userId } = req.body;
    
    // Security check: only allow deletion within the user's folder or the public folder.
    // NOTE: userId comes from the request body here; in production, derive it from an
    // authenticated session or token instead of trusting client input.
    if (userId && !key.startsWith(`users/${userId}/`) && !key.startsWith('public/')) {
      return res.status(403).json({
        success: false,
        error: 'Not authorized to delete this file'
      });
    }
    const result = await r2Client.deleteFile(key);
    
    res.json({
      success: true,
      message: 'File deleted successfully',
      key: key
    });
  } catch (error) {
    console.error('Delete File Error:', error);
    res.status(500).json({
      success: false,
      error: error.message
    });
  }
});
// List files endpoint
app.get('/api/files', async (req, res) => {
  try {
    const { prefix = '', limit = 50 } = req.query;
    
    const result = await r2Client.listFiles(prefix, parseInt(limit));
    
    res.json({
      success: true,
      files: result.files,
      hasMore: result.isTruncated,
      nextToken: result.nextContinuationToken
    });
  } catch (error) {
    console.error('List Files Error:', error);
    res.status(500).json({
      success: false,
      error: error.message
    });
  }
});
// Generate presigned URL endpoint
app.post('/api/presigned-url', async (req, res) => {
  try {
    const { key, operation = 'getObject', expiresIn = 3600 } = req.body;
    
    if (!key) {
      return res.status(400).json({
        success: false,
        error: 'File key is required'
      });
    }
    const result = await r2Client.getPresignedUrl(key, expiresIn, operation);
    
    res.json({
      success: true,
      url: result.url,
      expiresIn: result.expiresIn,
      operation: result.operation
    });
  } catch (error) {
    console.error('Presigned URL Error:', error);
    res.status(500).json({
      success: false,
      error: error.message
    });
  }
});
// Error handling middleware
app.use((error, req, res, next) => {
  if (error instanceof multer.MulterError) {
    if (error.code === 'LIMIT_FILE_SIZE') {
      return res.status(400).json({
        success: false,
        error: 'File too large. Maximum size is 100MB.'
      });
    }
    if (error.code === 'LIMIT_FILE_COUNT') {
      return res.status(400).json({
        success: false,
        error: 'Too many files. Maximum is 5 files per upload.'
      });
    }
  }
  
  res.status(500).json({
    success: false,
    error: error.message || 'Server error'
  });
});
// Health check endpoint
app.get('/api/health', (req, res) => {
  res.json({
    success: true,
    message: 'R2 Upload Server is running',
    timestamp: new Date().toISOString()
  });
});
// Start server
app.listen(PORT, () => {
  console.log(`R2 Upload Server running on port ${PORT}`);
  console.log(`Health check: http://localhost:${PORT}/api/health`);
});
This Express server provides a complete file upload solution with image optimization, security features, and proper error handling.
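To call the single-file endpoint from a browser or Node 18+ client, something like the following works. The URL path and field names match the Express routes above; `buildUploadForm` and `uploadToServer` are helper names invented here for illustration:

```javascript
// Builds the multipart form body the /api/upload/single endpoint expects.
function buildUploadForm(fileBlob, { fileName = 'upload.bin', userId, folder } = {}) {
  const form = new FormData();
  form.append('file', fileBlob, fileName);
  if (userId) form.append('userId', userId);
  if (folder) form.append('folder', folder);
  return form;
}

// Sends the form and returns the server's JSON response.
async function uploadToServer(fileBlob, options = {}, baseUrl = 'http://localhost:3000') {
  const res = await fetch(`${baseUrl}/api/upload/single`, {
    method: 'POST',
    body: buildUploadForm(fileBlob, options),
  });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
  return res.json();
}
```

Because `fetch`, `FormData`, and `Blob` are global in Node 18+ as well as in browsers, the same snippet runs in both environments.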
🎉 Congratulations! You're Ready for Production
What You've Learned:
- ✅ Complete R2 setup and configuration
- ✅ S3-compatible API implementation
- ✅ Secure file upload system
- ✅ Image optimization and processing
- ✅ Production-ready error handling
- ✅ Cost optimization strategies

Your Benefits:
- 💰 Save 70-90% on storage costs
- 🚀 Zero egress fees forever
- 🌍 Global edge delivery network
- ⚡ Faster performance than S3
- 🔒 Enterprise-grade security
- 📈 Scalable to any traffic level

Python Implementation
Here's how to use Cloudflare R2 with Python using the boto3 library (AWS SDK for Python):
# Install boto3 for S3-compatible operations
pip install boto3
pip install python-dotenv
# Optional: For image processing
pip install Pillow
# r2_client.py - Cloudflare R2 client using boto3
import boto3
import os
import uuid
import mimetypes
from datetime import datetime, timedelta
from botocore.exceptions import ClientError, NoCredentialsError
from botocore.config import Config
from typing import Optional, List, Dict, Any
import logging
# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class CloudflareR2Client:
    """
    A Python client for Cloudflare R2 using boto3 S3-compatible API
    """
    
    def __init__(self):
        """Initialize R2 client with credentials from environment variables"""
        self.account_id = os.getenv('CLOUDFLARE_R2_ACCOUNT_ID')
        self.access_key_id = os.getenv('CLOUDFLARE_R2_ACCESS_KEY_ID')
        self.secret_access_key = os.getenv('CLOUDFLARE_R2_SECRET_ACCESS_KEY')
        self.bucket_name = os.getenv('CLOUDFLARE_R2_BUCKET_NAME')
        
        if not all([self.account_id, self.access_key_id, self.secret_access_key, self.bucket_name]):
            raise ValueError("Missing required environment variables for R2 configuration")
        
        # Configure boto3 client for R2
        self.client = boto3.client(
            's3',
            endpoint_url=f'https://{self.account_id}.r2.cloudflarestorage.com',
            aws_access_key_id=self.access_key_id,
            aws_secret_access_key=self.secret_access_key,
            region_name='auto',  # R2 uses 'auto' region
            config=Config(
                s3={'addressing_style': 'path'},  # Required for R2
                retries={'max_attempts': 3}
            )
        )
    
    def upload_file(self, 
                   file_path: str, 
                   object_key: str,
                   content_type: Optional[str] = None,
                   metadata: Optional[Dict[str, str]] = None,
                   cache_control: str = 'public, max-age=31536000') -> Dict[str, Any]:
        """
        Upload a file to R2
        
        Args:
            file_path: Local path to the file to upload
            object_key: S3 object key (remote file path)
            content_type: MIME type (auto-detected if not provided)
            metadata: Custom metadata dictionary
            cache_control: Cache control header
            
        Returns:
            Dictionary with upload result information
        """
        try:
            # Auto-detect content type if not provided
            if not content_type:
                content_type, _ = mimetypes.guess_type(file_path)
                content_type = content_type or 'application/octet-stream'
            
            # Prepare upload arguments
            upload_args = {
                'ContentType': content_type,
                'CacheControl': cache_control,
            }
            
            # Add metadata if provided
            if metadata:
                upload_args['Metadata'] = metadata
            
            # Upload file to R2
            self.client.upload_file(
                file_path,
                self.bucket_name,
                object_key,
                ExtraArgs=upload_args
            )
            
            # Get file size for response
            file_size = os.path.getsize(file_path)
            
            return {
                'success': True,
                'key': object_key,
                'url': f'https://{self.bucket_name}.r2.dev/{object_key}',
                'size': file_size,
                'content_type': content_type,
                'uploaded_at': datetime.utcnow().isoformat()
            }
            
        except FileNotFoundError:
            logger.error(f"File not found: {file_path}")
            raise ValueError(f"File not found: {file_path}")
        except ClientError as e:
            logger.error(f"R2 upload error: {e}")
            raise Exception(f"Upload failed: {e}")
        except Exception as e:
            logger.error(f"Unexpected upload error: {e}")
            raise Exception(f"Upload failed: {e}")
    
    def upload_file_object(self,
                          file_obj,
                          object_key: str,
                          content_type: Optional[str] = None,
                          metadata: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
        """
        Upload a file-like object to R2
        
        Args:
            file_obj: File-like object (BytesIO, file handle, etc.)
            object_key: S3 object key
            content_type: MIME type
            metadata: Custom metadata
            
        Returns:
            Dictionary with upload result
        """
        try:
            # Prepare upload arguments
            upload_args = {
                'ContentType': content_type or 'application/octet-stream',
                'CacheControl': 'public, max-age=31536000',
            }
            
            if metadata:
                upload_args['Metadata'] = metadata
            
            # Upload file object
            self.client.upload_fileobj(
                file_obj,
                self.bucket_name,
                object_key,
                ExtraArgs=upload_args
            )
            
            return {
                'success': True,
                'key': object_key,
                'url': f'https://{self.bucket_name}.r2.dev/{object_key}',
                'content_type': content_type,
                'uploaded_at': datetime.utcnow().isoformat()
            }
            
        except ClientError as e:
            logger.error(f"R2 upload error: {e}")
            raise Exception(f"Upload failed: {e}")
    
    def download_file(self, object_key: str, local_path: str) -> Dict[str, Any]:
        """
        Download a file from R2 to local filesystem
        
        Args:
            object_key: S3 object key
            local_path: Local path to save the file
            
        Returns:
            Dictionary with download result
        """
        try:
            self.client.download_file(self.bucket_name, object_key, local_path)
            
            return {
                'success': True,
                'key': object_key,
                'local_path': local_path,
                'downloaded_at': datetime.utcnow().isoformat()
            }
            
        except ClientError as e:
            if e.response['Error']['Code'] == 'NoSuchKey':
                raise FileNotFoundError(f"Object not found: {object_key}")
            logger.error(f"R2 download error: {e}")
            raise Exception(f"Download failed: {e}")
    
    def get_file_info(self, object_key: str) -> Dict[str, Any]:
        """
        Get metadata about a file without downloading it
        
        Args:
            object_key: S3 object key
            
        Returns:
            Dictionary with file information
        """
        try:
            response = self.client.head_object(
                Bucket=self.bucket_name,
                Key=object_key
            )
            
            return {
                'success': True,
                'key': object_key,
                'size': response['ContentLength'],
                'content_type': response['ContentType'],
                'last_modified': response['LastModified'].isoformat(),
                'etag': response['ETag'].strip('"'),
                'metadata': response.get('Metadata', {})
            }
            
        except ClientError as e:
            if e.response['Error']['Code'] == '404':
                return {'success': False, 'error': 'File not found'}
            logger.error(f"R2 head object error: {e}")
            raise Exception(f"Failed to get file info: {e}")
    
    def delete_file(self, object_key: str) -> Dict[str, Any]:
        """
        Delete a file from R2
        
        Args:
            object_key: S3 object key to delete
            
        Returns:
            Dictionary with deletion result
        """
        try:
            self.client.delete_object(
                Bucket=self.bucket_name,
                Key=object_key
            )
            
            return {
                'success': True,
                'key': object_key,
                'message': 'File deleted successfully',
                'deleted_at': datetime.utcnow().isoformat()
            }
            
        except ClientError as e:
            logger.error(f"R2 delete error: {e}")
            raise Exception(f"Delete failed: {e}")
    
    def list_files(self,
                   prefix: str = '',
                   max_keys: int = 1000,
                   continuation_token: Optional[str] = None) -> Dict[str, Any]:
        """
        List files in the R2 bucket
        
        Args:
            prefix: Filter by prefix (folder-like structure)
            max_keys: Maximum number of keys to return
            continuation_token: Token from a previous call, to fetch the next page
            
        Returns:
            Dictionary with list of files
        """
        try:
            list_args = {
                'Bucket': self.bucket_name,
                'Prefix': prefix,
                'MaxKeys': max_keys
            }
            # Without passing the token back in, the next_token this method
            # returns would be unusable for fetching subsequent pages
            if continuation_token:
                list_args['ContinuationToken'] = continuation_token
            
            response = self.client.list_objects_v2(**list_args)
            
            files = []
            if 'Contents' in response:
                for obj in response['Contents']:
                    files.append({
                        'key': obj['Key'],
                        'size': obj['Size'],
                        'last_modified': obj['LastModified'].isoformat(),
                        'etag': obj['ETag'].strip('"'),
                        'url': f'https://{self.bucket_name}.r2.dev/{obj["Key"]}'
                    })
            
            return {
                'success': True,
                'files': files,
                'count': len(files),
                'is_truncated': response.get('IsTruncated', False),
                'next_token': response.get('NextContinuationToken')
            }
            
        except ClientError as e:
            logger.error(f"R2 list error: {e}")
            raise Exception(f"List files failed: {e}")
    
    def generate_presigned_url(self,
                             object_key: str,
                             expiration: int = 3600,
                             method: str = 'get_object') -> Dict[str, Any]:
        """
        Generate a presigned URL for secure file access
        
        Args:
            object_key: S3 object key
            expiration: URL expiration time in seconds (default: 1 hour)
            method: HTTP method ('get_object' or 'put_object')
            
        Returns:
            Dictionary with presigned URL
        """
        try:
            url = self.client.generate_presigned_url(
                method,
                Params={'Bucket': self.bucket_name, 'Key': object_key},
                ExpiresIn=expiration
            )
            
            return {
                'success': True,
                'url': url,
                'expires_in': expiration,
                'expires_at': (datetime.utcnow() + timedelta(seconds=expiration)).isoformat()
            }
            
        except ClientError as e:
            logger.error(f"R2 presigned URL error: {e}")
            raise Exception(f"Failed to generate presigned URL: {e}")
    
    def file_exists(self, object_key: str) -> bool:
        """
        Check if a file exists in R2
        
        Args:
            object_key: S3 object key to check
            
        Returns:
            Boolean indicating if file exists
        """
        try:
            self.client.head_object(Bucket=self.bucket_name, Key=object_key)
            return True
        except ClientError as e:
            if e.response['Error']['Code'] == '404':
                return False
            raise Exception(f"Error checking file existence: {e}")
    
    def copy_file(self,
                  source_key: str,
                  destination_key: str,
                  metadata: Optional[Dict[str, str]] = None) -> Dict[str, Any]:
        """
        Copy a file within R2
        
        Args:
            source_key: Source object key
            destination_key: Destination object key
            metadata: New metadata for copied file
            
        Returns:
            Dictionary with copy result
        """
        try:
            copy_source = {
                'Bucket': self.bucket_name,
                'Key': source_key
            }
            
            copy_args = {}
            if metadata:
                copy_args['Metadata'] = metadata
                copy_args['MetadataDirective'] = 'REPLACE'
            
            self.client.copy_object(
                CopySource=copy_source,
                Bucket=self.bucket_name,
                Key=destination_key,
                **copy_args
            )
            
            return {
                'success': True,
                'source_key': source_key,
                'destination_key': destination_key,
                'url': f'https://{self.bucket_name}.r2.dev/{destination_key}',
                'copied_at': datetime.utcnow().isoformat()
            }
            
        except ClientError as e:
            logger.error(f"R2 copy error: {e}")
            raise Exception(f"Copy failed: {e}")
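`list_objects_v2` (used by `list_files` above) returns at most `max_keys` objects per call; for larger buckets you must follow `NextContinuationToken` until `IsTruncated` is false. Here is a minimal sketch of that loop — a small fake client stands in for the real boto3 client so the snippet can run without R2 credentials:

```python
def list_all_keys(s3_client, bucket: str, prefix: str = '') -> list:
    """Collect every key under a prefix by following continuation tokens."""
    keys, kwargs = [], {'Bucket': bucket, 'Prefix': prefix}
    while True:
        response = s3_client.list_objects_v2(**kwargs)
        keys.extend(obj['Key'] for obj in response.get('Contents', []))
        if not response.get('IsTruncated'):
            return keys
        kwargs['ContinuationToken'] = response['NextContinuationToken']

class FakePagedClient:
    """Stands in for boto3's S3 client; returns two pages of results."""
    def __init__(self):
        self.calls = 0
    def list_objects_v2(self, **kwargs):
        self.calls += 1
        if 'ContinuationToken' not in kwargs:
            return {'Contents': [{'Key': 'a.jpg'}, {'Key': 'b.jpg'}],
                    'IsTruncated': True, 'NextContinuationToken': 'page2'}
        return {'Contents': [{'Key': 'c.jpg'}], 'IsTruncated': False}

keys = list_all_keys(FakePagedClient(), 'my-bucket')
# → ['a.jpg', 'b.jpg', 'c.jpg']
```

The same loop works unchanged against the real client returned by `boto3.client('s3', ...)`, since the fake mimics the `list_objects_v2` response shape.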
# Usage example and utility functions
def generate_secure_key(original_filename: str, user_id: str = None) -> str:
    """Generate a secure, unique key for file storage"""
    timestamp = datetime.utcnow().strftime('%Y%m%d_%H%M%S')
    unique_id = str(uuid.uuid4())[:8]
    _, ext = os.path.splitext(original_filename)
    
    if user_id:
        return f"users/{user_id}/{timestamp}_{unique_id}{ext}"
    return f"public/{timestamp}_{unique_id}{ext}"
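A quick, self-contained check of the key layout this helper produces (the function is repeated here verbatim so the snippet runs on its own):

```python
import os
import re
import uuid
from datetime import datetime

def generate_secure_key(original_filename: str, user_id: str = None) -> str:
    """Verbatim copy of the helper above, for illustration."""
    timestamp = datetime.utcnow().strftime('%Y%m%d_%H%M%S')
    unique_id = str(uuid.uuid4())[:8]
    _, ext = os.path.splitext(original_filename)
    if user_id:
        return f"users/{user_id}/{timestamp}_{unique_id}{ext}"
    return f"public/{timestamp}_{unique_id}{ext}"

key = generate_secure_key('photo.jpg', user_id='42')
# Keys sort chronologically within a user's prefix, and the random suffix
# means two uploads of the same filename can never overwrite each other.
assert re.fullmatch(r'users/42/\d{8}_\d{6}_[0-9a-f]{8}\.jpg', key)
assert generate_secure_key('notes.txt').startswith('public/')
```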
# Create a global client instance
r2_client = CloudflareR2Client()

PHP Implementation
Here's a complete PHP implementation using the AWS SDK for PHP to work with Cloudflare R2:
# Install AWS SDK for PHP using Composer
composer require aws/aws-sdk-php
# Optional: For image processing
composer require intervention/image

<?php
// R2Client.php - Cloudflare R2 client using AWS SDK for PHP
require_once 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\Exception\AwsException;
use Aws\S3\Exception\S3Exception;
class CloudflareR2Client {
    private $client;
    private $bucketName;
    
    public function __construct() {
        // Get configuration from environment variables
        $accountId = getenv('CLOUDFLARE_R2_ACCOUNT_ID');
        $accessKey = getenv('CLOUDFLARE_R2_ACCESS_KEY_ID');
        $secretKey = getenv('CLOUDFLARE_R2_SECRET_ACCESS_KEY');
        $this->bucketName = getenv('CLOUDFLARE_R2_BUCKET_NAME');
        
        if (!$accountId || !$accessKey || !$secretKey || !$this->bucketName) {
            throw new Exception('Missing required R2 configuration environment variables');
        }
        
        // Initialize S3 client for R2
        $this->client = new S3Client([
            'version' => 'latest',
            'region' => 'auto', // R2 uses 'auto' region
            'endpoint' => "https://{$accountId}.r2.cloudflarestorage.com",
            'use_path_style_endpoint' => true, // Required for R2
            'credentials' => [
                'key' => $accessKey,
                'secret' => $secretKey,
            ],
        ]);
    }
    
    /**
     * Upload a file to R2
     * 
     * @param string $filePath Local file path
     * @param string $objectKey Remote object key
     * @param array $options Upload options
     * @return array Upload result
     */
    public function uploadFile($filePath, $objectKey, $options = []) {
        try {
            if (!file_exists($filePath)) {
                throw new Exception("File not found: {$filePath}");
            }
            
            // Determine content type
            // mime_content_type() returns false (not null) on failure, so use ?: rather than ??
            $contentType = $options['ContentType'] ?? (mime_content_type($filePath) ?: 'application/octet-stream');
            
            $uploadArgs = [
                'Bucket' => $this->bucketName,
                'Key' => $objectKey,
                'SourceFile' => $filePath,
                'ContentType' => $contentType,
                'CacheControl' => $options['CacheControl'] ?? 'public, max-age=31536000',
            ];
            
            // Add metadata if provided
            if (isset($options['Metadata'])) {
                $uploadArgs['Metadata'] = $options['Metadata'];
            }
            
            // Upload to R2
            $result = $this->client->putObject($uploadArgs);
            
            return [
                'success' => true,
                'key' => $objectKey,
                'url' => "https://{$this->bucketName}.r2.dev/{$objectKey}",
                'etag' => trim($result['ETag'], '"'),
                'size' => filesize($filePath),
                'content_type' => $contentType,
                'uploaded_at' => date('c')
            ];
            
        } catch (S3Exception $e) {
            error_log("R2 Upload Error: " . $e->getMessage());
            throw new Exception("Upload failed: " . $e->getMessage());
        }
    }
    
    /**
     * Upload file content from string or stream
     * 
     * @param mixed $body File content (string or stream)
     * @param string $objectKey Remote object key
     * @param array $options Upload options
     * @return array Upload result
     */
    public function uploadContent($body, $objectKey, $options = []) {
        try {
            $uploadArgs = [
                'Bucket' => $this->bucketName,
                'Key' => $objectKey,
                'Body' => $body,
                'ContentType' => $options['ContentType'] ?? 'application/octet-stream',
                'CacheControl' => $options['CacheControl'] ?? 'public, max-age=31536000',
            ];
            
            if (isset($options['Metadata'])) {
                $uploadArgs['Metadata'] = $options['Metadata'];
            }
            
            $result = $this->client->putObject($uploadArgs);
            
            return [
                'success' => true,
                'key' => $objectKey,
                'url' => "https://{$this->bucketName}.r2.dev/{$objectKey}",
                'etag' => trim($result['ETag'], '"'),
                'uploaded_at' => date('c')
            ];
            
        } catch (S3Exception $e) {
            error_log("R2 Upload Content Error: " . $e->getMessage());
            throw new Exception("Upload failed: " . $e->getMessage());
        }
    }
    
    /**
     * Download a file from R2
     * 
     * @param string $objectKey Remote object key
     * @param string $localPath Local path to save file
     * @return array Download result
     */
    public function downloadFile($objectKey, $localPath) {
        try {
            $result = $this->client->getObject([
                'Bucket' => $this->bucketName,
                'Key' => $objectKey,
                'SaveAs' => $localPath
            ]);
            
            return [
                'success' => true,
                'key' => $objectKey,
                'local_path' => $localPath,
                'content_type' => $result['ContentType'],
                'size' => $result['ContentLength'],
                'downloaded_at' => date('c')
            ];
            
        } catch (S3Exception $e) {
            if ($e->getStatusCode() === 404) {
                throw new Exception("File not found: {$objectKey}");
            }
            error_log("R2 Download Error: " . $e->getMessage());
            throw new Exception("Download failed: " . $e->getMessage());
        }
    }
    
    /**
     * Get file content as string
     * 
     * @param string $objectKey Remote object key
     * @return array File content and metadata
     */
    public function getFileContent($objectKey) {
        try {
            $result = $this->client->getObject([
                'Bucket' => $this->bucketName,
                'Key' => $objectKey
            ]);
            
            return [
                'success' => true,
                'key' => $objectKey,
                'content' => (string) $result['Body'],
                'content_type' => $result['ContentType'],
                'size' => $result['ContentLength'],
                'last_modified' => $result['LastModified']->format('c'),
                'metadata' => $result['Metadata'] ?? []
            ];
            
        } catch (S3Exception $e) {
            if ($e->getStatusCode() === 404) {
                return ['success' => false, 'error' => 'File not found'];
            }
            throw new Exception("Failed to get file content: " . $e->getMessage());
        }
    }
    
    /**
     * Get file information without downloading content
     * 
     * @param string $objectKey Remote object key
     * @return array File information
     */
    public function getFileInfo($objectKey) {
        try {
            $result = $this->client->headObject([
                'Bucket' => $this->bucketName,
                'Key' => $objectKey
            ]);
            
            return [
                'success' => true,
                'key' => $objectKey,
                'content_type' => $result['ContentType'],
                'size' => $result['ContentLength'],
                'last_modified' => $result['LastModified']->format('c'),
                'etag' => trim($result['ETag'], '"'),
                'metadata' => $result['Metadata'] ?? []
            ];
            
        } catch (S3Exception $e) {
            if ($e->getStatusCode() === 404) {
                return ['success' => false, 'error' => 'File not found'];
            }
            throw new Exception("Failed to get file info: " . $e->getMessage());
        }
    }
    
    /**
     * Delete a file from R2
     * 
     * @param string $objectKey Remote object key
     * @return array Delete result
     */
    public function deleteFile($objectKey) {
        try {
            $this->client->deleteObject([
                'Bucket' => $this->bucketName,
                'Key' => $objectKey
            ]);
            
            return [
                'success' => true,
                'key' => $objectKey,
                'message' => 'File deleted successfully',
                'deleted_at' => date('c')
            ];
            
        } catch (S3Exception $e) {
            error_log("R2 Delete Error: " . $e->getMessage());
            throw new Exception("Delete failed: " . $e->getMessage());
        }
    }
    
    /**
     * List files in bucket
     * 
     * @param string $prefix Filter by prefix
     * @param int $maxKeys Maximum number of keys to return
     * @param string|null $continuationToken Token from a previous call, to fetch the next page
     * @return array List of files
     */
    public function listFiles($prefix = '', $maxKeys = 1000, $continuationToken = null) {
        try {
            $params = [
                'Bucket' => $this->bucketName,
                'MaxKeys' => $maxKeys
            ];
            
            if ($prefix) {
                $params['Prefix'] = $prefix;
            }
            
            // Pass the token back in so the next_token this method returns
            // can actually be used to fetch subsequent pages
            if ($continuationToken) {
                $params['ContinuationToken'] = $continuationToken;
            }
            
            $result = $this->client->listObjectsV2($params);
            
            $files = [];
            if (isset($result['Contents'])) {
                foreach ($result['Contents'] as $object) {
                    $files[] = [
                        'key' => $object['Key'],
                        'size' => $object['Size'],
                        'last_modified' => $object['LastModified']->format('c'),
                        'etag' => trim($object['ETag'], '"'),
                        'url' => "https://{$this->bucketName}.r2.dev/{$object['Key']}"
                    ];
                }
            }
            
            return [
                'success' => true,
                'files' => $files,
                'count' => count($files),
                'is_truncated' => $result['IsTruncated'] ?? false,
                'next_token' => $result['NextContinuationToken'] ?? null
            ];
            
        } catch (S3Exception $e) {
            error_log("R2 List Files Error: " . $e->getMessage());
            throw new Exception("List files failed: " . $e->getMessage());
        }
    }
    
    /**
     * Generate presigned URL for secure access
     * 
     * @param string $objectKey Remote object key
     * @param string $operation Operation type ('GetObject' or 'PutObject')
     * @param int $expiration URL expiration time in seconds
     * @return array Presigned URL result
     */
    public function generatePresignedUrl($objectKey, $operation = 'GetObject', $expiration = 3600) {
        try {
            $command = $this->client->getCommand($operation, [
                'Bucket' => $this->bucketName,
                'Key' => $objectKey
            ]);
            
            $request = $this->client->createPresignedRequest($command, "+{$expiration} seconds");
            $url = (string) $request->getUri();
            
            return [
                'success' => true,
                'url' => $url,
                'expires_in' => $expiration,
                'expires_at' => date('c', time() + $expiration),
                'operation' => $operation
            ];
            
        } catch (AwsException $e) {
            error_log("R2 Presigned URL Error: " . $e->getMessage());
            throw new Exception("Failed to generate presigned URL: " . $e->getMessage());
        }
    }
    
    /**
     * Check if file exists in R2
     * 
     * @param string $objectKey Remote object key
     * @return bool File existence status
     */
    public function fileExists($objectKey) {
        try {
            $this->client->headObject([
                'Bucket' => $this->bucketName,
                'Key' => $objectKey
            ]);
            return true;
        } catch (S3Exception $e) {
            if ($e->getStatusCode() === 404) {
                return false;
            }
            throw new Exception("Error checking file existence: " . $e->getMessage());
        }
    }
    
    /**
     * Copy a file within R2
     * 
     * @param string $sourceKey Source object key
     * @param string $destinationKey Destination object key
     * @param array $options Copy options
     * @return array Copy result
     */
    public function copyFile($sourceKey, $destinationKey, $options = []) {
        try {
            $copyArgs = [
                'Bucket' => $this->bucketName,
                'Key' => $destinationKey,
                'CopySource' => "{$this->bucketName}/{$sourceKey}"
            ];
            
            if (isset($options['Metadata'])) {
                $copyArgs['Metadata'] = $options['Metadata'];
                $copyArgs['MetadataDirective'] = 'REPLACE';
            }
            
            $this->client->copyObject($copyArgs);
            
            return [
                'success' => true,
                'source_key' => $sourceKey,
                'destination_key' => $destinationKey,
                'url' => "https://{$this->bucketName}.r2.dev/{$destinationKey}",
                'copied_at' => date('c')
            ];
            
        } catch (S3Exception $e) {
            error_log("R2 Copy File Error: " . $e->getMessage());
            throw new Exception("Copy failed: " . $e->getMessage());
        }
    }
}
/**
 * Utility function to generate secure file keys
 */
function generateSecureKey($originalFilename, $userId = null) {
    $timestamp = date('Ymd_His');
    // uniqid() is time-based and predictable; random_bytes() gives 8 truly random hex chars
    $uniqueId = bin2hex(random_bytes(4));
    $extension = pathinfo($originalFilename, PATHINFO_EXTENSION);
    
    $userPath = $userId ? "users/{$userId}" : 'public';
    
    return "{$userPath}/{$timestamp}_{$uniqueId}.{$extension}";
}
/**
 * Create a global R2 client instance
 */
$r2Client = new CloudflareR2Client();
?>

Frontend File Upload Component
Here's a complete React component for handling file uploads to your R2-powered backend:
// FileUpload.jsx - Complete file upload component for R2
import React, { useState, useCallback, useRef } from 'react';
import { Upload, X, CheckCircle, AlertCircle, File, Image, Video, Music } from 'lucide-react';
const FileUpload = ({ 
  onUploadComplete, 
  onUploadError, 
  maxFiles = 5,
  maxFileSize = 100 * 1024 * 1024, // 100MB
  allowedTypes = ['image/*', 'video/*', 'audio/*', 'application/pdf', 'text/*'],
  apiEndpoint = '/api/upload/single',
  multipleEndpoint = '/api/upload/multiple',
  showPreview = true,
  optimizeImages = true 
}) => {
  const [files, setFiles] = useState([]);
  const [uploading, setUploading] = useState(false);
  const [uploadProgress, setUploadProgress] = useState({});
  const [dragActive, setDragActive] = useState(false);
  const fileInputRef = useRef(null);
  // File type validation
  const validateFile = (file) => {
    if (file.size > maxFileSize) {
      return `File "${file.name}" is too large. Maximum size is ${Math.round(maxFileSize / (1024 * 1024))}MB.`;
    }
    const isValidType = allowedTypes.some(type => {
      if (type.endsWith('/*')) {
        const baseType = type.split('/')[0];
        // Match the full major type: 'image/*' allows 'image/png'
        // but not an unrelated type that merely starts with 'image'
        return file.type.startsWith(baseType + '/');
      }
      return file.type === type;
    });
    if (!isValidType) {
      return `File type "${file.type}" is not allowed.`;
    }
    return null;
  };
  // File icon helper
  const getFileIcon = (file) => {
    if (file.type.startsWith('image/')) return <Image className="w-5 h-5" />;
    if (file.type.startsWith('video/')) return <Video className="w-5 h-5" />;
    if (file.type.startsWith('audio/')) return <Music className="w-5 h-5" />;
    return <File className="w-5 h-5" />;
  };
  // Format file size
  const formatFileSize = (bytes) => {
    if (bytes === 0) return '0 Bytes';
    const k = 1024;
    const sizes = ['Bytes', 'KB', 'MB', 'GB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
  };
  // Handle file selection
  const handleFileSelect = useCallback((selectedFiles) => {
    const newFiles = Array.from(selectedFiles).map((file, index) => {
      const validation = validateFile(file);
      return {
        id: `${Date.now()}_${index}`,
        file,
        name: file.name,
        size: file.size,
        type: file.type,
        preview: file.type.startsWith('image/') ? URL.createObjectURL(file) : null,
        error: validation,
        status: validation ? 'error' : 'pending'
      };
    });
    setFiles(prev => {
      const combined = [...prev, ...newFiles];
      return combined.slice(0, maxFiles); // Limit total files
    });
  }, [maxFiles, maxFileSize, allowedTypes]);
  // Handle drag events
  const handleDrag = useCallback((e) => {
    e.preventDefault();
    e.stopPropagation();
  }, []);
  const handleDragIn = useCallback((e) => {
    e.preventDefault();
    e.stopPropagation();
    setDragActive(true);
  }, []);
  const handleDragOut = useCallback((e) => {
    e.preventDefault();
    e.stopPropagation();
    setDragActive(false);
  }, []);
  const handleDrop = useCallback((e) => {
    e.preventDefault();
    e.stopPropagation();
    setDragActive(false);
    
    if (e.dataTransfer.files && e.dataTransfer.files.length > 0) {
      handleFileSelect(e.dataTransfer.files);
    }
  }, [handleFileSelect]);
  // Remove file from list
  const removeFile = (fileId) => {
    setFiles(prev => {
      const updated = prev.filter(f => f.id !== fileId);
      // Clean up preview URLs
      const removedFile = prev.find(f => f.id === fileId);
      if (removedFile?.preview) {
        URL.revokeObjectURL(removedFile.preview);
      }
      return updated;
    });
  };
  // Upload files
  const uploadFiles = async () => {
    const validFiles = files.filter(f => !f.error && f.status === 'pending');
    
    if (validFiles.length === 0) {
      onUploadError?.('No valid files to upload');
      return;
    }
    setUploading(true);
    const uploadResults = [];
    const uploadErrors = [];
    try {
      if (validFiles.length === 1) {
        // Single file upload
        const file = validFiles[0];
        const formData = new FormData();
        formData.append('file', file.file);
        formData.append('optimize', optimizeImages.toString());
        setUploadProgress({ [file.id]: 0 });
        const xhr = new XMLHttpRequest();
        
        // Track upload progress
        xhr.upload.onprogress = (e) => {
          if (e.lengthComputable) {
            const progress = Math.round((e.loaded / e.total) * 100);
            setUploadProgress(prev => ({ ...prev, [file.id]: progress }));
          }
        };
        const uploadPromise = new Promise((resolve, reject) => {
          xhr.onload = () => {
            if (xhr.status === 200) {
              const response = JSON.parse(xhr.responseText);
              resolve(response);
            } else {
              reject(new Error(`Upload failed: ${xhr.statusText}`));
            }
          };
          xhr.onerror = () => reject(new Error('Network error'));
        });
        xhr.open('POST', apiEndpoint);
        xhr.send(formData);
        const result = await uploadPromise;
        
        if (result.success) {
          setFiles(prev => prev.map(f => 
            f.id === file.id ? { ...f, status: 'success', uploadResult: result.file } : f
          ));
          uploadResults.push(result.file);
        } else {
          throw new Error(result.error || 'Upload failed');
        }
      } else {
        // Multiple file upload
        const formData = new FormData();
        validFiles.forEach(fileObj => {
          formData.append('files', fileObj.file);
        });
        formData.append('optimize', optimizeImages.toString());
        // For multiple files, we'll show indeterminate progress
        const progressObj = {};
        validFiles.forEach(f => progressObj[f.id] = 50);
        setUploadProgress(progressObj);
        const response = await fetch(multipleEndpoint, {
          method: 'POST',
          body: formData,
        });
        if (!response.ok) {
          throw new Error(`HTTP ${response.status}: ${response.statusText}`);
        }
        const result = await response.json();
        
        // Update file statuses
        setFiles(prev => prev.map(f => {
          const uploadedFile = result.files?.find(uploaded => uploaded.originalName === f.name);
          const errorFile = result.errors?.find(error => error.originalName === f.name);
          
          if (uploadedFile) {
            return { ...f, status: 'success', uploadResult: uploadedFile };
          } else if (errorFile) {
            return { ...f, status: 'error', error: errorFile.error };
          }
          return f;
        }));
        if (result.files?.length) {
          uploadResults.push(...result.files);
        }
        if (result.errors?.length) {
          uploadErrors.push(...result.errors);
        }
      }
      // Callback with results
      if (uploadResults.length > 0) {
        onUploadComplete?.(uploadResults);
      }
      if (uploadErrors.length > 0) {
        onUploadError?.(uploadErrors);
      }
    } catch (error) {
      console.error('Upload error:', error);
      onUploadError?.(error.message || 'Upload failed');
      
      // Mark files as failed
      setFiles(prev => prev.map(f => 
        validFiles.some(vf => vf.id === f.id) ? { ...f, status: 'error', error: error.message } : f
      ));
    } finally {
      setUploading(false);
      setUploadProgress({});
    }
  };
  // Clean up preview URLs on unmount. A ref keeps the cleanup in sync with the
  // latest file list; capturing `files` directly with an empty dependency array
  // would freeze the initial (empty) list in the closure and leak object URLs.
  const filesRef = useRef(files);
  filesRef.current = files;
  React.useEffect(() => {
    return () => {
      filesRef.current.forEach(file => {
        if (file.preview) {
          URL.revokeObjectURL(file.preview);
        }
      });
    };
  }, []);
  return (
    <div className="w-full max-w-4xl mx-auto p-6">
      {/* Drop Zone */}
      <div 
        className={`border-2 border-dashed rounded-lg p-8 text-center transition-colors ${
          dragActive 
            ? 'border-blue-400 bg-blue-50' 
            : 'border-gray-300 hover:border-gray-400'
        }`}
        onDragEnter={handleDragIn}
        onDragLeave={handleDragOut}
        onDragOver={handleDrag}
        onDrop={handleDrop}
      >
        <Upload className="w-12 h-12 text-gray-400 mx-auto mb-4" />
        <h3 className="text-lg font-semibold text-gray-700 mb-2">
          Drop files here or click to browse
        </h3>
        <p className="text-gray-500 mb-4">
          Supports images, videos, audio, PDFs, and text files up to {Math.round(maxFileSize / (1024 * 1024))}MB
        </p>
        
        <button 
          onClick={() => fileInputRef.current?.click()}
          className="px-6 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition-colors"
          disabled={uploading}
        >
          Choose Files
        </button>
        
        <input
          ref={fileInputRef}
          type="file"
          multiple
          className="hidden"
          onChange={(e) => handleFileSelect(e.target.files)}
          accept={allowedTypes.join(',')}
        />
      </div>
      {/* File List */}
      {files.length > 0 && (
        <div className="mt-6">
          <h4 className="text-lg font-semibold text-gray-800 mb-4">
            Selected Files ({files.length}/{maxFiles})
          </h4>
          
          <div className="space-y-3">
            {files.map((fileObj) => (
              <div 
                key={fileObj.id} 
                className="flex items-center p-4 border border-gray-200 rounded-lg hover:shadow-sm transition-shadow"
              >
                {/* File Icon/Preview */}
                <div className="flex-shrink-0 mr-4">
                  {showPreview && fileObj.preview ? (
                    <img 
                      src={fileObj.preview} 
                      alt={fileObj.name}
                      className="w-12 h-12 object-cover rounded"
                    />
                  ) : (
                    <div className="w-12 h-12 bg-gray-100 rounded flex items-center justify-center text-gray-500">
                      {getFileIcon(fileObj)}
                    </div>
                  )}
                </div>
                
                {/* File Info */}
                <div className="flex-1 min-w-0">
                  <p className="text-sm font-medium text-gray-900 truncate">
                    {fileObj.name}
                  </p>
                  <p className="text-sm text-gray-500">
                    {formatFileSize(fileObj.size)} • {fileObj.type}
                  </p>
                  
                  {/* Progress Bar */}
                  {uploading && uploadProgress[fileObj.id] !== undefined && (
                    <div className="mt-2">
                      <div className="bg-gray-200 rounded-full h-2">
                        <div 
                          className="bg-blue-600 h-2 rounded-full transition-all duration-300"
                          style={{ width: `${uploadProgress[fileObj.id]}%` }}
                        />
                      </div>
                    </div>
                  )}
                  
                  {/* Error Message */}
                  {fileObj.error && (
                    <p className="text-sm text-red-600 mt-1">{fileObj.error}</p>
                  )}
                  
                  {/* Success Info */}
                  {fileObj.status === 'success' && fileObj.uploadResult && (
                    <p className="text-sm text-green-600 mt-1 flex items-center">
                      <CheckCircle className="w-4 h-4 mr-1" />
                      Uploaded successfully
                    </p>
                  )}
                </div>
                
                {/* Status Icon */}
                <div className="flex-shrink-0 ml-4">
                  {fileObj.status === 'success' && (
                    <CheckCircle className="w-5 h-5 text-green-500" />
                  )}
                  {fileObj.status === 'error' && (
                    <AlertCircle className="w-5 h-5 text-red-500" />
                  )}
                  {fileObj.status === 'pending' && (
                    <button 
                      onClick={() => removeFile(fileObj.id)}
                      className="p-1 text-gray-400 hover:text-red-500 transition-colors"
                      disabled={uploading}
                    >
                      <X className="w-4 h-4" />
                    </button>
                  )}
                </div>
              </div>
            ))}
          </div>
          
          {/* Upload Button */}
          <div className="mt-6 flex justify-center">
            <button 
              onClick={uploadFiles}
              disabled={uploading || files.every(f => f.error || f.status === 'success')}
              className="px-8 py-3 bg-green-600 text-white rounded-lg font-medium hover:bg-green-700 disabled:bg-gray-400 disabled:cursor-not-allowed transition-colors"
            >
              {uploading ? 'Uploading...' : `Upload ${files.filter(f => !f.error && f.status === 'pending').length} Files`}
            </button>
          </div>
        </div>
      )}
    </div>
  );
};
export default FileUpload;
CDN Integration & Performance
Cloudflare R2 comes with built-in CDN capabilities, but you can further optimize performance with custom domains and advanced caching strategies.
Custom Domain Setup
Set up a custom domain for your R2 bucket to improve branding and enable advanced features.
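Custom domains can be attached from the dashboard (R2 → your bucket → Settings → Custom Domains) or with Wrangler. The bucket name, domain, and zone ID below are placeholders, and CLI flags can vary between Wrangler releases, so confirm with `wrangler r2 bucket domain --help`:

```shell
# Attach a custom domain (the domain must belong to a zone on the same Cloudflare account)
npx wrangler r2 bucket domain add my-bucket \
  --domain cdn.example.com \
  --zone-id your-zone-id

# Confirm the domain is connected
npx wrangler r2 bucket domain list my-bucket
```

Once connected, requests to `cdn.example.com` are proxied through Cloudflare, which also unlocks zone features such as Cache Rules and Image Resizing on those URLs.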
// Advanced caching and CDN configuration for R2
class R2CDNOptimizer {
  constructor(customDomain) {
    this.customDomain = customDomain;
    this.cacheConfig = {
      images: 'public, max-age=31536000, immutable', // 1 year
      videos: 'public, max-age=2592000', // 30 days
      documents: 'public, max-age=86400', // 1 day
      dynamic: 'public, max-age=300' // 5 minutes
    };
  }
  // Get optimized URL with CDN parameters
  getOptimizedUrl(key, options = {}) {
    // Note: R2's managed public URLs look like pub-<id>.r2.dev (not <bucket>.r2.dev),
    // so set a custom domain or the exact public development URL from the dashboard.
    const baseUrl = this.customDomain || `https://${process.env.CLOUDFLARE_R2_BUCKET_NAME}.r2.dev`;
    let url = `${baseUrl}/${key}`;
    
    // Add image optimization parameters for images (assumes Cloudflare Image
    // Resizing or a Worker in front of the bucket; R2 alone does not transform images)
    if (this.isImage(key)) {
      const params = new URLSearchParams();
      
      if (options.width) params.append('w', options.width);
      if (options.height) params.append('h', options.height);
      if (options.quality) params.append('q', options.quality);
      if (options.format) params.append('f', options.format);
      if (options.fit) params.append('fit', options.fit); // scale-down, contain, cover, crop, pad
      
      if (params.toString()) {
        url += '?' + params.toString();
      }
    }
    
    return url;
  }
  // Generate responsive image sources
  generateResponsiveSources(key, sizes = [320, 640, 1024, 1920]) {
    if (!this.isImage(key)) return null;
    
    return sizes.map(width => ({
      width,
      url: this.getOptimizedUrl(key, { width, quality: 85, format: 'webp' }),
      fallback: this.getOptimizedUrl(key, { width, quality: 85 })
    }));
  }
  // Get cache control header based on file type
  getCacheControl(key, fileType) {
    if (this.isImage(key)) return this.cacheConfig.images;
    if (this.isVideo(key)) return this.cacheConfig.videos;
    if (this.isDocument(key)) return this.cacheConfig.documents;
    return this.cacheConfig.dynamic;
  }
  // Preload critical resources
  generatePreloadLinks(criticalFiles) {
    return criticalFiles.map(file => {
      const url = this.getOptimizedUrl(file.key, file.options);
      const as = this.getResourceType(file.key);
      return `<link rel="preload" href="${url}" as="${as}">`;
    }).join('\n');
  }
  // Helper methods
  isImage(key) {
    return /\.(jpg|jpeg|png|gif|webp|svg)$/i.test(key);
  }
  isVideo(key) {
    return /\.(mp4|webm|mov|avi|mkv)$/i.test(key);
  }
  isDocument(key) {
    return /\.(pdf|doc|docx|txt|csv)$/i.test(key);
  }
  getResourceType(key) {
    if (this.isImage(key)) return 'image';
    if (this.isVideo(key)) return 'video';
    if (key.endsWith('.css')) return 'style';
    if (key.endsWith('.js')) return 'script';
    return 'fetch';
  }
}
// React hook for optimized image loading
import { useState, useEffect } from 'react';
export function useOptimizedImage(src, options = {}) {
  const [imageSrc, setImageSrc] = useState(null);
  const [isLoading, setIsLoading] = useState(true);
  const [error, setError] = useState(null);
  useEffect(() => {
    const optimizer = new R2CDNOptimizer(process.env.REACT_APP_CDN_DOMAIN);
    
    // Generate optimized URLs
    const webpSrc = optimizer.getOptimizedUrl(src, { ...options, format: 'webp' });
    const fallbackSrc = optimizer.getOptimizedUrl(src, options);
    
    // Test WebP support
    const testWebP = () => {
      return new Promise((resolve) => {
        const webP = new Image();
        webP.onload = webP.onerror = () => resolve(webP.height === 2);
        webP.src = 'data:image/webp;base64,UklGRjoAAABXRUJQVlA4IC4AAACyAgCdASoCAAIALmk0mk0iIiIiIgBoSygABc6WWgAA/veff/0PP8bA//LwYAAA';
      });
    };
    const loadImage = async () => {
      try {
        const supportsWebP = await testWebP();
        const finalSrc = supportsWebP ? webpSrc : fallbackSrc;
        
        const img = new Image();
        img.onload = () => {
          setImageSrc(finalSrc);
          setIsLoading(false);
        };
        img.onerror = () => {
          setError('Failed to load image');
          setIsLoading(false);
        };
        img.src = finalSrc;
      } catch (err) {
        setError(err.message);
        setIsLoading(false);
      }
    };
    loadImage();
  }, [src, JSON.stringify(options)]);
  return { imageSrc, isLoading, error };
}
// Performance monitoring for R2 assets
export class R2PerformanceMonitor {
  constructor() {
    this.metrics = {
      loadTimes: [],
      cacheHits: 0,
      cacheMisses: 0,
      errors: 0
    };
  }
  // Track asset loading performance
  trackAssetLoad(url, startTime, success = true) {
    const loadTime = performance.now() - startTime;
    
    if (success) {
      this.metrics.loadTimes.push(loadTime);
      
      // Check if served from cache (heuristic)
      if (loadTime < 50) {
        this.metrics.cacheHits++;
      } else {
        this.metrics.cacheMisses++;
      }
    } else {
      this.metrics.errors++;
    }
  }
  // Get performance summary
  getPerformanceSummary() {
    const loadTimes = this.metrics.loadTimes;
    const totalRequests = loadTimes.length + this.metrics.errors;
    
    return {
      averageLoadTime: loadTimes.length > 0 
        ? Math.round(loadTimes.reduce((a, b) => a + b, 0) / loadTimes.length) 
        : 0,
      cacheHitRate: totalRequests > 0 
        ? Math.round((this.metrics.cacheHits / totalRequests) * 100) 
        : 0,
      errorRate: totalRequests > 0 
        ? Math.round((this.metrics.errors / totalRequests) * 100) 
        : 0,
      totalRequests
    };
  }
}
export const cdnOptimizer = new R2CDNOptimizer(process.env.REACT_APP_CDN_DOMAIN);
export const performanceMonitor = new R2PerformanceMonitor();
Security Best Practices
Implementing proper security measures is crucial when handling file uploads and storage. Here are essential security practices for R2 implementation:
API Token Security
Secure your R2 API tokens with proper scoping and rotation policies.
File Upload Validation
Implement comprehensive file validation to prevent security vulnerabilities.
Access Control
Control who can access your stored files with proper authentication and authorization.
// Security utilities for R2 file handling
import crypto from 'crypto';
import path from 'path';
import { promises as fs } from 'fs'; // used by secureUploadMiddleware below
class R2SecurityManager {
  constructor() {
    this.allowedMimeTypes = new Map([
      ['image/jpeg', ['.jpg', '.jpeg']],
      ['image/png', ['.png']],
      ['image/gif', ['.gif']],
      ['image/webp', ['.webp']],
      ['application/pdf', ['.pdf']],
      ['text/plain', ['.txt']],
      ['application/json', ['.json']]
    ]);
    
    this.maxFileSizes = new Map([
      ['image/*', 10 * 1024 * 1024], // 10MB for images
      ['video/*', 100 * 1024 * 1024], // 100MB for videos
      ['application/pdf', 25 * 1024 * 1024], // 25MB for PDFs
      ['text/*', 1024 * 1024] // 1MB for text files
    ]);
    
    this.maliciousSignatures = [
      Buffer.from('MZ'), // PE executable
      Buffer.from('\x7fELF'), // ELF executable
      Buffer.from('#!/bin/sh'), // Shell script
      Buffer.from('<?php'), // PHP script
      Buffer.from('<script'), // Script tag
    ];
  }
  // Comprehensive file validation
  async validateFile(file, buffer) {
    const validation = {
      isValid: true,
      errors: [],
      warnings: []
    };
    // 1. File extension validation
    const ext = path.extname(file.originalname).toLowerCase();
    if (!this.isAllowedExtension(ext, file.mimetype)) {
      validation.isValid = false;
      validation.errors.push(`File extension ${ext} not allowed for MIME type ${file.mimetype}`);
    }
    // 2. MIME type validation
    if (!this.allowedMimeTypes.has(file.mimetype)) {
      validation.isValid = false;
      validation.errors.push(`MIME type ${file.mimetype} not allowed`);
    }
    // 3. File size validation
    const maxSize = this.getMaxFileSize(file.mimetype);
    if (file.size > maxSize) {
      validation.isValid = false;
      validation.errors.push(`File size ${file.size} exceeds limit ${maxSize}`);
    }
    // 4. File content validation
    if (await this.containsMaliciousContent(buffer)) {
      validation.isValid = false;
      validation.errors.push('File contains potentially malicious content');
    }
    // 5. Image-specific validation
    if (file.mimetype.startsWith('image/')) {
      const imageValidation = await this.validateImage(buffer);
      if (!imageValidation.isValid) {
        validation.isValid = false;
        validation.errors.push(...imageValidation.errors);
      }
    }
    return validation;
  }
  // Check file extension against MIME type
  isAllowedExtension(extension, mimeType) {
    const allowedExtensions = this.allowedMimeTypes.get(mimeType);
    return allowedExtensions && allowedExtensions.includes(extension);
  }
  // Get maximum file size for MIME type
  getMaxFileSize(mimeType) {
    for (const [pattern, size] of this.maxFileSizes) {
      if (pattern.includes('*')) {
        const baseType = pattern.split('/')[0];
        if (mimeType.startsWith(baseType)) {
          return size;
        }
      } else if (pattern === mimeType) {
        return size;
      }
    }
    return 1024 * 1024; // Default 1MB
  }
  // Scan for malicious content signatures
  async containsMaliciousContent(buffer) {
    const fileHeader = buffer.slice(0, 256); // Check first 256 bytes
    
    return this.maliciousSignatures.some(signature => 
      fileHeader.includes(signature)
    );
  }
  // Validate image files specifically
  async validateImage(buffer) {
    const validation = { isValid: true, errors: [] };
    
    try {
      // Check image headers
      const header = buffer.slice(0, 10);
      
      // JPEG validation
      if (header[0] === 0xFF && header[1] === 0xD8) {
        if (!buffer.slice(-2).equals(Buffer.from([0xFF, 0xD9]))) {
          validation.isValid = false;
          validation.errors.push('Invalid JPEG file structure');
        }
      }
      
      // PNG validation
      else if (header.slice(0, 8).equals(Buffer.from([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A]))) {
        // PNG signature valid
      }
      
      // Additional image validations can be added here
      
    } catch (error) {
      validation.isValid = false;
      validation.errors.push('Image validation failed: ' + error.message);
    }
    
    return validation;
  }
  // Generate secure file name
  generateSecureFileName(originalName, userId) {
    const ext = path.extname(originalName);
    const timestamp = Date.now();
    const randomId = crypto.randomBytes(16).toString('hex');
    const userHash = crypto.createHash('sha256').update(userId || 'anonymous').digest('hex').substring(0, 8);
    
    return `${timestamp}_${userHash}_${randomId}${ext}`;
  }
  // Sanitize file name
  sanitizeFileName(fileName) {
    return fileName
      .replace(/[^a-zA-Z0-9._-]/g, '_') // Replace special chars with underscore
      .replace(/\.+/g, '.') // Remove multiple dots
      .replace(/^\.+|\.+$/g, '') // Remove leading/trailing dots
      .substring(0, 100); // Limit length
  }
  // Create secure folder structure
  createSecurePath(userId, category = 'general') {
    const userHash = crypto.createHash('sha256').update(userId).digest('hex').substring(0, 16);
    const datePath = new Date().toISOString().substring(0, 10).replace(/-/g, '/');
    return `users/${userHash}/${category}/${datePath}`;
  }
  // Generate presigned URL with security constraints
  async generateSecurePresignedUrl(r2Client, key, options = {}) {
    const defaultOptions = {
      expiresIn: 3600, // 1 hour default
      operation: 'getObject',
      conditions: []
    };
    
    const finalOptions = { ...defaultOptions, ...options };
    
    // Collect policy conditions (these apply to S3-style POST policy uploads;
    // plain presigned GET URLs do not enforce them)
    if (finalOptions.userAgent) {
      finalOptions.conditions.push(['starts-with', '$User-Agent', finalOptions.userAgent]);
    }
    
    if (finalOptions.ipRange) {
      finalOptions.conditions.push(['ip_address', finalOptions.ipRange]);
    }
    
    return await r2Client.getPresignedUrl(key, finalOptions.expiresIn, finalOptions.operation);
  }
  // Rate limiting implementation (in-memory per process; use a shared store
  // such as Redis when running multiple instances)
  createRateLimiter() {
    const attempts = new Map();
    
    return (req, res, next) => {
      const clientId = req.ip || req.connection.remoteAddress;
      const now = Date.now();
      const windowMs = 15 * 60 * 1000; // 15 minutes
      const maxAttempts = 10;
      
      if (!attempts.has(clientId)) {
        attempts.set(clientId, []);
      }
      
      const clientAttempts = attempts.get(clientId);
      
      // Clean old attempts
      const validAttempts = clientAttempts.filter(timestamp => now - timestamp < windowMs);
      
      if (validAttempts.length >= maxAttempts) {
        return res.status(429).json({
          success: false,
          error: 'Rate limit exceeded. Try again later.'
        });
      }
      
      validAttempts.push(now);
      attempts.set(clientId, validAttempts);
      
      next();
    };
  }
}
// Usage in Express middleware
export const securityManager = new R2SecurityManager();
export const secureUploadMiddleware = async (req, res, next) => {
  try {
    if (!req.file && !req.files) {
      return res.status(400).json({ success: false, error: 'No files provided' });
    }
    const files = req.files || [req.file];
    
    for (const file of files) {
      // Read file buffer for validation
      const buffer = file.buffer || await fs.readFile(file.path);
      
      // Validate file
      const validation = await securityManager.validateFile(file, buffer);
      
      if (!validation.isValid) {
        return res.status(400).json({
          success: false,
          error: 'File validation failed',
          details: validation.errors
        });
      }
      
      // Generate secure file name
      const secureFileName = securityManager.generateSecureFileName(
        file.originalname, 
        req.user?.id || req.body?.userId
      );
      
      // Add security metadata
      file.secureMetadata = {
        originalName: securityManager.sanitizeFileName(file.originalname),
        uploadedBy: req.user?.id || 'anonymous',
        uploadedAt: new Date().toISOString(),
        validatedAt: new Date().toISOString(),
        clientIP: req.ip,
        userAgent: req.get('User-Agent')
      };
      
      file.secureName = secureFileName;
    }
    
    next();
  } catch (error) {
    console.error('Security middleware error:', error);
    res.status(500).json({
      success: false,
      error: 'Security validation failed'
    });
  }
};
Production Deployment Guide
Ready to deploy your R2 implementation to production? Follow this comprehensive checklist to ensure a smooth, secure, and performant deployment.
Environment Configuration
Set up your production environment variables and configurations properly.
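A minimal production environment file using the variable names assumed throughout this guide; every value below is a placeholder:

```shell
# .env.production — placeholders only; keep this file out of version control
CLOUDFLARE_R2_ACCOUNT_ID=your-account-id
CLOUDFLARE_R2_ACCESS_KEY_ID=your-access-key-id
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your-secret-access-key
CLOUDFLARE_R2_BUCKET_NAME=your-bucket
CLOUDFLARE_R2_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
NODE_ENV=production
```

In production, prefer injecting these values through your host's secret manager rather than committing a file.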
Security Hardening
Implement production-grade security measures.
Performance Optimization
Optimize for maximum performance and reliability.
Monitoring & Logging
Set up comprehensive monitoring and logging systems.
Backup & Disaster Recovery
Prepare for disaster recovery with proper backup strategies.
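Because R2 speaks the S3 API, any S3-compatible tool can drive backups. A sketch using the AWS CLI's `--endpoint-url` flag, suitable for a cron job; the destination path is a placeholder, and the CLI must be configured with your R2 access key and secret (e.g. via `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`):

```shell
#!/bin/bash
# Nightly R2 backup via the S3-compatible API
set -euo pipefail

ENDPOINT="https://${CLOUDFLARE_R2_ACCOUNT_ID}.r2.cloudflarestorage.com"
BUCKET="${CLOUDFLARE_R2_BUCKET_NAME}"
DEST="/backups/r2/$(date +%F)"

mkdir -p "$DEST"
aws s3 sync "s3://${BUCKET}" "$DEST" --endpoint-url "$ENDPOINT"
echo "Backup of ${BUCKET} written to ${DEST}"
```

For bucket-to-bucket replication across providers, a tool like rclone can sync remote-to-remote directly instead of staging files on local disk.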
#!/bin/bash
# Production Deployment Checklist Script
echo "🚀 R2 Production Deployment Checklist"
echo "======================================"
# Environment Variables Check
echo "📋 Checking Environment Variables..."
required_vars=(
  "CLOUDFLARE_R2_ACCOUNT_ID"
  "CLOUDFLARE_R2_ACCESS_KEY_ID" 
  "CLOUDFLARE_R2_SECRET_ACCESS_KEY"
  "CLOUDFLARE_R2_BUCKET_NAME"
  "CLOUDFLARE_R2_ENDPOINT"
  "NODE_ENV"
)
for var in "${required_vars[@]}"; do
  if [[ -z "${!var}" ]]; then
    echo "❌ Missing environment variable: $var"
    exit 1
  else
    echo "✅ $var is set"
  fi
done
# Security Check
echo "🔒 Running Security Checks..."
if [[ "$NODE_ENV" != "production" ]]; then
  echo "⚠️  NODE_ENV is not set to 'production'"
fi
# Test R2 Connection
echo "🌐 Testing R2 Connection..."
node -e "
const { r2Client } = require('./R2Client.js');
r2Client.listFiles('', 1).then(() => {
  console.log('✅ R2 connection successful');
  process.exit(0);
}).catch(err => {
  console.log('❌ R2 connection failed:', err.message);
  process.exit(1);
});
"
# Build Check
echo "🔨 Running Build..."
npm run build || {
  echo "❌ Build failed"
  exit 1
}
# Test Suite
echo "🧪 Running Tests..."
npm test -- --coverage || {
  echo "❌ Tests failed"
  exit 1
}
# Lint Check
echo "🔍 Running Linter..."
npm run lint || {
  echo "❌ Linting failed"
  exit 1
}
echo "✅ All checks passed! Ready for deployment."
echo ""
echo "📦 Deployment Commands:"
echo "  docker build -t my-r2-app ."
echo "  docker run -d -p 3000:3000 my-r2-app"
echo ""
echo "🔗 Post-Deployment Verification:"
echo "  curl https://your-domain.com/api/health"
echo "  curl https://your-cdn-domain.com/test-file.jpg"
🎉 Congratulations! You're Ready for Production
What You've Implemented:
- ✅ Complete R2 setup and configuration
 - ✅ Multi-language implementation (JS, Python, PHP)
 - ✅ Secure file upload system with validation
 - ✅ Image optimization and processing
 - ✅ CDN integration and performance optimization
 - ✅ Production-ready security measures
 - ✅ Comprehensive error handling and monitoring
 
Your Benefits:
- 💰 Save 70-90% on storage costs vs AWS S3
 - 🚀 Zero egress fees = unlimited bandwidth
 - 🌍 Global edge delivery in 250+ locations
 - ⚡ Faster performance than traditional cloud storage
 - 🔒 Enterprise-grade security and compliance
 - 📈 Easily scalable to any traffic level
 - 🛠️ S3-compatible APIs for easy migration
 
You now have everything needed to implement Cloudflare R2 in your projects and start saving significantly on storage costs. With S3-compatible APIs, zero egress fees, and built-in CDN capabilities, R2 is the perfect solution for high-traffic applications, media platforms, and any project that serves substantial amounts of data.
