
Backblaze B2 with Next.js: Complete Integration Guide
Learn how to integrate affordable, scalable cloud storage into your Next.js applications with Backblaze B2 - the cost-effective alternative to AWS S3.
Why Backblaze B2 for Next.js?
In modern web development, cloud storage is essential for handling user uploads, media files, and data backups. While AWS S3 dominates the market, Backblaze B2 offers a compelling alternative that's up to 4x cheaper with an S3-compatible API. This makes it perfect for Next.js applications where cost efficiency and scalability matter.
This comprehensive guide will walk you through integrating Backblaze B2 with your Next.js application, covering everything from initial setup to production-ready implementation with security best practices and performance optimization.
Cost Effective
$6/TB storage vs $23/TB on AWS S3. Save up to 75% on cloud storage costs.
S3 Compatible
Use familiar S3 APIs and tools. Easy migration from AWS with minimal code changes.
Reliable & Secure
99.9% uptime SLA with enterprise-grade security and data redundancy.
Key Benefits: Backblaze B2 offers unlimited free egress to Cloudflare CDN, making it ideal for serving media files globally at blazing speeds with minimal costs.
Prerequisites & Setup
Before we dive into the integration, make sure you have the following:
What You'll Need:
- Next.js: version 13+ (App Router recommended)
- Backblaze B2 account: free tier available with 10GB storage
- Node.js: version 18+ recommended
- Familiarity with React, Next.js API routes, and async/await
Install Required Packages
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
# or
yarn add @aws-sdk/client-s3 @aws-sdk/s3-request-presigner

We'll use the AWS SDK for S3 since Backblaze B2 is S3-compatible, which means you can use familiar AWS tools and libraries.
Setting Up Backblaze B2
Step 1: Create a Backblaze Account
1. Go to backblaze.com/b2/sign-up and create a free account
2. Verify your email address and complete the registration process
3. Log in to your Backblaze B2 dashboard
Step 2: Create a B2 Bucket
1. Navigate to Buckets in the sidebar
2. Click "Create a Bucket"
3. Choose a unique bucket name (e.g., my-nextjs-uploads)
4. Set "Files in Bucket are:" to Private (we'll use signed URLs for security)
5. Enable Object Lock if you need compliance features (optional)
6. Click Create a Bucket
Step 3: Generate Application Keys
1. Go to App Keys in the sidebar
2. Click "Add a New Application Key"
3. Name your key (e.g., nextjs-app)
4. Select your bucket from the dropdown (or choose "All" for access to all buckets)
5. Set permissions to Read and Write
6. Click Create New Key
Important: Save your keyID and applicationKey immediately. The application key is shown only once and cannot be retrieved later!
Important Information to Note
You'll need these values for your Next.js configuration:
- Endpoint: s3.us-west-004.backblazeb2.com (varies by region)
- Region: us-west-004 (check your bucket details)
- Bucket Name: Your chosen bucket name
- Key ID: From application key creation
- Application Key: From application key creation
Next.js Configuration
Environment Variables Setup
Create a .env.local file in your Next.js project root:
# Backblaze B2 Configuration
B2_ENDPOINT=s3.us-west-004.backblazeb2.com
B2_REGION=us-west-004
B2_BUCKET_NAME=my-nextjs-uploads
B2_ACCESS_KEY_ID=your_key_id_here
B2_SECRET_ACCESS_KEY=your_application_key_here
# Optional: Public URL for CDN (if using Cloudflare)
B2_PUBLIC_URL=https://your-custom-domain.com

Security Warning: Never commit your .env.local file to version control. Add it to your .gitignore file immediately!
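A missing or misspelled variable here tends to surface later as an opaque 403 from B2. To fail fast instead, you can validate the configuration once at startup with a small helper (a hypothetical lib/env.js, not part of the official setup):

```javascript
// lib/env.js - hypothetical helper to fail fast on missing B2 configuration
export function getRequiredEnv(name) {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Validate everything the B2 client needs in a single call
export function getB2Config() {
  return {
    endpoint: getRequiredEnv('B2_ENDPOINT'),
    region: getRequiredEnv('B2_REGION'),
    bucket: getRequiredEnv('B2_BUCKET_NAME'),
    accessKeyId: getRequiredEnv('B2_ACCESS_KEY_ID'),
    secretAccessKey: getRequiredEnv('B2_SECRET_ACCESS_KEY'),
  }
}
```

Calling getB2Config() from your client setup surfaces a missing key at boot time rather than on the first upload request.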
Create B2 Client Configuration
Create a new file lib/b2-client.js:
import { S3Client } from "@aws-sdk/client-s3"
// Initialize B2 client with S3-compatible configuration
export const b2Client = new S3Client({
endpoint: `https://${process.env.B2_ENDPOINT}`,
region: process.env.B2_REGION,
credentials: {
accessKeyId: process.env.B2_ACCESS_KEY_ID,
secretAccessKey: process.env.B2_SECRET_ACCESS_KEY,
},
// Force path-style URLs (required for B2)
forcePathStyle: true,
})
export const BUCKET_NAME = process.env.B2_BUCKET_NAME

This configuration creates an S3-compatible client that works seamlessly with Backblaze B2's API.
Implementing File Uploads
Let's implement a complete file upload system with both server-side and client-side components. We'll create an API route to handle uploads and a React component for the UI.
Step 1: Create Upload API Route
Create app/api/upload/route.js:
import { NextResponse } from 'next/server'
import { PutObjectCommand } from '@aws-sdk/client-s3'
import { b2Client, BUCKET_NAME } from '@/lib/b2-client'
export async function POST(request) {
try {
const formData = await request.formData()
const file = formData.get('file')
if (!file) {
return NextResponse.json(
{ error: 'No file provided' },
{ status: 400 }
)
}
// Validate file size (e.g., 10MB limit)
const maxSize = 10 * 1024 * 1024 // 10MB
if (file.size > maxSize) {
return NextResponse.json(
{ error: 'File too large. Maximum size is 10MB' },
{ status: 400 }
)
}
// Generate unique filename
const timestamp = Date.now()
const randomString = Math.random().toString(36).substring(7)
const fileExtension = file.name.split('.').pop()
const fileName = `uploads/${timestamp}-${randomString}.${fileExtension}`
// Convert file to buffer
const bytes = await file.arrayBuffer()
const buffer = Buffer.from(bytes)
// Upload to B2
const uploadParams = {
Bucket: BUCKET_NAME,
Key: fileName,
Body: buffer,
ContentType: file.type,
// Optional: Add metadata
Metadata: {
originalName: file.name,
uploadedAt: new Date().toISOString(),
},
}
const command = new PutObjectCommand(uploadParams)
await b2Client.send(command)
// Build the object URL (note: with a private bucket this link is not
// directly accessible; use a presigned URL instead, covered later)
const fileUrl = `https://${process.env.B2_ENDPOINT}/${BUCKET_NAME}/${fileName}`
return NextResponse.json({
success: true,
fileName,
fileUrl,
size: file.size,
type: file.type,
})
} catch (error) {
console.error('Upload error:', error)
return NextResponse.json(
{ error: 'Upload failed', details: error.message },
{ status: 500 }
)
}
}
Note: App Router route handlers do not use the Pages Router config export (api: { bodyParser: false }); request.formData() parses the multipart body without extra configuration.

Step 2: Create Upload Component
Create a client component for file uploads components/FileUpload.jsx:
'use client'
import { useState } from 'react'
import { Upload, CheckCircle, AlertCircle, Loader2 } from 'lucide-react'
export default function FileUpload() {
const [file, setFile] = useState(null)
const [uploading, setUploading] = useState(false)
const [uploadResult, setUploadResult] = useState(null)
const [error, setError] = useState(null)
const handleFileChange = (e) => {
const selectedFile = e.target.files?.[0]
if (selectedFile) {
setFile(selectedFile)
setError(null)
setUploadResult(null)
}
}
const handleUpload = async () => {
if (!file) {
setError('Please select a file first')
return
}
setUploading(true)
setError(null)
try {
const formData = new FormData()
formData.append('file', file)
const response = await fetch('/api/upload', {
method: 'POST',
body: formData,
})
const data = await response.json()
if (!response.ok) {
throw new Error(data.error || 'Upload failed')
}
setUploadResult(data)
setFile(null)
// Reset file input
document.getElementById('file-input').value = ''
} catch (err) {
setError(err.message)
} finally {
setUploading(false)
}
}
return (
<div className="max-w-md mx-auto p-6 bg-white rounded-lg shadow-lg">
<h2 className="text-2xl font-bold mb-6 text-black">Upload File to B2</h2>
<div className="space-y-4">
<div>
<label
htmlFor="file-input"
className="block text-sm font-medium text-gray-700 mb-2"
>
Choose File
</label>
<input
id="file-input"
type="file"
onChange={handleFileChange}
className="block w-full text-sm text-gray-500
file:mr-4 file:py-2 file:px-4
file:rounded-lg file:border-0
file:text-sm file:font-semibold
file:bg-blue-50 file:text-blue-700
hover:file:bg-blue-100
cursor-pointer"
/>
{file && (
<p className="mt-2 text-sm text-gray-600">
Selected: {file.name} ({(file.size / 1024).toFixed(2)} KB)
</p>
)}
</div>
<button
onClick={handleUpload}
disabled={!file || uploading}
className="w-full flex items-center justify-center px-4 py-2 bg-blue-600 text-white rounded-lg font-medium hover:bg-blue-700 disabled:bg-gray-400 disabled:cursor-not-allowed transition-colors"
>
{uploading ? (
<>
<Loader2 className="w-4 h-4 mr-2 animate-spin" />
Uploading...
</>
) : (
<>
<Upload className="w-4 h-4 mr-2" />
Upload to B2
</>
)}
</button>
{error && (
<div className="flex items-start p-4 bg-red-50 border border-red-200 rounded-lg">
<AlertCircle className="w-5 h-5 mr-2 text-red-600 flex-shrink-0 mt-0.5" />
<p className="text-sm text-red-800">{error}</p>
</div>
)}
{uploadResult && (
<div className="p-4 bg-green-50 border border-green-200 rounded-lg">
<div className="flex items-center mb-2">
<CheckCircle className="w-5 h-5 mr-2 text-green-600" />
<p className="font-semibold text-green-800">Upload Successful!</p>
</div>
<div className="text-sm text-gray-700 space-y-1">
<p><strong>File:</strong> {uploadResult.fileName}</p>
<p><strong>Size:</strong> {(uploadResult.size / 1024).toFixed(2)} KB</p>
<a
href={uploadResult.fileUrl}
target="_blank"
rel="noopener noreferrer"
className="text-blue-600 hover:underline block mt-2"
>
View File
</a>
</div>
</div>
)}
</div>
</div>
)
}

Step 3: Use the Upload Component
Add the component to any page:
import FileUpload from '@/components/FileUpload'
export default function UploadPage() {
return (
<div className="container mx-auto py-12">
<FileUpload />
</div>
)
}

File Download & Management
Generate Presigned URLs for Secure Downloads
For private files, use presigned URLs that expire after a set time:
import { GetObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import { b2Client, BUCKET_NAME } from '@/lib/b2-client'
export async function generateDownloadUrl(fileName, expiresIn = 3600) {
const command = new GetObjectCommand({
Bucket: BUCKET_NAME,
Key: fileName,
})
// Generate presigned URL that expires in 1 hour (3600 seconds)
const url = await getSignedUrl(b2Client, command, { expiresIn })
return url
}

List Files in Bucket
Create an API route to list uploaded files:
import { NextResponse } from 'next/server'
import { ListObjectsV2Command } from '@aws-sdk/client-s3'
import { b2Client, BUCKET_NAME } from '@/lib/b2-client'
export async function GET(request) {
try {
const command = new ListObjectsV2Command({
Bucket: BUCKET_NAME,
Prefix: 'uploads/', // Only list files in uploads folder
MaxKeys: 100, // Limit results
})
const response = await b2Client.send(command)
const files = response.Contents?.map((file) => ({
key: file.Key,
size: file.Size,
lastModified: file.LastModified,
url: `https://${process.env.B2_ENDPOINT}/${BUCKET_NAME}/${file.Key}`,
})) || []
return NextResponse.json({ files })
} catch (error) {
return NextResponse.json(
{ error: 'Failed to list files' },
{ status: 500 }
)
}
}

Delete Files
import { NextResponse } from 'next/server'
import { DeleteObjectCommand } from '@aws-sdk/client-s3'
import { b2Client, BUCKET_NAME } from '@/lib/b2-client'
export async function DELETE(request) {
try {
const { fileName } = await request.json()
const command = new DeleteObjectCommand({
Bucket: BUCKET_NAME,
Key: fileName,
})
await b2Client.send(command)
return NextResponse.json({
success: true,
message: 'File deleted successfully'
})
} catch (error) {
return NextResponse.json(
{ error: 'Failed to delete file' },
{ status: 500 }
)
}
}

Security Best Practices
1. Never Expose Credentials Client-Side
Always keep your B2 credentials on the server. Use API routes to handle all B2 operations.
// ❌ NEVER do this
const client = new S3Client({
credentials: {
accessKeyId: 'YOUR_KEY', // Exposed to client!
secretAccessKey: 'YOUR_SECRET', // Exposed to client!
}
})
// ✅ DO this instead
// Keep credentials in .env.local and use API routes

2. Validate File Types and Sizes
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/gif', 'application/pdf']
const MAX_SIZE = 10 * 1024 * 1024 // 10MB
if (!ALLOWED_TYPES.includes(file.type)) {
return NextResponse.json(
{ error: 'File type not allowed' },
{ status: 400 }
)
}
if (file.size > MAX_SIZE) {
return NextResponse.json(
{ error: 'File too large' },
{ status: 400 }
)
}

3. Use Presigned URLs for Private Content
Never make your bucket public. Use presigned URLs that expire for secure access.
// Generate URL that expires in 1 hour
const downloadUrl = await getSignedUrl(
b2Client,
new GetObjectCommand({ Bucket, Key }),
{ expiresIn: 3600 }
)

4. Sanitize Filenames
function sanitizeFilename(filename) {
  // Generate a safe, unique name; fall back to 'bin' for extensionless files
  const parts = filename.split('.')
  const ext = parts.length > 1 ? parts.pop().replace(/[^a-zA-Z0-9]/g, '') : 'bin'
  const timestamp = Date.now()
  const random = Math.random().toString(36).substring(7)
  return `${timestamp}-${random}.${ext}`
}

5. Implement Rate Limiting
Protect your upload endpoint from abuse with rate limiting.
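The @/lib/rate-limit module imported below is not a published package; a minimal in-memory sketch of what it could look like is shown here. Assumptions: it matches the check(request, limit, token) call shape used below, and it is per-instance only (use a shared store such as Redis when running multiple serverless instances).

```javascript
// lib/rate-limit.js - minimal sliding-window limiter (in-memory sketch)
export default function rateLimit({ interval, uniqueTokenPerInterval = 500 }) {
  const hits = new Map() // token -> timestamps of recent requests

  return {
    // Resolves if under the limit, throws otherwise
    async check(request, limit, token) {
      const now = Date.now()
      const timestamps = (hits.get(token) || []).filter((t) => now - t < interval)
      if (timestamps.length >= limit) {
        throw new Error('Rate limit exceeded')
      }
      timestamps.push(now)
      hits.set(token, timestamps)
      // Crude cap on memory use: evict the oldest-inserted token
      if (hits.size > uniqueTokenPerInterval) hits.delete(hits.keys().next().value)
    },
  }
}
```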
// Using next-rate-limit or similar
import rateLimit from '@/lib/rate-limit'
const limiter = rateLimit({
interval: 60 * 1000, // 1 minute
uniqueTokenPerInterval: 500,
})
export async function POST(request) {
try {
await limiter.check(request, 10, 'CACHE_TOKEN') // 10 requests per minute
// ... upload logic
} catch {
return NextResponse.json(
{ error: 'Rate limit exceeded' },
{ status: 429 }
)
}
}

Performance Optimization
Use Cloudflare CDN
Backblaze B2 offers free bandwidth when integrated with Cloudflare CDN. This dramatically reduces costs and improves global performance.
- Enable Cloudflare integration in B2 settings
- Add custom domain via Cloudflare DNS
- Serve files through CDN URLs
Multipart Uploads for Large Files
For files larger than 100MB, use multipart uploads for better reliability and performance.
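The SDK commands imported below drive the upload itself; the chunking step can be sketched as a plain helper (hypothetical, not an SDK function). S3-compatible APIs require every part except the last to be at least 5 MB:

```javascript
// Minimum part size for S3-compatible multipart uploads (all parts but the last)
const MIN_PART_SIZE = 5 * 1024 * 1024

// Split a Buffer into numbered parts ready for UploadPartCommand calls
export function splitIntoParts(buffer, partSize = 10 * 1024 * 1024) {
  if (partSize < MIN_PART_SIZE) {
    throw new Error('Part size must be at least 5 MB')
  }
  const parts = []
  for (let offset = 0; offset < buffer.length; offset += partSize) {
    parts.push({
      partNumber: parts.length + 1, // part numbers are 1-based
      body: buffer.subarray(offset, offset + partSize),
    })
  }
  return parts
}
```

Each part's ETag returned by UploadPartCommand is then collected and passed to CompleteMultipartUploadCommand.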
import {
CreateMultipartUploadCommand,
UploadPartCommand,
CompleteMultipartUploadCommand
} from '@aws-sdk/client-s3'
// Split file into chunks and upload
// Complete example in documentation

Image Optimization Example
Resize and optimize images before uploading to save bandwidth and storage costs:
import sharp from 'sharp'
async function optimizeImage(buffer) {
return await sharp(buffer)
.resize(1920, 1080, {
fit: 'inside',
withoutEnlargement: true
})
.jpeg({ quality: 85 })
.toBuffer()
}
// Use in your upload route
const optimizedBuffer = await optimizeImage(buffer)
const uploadParams = {
Bucket: BUCKET_NAME,
Key: fileName,
Body: optimizedBuffer,
ContentType: 'image/jpeg',
}

Cost Comparison: B2 vs AWS S3
One of Backblaze B2's biggest advantages is its pricing. Here's how it compares to AWS S3:
| Feature | Backblaze B2 | AWS S3 |
|---|---|---|
| Storage (per GB/month) | $0.006 | $0.023 |
| Download (per GB) | $0.01 | $0.09 |
| API Calls (per 10,000) | $0.004 (Class B) | $0.005 (PUT) |
| Free Tier | 10 GB storage | 5 GB (12 months) |
| CDN Integration | Free with Cloudflare | Paid (CloudFront) |
Cost Savings Example
For 1TB storage + 1TB bandwidth/month, the table rates above work out to roughly $6 + $10 = $16/month on Backblaze B2 versus $23 + $90 = $113/month on AWS S3, about a 7x difference before any CDN savings.
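The arithmetic can be sketched directly from the per-TB rates in the table (a rough estimate that ignores B2's free egress allowance and S3's tiered pricing, so treat it as an upper bound for B2):

```javascript
// Rough monthly cost from the comparison table, expressed per TB
function monthlyCost({ storageTB, egressTB, storagePerTB, egressPerTB }) {
  return storageTB * storagePerTB + egressTB * egressPerTB
}

// Table rates: B2 $6/TB storage + $10/TB egress; S3 $23/TB + $90/TB
const b2 = monthlyCost({ storageTB: 1, egressTB: 1, storagePerTB: 6, egressPerTB: 10 })
const s3 = monthlyCost({ storageTB: 1, egressTB: 1, storagePerTB: 23, egressPerTB: 90 })

console.log(`B2: $${b2}/month vs S3: $${s3}/month`) // B2: $16/month vs S3: $113/month
```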
When to Choose B2
- Cost-sensitive projects
- High bandwidth needs with Cloudflare
- Media-heavy applications
- Backup and archival storage
- Startups and indie projects
Common Issues & Solutions
Issue: "Access Denied" Error
Problem: Getting 403 Access Denied when trying to upload or download files.
Solutions:
- Verify your Application Key has correct permissions (Read/Write)
- Check that the bucket name in your config matches exactly
- Ensure credentials are correctly set in environment variables
- Confirm the application key is scoped to the correct bucket
Issue: CORS Errors
Problem: Browser CORS errors when accessing files.
Solution: Configure CORS in your B2 bucket settings:
[
{
"corsRuleName": "allowAll",
"allowedOrigins": ["https://yourdomain.com"],
"allowedHeaders": ["*"],
"allowedOperations": ["s3_get", "s3_put", "s3_head"],
"maxAgeSeconds": 3600
}
]

Issue: Slow Upload Speeds
Solutions:
- Use multipart uploads for files larger than 100MB
- Compress files client-side before upload
- Choose a B2 region closer to your users
- Serve traffic over HTTP/2 or HTTP/3 (handled by your CDN or hosting platform rather than Next.js itself)
Issue: Files Not Appearing Immediately
Problem: Uploaded files don't show up in listings right away.
Solution: B2 has eventual consistency. Wait a few seconds and implement retry logic:
import { HeadObjectCommand } from '@aws-sdk/client-s3'
import { b2Client, BUCKET_NAME } from '@/lib/b2-client'

// HEAD the object; a missing key throws, so treat errors as "not yet visible"
const checkFileExists = (fileName) =>
  b2Client.send(new HeadObjectCommand({ Bucket: BUCKET_NAME, Key: fileName }))
    .then(() => true, () => false)

async function waitForFile(fileName, maxAttempts = 5) {
  for (let i = 0; i < maxAttempts; i++) {
    if (await checkFileExists(fileName)) return true
    await new Promise(resolve => setTimeout(resolve, 1000))
  }
  return false
}

Conclusion & Next Steps
Congratulations! You've learned how to integrate Backblaze B2 cloud storage with Next.js, creating a cost-effective, scalable solution for file uploads and management. With B2's affordable pricing and S3-compatible API, you can build production-ready applications without breaking the bank.
Key Takeaways
- Backblaze B2 is up to 75% cheaper than AWS S3 with comparable performance
- S3-compatible API makes integration with Next.js straightforward
- Free Cloudflare CDN integration dramatically reduces bandwidth costs
- Security best practices are essential for production applications
- Performance optimization and proper error handling ensure reliability
Next Steps to Enhance Your Implementation
1. Implement user authentication to restrict uploads to logged-in users
2. Add image preview and cropping functionality before upload
3. Set up automated backups and versioning for critical files
4. Implement a file management dashboard with search and filtering
5. Configure lifecycle rules to automatically archive or delete old files
6. Set up monitoring and alerts for storage usage and costs
Ready to optimize your Next.js application further? Check out our guides on SEO optimization and structured data to improve your site's visibility and performance.