In this post, I’ll walk you through how I integrated AWS S3 file uploads into my project using Next.js Server Actions. The goal was to keep the upload flow clean, secure, and easy to scale, without redundant route handlers or extra upload libraries.
1. Introduction
When building file upload functionality, I wanted a fast and secure way to let users upload files directly to cloud storage — without storing them on my own server or setting up an extra API layer.
My goals were:
- A clean interface for uploading
- Secure, signed URLs from the server
- No need for custom Express routes or API endpoints
- Easy integration with forms and UI
- Scalability via S3
📊 Visual Overview of the Upload Process
Diagram showing how the client, server, and AWS S3 interact using signed URLs and file metadata.
2. Configuring S3 for Secure Uploads
First, create a private bucket on AWS S3 and configure CORS to allow browser uploads. Then, add an IAM user with limited permissions to upload objects only to a specific folder.
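For browser uploads via a pre-signed URL, the bucket needs a CORS rule that allows `PUT` requests from your app’s origin. A minimal sketch (the origins are placeholders; swap in your real domain, and tighten the rule further for production):

```json
[
  {
    "AllowedHeaders": ["Content-Type"],
    "AllowedMethods": ["PUT"],
    "AllowedOrigins": ["http://localhost:3000", "https://your-app.com"],
    "ExposeHeaders": []
  }
]
```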
Here’s the minimal IAM policy, scoped to `/uploads/`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/uploads/*"
    }
  ]
}
```
🔐 Restrict access to the `/uploads/` subfolder to avoid exposing the rest of the bucket.
3. Setting Up the S3 Client in Next.js
Next, initialize an S3 client using the AWS SDK v3 so you can generate pre-signed URLs from the server.
🛠️ Step 1: Define Environment Variables
Add these to your `.env` file (and make sure it’s listed in `.gitignore`):
```bash
AWS_BUCKET_NAME=your-bucket-name
AWS_BUCKET_REGION=your-region
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
```
🔒 Never commit your credentials; keep `.env` private.
⚙️ Step 2: Initialize the S3 Client
Create `lib/s3.ts`:
```ts
import { S3Client } from '@aws-sdk/client-s3';

export const s3 = new S3Client({
  region: process.env.AWS_BUCKET_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});
```
This `s3` client is reused in your Server Actions to generate signed URLs and interact with your bucket.
4. Generating Signed URLs with Authentication
Here’s a complete Server Action that checks user authentication, generates a unique key, and returns a short-lived signed URL:
```ts
'use server';

import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { getKindeServerSession } from '@kinde-oss/kinde-auth-nextjs/server';
import { s3 } from '@/lib/s3';

export const getSignedURL = async () => {
  try {
    // 1. Verify the user session and retrieve user info
    const { getUser } = getKindeServerSession();
    const user = await getUser();
    if (!user?.id || !user?.email) {
      throw new Error('Not authenticated');
    }

    // 2. Generate a unique S3 object key using user ID and timestamp.
    //    The uploads/ prefix keeps the key inside the folder the IAM policy allows.
    const key = `uploads/${Buffer.from(user.id).toString('base64')}-${Date.now()}`;

    // 3. Create the S3 command to put an object with the specified key
    const command = new PutObjectCommand({
      Bucket: process.env.AWS_BUCKET_NAME!, // Target S3 bucket
      Key: key, // Unique file key
    });

    // 4. Generate a signed URL valid for a short period (e.g., 60 seconds)
    const url = await getSignedUrl(s3, command, { expiresIn: 60 });

    // Return the signed URL and key on success
    return { success: true, url, key };
  } catch (error) {
    console.error('getSignedURL error:', error);
    return {
      success: false,
      error: (error as Error).message || 'Something went wrong.',
    };
  }
};
```
Explanation:
- `getKindeServerSession()` ensures only authenticated users can upload.
- A unique `key` is generated per user and timestamp.
- `PutObjectCommand` specifies the bucket and key.
- `getSignedUrl` issues a temporary URL valid for 60 seconds.
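One caveat about the key format: standard base64 can emit `/` and `+`, and S3 treats `/` in a key as a folder separator. If that matters for your listings or CDN paths, Node’s `base64url` encoding is a one-word change. Here `makeObjectKey` is a hypothetical helper, not part of the action above:

```typescript
// 'base64url' is URL- and S3-key-friendly: it replaces '+' with '-',
// '/' with '_', and drops the '=' padding.
function makeObjectKey(userId: string, now: number = Date.now()): string {
  const encoded = Buffer.from(userId).toString('base64url');
  // Keep the uploads/ prefix so the key stays inside the IAM-scoped folder.
  return `uploads/${encoded}-${now}`;
}
```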
5. File Upload Server Action
This server action handles file uploads to S3, including validation, authentication, uploading, and metadata storage.
```ts
'use server';

import { getKindeServerSession } from '@kinde-oss/kinde-auth-nextjs/server';
import { revalidatePath } from 'next/cache';
import { db } from '@/lib/db';
import { base64ToFile } from '@/lib/utils';
import { uploadFileSchema } from '@/lib/validation/upload';
import { getSignedURL } from '@/actions/getSignedURL';

export const uploadFile = async (data: unknown) => {
  try {
    // 1. Validate input data with Zod schema
    const { fileContent } = uploadFileSchema.parse(data);

    // 2. Authenticate the user
    const { getUser } = getKindeServerSession();
    const user = await getUser();
    if (!user?.id || !user?.email) {
      throw new Error('You must be logged in to upload a file.');
    }

    const { name, size, type: fileType, base64 } = fileContent;

    // 3. Get signed upload URL and key
    const signedURLResult = await getSignedURL();
    if (!signedURLResult.success || !signedURLResult.url || !signedURLResult.key) {
      throw new Error('Failed to get signed upload URL.');
    }
    const { url: signedUrl, key } = signedURLResult;

    // 4. Upload the file to S3
    const uploadResponse = await fetch(signedUrl, {
      method: 'PUT',
      body: base64ToFile(base64, name),
      headers: {
        'Content-Type': fileType,
      },
    });
    if (!uploadResponse.ok) {
      throw new Error('Failed to upload file to S3.');
    }

    // 5. Save file metadata in the database
    await db.file.create({
      data: {
        key,
        name,
        size,
        type: fileType,
        url: signedUrl.split('?')[0], // Strip the query string to store a clean URL
      },
    });

    // 6. Revalidate dashboard page to reflect the new upload
    revalidatePath('/dashboard');

    // 7. Return success response
    return {
      success: true,
      fileKey: key,
    };
  } catch (error) {
    console.error('uploadFile error:', error);
    return {
      success: false,
      error: (error as Error).message || 'Something went wrong.',
    };
  }
};
```
Explanation:
- `getKindeServerSession()` ensures only authenticated users can upload.
- The uploaded file is validated before any network request.
- `getSignedURL()` handles the creation of the signed URL and key.
- The file is uploaded directly to S3 using the signed URL.
- Metadata (key, name, size, type, clean URL) is saved in the database.
- `revalidatePath()` ensures the frontend reflects the new upload immediately.
6. Optional: Delete Function for Cleanup
If you need to remove files (e.g., as an admin feature), use:
```ts
import { DeleteObjectCommand } from '@aws-sdk/client-s3';
import { s3 } from '@/lib/s3';

export async function deleteFile(key: string) {
  await s3.send(
    new DeleteObjectCommand({
      Bucket: process.env.AWS_BUCKET_NAME!,
      Key: key,
    })
  );
}
```