In this post, I’ll walk you through how I integrated AWS S3 file uploads into my project using Next.js Server Actions. The goal was to keep the API clean, secure, and easy to scale — and avoid redundant route handlers or third-party libraries.
When building file upload functionality, I wanted a fast and secure way to let users upload files directly to cloud storage — without storing them on my own server or setting up an extra API layer.
My goals were:

- Keep the API clean, secure, and easy to scale
- Avoid redundant route handlers and third-party upload libraries
- Let users upload files directly to cloud storage, without storing them on my own server or adding an extra API layer
*Diagram: how the client, server, and AWS S3 interact using signed URLs and file metadata.*
First, create a private bucket on AWS S3 and configure CORS to allow browser uploads. Then, add an IAM user with limited permissions to upload objects only to a specific folder.
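The post doesn’t show the exact CORS rules, so here’s a minimal sketch of what the bucket’s CORS configuration might look like. The origin is a placeholder: swap in your app’s domain, and add `http://localhost:3000` while developing locally.

```json
[
  {
    "AllowedOrigins": ["https://your-domain.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```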
Here’s the minimal IAM policy scoped to /uploads/:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/uploads/*"
    }
  ]
}
```
🔐 Restrict access to a subfolder (`/uploads/`) to avoid exposing the rest of the bucket.
Next, initialize an S3 client using the AWS SDK v3 so you can generate pre-signed URLs from the server.
Add these to your .env (and ensure it’s in .gitignore):
```
AWS_BUCKET_NAME=your-bucket-name
AWS_BUCKET_REGION=your-region
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
```
🔒 Never commit your credentials — keep `.env` private.
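The client below reads these variables with non-null assertions (`!`), so a missing one only surfaces as a confusing SDK error at request time. Optionally, fail fast instead. Here’s a minimal sketch you could drop at the top of `lib/s3.ts`:

```ts
// Hypothetical startup check: throw immediately if a variable is missing
const required = [
  'AWS_BUCKET_NAME',
  'AWS_BUCKET_REGION',
  'AWS_ACCESS_KEY_ID',
  'AWS_SECRET_ACCESS_KEY',
] as const;

for (const name of required) {
  if (!process.env[name]) {
    throw new Error(`Missing environment variable: ${name}`);
  }
}
```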
Create lib/s3.ts:
```ts
import { S3Client } from '@aws-sdk/client-s3';

export const s3 = new S3Client({
  region: process.env.AWS_BUCKET_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});
```
This s3 client will be reused in your Server Actions to generate signed URLs and interact with your bucket.
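As an aside: because the bucket is private, a bare object URL isn’t publicly readable. One example of reusing this client is a hypothetical helper (not part of the setup above) that issues a short-lived download URL with `GetObjectCommand`:

```ts
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { s3 } from '@/lib/s3';

// Returns a temporary read URL for a private object
export async function getDownloadUrl(key: string) {
  const command = new GetObjectCommand({
    Bucket: process.env.AWS_BUCKET_NAME!,
    Key: key,
  });

  // URL expires after 5 minutes
  return getSignedUrl(s3, command, { expiresIn: 300 });
}
```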
Here’s a complete Server Action that checks user authentication, generates a unique key, and returns a short-lived signed URL:
```ts
'use server';

import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { getKindeServerSession } from '@kinde-oss/kinde-auth-nextjs/server';
import { s3 } from '@/lib/s3';

export const getSignedURL = async () => {
  try {
    // 1. Verify the user session and retrieve user info
    const { getUser } = getKindeServerSession();
    const user = await getUser();

    if (!user?.id || !user?.email) {
      throw new Error('Not authenticated');
    }

    // 2. Generate a unique S3 object key from the user ID and a timestamp.
    //    The `uploads/` prefix matches the IAM policy above, and `base64url`
    //    keeps `/` and `+` characters out of the key.
    const key = `uploads/${Buffer.from(user.id).toString('base64url')}-${Date.now()}`;

    // 3. Create the S3 command to put an object with the specified key
    const command = new PutObjectCommand({
      Bucket: process.env.AWS_BUCKET_NAME!, // Target S3 bucket
      Key: key, // Unique file key
    });

    // 4. Generate a signed URL valid for a short period (here, 60 seconds)
    const url = await getSignedUrl(s3, command, { expiresIn: 60 });

    // Return the signed URL and key on success
    return { success: true, url, key };
  } catch (error) {
    console.error('getSignedURL error:', error);
    return {
      success: false,
      error: (error as Error).message || 'Something went wrong.',
    };
  }
};
```
Explanation:

- `getKindeServerSession()` ensures only authenticated users can upload.
- `key` is generated per user and timestamp, under the `uploads/` prefix required by the IAM policy.
- `PutObjectCommand` specifies the bucket and key.
- `getSignedUrl` issues a temporary URL valid for 60 seconds.

The next server action handles the full upload to S3, including validation, authentication, uploading, and metadata storage.
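It imports two small helpers that aren’t defined elsewhere in this post: `uploadFileSchema` (a Zod schema) and `base64ToFile`. Their exact contents are up to you; here’s a minimal sketch of each, with field names matching how the action destructures `fileContent`.

`lib/validation/upload.ts`:

```ts
import { z } from 'zod';

// Minimal sketch: adjust size limits and accepted types to your needs
export const uploadFileSchema = z.object({
  fileContent: z.object({
    name: z.string().min(1),
    size: z.number().positive(),
    type: z.string().min(1),
    base64: z.string().min(1),
  }),
});
```

`lib/utils.ts`:

```ts
// Minimal sketch: assumes `base64` may carry a "data:<mime>;base64," prefix.
// `File` is global in Node 20+; on older runtimes, import it from 'node:buffer'.
export function base64ToFile(base64: string, name: string): File {
  const data = base64.includes(',') ? base64.split(',')[1] : base64;
  return new File([Buffer.from(data, 'base64')], name);
}
```

With those in place, here’s the complete upload action: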
```ts
'use server';

import { getKindeServerSession } from '@kinde-oss/kinde-auth-nextjs/server';
import { revalidatePath } from 'next/cache';
import { db } from '@/lib/db';
import { base64ToFile } from '@/lib/utils';
import { uploadFileSchema } from '@/lib/validation/upload';
import { getSignedURL } from '@/actions/getSignedURL';

export const uploadFile = async (data: unknown) => {
  try {
    // 1. Validate input data with the Zod schema
    const { fileContent } = uploadFileSchema.parse(data);

    // 2. Authenticate the user
    const { getUser } = getKindeServerSession();
    const user = await getUser();

    if (!user?.id || !user?.email) {
      throw new Error('You must be logged in to upload a file.');
    }

    const { name, size, type: fileType, base64 } = fileContent;

    // 3. Get a signed upload URL and key
    const signedURLResult = await getSignedURL();

    if (!signedURLResult.success || !signedURLResult.url || !signedURLResult.key) {
      throw new Error('Failed to get signed upload URL.');
    }

    const { url: signedUrl, key } = signedURLResult;

    // 4. Upload the file to S3
    const uploadResponse = await fetch(signedUrl, {
      method: 'PUT',
      body: base64ToFile(base64, name),
      headers: {
        'Content-Type': fileType,
      },
    });

    if (!uploadResponse.ok) {
      throw new Error('Failed to upload file to S3.');
    }

    // 5. Save file metadata in the database (strip the query string so
    //    only the permanent object URL is stored)
    await db.file.create({
      data: {
        key,
        name,
        size,
        type: fileType,
        url: signedUrl.split('?')[0],
      },
    });

    // 6. Revalidate the dashboard page to reflect the new upload
    revalidatePath('/dashboard');

    // 7. Return success response
    return {
      success: true,
      fileKey: key,
    };
  } catch (error) {
    console.error('uploadFile error:', error);
    return {
      success: false,
      error: (error as Error).message || 'Something went wrong.',
    };
  }
};
```
Explanation:

- `getKindeServerSession()` ensures only authenticated users can upload.
- `getSignedURL()` handles the creation of the signed URL and key.
- `revalidatePath()` ensures the frontend reflects the new upload immediately.

If you need to remove files (e.g., as an admin feature), use:
```ts
import { DeleteObjectCommand } from '@aws-sdk/client-s3';
import { s3 } from '@/lib/s3';

export async function deleteFile(key: string) {
  // Permanently remove the object from the bucket
  await s3.send(
    new DeleteObjectCommand({
      Bucket: process.env.AWS_BUCKET_NAME!,
      Key: key,
    })
  );
}
```