Softening the stack
We have moved from treating servers like pets to treating them like cattle. Now, we are moving to a world where we don’t think about the servers at all.
This post explains how recent infrastructure updates, specifically Bun's native S3 client and Vercel's AI SDK, let us build production-ready AI applications with a radically simpler stack. The core idea is a no-database architecture that uses an object store for both files and metadata.
The ghost of stacks past
If we wanted to build an AI image generation app two years ago, we faced significant friction. The stack looked like this:
- Heavy SDKs: We had to pull in the official AWS S3 SDK. The API was verbose. It required us to instantiate clients, create command objects, and manage credentials manually.

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const client = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
  }
})

const command = new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'photo.jpg',
  Body: buffer,
  ContentType: 'image/jpeg'
})

await client.send(command)

- Database management: We needed a database to track prompts and user uploads. This meant setting up Postgres, adding an ORM like Prisma, and managing schema migrations.
- Incidental complexity: We had to juggle four dashboards: Vercel for the frontend, Supabase for the database, AWS for storage, and Replicate for inference.
The new, simpler stack
Three updates, all shipped in late 2025, collapsed this complexity.
1. Bun 1.3 (October 2025): Native S3 support
Bun 1.3 introduced a native S3 client. It is not an npm package but part of the runtime, just like fetch. Talking to storage is now a standard capability. The verbose AWS code above shrinks to this:
import { S3Client } from 'bun'
const s3 = new S3Client({ bucket: 'my-bucket' })
await s3.write('photo.jpg', buffer, { type: 'image/jpeg' })

2. Railway Object Storage (October 2025)
Railway launched native Object Storage to solve the configuration problem. Instead of managing IAM policies and access keys in the AWS console, we add a storage volume in the Railway dashboard. It automatically injects the necessary environment variables (S3_ACCESS_KEY_ID, S3_BUCKET, etc.) into the application.
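Because the credentials arrive as environment variables, it is worth failing fast at boot if they are absent. A minimal sketch: S3_ACCESS_KEY_ID and S3_BUCKET are named above, while the secret-key variable name and the `checkStorageEnv` helper are our own assumptions for illustration.

```javascript
// Hypothetical startup check for the injected variables.
// S3_SECRET_ACCESS_KEY is an assumed name, not confirmed by the text above.
const required = ['S3_ACCESS_KEY_ID', 'S3_SECRET_ACCESS_KEY', 'S3_BUCKET']

function checkStorageEnv(env) {
  const missing = required.filter((name) => !env[name])
  if (missing.length > 0) {
    throw new Error(`Missing storage config: ${missing.join(', ')}`)
  }
  return { bucket: env.S3_BUCKET }
}

// Fails at boot instead of on the first upload
const config = checkStorageEnv({
  S3_ACCESS_KEY_ID: 'AKIA...',
  S3_SECRET_ACCESS_KEY: 'secret',
  S3_BUCKET: 'my-bucket',
})
console.log(config.bucket) // 'my-bucket'
```

In a real app you would pass `process.env` instead of a literal object.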
3. AI SDK 5 (2025): Unified image generation
Vercel’s AI SDK 5 introduced experimental_generateImage. This function abstracts the flow of submitting a prompt and polling for results into a single awaitable call. This standardization mirrors the goals of the Model Context Protocol (MCP), which unifies how applications connect to AI models.
import { experimental_generateImage as generateImage } from 'ai'
import { createReplicate } from '@ai-sdk/replicate'
// We use a factory function to pass the user's key dynamically
const replicate = createReplicate({ apiToken: userProvidedKey })
const { image } = await generateImage({
  model: replicate.image('black-forest-labs/flux-schnell'),
  prompt: 'purple cow eating ice cream',
  aspectRatio: '16:9'
})

How it works: The no-database approach
The most important part of this stack is that there is no database.
When we generate an image, we store two files in the S3 bucket: the image (.png) and a metadata file (.json). The object store’s file list becomes our index. This avoids the overhead of managing a separate SQL service.
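The pairing convention above (one UUID, two sibling objects) can be captured in two tiny helpers. A sketch only; these function names are ours, not part of Bun or the app:

```javascript
// One generation produces `${id}.png` and `${id}.json` side by side.
function keysFor(id) {
  return { image: `${id}.png`, metadata: `${id}.json` }
}

// Recover the ID from either key, so listings can be paired back up
function idFromKey(key) {
  return key.replace(/\.(png|json)$/, '')
}

const { image, metadata } = keysFor('123e4567')
console.log(image, metadata) // '123e4567.png' '123e4567.json'
console.log(idFromKey(metadata)) // '123e4567'
```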
Writing data
The /api/generate endpoint accepts a prompt and a temporary API key. We use the key once for the inference call and then discard it.
// Simplified logic for /api/generate
import { S3Client } from 'bun'
import { experimental_generateImage as generateImage } from 'ai'
import { createReplicate } from '@ai-sdk/replicate'
const s3 = new S3Client({ bucket: process.env.S3_BUCKET })
export async function POST(req) {
  const { prompt, apiKey } = await req.json()

  // 1. Generate the image
  const replicate = createReplicate({ apiToken: apiKey })
  const { image } = await generateImage({
    model: replicate.image('black-forest-labs/flux-schnell'),
    prompt,
  })

  // 2. Create a unique ID
  const id = crypto.randomUUID()
  const imageBuffer = Buffer.from(image.base64, 'base64')

  // 3. Write image and metadata to S3 in parallel
  await Promise.all([
    s3.write(`${id}.png`, imageBuffer, { type: 'image/png' }),
    s3.write(`${id}.json`, JSON.stringify({
      id,
      prompt,
      createdAt: new Date().toISOString(),
    }), { type: 'application/json' })
  ])

  return Response.json({ id, status: 'success' })
}

Reading data
To fetch the history, we list the objects in the bucket and filter for JSON files.
// Simplified logic for /api/history
import { S3Client } from 'bun'
const s3 = new S3Client({ bucket: process.env.S3_BUCKET })
export async function GET() {
  // List all objects in the bucket
  const result = await s3.list()

  // Filter for metadata files
  const metadataFiles = result.contents.filter(f => f.key.endsWith('.json'))

  // Read them in parallel
  const history = await Promise.all(
    metadataFiles.map(async (fileInfo) => {
      const file = s3.file(fileInfo.key)
      return await file.json()
    })
  )

  // Sort newest-first and return the latest 20
  const sorted = history.sort((a, b) =>
    new Date(b.createdAt) - new Date(a.createdAt)
  )
  return Response.json(sorted.slice(0, 20))
}

Sanity check
We should verify that our “database” is actually persisting data. Since we are using Bun.S3Client, we can write a quick test script to check the bucket contents directly from our terminal.
// check-storage.ts
import { S3Client } from 'bun'
const s3 = new S3Client({ bucket: process.env.S3_BUCKET })
console.log('Checking bucket connection...')
const result = await s3.list()
const jsonCount = result.contents.filter(f => f.key.endsWith('.json')).length
const pngCount = result.contents.filter(f => f.key.endsWith('.png')).length

console.log(`Found ${jsonCount} metadata files`)
console.log(`Found ${pngCount} images`)

if (jsonCount !== pngCount) {
  console.error('WARNING: Data mismatch detected.')
} else {
  console.log('Sanity check passed: Data is consistent.')
}

Running this with bun run check-storage.ts gives us immediate feedback on the state of our system without needing a database GUI.
Why this matters
We are seeing capabilities diffuse downward from user-space libraries into the platform itself. fetch started as a userland library on the server, and now it is a standard runtime API. S3 clients and SQL drivers are following the same path into runtimes like Bun.
This shift reduces dependencies and supply chain risk. It creates a stack that feels solid rather than assembled. It mirrors the evolution of the frontend, where APIs like View Transitions have replaced complex animation libraries. (See: When browsers grew up).
The “no database” pattern is also underused. We are conditioned to reach for Postgres for every problem. But S3 is durable, cheap, and sufficient for many append-only workloads.
FAQ
Q: Is using S3 as a database viable for production?
A: Yes, for specific use cases. It works well for append-only logs, metadata storage, or read-heavy content where you don’t need complex joins or transactions. It eliminates the maintenance burden of a relational database.
Q: What is the main advantage of Bun’s native S3 client?
A: Performance and simplicity. The API is concise, and because it is implemented in native code within the runtime, it starts faster and uses less memory than the JavaScript-based AWS SDK.
Q: Does this lock me into Bun?
A: Strictly speaking, yes. Using import { S3Client } from 'bun' ties that specific file to the Bun runtime. However, the logic is standard; refactoring to the AWS SDK for Node.js would take minutes if you ever needed to switch.
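One way to keep that switch cheap is to hide the client behind a small storage seam. A sketch under our own naming assumptions, with an in-memory implementation standing in for either SDK:

```javascript
// Hypothetical seam: application code depends only on write/read/list,
// so the backing client (Bun's S3Client, the AWS SDK, or memory) stays
// confined to one module.
function createMemoryStore() {
  const objects = new Map()
  return {
    async write(key, body) { objects.set(key, body) },
    async read(key) { return objects.get(key) },
    async list() { return [...objects.keys()] },
  }
}

async function demo() {
  const store = createMemoryStore()
  await store.write('photo.json', '{"prompt":"purple cow"}')
  return store.list()
}

demo().then((keys) => console.log(keys)) // [ 'photo.json' ]
```

The in-memory variant doubles as a test double, so route handlers can be tested without a bucket at all.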
Q: How does the AI SDK handle long-running generations?
A: The generateImage function handles the polling logic for you. If the underlying model (like Flux via Replicate) takes time to process, the SDK waits and only resolves the promise when the image is ready or fails.
Q: Can I search this data?
A: Not efficiently with this setup. If you need to search by prompt text or filter by complex criteria, you should use a real database or a search index like Meilisearch. This pattern is for simple, chronological lists.
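To make that limitation concrete: searching by prompt under this layout means fetching and scanning every metadata object, a full O(n) pass per query. A sketch, with in-memory records standing in for the downloaded .json files:

```javascript
// These records stand in for metadata objects already fetched from the bucket.
const records = [
  { id: 'a1', prompt: 'purple cow eating ice cream' },
  { id: 'b2', prompt: 'red barn at sunset' },
]

// Every query scans every record; there is no index to consult.
function searchByPrompt(items, term) {
  const needle = term.toLowerCase()
  return items.filter((r) => r.prompt.toLowerCase().includes(needle))
}

console.log(searchByPrompt(records, 'cow').map((r) => r.id)) // [ 'a1' ]
```

Fine for a few hundred objects; past that, a real index earns its keep.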