raw postgres / drizzle / prisma → briven
the straightest path. you already have a postgres schema; briven gives you reactive queries, function hosting, and a managed deploy story on top of it. follow the ten-step playbook on /migration — this page documents the postgres-specific parts.
schema port — drizzle / prisma → briven dsl
your existing column types map directly. drizzle:
// drizzle
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';
export const notes = pgTable('notes', {
id: text('id').primaryKey(),
body: text('body').notNull(),
authorId: text('author_id').references(() => users.id),
createdAt: timestamp('created_at').defaultNow().notNull(),
});
// briven
import { schema, table, text, timestamp } from '@briven/cli/schema';
export default schema({
notes: table({
columns: {
id: text().primaryKey(),
body: text().notNull(),
authorId: text().references('users', 'id'),
createdAt: timestamp().notNull().default('now()'),
},
}),
});

prisma:
// prisma
model Note {
id String @id @default(cuid())
body String
authorId String?
author User? @relation(fields: [authorId], references: [id])
createdAt DateTime @default(now())
}
// briven
notes: table({
columns: {
id: text().primaryKey(),
body: text().notNull(),
authorId: text().references('users', 'id'), // FK relation flattens to a column ref
createdAt: timestamp().notNull().default('now()'),
},
}),

conventions to know:
- column casing. briven dsl is camelCase in TS; the generated SQL is snake_case by convention. if your existing tables already use snake_case, the migration is zero-diff at the SQL layer.
- indexes live on the table's indexes: [...] array, not chained on the column. compound + unique indexes go here (a sketch follows this list).
- generated columns (drizzle .generatedAlwaysAs(), postgres GENERATED ALWAYS AS) aren't modelled in the dsl yet — declare the column as text()/integer(), then add the GENERATED clause via a custom migration step.
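a minimal sketch of the indexes array placement. the index() helper name and its options are assumptions; only the indexes array on table() is documented above:
// sketch only: index() and the unique option name are assumed, not confirmed briven api
notes: table({
  columns: { /* ... as above ... */ },
  indexes: [
    index(['authorId', 'createdAt']),    // compound index
    index(['body'], { unique: true }),   // unique index
  ],
}),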
query layer — drizzle/prisma → ctx.db
briven's ctx.db is a focused query builder, not a full ORM. the 90% of select / insert / update / delete patterns translate directly:
// drizzle
const rows = await db.select().from(notes).where(eq(notes.authorId, id)).orderBy(desc(notes.createdAt)).limit(50);
// briven
const rows = await ctx.db('notes')
.select()
.where({ authorId: id })
.orderBy('createdAt', 'desc')
.limit(50);

for the remaining 10% — joins, CTEs, window functions, full-text — drop to ctx.db.execute(sql, params) with a parameterised SQL string. see /functions for the full Ctx shape.
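a hedged example of that escape hatch for a join; ctx.db.execute(sql, params) is as described above, but the $1 placeholder style and the return shape are assumptions:
// join example: drop to a raw, parameterised SQL string ($1 placeholder style assumed)
const rows = await ctx.db.execute(
  `select n.id, n.body, n.created_at, u.name as author_name
     from notes n
     join users u on u.id = n.author_id
    where n.author_id = $1
    order by n.created_at desc
    limit 50`,
  [authorId],
);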
data port — pg_dump | pg_restore
since briven's data plane is also postgres, the data move is a pg_dump pipe. briven creates a per-project schema (proj_<projectId>); your existing public schema lands inside it.
# 1. open a short-lived dsn into the briven project's schema
briven db shell-token > /tmp/briven-dsn # writes a single-line dsn
# 2. dump source, restore into briven, scoped to public
pg_dump --schema=public --no-owner --no-privileges \
--format=custom \
"$SOURCE_DATABASE_URL" \
| pg_restore --no-owner --no-privileges \
--schema=public \
--dbname="$(cat /tmp/briven-dsn)"
# 3. verify row counts match
psql "$SOURCE_DATABASE_URL" -tAc 'select count(*) from notes'
psql "$(cat /tmp/briven-dsn)" -tAc 'select count(*) from public.notes'the briven dsn is short-lived (15 minutes per issuance) — issue a fresh one if your dump runs longer.
functions port — drizzle/prisma handlers → briven functions
your existing API handlers (express, fastify, hono, next.js api routes) become files under briven/functions/. one file per endpoint:
// before: express + drizzle
app.get('/api/notes', async (req, res) => {
const rows = await db.select().from(notes).where(eq(notes.authorId, req.user.id));
res.json({ notes: rows });
});
// after: briven/functions/getNotes.ts
import { query, type Ctx } from '@briven/cli/server';
export default query(async (ctx: Ctx) => {
if (!ctx.auth) throw new Error('unauthorized');
return await ctx.db('notes').select().where({ authorId: ctx.auth.userId });
});

the wrapping framework goes away — briven owns the http surface. invoke from the client via briven invoke getNotes, or via the SDK's reactive useQuery.
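on the client, a sketch of the reactive side; the import path, hook signature, and return shape ({ data, isLoading }) are assumptions here, only the useQuery name comes from the SDK description above:
// sketch: import path, generic, and return shape are assumptions
import { useQuery } from '@briven/cli/react';

type Note = { id: string; body: string };

export function NotesList() {
  const { data, isLoading } = useQuery<Note[]>('getNotes');
  if (isLoading) return <p>loading…</p>;
  return (
    <ul>
      {data?.map((n) => (
        <li key={n.id}>{n.body}</li>
      ))}
    </ul>
  );
}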
auth port
if you were rolling your own auth on top of postgres (sessions table + cookie + bcrypt), Better Auth gives you the same primitives without the maintenance burden. magic-link + email/password + GitHub OAuth ship out of the box; bring your users.id column over and Better Auth keeps using it.
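a minimal Better Auth server config as a sketch; the node-postgres Pool for the database option and the env var names are assumptions, and magic-link delivery is left to your mailer:
import { betterAuth } from 'better-auth';
import { magicLink } from 'better-auth/plugins';
import { Pool } from 'pg';

export const auth = betterAuth({
  // point better auth at the same postgres database (see its docs for generating the auth tables)
  database: new Pool({ connectionString: process.env.DATABASE_URL }),
  emailAndPassword: { enabled: true },
  socialProviders: {
    github: {
      clientId: process.env.GITHUB_CLIENT_ID!,
      clientSecret: process.env.GITHUB_CLIENT_SECRET!,
    },
  },
  plugins: [
    magicLink({
      // wire this to your mailer of choice
      sendMagicLink: async ({ email, url }) => {
        console.log(`magic link for ${email}: ${url}`);
      },
    }),
  ],
});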