← migration

firebase → briven

the hardest path. firebase is a document database, firebase auth is its own world, firebase storage is GCS. briven is postgres + Better Auth + S3-compatible. plan a 2+ week parallel-run window.

read this first: firebase migrations expose document-shape mismatches that don't show up in unit tests. if you have a firestore field that's sometimes a string and sometimes a number, it won't fit a single typed column on briven — it ends up split across two. catch this in step 1 of the playbook (inventory) by sampling 1k rows per collection and noting every field's observed types.
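
a minimal sketch of that inventory pass — `observedTypes` and `mixedFields` are hypothetical helper names, not part of briven. feed in ~1k sampled documents per collection and it reports every type each field was seen with:

```typescript
// Walk a sample of documents and record every type each field was observed
// with. a field that shows up under more than one type needs a decision
// before you write the schema, not after.
type TypeReport = Record<string, Set<string>>;

function observedTypes(docs: Record<string, unknown>[]): TypeReport {
  const report: TypeReport = {};
  for (const doc of docs) {
    for (const [field, value] of Object.entries(doc)) {
      const t =
        value === null ? 'null'
        : Array.isArray(value) ? 'array'
        : typeof value;
      (report[field] ??= new Set()).add(t);
    }
  }
  return report;
}

// fields with more than one observed type are the migration hazards.
function mixedFields(report: TypeReport): string[] {
  return Object.keys(report).filter((f) => report[f].size > 1);
}
```

pull the sample with the admin SDK — `admin.firestore().collection(name).limit(1000).get()` — and pass `snap.docs.map((d) => d.data())` in.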

document → relational remap

three patterns cover most firestore collections:

  • flat collection — every document has the same shape. trivial: one briven table, one column per field. denormalised map fields can become either jsonb() (when you don't query inside) or sibling columns (when you do).
  • subcollection — users/<uid>/notes/<noteId> becomes a child table with a foreign key to the parent. userId text().references('users', 'id'). queries change shape (from collection(db, 'users', uid, 'notes') to ctx.db('notes').where({ userId })) but the security model is clearer.
  • polymorphic union — a single collection where type drives which fields are populated. either: (a) one table with all union-shaped columns nullable, or (b) a base table + per-type child tables joined by id. (a) is faster to migrate; (b) is tighter to maintain. pick (a) for year-one and revisit.
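
a sketch of pattern (a) for a hypothetical events collection where type is 'click' or 'purchase' — the collection name and fields are made up for illustration. every type-specific field becomes a nullable column, and one mapper flattens any document into the single row shape:

```typescript
// hypothetical polymorphic collection: `type` decides which fields exist.
type FirestoreEvent =
  | { type: 'click'; targetId: string; createdAt: number }
  | { type: 'purchase'; sku: string; amountCents: number; createdAt: number };

// pattern (a): one table, every type-specific column nullable.
interface EventRow {
  type: string;
  targetId: string | null;
  sku: string | null;
  amountCents: number | null;
  createdAt: number;
}

function toRow(e: FirestoreEvent): EventRow {
  return {
    type: e.type,
    targetId: e.type === 'click' ? e.targetId : null,
    sku: e.type === 'purchase' ? e.sku : null,
    amountCents: e.type === 'purchase' ? e.amountCents : null,
    createdAt: e.createdAt,
  };
}
```

the cost of (a) is that the database can't enforce "purchase rows have a sku" — that check lives in the mapper. that's the trade you're accepting for year-one.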

schema sketch

firestore users/<uid> + users/<uid>/notes/<noteId>:

// firestore (informal)
users/<uid> = {
  email: string,
  displayName?: string,
  preferences: { theme: 'light' | 'dark', density: 'compact' | 'comfy' },
  createdAt: Timestamp,
}

users/<uid>/notes/<noteId> = {
  body: string,
  archived?: boolean,
  authorId: ref('users/<uid>'),  // implicit in firestore
  createdAt: Timestamp,
}

// briven
import { bigint, boolean, jsonb, schema, table, text } from '@briven/cli/schema';

interface Preferences {
  theme: 'light' | 'dark';
  density: 'compact' | 'comfy';
}

export default schema({
  users: table({
    columns: {
      id: text().primaryKey(),
      email: text().notNull(),
      displayName: text(),
      preferences: jsonb<Preferences>().notNull().default("'{}'"),
      createdAt: bigint().notNull(),
    },
    indexes: [{ columns: ['email'], unique: true }],
  }),
  notes: table({
    columns: {
      id: text().primaryKey(),
      userId: text().notNull().references('users', 'id'),
      body: text().notNull(),
      archived: boolean().notNull().default('false'),
      createdAt: bigint().notNull(),
    },
    indexes: [{ columns: ['userId', 'createdAt'] }],
  }),
});

data export — firestore → briven

firebase's admin SDK can stream a collection as ndjson. write a one-shot node script that walks every collection and pushes into briven over a dsn obtained from the cli's briven db shell-token:

// migrate.ts
import admin from 'firebase-admin';
import postgres from 'postgres';
import { execSync } from 'node:child_process';

admin.initializeApp({ credential: admin.credential.applicationDefault() });
const dsn = execSync('briven db shell-token').toString().trim();
const sql = postgres(dsn);

// note: .get() loads the whole collection into memory — for large
// collections, page with .orderBy('__name__').limit(n).startAfter(last).
const usersSnap = await admin.firestore().collection('users').get();
for (const doc of usersSnap.docs) {
  const data = doc.data();
  // assumes createdAt is always a firestore Timestamp — verify during inventory.
  await sql`
    INSERT INTO users (id, email, display_name, preferences, created_at)
    VALUES (${doc.id}, ${data.email}, ${data.displayName ?? null},
            ${JSON.stringify(data.preferences ?? {})},
            ${data.createdAt.toMillis()})
    ON CONFLICT (id) DO UPDATE SET
      email = EXCLUDED.email,
      display_name = EXCLUDED.display_name,
      preferences = EXCLUDED.preferences
  `;
  // Stream subcollections.
  const notesSnap = await doc.ref.collection('notes').get();
  for (const note of notesSnap.docs) {
    const n = note.data();
    await sql`
      INSERT INTO notes (id, user_id, body, archived, created_at)
      VALUES (${note.id}, ${doc.id}, ${n.body}, ${n.archived ?? false},
              ${n.createdAt.toMillis()})
      ON CONFLICT (id) DO UPDATE SET body = EXCLUDED.body, archived = EXCLUDED.archived
    `;
  }
}
await sql.end();
console.log('done');

run it twice during the parallel-run window — first to seed, then again right before cutover to pick up writes that landed on firestore in the meantime. ON CONFLICT DO UPDATE makes both runs idempotent.

auth port

firebase auth → Better Auth. preserve users.id by passing the firebase uid as the briven user id during the export above. the cutover is a forced sign-in: users keep their email, get a fresh session.

if you used firebase's phone auth, briven doesn't have a first-class phone provider yet. plan to migrate phone-only users to email-or-magic-link before the cutover.
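
a sketch for finding those users ahead of the cutover. the interfaces mirror the relevant slice of firebase-admin's UserRecord shape (providerData lists each linked provider); against the real SDK you'd page through admin.auth().listUsers() and apply the predicate:

```typescript
// matches the relevant slice of firebase-admin's UserRecord shape.
interface ProviderInfo { providerId: string } // e.g. 'phone', 'password', 'google.com'
interface UserLike { uid: string; email?: string; providerData: ProviderInfo[] }

// phone-only: at least one linked provider, and every one is the phone provider.
function isPhoneOnly(user: UserLike): boolean {
  return (
    user.providerData.length > 0 &&
    user.providerData.every((p) => p.providerId === 'phone')
  );
}
```

collect the uids this returns and run your email-or-magic-link campaign against them before you schedule the cutover date.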

storage port

firebase storage is GCS. briven.tech uses MinIO; self-host is whatever S3-compatible bucket you point it at. gsutil rsync from your firebase storage bucket into a fresh briven bucket; the path layout is a free choice — keep your existing prefix structure and update your function code to read from ${projectId}/oldPrefix/....
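
a command sketch, assuming the default <project-id>.appspot.com source bucket and an S3-compatible target with credentials set up in gsutil's boto config — both bucket names here are placeholders:

```shell
# dry-run first (-n) to see what would copy, then sync for real.
# -m parallelises, -r recurses into the prefix tree.
gsutil -m rsync -r -n gs://<project-id>.appspot.com s3://briven-bucket
gsutil -m rsync -r gs://<project-id>.appspot.com s3://briven-bucket
```

like the data export, rsync is safe to run twice: once to seed, once right before cutover to pick up newly uploaded objects.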

reactivity port

firestore's onSnapshot → briven's useQuery("getThing", args). shapes are similar; the differences:

  • firestore subscribes to a query path; briven subscribes to a function. write the function once, every client uses it.
  • firestore returns a QuerySnapshot with per-document change events; briven returns the function's full return value on every NOTIFY. for high-fanout collections, this means more bytes over the wire — diff client-side if it matters, or paginate the function.
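
a sketch of that client-side diff, assuming the query function returns an array of rows with stable id fields — diffById is a hypothetical helper, not part of briven:

```typescript
interface Row { id: string }

interface Diff<T extends Row> { added: T[]; removed: T[]; changed: T[] }

// compare the previous and next full query results by id, so only real
// changes hit your render path even though every NOTIFY resends everything.
function diffById<T extends Row>(prev: T[], next: T[]): Diff<T> {
  const prevById = new Map(prev.map((r) => [r.id, r]));
  const nextById = new Map(next.map((r) => [r.id, r]));
  const added = next.filter((r) => !prevById.has(r.id));
  const removed = prev.filter((r) => !nextById.has(r.id));
  const changed = next.filter((r) => {
    const old = prevById.get(r.id);
    return old !== undefined && JSON.stringify(old) !== JSON.stringify(r);
  });
  return { added, removed, changed };
}
```

JSON.stringify equality is crude (key order matters) but fine when rows come back from the same function in the same shape; swap in a structural comparison if yours don't.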