
hasura → briven

hasura is "graphql in front of your postgres". briven is "reactive functions in front of your postgres". the postgres half ports cleanly; the graphql + permissions half becomes function code. plan a 3-5 day window for a mid-size project (50-150 tables, a dozen actions, a handful of remote schemas).

read this first: the trap with hasura migrations is that the schema port looks trivial (it's the same postgres!) and then the permissions port doubles your timeline. permissions are not optional — every public graphql query you exposed had a permission rule, and skipping any of them in the port turns a private query into an open endpoint. step 4 of the playbook is where 90% of the work lives.

what carries over for free

  • tables, columns, indexes, foreign keys, constraints — pg_dump --schema-only from the hasura postgres and load it into a scratch postgres; that file is 95% of briven/schema.ts. translate with the cli's briven import --schema-sql ./schema.sql generator (best-effort; eyeball before committing).
  • postgres functions + triggers + extensions — pg_dump includes them; briven's data plane runs the same pg17+pgvector image. enable extensions explicitly via the schema dsl so the migration runner knows to load them on a fresh project.
  • row data — pg_dump --data-only, then pg_restore through briven db shell --token. row-count both sides; numbers must match.
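the row-count check is worth automating. a minimal sketch, assuming you've dumped a select count(*) per table from each database into a plain table → count map — the countDiff helper and its input shape are illustrative, not part of the cli:

```typescript
// compare table → row-count maps dumped from each side; the helper and its
// input shape are illustrative, not part of the cli
function countDiff(
  source: Record<string, number>, // counts from the hasura postgres
  target: Record<string, number>, // counts from the briven postgres
): string[] {
  const problems: string[] = [];
  for (const [table, n] of Object.entries(source)) {
    const m = target[table];
    if (m === undefined) problems.push(`${table}: missing in target`);
    else if (m !== n) problems.push(`${table}: ${n} -> ${m}`);
  }
  return problems; // empty means every source table arrived intact
}
```

run it as part of the cutover checklist; an empty result is your go signal, anything else names the table to re-copy.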

what becomes function code

everything graphql-shaped becomes a briven query/mutation. hasura's metadata maps as follows:

  • tracked tables / autogenerated queries — write one query() per "table-as-graphql-root" surface your clients actually use. don't port every autogenerated field — most of them aren't called. grep your frontend for the queries you actually send, port only those.
  • relationships — hasura denormalises related rows inline (users(... { posts { ... } })). in briven this is one or two joins inside the function, or a sibling useQuery on the client. for nested shapes, prefer the join in the function — fewer roundtrips, easier auth.
  • actions (HTTP webhooks) — direct port to mutation() functions. the action's url + handler becomes the body of the function.
  • event triggers — port to either (a) a briven function called from an outbox table that postgres triggers write into, or (b) a postgres trigger that calls pg_notify and a briven listener function picks up. (a) is simpler; (b) is lower latency.
  • scheduled triggers (cron) — briven has a cron primitive coming in v1; until then, ship a github actions / external scheduler that hits a briven function on a schedule.
  • remote schemas — these were "merge another graphql api into ours." in briven, write a function that fetches from the remote api in its handler and returns the data. you lose graphql stitching; you gain typed request/response shapes.
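the outbox route in (a) reduces to "read unprocessed rows, dispatch by event name, return the ids to mark done." a sketch of that dispatch loop — briven has no built-in outbox primitive, so the table shape and names here are hypothetical and yours to define:

```typescript
// hypothetical shapes — briven has no built-in outbox primitive; the outbox
// table and these names are yours to define
type OutboxRow = {
  id: number;
  event: string;            // e.g. "user.created", written by a postgres trigger
  payload: unknown;         // typically row_to_json() of the changed row
  processed_at: string | null;
};
type Handler = (payload: unknown) => void | Promise<void>;

// dispatch unprocessed rows to per-event handlers; returns the ids the caller
// should mark processed (ideally in the same transaction that fetched them)
async function drainOutbox(
  rows: OutboxRow[],
  handlers: Record<string, Handler>,
): Promise<number[]> {
  const done: number[] = [];
  for (const row of rows) {
    if (row.processed_at !== null) continue; // already handled
    const handler = handlers[row.event];
    if (!handler) continue;                  // unknown event: leave it queued
    await handler(row.payload);
    done.push(row.id);
  }
  return done;
}
```

call this from a briven function on a short interval (or from the pg_notify listener in option (b)); leaving unknown events queued means a later deploy that adds the handler picks them up.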

permissions port

hasura permissions live in metadata: per-table, per-role, with select/insert/update/delete columns + a row filter expression. every one of them needs an equivalent in your function code. there is no shortcut.

```typescript
// hasura: a select permission on "notes" for role "user"
// filter: { user_id: { _eq: "X-Hasura-User-Id" } }
// columns: [id, body, created_at]

// briven equivalent — inside briven/functions/notes.ts
import { query } from '@briven/cli/server';

export const list = query({
  args: { /* none — the user is identified by the session */ },
  handler: async (ctx) => {
    const userId = ctx.user?.id;
    if (!userId) throw new Error('unauthorized');
    const rows = await ctx.db('notes')
      .select(['id', 'body', 'created_at'])  // mirror the column allowlist
      .where({ user_id: userId });           // mirror the filter
    return rows;
  },
});
```

the pattern is mechanical but tedious. inventory every (role, table, action) triple from your hasura metadata before writing any function code — that list is the work.
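one way to build that inventory, assuming a v2-style hasura metadata export (a tables array with per-action *_permissions entries; each entry carries more fields than role, but role is all the inventory needs). the helper name is ours:

```typescript
// walk a v2-style hasura metadata export and emit every (role, table, action)
// triple; the metadata shape mirrors hasura's export, the helper is illustrative
type PermEntry = { role: string };
type TableMeta = {
  table: { schema: string; name: string };
  select_permissions?: PermEntry[];
  insert_permissions?: PermEntry[];
  update_permissions?: PermEntry[];
  delete_permissions?: PermEntry[];
};

function permissionTriples(tables: TableMeta[]): [string, string, string][] {
  const triples: [string, string, string][] = [];
  for (const t of tables) {
    const name = `${t.table.schema}.${t.table.name}`;
    const buckets: [string, PermEntry[] | undefined][] = [
      ['select', t.select_permissions],
      ['insert', t.insert_permissions],
      ['update', t.update_permissions],
      ['delete', t.delete_permissions],
    ];
    for (const [action, perms] of buckets) {
      for (const p of perms ?? []) triples.push([p.role, name, action]);
    }
  }
  return triples;
}
```

print the triples, check each one off as you port it — the length of that list is your honest estimate for step 4.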

auth port

hasura's auth is "decode a jwt, extract X-Hasura-User-Id, apply permissions." briven's auth is Better Auth — sessions over cookies, token-based for headless clients. two cutover options:

  • preserve user ids — easiest. your hasura users table has stable ids; export them as-is into briven's users table via the data copy. users sign in fresh (forced re-auth) but keep their data and links.
  • preserve jwts — if you can't force a re-auth (consumer app with many active sessions), keep your existing auth issuer and validate its jwts in a briven middleware. file a support ticket — this path is supported but not self-service yet.

subscriptions port

hasura's subscription → briven's reactive useQuery("getThing", args). the shapes look identical from the client's side; the wire protocol is briven's. one difference worth calling out:

  • hasura sends incremental updates over a long-lived websocket; briven re-runs the function on every relevant LISTEN/NOTIFY and pushes the full new result. for very large result sets this is more bytes per update — paginate the function or split into smaller subscriptions if it matters.
  • hasura aggregations (_aggregate) over a live query are heavy; the same logic in a briven function lets you pre-compute or cache. mostly an upgrade.
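if full-result pushes get heavy, clamp paging inside the function so a re-run can never return an unbounded set. a minimal sketch of the clamping helper — the name and the 100-row cap are illustrative:

```typescript
// clamp client-supplied paging so a reactive re-run never pushes an unbounded
// result set; the 100-row cap is an illustrative default, not a briven limit
function pageArgs(limit?: number, offset?: number, max = 100) {
  const l = Math.min(Math.max(limit ?? max, 1), max);
  return { limit: l, offset: Math.max(offset ?? 0, 0) };
}
```

inside a query() handler you'd feed the clamped values into whatever builder your functions use (e.g. limit/offset on the select), so a client can never widen its own subscription past the cap.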

cutover checklist

  • schema + indexes + extensions match (pg_dump diff)
  • every (role, table, action) permission has a function equivalent
  • every active client query in your frontend has a briven function
  • actions ported + tested with real upstream
  • event triggers wired
  • auth strategy decided (preserve ids vs preserve jwts)
  • 48-hour parallel-run window planned + observed
  • hasura console set to read-only after cutover for 7 days