ai schema generator
describe your app in plain english, get a draft schema.ts back. powered by a self-hosted Qwen 2.5-coder 32B running on briven infrastructure — your prompt never leaves briven's network.
how to use it
- open the ai schema tab on any project at /dashboard/projects/<p_id>/ai-schema
- type a description of your data model in the textarea (up to 4000 chars)
- click generate schema. expect 5-15 seconds for a typical response — Qwen runs on a single DGX so larger prompts take longer
- click copy on the result, paste into your project's briven/schema.ts, and review every column before committing
- run briven deploy as usual — the AI output goes through the same schema-diff + migration path as a hand-written change
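to make the paste-into-schema.ts step concrete, here's a sketch of what a generated file can look like. the helper and modifier names are the ones the system prompt pins (see "what the AI knows about briven" below); the stub class at the top exists only so this sketch runs standalone — it is not briven's real runtime, and the users/posts tables are an invented example, not guaranteed output.

```typescript
// illustrative only: minimal stand-ins for briven's column helpers so this
// sketch runs on its own. the real helpers live in briven's runtime.
class Col {
  flags: Record<string, unknown> = {};
  constructor(public kind: string) {}
  primaryKey() { this.flags.primaryKey = true; return this; }
  notNull() { this.flags.notNull = true; return this; }
  default(v: unknown) { this.flags.default = v; return this; }
  references(table: string, column: string) {
    this.flags.ref = { table, column };
    return this;
  }
}
const text = () => new Col("text");
const bigint = () => new Col("bigint");
const timestamp = () => new Col("timestamp");

// a prompt like "users have many posts; most queries are by user_id and
// created_at descending" might come back as something along these lines:
export const users = {
  id: text().primaryKey(),          // ULID stored as text, per the system prompt
  email: text().notNull(),
  created_at: timestamp().notNull(),
};

export const posts = {
  id: text().primaryKey(),
  user_id: text().notNull().references("users", "id"),
  title: text().notNull(),
  comment_count: bigint().notNull().default(0), // counters are the bigint() case
  created_at: timestamp().notNull(),
};
```

whatever comes back, read it column by column before committing — briven deploy will diff it exactly like a hand-written change.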
what makes a good prompt
- be specific about relationships: "users have many posts, posts have many comments, comments can reply to other comments" maps cleanly onto tables and foreign keys. "a blog" doesn't.
- name your domain entities: "projects, tasks, time-tracking entries" beats "a productivity app"
- call out denormalised fields: "each post stores its current comment count alongside the comment rows" saves a follow-up query
- mention indexes you know you'll need: "most queries are by user_id and created_at descending" signals the right index
what the AI knows about briven
the model is primed with a system prompt that:
- pins the available column helpers: text(), bigint(), boolean(), timestamp(), jsonb<T>(), uuid()
- pins the modifiers: .primaryKey(), .notNull(), .default(...), .nullable(), .references(table, column), .unique()
- insists on a primary-key column per table; prefers text() for ULIDs; uses bigint() only for counters
- adds indexes only where a non-trivial query would scan — no over-indexing
- returns only the code (no markdown fences, no explanation) so it pastes cleanly into your editor
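the one helper with a type parameter is jsonb<T>(). how briven threads that generic through the chain isn't documented here, so the stub below is an assumption about the shape, included to show why a typed payload is worth declaring: .default(...) gets checked at compile time.

```typescript
// illustrative stub: jsonb<T>() carrying its payload type through the chain.
// the Col class here is invented for the sketch, not briven's implementation.
class Col<T = unknown> {
  flags: Record<string, unknown> = {};
  constructor(public kind: string) {}
  notNull(): Col<T> { this.flags.notNull = true; return this; }
  default(v: T): Col<T> { this.flags.default = v; return this; }
}
function jsonb<T>(): Col<T> { return new Col<T>("jsonb"); }

// declaring the payload type makes the default value type-checked:
type PostMeta = { tags: string[]; pinned: boolean };
const meta = jsonb<PostMeta>().notNull().default({ tags: [], pinned: false });
// jsonb<PostMeta>().default({ pinned: "yes" }) would be a compile error
```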
what it can't do
- write your functions — schema only. function bodies are too app-specific to template
- guess your auth model — every table that needs per-user scoping still needs an explicit user_id column and the function-level guard
- refactor an existing schema — the prompt assumes you're generating from scratch. for incremental changes, edit the schema by hand and let briven deploy compute the diff
- understand your data privacy constraints — review the output for anything you wouldn't store; the AI doesn't know your jurisdiction
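the "function-level guard" mentioned above is yours to write. a minimal sketch of what per-user scoping looks like in practice — the Ctx shape and the requireOwner name are invented for illustration, not anything briven generates:

```typescript
// hypothetical: ctx/requireOwner are invented names for this sketch. the point
// is that per-user scoping is an explicit user_id column plus an explicit
// check in your function body -- the AI does neither for you.
type Ctx = { userId: string };
type Owned = { user_id: string };

function requireOwner<T extends Owned>(ctx: Ctx, row: T): T {
  if (row.user_id !== ctx.userId) {
    throw new Error("forbidden: row belongs to another user");
  }
  return row;
}

const ctx: Ctx = { userId: "01HZEXAMPLEULID000000000" };
const mine = requireOwner(ctx, { user_id: "01HZEXAMPLEULID000000000", title: "ok" });
```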
privacy
your prompt and the generated schema are not logged. only the prompt length, response length, model name, and elapsed milliseconds are recorded for operational monitoring. the request never leaves briven's infrastructure — the Ollama instance runs on a dedicated DGX VPS in EU-Central.
we do not train or fine-tune the model on your prompts. there is no "telemetry to improve the service" pipeline.
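the operational record is small enough to show in full. the field names below are illustrative, not briven's actual log schema — the point is what's present (four numbers-and-a-name) and what's absent (your text):

```typescript
// illustrative: the four fields the docs say are recorded, and nothing else.
// field names are invented; briven's real log schema may differ.
type AiSchemaLogEntry = {
  promptChars: number;
  responseChars: number;
  model: string;
  elapsedMs: number;
};

const entry: AiSchemaLogEntry = {
  promptChars: 412,
  responseChars: 1890,
  model: "qwen2.5-coder:32b",
  elapsedMs: 8400,
};
// note what's absent: no prompt text, no generated schema, no user identifier.
```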
when it's offline
if the dashboard shows "AI assistant offline", the operator has not configured the Ollama endpoint (BRIVEN_OLLAMA_URL is unset on the api container). this is the default state on self-hosted briven — the feature is opt-in. self-hosters who want it should:
- run Ollama on a machine with at least 24GB GPU VRAM (qwen2.5-coder:32b quantized fits in ~22GB)
- pull the model: ollama pull qwen2.5-coder:32b
- set BRIVEN_OLLAMA_URL=http://your-ollama-host:11434 on the api container
- restart the api
more AI features (function bodies, query suggestions, performance review) are on the roadmap for year two.