May 12, 2026
Grimoire toward v1.0: where the project stands
Current status of the work, the release gate, and what still stands before a public 1.0.
Grimoire is a local-first desktop notes app with built-in LLM assistance: you write Markdown, organize a vault on disk, and chat with models through Ollama while RAG pulls context from your own notes (and optional local knowledge sources). The stack is deliberately boring in the best way—Svelte and Vite on the front end, Tauri and Rust on the back end, SQLite for structured data and full-text search, and LanceDB for vectors. Nothing requires the cloud; nothing should leave the machine unless the user clearly chooses it.
This post is a status snapshot: what is already real in the app, how I think about quality before calling it “1.0,” and the concrete work left on the Phase 4 — Release checklist.
What “done” already looks like
On my internal roadmap, Phase 1 (MVP) through Phase 3 (knowledge and file features) are effectively complete from a checkbox perspective. In practice that means a lot more than “chat works.”
Core notes and PKM: folders, tags, wiki-links, backlinks, graph view, templates, properties (“Notion-style” databases), calendar and daily notes, kanban, tabs, quickswitcher, bookmarks, read/edit mode, transclusion-style embeds, version history with diff and restore, and a long tail of editor and shell polish (context menus, formatting shortcuts, focus mode, themes, streaming chat tokens, and more).
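To make the backlinks piece concrete: the graph and backlink features all depend on pulling `[[wiki-link]]` targets out of Markdown. Here is a minimal, illustrative sketch of that extraction step, not Grimoire's actual parser (the real one also has to handle embeds, escaping, and headings-in-links); the function name is mine.

```rust
/// Extract `[[wiki-link]]` targets from Markdown source.
/// Illustrative sketch only: strips an optional `|alias` suffix and
/// skips empty links, but ignores embeds and escaping.
fn extract_wiki_links(markdown: &str) -> Vec<String> {
    let mut links = Vec::new();
    let mut rest = markdown;
    while let Some(start) = rest.find("[[") {
        rest = &rest[start + 2..];
        if let Some(end) = rest.find("]]") {
            // Keep the target, drop any display alias after `|`.
            let name = rest[..end].split('|').next().unwrap_or("").trim();
            if !name.is_empty() {
                links.push(name.to_string());
            }
            rest = &rest[end + 2..];
        } else {
            break; // unclosed link: stop scanning
        }
    }
    links
}

fn main() {
    let links = extract_wiki_links("See [[Daily Notes]] and [[Projects|my projects]].");
    assert_eq!(links, vec!["Daily Notes", "Projects"]);
}
```

Running the extractor over every note and inverting the resulting map is all a basic backlinks index needs.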
Privacy and trust primitives: vault and folder password protection with encryption at rest; locked content excluded from search and RAG until unlocked; an audit log of sensitive actions (viewable, filterable, exportable, with retention controls); checkpointed re-index so large vaults can recover progress; incremental indexing when a locked folder is unlocked so the assistant does not silently “forget” those notes.
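The "locked content excluded from RAG" guarantee boils down to one invariant: chunks from locked notes must be dropped before anything reaches the prompt. A minimal sketch of that filter, with assumed type and field names (not Grimoire's real schema):

```rust
/// A retrieved chunk plus the lock status of its source note.
/// Field names are illustrative assumptions, not the app's schema.
#[derive(Debug, PartialEq)]
struct Chunk {
    note_id: u64,
    text: String,
    locked: bool,
}

/// Drop chunks from locked notes before prompt assembly, so encrypted
/// content can never leak into retrieval context.
fn filter_unlocked(chunks: Vec<Chunk>) -> Vec<Chunk> {
    chunks.into_iter().filter(|c| !c.locked).collect()
}

fn main() {
    let chunks = vec![
        Chunk { note_id: 1, text: "public note".into(), locked: false },
        Chunk { note_id: 2, text: "secret note".into(), locked: true },
    ];
    let visible = filter_unlocked(chunks);
    assert_eq!(visible.len(), 1);
    assert_eq!(visible[0].note_id, 1);
}
```

The important design choice is where the filter sits: at the retrieval boundary, not in the UI, so every caller gets the same guarantee.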
Knowledge beyond the vault: Wikipedia as a local ZIM-backed library (reader, sync, offline handling, disk checks, FTS plus semantic search), and a file scanner for user-selected paths with incremental rescans, more formats, encoding detection, exclude globs, and staleness warnings.
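For a sense of what "exclude globs" means in the scanner, here is a deliberately tiny `*`-only matcher; a real implementation would lean on a proper glob crate, so treat this as a sketch of the idea rather than the shipped code.

```rust
/// Minimal `*`-only glob matcher for scanner exclude patterns.
/// A sketch: no `?`, no `[...]`, no `**` semantics.
fn glob_match(pattern: &str, path: &str) -> bool {
    let parts: Vec<&str> = pattern.split('*').collect();
    let mut pos = 0;
    for (i, part) in parts.iter().enumerate() {
        if part.is_empty() {
            continue;
        }
        match path[pos..].find(part) {
            Some(idx) => {
                // The first literal must anchor at the start unless the
                // pattern begins with `*`.
                if i == 0 && idx != 0 {
                    return false;
                }
                pos += idx + part.len();
            }
            None => return false,
        }
    }
    // The last literal must anchor at the end unless the pattern ends with `*`.
    pattern.ends_with('*') || parts.last().map_or(true, |p| path.ends_with(p))
}

fn main() {
    assert!(glob_match("*.tmp", "notes/cache.tmp"));
    assert!(!glob_match("*.tmp", "notes/cache.txt"));
    assert!(glob_match("node_modules/*", "node_modules/pkg/index.js"));
}
```

The scanner would run every configured exclude pattern against each candidate path and skip matches before they ever hit the index queue.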
LLM depth: RAG quality work (chunking, distance filtering, model eviction around embeddings, re-index commands), kanban/database context in the system prompt, a maintained in-prompt feature guide so the model can explain the app, configurable inference settings, and optional presets like a minimal “caveman” verbosity mode for slow hardware.
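Two of those RAG levers are simple enough to sketch directly: overlapping chunking (so a sentence straddling a boundary still lands whole in at least one chunk) and distance filtering (so weakly related chunks never pad the prompt). The sizes, threshold, and function names below are illustrative assumptions, not Grimoire's tuned values:

```rust
/// Split text into fixed-size chunks with overlap. Sizes are in
/// characters for simplicity; real pipelines usually count tokens.
fn chunk_text(text: &str, size: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < size, "overlap must be smaller than chunk size");
    let chars: Vec<char> = text.chars().collect();
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < chars.len() {
        let end = (start + size).min(chars.len());
        chunks.push(chars[start..end].iter().collect());
        if end == chars.len() {
            break;
        }
        start = end - overlap; // step back so chunks share context
    }
    chunks
}

/// Keep only hits whose vector distance clears a threshold.
fn filter_by_distance(mut hits: Vec<(String, f32)>, max_distance: f32) -> Vec<(String, f32)> {
    hits.retain(|(_, d)| *d <= max_distance);
    hits
}

fn main() {
    assert_eq!(chunk_text("abcdefghij", 4, 1), vec!["abcd", "defg", "ghij"]);
    let hits = vec![("close".to_string(), 0.2_f32), ("far".to_string(), 0.9)];
    assert_eq!(filter_by_distance(hits, 0.5).len(), 1);
}
```

Distance filtering is the quieter of the two wins: returning fewer, better chunks both shortens the prompt and reduces the model's chances of anchoring on irrelevant text.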
If the north star is “old notes become useful again instead of a forgotten archive,” the product path is aligned: search, links, calendar, kanban, RAG, and local Wikipedia all push toward rediscovery without renting someone else’s server.
The v1.0 gate: Phase 4 is not “more features”
Phase 4 is explicitly the v1.0 gate. The roadmap states that nothing in later phases starts until every item there is complete. That reframes the next months: less about surprising users with novelty, more about proving performance, retrieval quality, packaging, governance, and first-run clarity.
Some Phase 4 items are already landed—notably note version history (a prerequisite for any future “agent proposes edits” workflow) and foundational testing (Rust unit coverage in critical areas, Vitest on the front end). A test data generator and website work are also checked off at the roadmap level.
What still stands before calling a build “1.0” includes, among other things:
- Measurable performance and search quality. Cold start, time-to-first-token for RAG, save latency, and per-note embed time need documented targets and a repeatable benchmark on a reference machine profile; separately, a retrieval benchmark (synthetic vault, fixed queries, pass/fail thresholds) is treated as a release blocker alongside raw speed.
- Hardware-adaptive indexing. Defaults should scale with tiered hardware so low-end machines do not thrash or OOM while high-end machines are not artificially capped.
- Shipping mechanics: opt-in update checks (version-only, no telemetry) and optional background updates; cross-platform CI for Windows, macOS, and Linux; dependency license and reduction audits; CLA setup before the repo is wide open; external code review pass.
- Identity and migration. Bundle metadata still needs a final application identifier (today’s dev identifier is the sort of detail users should never see in `%AppData%` paths); any change needs a careful one-time migration that never risks vault data.
- Onboarding. An installation / first-run wizard: skippable tour, vault starter templates, dependency and hardware checks, curated model suggestions, optional Wikipedia bundles—so “local-first” does not mean “expert-only.”
- Documentation and support entry points: published user docs (getting started, shortcuts, privacy explainer, troubleshooting) and an explicit “report a bug” action that opens the website—user-initiated, not silent upload.
- Engineering hygiene: finishing the `vector` module call-site migration to explicit domain paths so every index touch is obvious at the call site.
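The retrieval benchmark in that list is the least familiar item, so here is what "fixed queries, pass/fail thresholds" could look like in skeletal form. Everything here is an assumption for illustration: the `BenchCase` shape, the recall@k metric, and the threshold are stand-ins, and the closure fakes the real FTS-plus-vector pipeline.

```rust
/// One benchmark case: a fixed query and the note ids a good retriever
/// must surface. Shape and names are illustrative, not the real harness.
struct BenchCase {
    query: &'static str,
    expected: Vec<u64>,
}

/// Recall@k over a fixed query set: the fraction of expected notes that
/// appear in each query's top-k results.
fn recall_at_k(cases: &[BenchCase], search: impl Fn(&str) -> Vec<u64>, k: usize) -> f64 {
    let (mut hit, mut total) = (0usize, 0usize);
    for case in cases {
        let results = search(case.query);
        let top_k = &results[..results.len().min(k)];
        total += case.expected.len();
        hit += case.expected.iter().filter(|id| top_k.contains(*id)).count();
    }
    if total == 0 {
        return 1.0;
    }
    hit as f64 / total as f64
}

fn main() {
    let cases = vec![BenchCase { query: "project kickoff notes", expected: vec![1, 2] }];
    // Stand-in for the real retrieval pipeline over a synthetic vault.
    let search = |_q: &str| vec![1, 3, 2];
    let recall = recall_at_k(&cases, search, 3);
    // Release gate: fail the build, not just log, when recall regresses.
    assert!(recall >= 0.9, "retrieval benchmark failed: recall@3 = {recall}");
}
```

Wiring a check like this into CI is what turns "retrieval quality" from a feeling into a release blocker.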
None of that is glamorous on a feature list. It is what makes a local-first app credible when strangers install it once and decide whether to trust it with years of notes.
How I’m working up to the release
My working style matches the gate: chat and RAG regressions are release blockers; accessibility is mandatory; uninstall must never delete note vault directories; nothing leaves the machine unless the user opts into sync (a later phase entirely). On low-end hardware, the product must remain a usable note app with LLM features off by default, with clear explanations and warnings—not a slideshow of loading spinners.
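The hardware-adaptive piece is easiest to see as a tier classifier feeding indexing defaults. The thresholds and default values below are illustrative guesses to show the shape of the idea, not Grimoire's shipped numbers; note that the low tier keeps LLM features off by default, matching the constraint above.

```rust
/// Coarse hardware tiers; thresholds are illustrative assumptions.
#[derive(Debug, PartialEq)]
enum Tier { Low, Mid, High }

/// Indexing defaults chosen per tier (values are placeholders).
struct IndexDefaults {
    embed_batch_size: usize,
    parallel_embeds: usize,
    llm_on_by_default: bool,
}

fn classify(ram_gb: u64, cores: usize) -> Tier {
    if ram_gb >= 32 && cores >= 8 {
        Tier::High
    } else if ram_gb >= 16 && cores >= 4 {
        Tier::Mid
    } else {
        Tier::Low
    }
}

fn defaults_for(tier: &Tier) -> IndexDefaults {
    match tier {
        Tier::High => IndexDefaults { embed_batch_size: 256, parallel_embeds: 4, llm_on_by_default: true },
        Tier::Mid => IndexDefaults { embed_batch_size: 64, parallel_embeds: 2, llm_on_by_default: true },
        // Low tier: stay a responsive notes app first; LLM is opt-in.
        Tier::Low => IndexDefaults { embed_batch_size: 16, parallel_embeds: 1, llm_on_by_default: false },
    }
}

fn main() {
    assert_eq!(classify(8, 4), Tier::Low);
    assert_eq!(classify(64, 16), Tier::High);
    assert!(!defaults_for(&Tier::Low).llm_on_by_default);
}
```

The same tier signal can drive the first-run wizard's warnings, so a user on weak hardware hears "embedding will be slow, here is why" instead of watching spinners.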
Concretely, “working up to release” means:
- Freezing scope for 1.0 around Phase 4, while Phase 5+ remains a labeled fast-follow backlog (i18n extraction, richer Wikipedia UX, file scanner depth, and so on).
- Instrumenting and gating builds with benchmarks and CI, not just manual dogfooding.
- Polishing the first hour—wizard, docs, bug reporting—so the technical stack is not the user’s problem.
- Keeping the dual-database story honest—SQLite and LanceDB stay aligned after bulk operations, embedding stability mitigations stay in place, and locked content never leaks into retrieval.
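Keeping SQLite and LanceDB aligned is ultimately a set-reconciliation problem: which note ids exist in the document store but have no embedding, and which vectors point at notes that no longer exist. A pure-set sketch (the real check would run per chunk, and all names here are mine):

```rust
use std::collections::HashSet;

/// Compare note ids in the document store (SQLite) against ids in the
/// vector store (LanceDB) and report what a repair pass must do.
/// Returns (notes needing embedding, vectors needing deletion), sorted.
fn reconcile(sqlite_ids: &HashSet<u64>, vector_ids: &HashSet<u64>) -> (Vec<u64>, Vec<u64>) {
    let mut missing_embeddings: Vec<u64> =
        sqlite_ids.difference(vector_ids).copied().collect();
    let mut orphaned_vectors: Vec<u64> =
        vector_ids.difference(sqlite_ids).copied().collect();
    missing_embeddings.sort_unstable();
    orphaned_vectors.sort_unstable();
    (missing_embeddings, orphaned_vectors)
}

fn main() {
    let sqlite: HashSet<u64> = [1, 2, 3].into_iter().collect();
    let vectors: HashSet<u64> = [2, 3, 4].into_iter().collect();
    let (missing, orphaned) = reconcile(&sqlite, &vectors);
    assert_eq!(missing, vec![1]);  // note 1 needs (re-)embedding
    assert_eq!(orphaned, vec![4]); // vector 4 points at a deleted note
}
```

Running a pass like this after bulk operations (mass delete, vault import, interrupted re-index) is what keeps the two stores from silently drifting apart.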
Closing
Grimoire today is already a deep application: PKM surface area, encryption, auditability, Wikipedia, file ingestion, and a serious RAG pipeline—all without a mandatory cloud account. v1.0 is less about adding headline features than about numbers, packaging, and narrative: proving speed and retrieval on real hardware, shipping installers people can trust, and documenting what happens on their disk in plain language.
If you are following along, the living source of truth is the project roadmap: Phases 1–3 describe what you can try now; Phase 4 is the checklist that turns “ambitious dev build” into a first public release.