Bad Brains
Anytime someone says they're building a second brain, I can tell exactly where they are with AI.
Shallow enough that you haven't seriously considered it. Tits-deep into the concept that structure will fix you. Or swimming in the realization that you just built a shittier version of the thing that helped you build it. The middle stage is where people build, and it's a trap.
I know because I built one and then slowly learned that I never needed it.
me but with less feelings
ShainAI (cringe) was a RAG system: Next.js 14, pgvector for the embeddings, GPT-4 Turbo on top. Over a thousand memories indexed. Auto-ingestion from ChatGPT (343 conversations, 5,037 chunks), Joplin (400 notes, 1,315 chunks), 900 local files. Real-time file monitoring. Git post-commit hooks. Tesseract OCR. A meta-memory loop that saved its own conversations so it could learn from how I used it.
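For the curious, the ingestion side of a thing like this is mostly chunking: slice every document into overlapping windows so a sentence cut at a boundary still shows up whole in a neighbor. A minimal sketch (the window and overlap sizes here are made up for illustration, not what ShainAI used):

```python
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size character windows with overlap,
    so content cut at a boundary still appears whole in a neighbor."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk then gets embedded and stored; 5,037 chunks from 343 conversations is just this function run over a big export.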
It worked, and I used it for a couple of months. Until I realized what I'd actually built: a worse version of Claude Code, smaller in scale and narrower in scope. And then I deleted it.
structure was the pain point, not the feature
The vault model assumes retrieval is the hard part. Tagging, linking, Zettelkasten, all of it was compensating for the fact that you couldn't just ask. Now you can. And the AI doesn't need your pre-organized ontology to answer. It reads what's relevant when the question lands.
So the structure flips. It stops being infrastructure and becomes maintenance tax. You're organizing for a retrieval layer that doesn't need organization.
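To make "it reads what's relevant when the question lands" concrete: retrieval is just ranking untagged text against the question at ask time. Here's the idea with a bag-of-words counter standing in for a real embedding model (the real thing uses learned vectors, but the shape of the operation is the same):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ask(question: str, notes: list[str], k: int = 3) -> list[str]:
    """Rank untagged notes by similarity to the question.
    No folders, no tags, no links: relevance is computed on demand."""
    q = vectorize(question)
    return sorted(notes, key=lambda n: cosine(q, vectorize(n)), reverse=True)[:k]
```

Notice what's absent: there's no ontology anywhere in that function. The notes were never organized, and it doesn't matter.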
Meanwhile, Claude Code already sits next to the work. Session logs in every repo. CLAUDE.md files with project instructions. A memory system that remembers preferences across conversations. Per-project context that emerges from usage instead of from a tagging decision I made six months ago and forgot about.
Claude Code is your second brain. Most people haven't figured out how to make the most of that partnership yet. It takes real usage before you feel out everything the tool can do, and a lot of it you'll wander into on your own, out of a momentary need or plain curiosity.
The second brain didn't die. It got absorbed. It moved from a separate app into the tool you already work in.
private parts
This is where the objection usually lands. If Claude Code sits in the middle, a vendor sees everything. A local vault doesn't leak.
Smaller than it sounds. Claude Code retains inputs for up to 30 days for abuse monitoring and doesn't train on them by default. Meanwhile, your vault syncs to iCloud or Dropbox, your email client probably leaks more sensitive data than your journal would, and your laptop is only as secure as its disk encryption.
The real exceptions are compliance-driven: regulated data that categorically can't touch a third-party model, no matter the retention policy.
And in those cases, you don't need a second brain. You need a local model. Ollama or LM Studio plus your own files gets you the AI workflow with no third party in the loop.
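A sketch of what that looks like against Ollama's local HTTP endpoint (default port 11434; the model name is whatever you've pulled, `llama3` here is just an example):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, context: str, model: str = "llama3") -> dict:
    """Stuff the relevant file contents straight into the prompt.
    Nothing in this payload ever leaves the machine."""
    return {
        "model": model,
        "prompt": f"Using these notes:\n{context}\n\nAnswer: {prompt}",
        "stream": False,  # one JSON response instead of a token stream
    }

def ask_local(prompt: str, context: str) -> str:
    payload = json.dumps(build_request(prompt, context)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swap the prompt-stuffing for real retrieval over your own files and that's the whole loop, no third party anywhere.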
Either your threat model doesn't need what you think it does, or it needs something outside this argument entirely.
so
If you're building one right now, ask which thing you're doing: solving a retrieval problem, or enjoying the build. I think either is good, actually, and I'm not here to discourage anyone. Heck, I couldn't have written this if I hadn't done it myself. My recommendation is to keep going; I learned a ton about RAG by building mine. Experience points are what count in the end, bb.