
Indexing everything becomes unbounded fast. Shrink scope to one source of truth and a small curated corpus. Capture notes in one repeatable format, tag by task, and prune on a fixed cadence. That keeps retrieval predictable and the model inside its constraints.
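
In practice it can be very little machinery. A rough sketch of what I mean, in Python, with every path, tag convention, and cutoff invented for illustration:

    # Rough sketch only: ~/notes, the "tags:" convention, and the 90-day
    # cutoff are all invented. One folder, one repeatable note format,
    # and a prune pass you run on a fixed cadence (cron or similar).
    import time
    from pathlib import Path

    NOTES_DIR = Path("~/notes").expanduser()   # the single source of truth
    MAX_AGE_DAYS = 90                          # prune cutoff, pick your own

    def tagged(path, tag):
        # A note belongs to a task if its first lines carry "tags: ...".
        with path.open(encoding="utf-8") as f:
            head = f.readlines()[:5]
        return any(l.lower().startswith("tags:") and tag in l for l in head)

    def retrieve(tag):
        # Retrieval stays predictable: it's just a filter over one folder.
        return [p for p in NOTES_DIR.glob("*.md") if tagged(p, tag)]

    def prune():
        # Drop notes untouched for MAX_AGE_DAYS; run this on a schedule.
        cutoff = time.time() - MAX_AGE_DAYS * 86400
        stale = [p for p in NOTES_DIR.glob("*.md") if p.stat().st_mtime < cutoff]
        for p in stale:
            p.unlink()
        return stale

The point isn't this exact script; it's that one format plus a fixed cadence makes both retrieval and pruning mechanical.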




That’s another strong point, and I think it’s the pragmatic default: shrink scope, keep one source of truth, enforce a repeatable format, and prune on a cadence. It’s basically how you keep both retrieval and any automation predictable.

The tension I’m trying to understand is that in a lot of real setups the “corpus” isn’t voluntarily curated — it’s fragmented across machines/networks/tools, and the opportunity cost of “move everything into one place” is exactly why people fall back to grep and ad-hoc search.

Do you think the right answer is always “accept the constraint and curate harder”, or is there a middle ground where you can keep sources where they are but still get reliable re-entry (even if it’s incomplete/partial)?
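
Concretely, the middle ground I keep sketching is a thin pointer index: sources stay where they are, and the only centralized artifact is a small manifest of where each one lives and how you last got back into it. Rough illustration only, every field and path here is made up:

    # Nothing is copied; the only central artifact is a small JSON manifest
    # of sources plus a hint for re-entering each one. All names invented.
    import json
    import time
    from pathlib import Path

    MANIFEST = Path("~/.reentry/manifest.json").expanduser()

    def _load():
        return json.loads(MANIFEST.read_text()) if MANIFEST.exists() else []

    def register(location, kind, reentry_hint, tags):
        # location: a local path, host:path, wiki URL, etc. The corpus stays put.
        entries = _load()
        entries.append({
            "location": location,
            "kind": kind,              # e.g. "git-repo", "wiki", "nas-share"
            "reentry": reentry_hint,   # the command/URL that got you in last time
            "tags": tags,
            "last_touched": time.time(),
        })
        MANIFEST.parent.mkdir(parents=True, exist_ok=True)
        MANIFEST.write_text(json.dumps(entries, indent=2))

    def reenter(tag):
        # Partial by design: returns pointers, not content, for a task tag.
        return [e for e in _load() if tag in e["tags"]]

It's deliberately incomplete: re-entry hands you pointers back into fragmented sources rather than a full index, which is exactly the trade-off I'm asking about.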

I’m collecting constraints like this as the core design input (more context in my HN profile/bio if you want to compare notes).



