That makes sense — treating it as a personal search engine is a real, high-ROI use case. Full-text search covers the “I remember the idea but not where I saw it” problem really well.
Out of curiosity, what’s the bigger win for you: full-text search itself, or the tagging/metadata layer that helps narrow results when your memory is fuzzy? And do you mostly search by keywords, or by “context” (project/topic you’re working on)?
I’m validating a similar retrieval-first angle (summarized in my HN profile/bio if you want to compare notes).
Given that your comment is AI-generated, I don't know whether you're actually interested or just want to plug your product, but I'll assume good faith and answer the question.
I don't manually tag any entries - the automatic AI tags just add extra keywords I can search for that are not included in the original article text. So I mostly search by keywords, yes. Not sure what the difference is between "keywords" and "topic you're working on".
See also https://mymind.com, which takes AI tagging even further. It's potentially similar to what you're building (although, again, your landing page contains a lot of AI-generated metaphors and nothing that explains what your product actually does).
As mentioned in my current Ask HN post, the product isn't finalized or launched yet. The envisioned product, Concerns, acts as a bridge: it links your current concerns and target tasks to your knowledge base/resource repository and your action list (to-do lists, calendars, etc.), forming a closed-loop system. Using the target/active projects within Concerns as triggers, it retrieves relevant information from your resource repository and proactively pushes solutions, plans, and suggestions for you to filter; selected items then enter your action list. The goal is to improve efficiency and effectiveness in a lightweight way, without changing your existing habits around resource repositories or action lists.
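To make the loop concrete, here is a rough sketch of that trigger → retrieve → filter → action-list flow. All names, data shapes, and the keyword-counting relevance function are hypothetical illustrations of my mental model, not actual product code:

```python
# Hypothetical sketch of the Concerns closed loop:
# active concern -> retrieve from resource repository -> user filters -> action list.

def keyword_match(concern_keywords, resource_text):
    """Naive relevance score: how many concern keywords appear in the resource."""
    text = resource_text.lower()
    return sum(kw.lower() in text for kw in concern_keywords)

def retrieve(concern, repository, top_k=3):
    """Rank repository resources by relevance to an active concern."""
    scored = [(keyword_match(concern["keywords"], r["text"]), r) for r in repository]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for score, r in scored[:top_k] if score > 0]

def close_the_loop(concern, repository, user_accepts):
    """Push suggestions to the user; accepted items land on the action list."""
    suggestions = retrieve(concern, repository)
    return [s for s in suggestions if user_accepts(s)]

repository = [
    {"title": "Note on spaced repetition", "text": "Spaced repetition improves recall."},
    {"title": "GTD summary", "text": "Getting Things Done: capture, clarify, organize."},
    {"title": "Unrelated recipe", "text": "How to bake sourdough bread."},
]
concern = {"name": "improve study workflow", "keywords": ["repetition", "organize"]}

# In the real product the user would filter interactively; here we accept everything.
action_list = close_the_loop(concern, repository, user_accepts=lambda s: True)
print([item["title"] for item in action_list])
```

The point of the sketch is the shape of the loop, not the scoring: a real implementation would presumably use better retrieval than keyword counting.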
This idea stems from my own pain points, and I genuinely hope that while solving my own issues, it might also address broader needs.
Regarding your response: it's interesting that the AI tagging helps mainly by adding extra searchable keywords. I'd prefer broader content and semantic search/matching that doesn't rely solely on tags, though tagging remains a viable implementation approach.
Thanks for the mymind reference—I'll explore it.
PS. Did my comment come across as AI-generated because I used translation software?
> PS. Did my comment come across as AI-generated because I used translation software?
Are you using an LLM-based translation tool? I perceived your comment as AI mostly based on the first paragraph:
> That makes sense — treating it as a personal search engine is a real, high-ROI use case. Full-text search covers the “I remember the idea but not where I saw it” problem really well.
This is very much an LLM-style "That's a great idea!" type response. I usually don't even notice when something is LLM generated, but this part really stood out even to me.
It seems like most software integrates LLMs now... Can't escape it, sigh.
The mymind you recommended has made real strides against "information overload" and "organization fatigue." However, it remains fundamentally a storage solution: it reduces the effort of organizing and makes retrieval easier, but it doesn't directly address what I'm aiming at.
It also reminds me of another product, youmind (https://youmind.com/), though it's primarily geared toward creation rather than PKM. Perhaps I could pay to try its advanced AI features.