kontxt ranks memory candidates using four signals: semantic similarity, recency, access frequency, and explicit importance.
Scoring model
At a high level:
- Semantic similarity: embedding similarity between the current query and candidate memories.
- Recency decay: exponential decay with roughly a 30-day horizon.
- Access frequency: log-scaled. Memories that are repeatedly useful get a boost.
- Importance score: explicit weight set at write time.
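The four signals above might combine as in the following sketch. The interface, weights, and half-life interpretation of the "30-day horizon" are assumptions for illustration, not kontxt's actual implementation:

```typescript
// Hypothetical shape of a scored memory candidate (names are assumptions).
interface Memory {
  similarity: number;  // embedding similarity vs. current query, in [0, 1]
  ageDays: number;     // days since the memory was written or updated
  accessCount: number; // how often this memory has been retrieved
  importance: number;  // explicit weight set at write time, in [0, 1]
}

// Assumes the "roughly 30 day horizon" is a half-life for the decay.
const RECENCY_HALF_LIFE_DAYS = 30;

function score(m: Memory): number {
  // Exponential recency decay: 1.0 for a fresh memory, 0.5 after 30 days.
  const recency = Math.exp((-Math.LN2 * m.ageDays) / RECENCY_HALF_LIFE_DAYS);
  // Log-scaled frequency, so repeated hits boost without dominating.
  const frequency = Math.log1p(m.accessCount);
  // Illustrative weights only; the real blend is not documented here.
  return 0.5 * m.similarity + 0.2 * recency + 0.15 * frequency + 0.15 * m.importance;
}
```

With these example weights, a fresh, never-accessed memory with perfect similarity and maximum importance scores 0.5 + 0.2 + 0 + 0.15 = 0.85.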
Why recency and frequency exist
Semantic similarity alone is not enough. Two memories can be semantically similar, but one may be outdated. Recency and frequency help kontxt prefer memories that have proven useful recently.
Importance
Importance is a manual override. Use it sparingly and only for items you always want to retrieve, such as core preferences and irreversible architectural decisions.
Embedding tiers (no mixing)
Memories are tagged with an `embedding_tier`. Similarity is only computed among memories from the same tier, so switching providers doesn't produce meaningless scores.
Supported tiers (in priority order):
- OpenAI (`text-embedding-3-small`). Optional key.
- Transformers.js (`all-MiniLM-L6-v2`). Offline after first model download to `~/.kontxt/models/`.
- Ollama. Used when `ollama serve` is running with an embedding model pulled.
- Pseudo. Keyword-level fallback. Always works.
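The priority order above can be sketched as a fallback chain. The option names and detection flags here are assumptions (kontxt's actual probing of keys, model files, and the Ollama server is not shown in this page):

```typescript
// Tiers in the documented priority order.
type EmbeddingTier = "openai" | "transformersjs" | "ollama" | "pseudo";

// Hypothetical environment probe results (names are assumptions).
interface TierEnv {
  openaiKey?: string;          // optional API key
  transformersAvailable: boolean; // model downloaded or downloadable
  ollamaRunning: boolean;      // `ollama serve` up with an embedding model
}

// Walk the chain top-down; "pseudo" is the keyword-level fallback
// that always works, so selection never fails.
function selectTier(env: TierEnv): EmbeddingTier {
  if (env.openaiKey) return "openai";
  if (env.transformersAvailable) return "transformersjs";
  if (env.ollamaRunning) return "ollama";
  return "pseudo";
}
```

Because every memory carries its `embedding_tier`, a retrieval pass would compare the query only against candidates whose tier matches the one selected here.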

