Context7 · DeepWiki — attaching library docs to LLMs
An LLM's training-data cutoff never quite matches the present. Library docs and API specs change often, and the model confidently answers with stale information. MCP servers and services that expose docs in an LLM-friendly form have emerged to fill this gap.
1. About these tools
Context7 — a library-docs service operated by Upstash. Site at context7.com, code at github.com/upstash/context7. Each library's official docs, README, and examples are pre-organized in an LLM-friendly format (markdown) and exposed via an MCP server or HTTP API.
A common usage pattern:
Append the line "use context7" at the end of the prompt.
When this phrase appears, the MCP client asks the Context7 server for a library ID and pulls a slice of the returned docs into context.
DeepWiki — a GitHub-repository documentation service operated by Devin AI (Cognition). Analyzes any GitHub repository to provide an auto-generated wiki and search, also available as an MCP server.
Ref — a service that gathers various official docs in an LLM-friendly form. Provided as an MCP server.
devdocs.io — an open-source multi-doc viewer (since 2013) created by Thibaut Courouble and others. Unified search across dozens of official docs (MDN · Python · Ruby · Node · Rust). Not directly LLM-integrated, but a forerunner of unified doc search.
llms.txt — a simple convention proposed in 2024 by Jeremy Howard and others. The idea is to put a markdown index and summary at /llms.txt or /llms-full.txt at the site root so an LLM can quickly understand the site.
```markdown
# Site name
> One-line summary

## Docs
- [Getting Started](https://example.com/docs/start.md): 30-second start.
- [API Reference](https://example.com/docs/api.md): full API.
```
2. Context7's flow
- Look up by library identifier (`/vendor/library` or a library name).
- The server returns a list of organized doc topics for that library.
- Fetch docs per topic and inject them into the model context.
Docs are served from a pool of registered libraries, so availability depends on whether a given library has been indexed.
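The flow above can be sketched as two calls. The function names mirror the tools listed in Context7's README (`resolve-library-id`, `get-library-docs`), but the bodies here are stubs standing in for the real server:

```python
# Sketch of the two-step Context7 flow with stubbed data.
# A real MCP client would issue these as tool calls to the server.

def resolve_library_id(query: str) -> str:
    """Step 1: map a free-form library name to a registered ID."""
    registry = {"next.js": "/vercel/next.js", "upstash redis": "/upstash/redis"}
    return registry[query.lower()]

def get_library_docs(library_id: str, topic: str, max_tokens: int = 2000) -> str:
    """Step 2: fetch a topic-sized slice of docs for that ID."""
    docs = {("/vercel/next.js", "routing"): "# Routing\nApp Router basics..."}
    return docs[(library_id, topic)][:max_tokens]

lib_id = resolve_library_id("Next.js")
snippet = get_library_docs(lib_id, topic="routing")
```

The two-step shape matters: resolving the ID first is what lets the server disambiguate similarly named libraries before any docs are fetched.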
3. MCP registration format
```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```
Can also be registered with a remote HTTP transport. Authentication and rate limits live in the operator's docs.
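A remote registration can look like the following. The URL and the exact key for the transport type vary by operator and MCP client, so treat this as an illustrative shape rather than a canonical config:

```json
{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp"
    }
  }
}
```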
4. Relationship with llms.txt
- When a site provides /llms.txt, tools and services follow that index to drive context injection.
- Services like Context7 provide their own organized docs whether or not the site has /llms.txt. The two are complementary.
- As llms.txt adoption grows, dependency on intermediary services may shrink.
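Because an llms.txt index is plain markdown, following it takes little more than a link parser. A minimal sketch; the regex assumes the `- [title](url): note` line shape shown earlier:

```python
import re

# Matches "- [title](url)" with an optional ": note" suffix.
LINK_RE = re.compile(r"-\s*\[([^\]]+)\]\((\S+?)\)(?::\s*(.*))?")

def parse_llms_txt(text: str) -> list[dict]:
    """Pull (title, url, note) entries out of an llms.txt index."""
    entries = []
    for line in text.splitlines():
        m = LINK_RE.match(line.strip())
        if m:
            title, url, note = m.groups()
            entries.append({"title": title, "url": url, "note": note or ""})
    return entries

sample = (
    "# Site name\n"
    "> One-line summary\n"
    "## Docs\n"
    "- [Getting Started](https://example.com/docs/start.md): 30-second start.\n"
)
entries = parse_llms_txt(sample)
```

A tool would then fetch each entry's URL and decide which pages fit the current context budget.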
5. Other paths
| Place | Tools / services |
|---|---|
| Library docs organized and exposed | Context7 · Ref · GitMCP |
| Wiki-fying arbitrary repositories | DeepWiki |
| Unified search across multiple official docs | devdocs.io |
| Site-side self-exposure | llms.txt · llms-full.txt |
| Indexing your own docs | Build RAG yourself (pgvector · LlamaIndex) |
| Structured code search | Sourcegraph · GitHub code search |
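The "build RAG yourself" row need not start with pgvector: the core retrieval loop is just similarity search over your docs. A toy sketch with term-frequency vectors and cosine similarity; a real setup would swap in embeddings and a vector store:

```python
import math
from collections import Counter

def tf_vector(text: str) -> Counter:
    """Bag-of-words term frequencies as a stand-in for embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical doc corpus for illustration.
docs = {
    "auth.md": "token based authentication and api keys",
    "routing.md": "file based routing and dynamic routes",
}

def retrieve(query: str) -> str:
    """Return the doc most similar to the query."""
    q = tf_vector(query)
    return max(docs, key=lambda name: cosine(q, tf_vector(docs[name])))
```

The same loop scales up by replacing `tf_vector` with an embedding model and `docs` with an indexed store.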
The choice depends on:
- Whether the material is static or changes often.
- The need for authentication and policy control.
- Whether outbound data transmission is allowed.
- Acceptable response latency.
6. Combined with coding assistants
Places where registering Context7-like servers with an LLM coding assistant helps:
- Verifying signatures of fast-moving libraries (TypeScript SDKs).
- Recognizing changes in a new major version.
- Correcting drift between APIs the older model knows and the actual API.
7. Exposing your own library
Steps for making your own library or SaaS LLM-friendly:
- Add /llms.txt and /llms-full.txt at the site root.
- Statically host markdown docs.
- Expose API references in OpenAPI as well.
- Try registering with Context7 (when possible).
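The first step above can be automated. A sketch that assembles a minimal /llms.txt from a directory of markdown files, assuming each file opens with a `# Title` heading:

```python
from pathlib import Path

def build_llms_txt(site: str, summary: str, docs_dir: Path, base_url: str) -> str:
    """Assemble an llms.txt index; each file's first line is its title."""
    lines = [f"# {site}", f"> {summary}", "", "## Docs"]
    for md in sorted(docs_dir.glob("*.md")):
        title = md.read_text().splitlines()[0].lstrip("# ").strip()
        lines.append(f"- [{title}]({base_url}/{md.name})")
    return "\n".join(lines) + "\n"
```

Running this in CI whenever docs change keeps the index from drifting out of date, which is the failure mode the pitfalls section warns about.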
8. Token limits and excerpting strategy
Putting a library's entire docs into context fills tokens fast. Tools usually slice by topic when delivering. The caller may specify topics explicitly, or the tool may set a default.
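The topic-slicing idea can be sketched as a greedy packer against a rough token budget. The chars-per-token ratio here is a crude approximation; real tools would count with the model's tokenizer:

```python
def pack_topics(topics: dict[str, str], budget: int,
                approx_chars_per_token: int = 4) -> str:
    """Greedily include topic docs until a rough token budget is spent."""
    out, used = [], 0
    for name, text in topics.items():
        cost = len(text) // approx_chars_per_token + 1
        if used + cost > budget:
            continue  # skip topics that would blow the budget
        out.append(f"## {name}\n{text}")
        used += cost
    return "\n\n".join(out)
```

Letting the caller name the topic (instead of packing everything) is the same idea applied one step earlier, before any fetching happens.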
9. Common pitfalls
Version drift — when a new version ships before the docs are updated, stale information stays exposed. Check the doc snapshot's freshness timestamp.
Minor libraries unregistered — large ecosystems are well covered, but niche and Korean-language libraries are sometimes missing.
Trust boundary — prompt injection may hide inside docs returned from external servers. Design the trust boundary on the host side.
Outbound data — some services may use user queries as training data. Check the policy.
Context flooding — a single "use context7" line can pull in too large a bundle and crowd out other context. Specify the topic.
Broken links — when llms.txt has broken links, tools fetch the wrong page. Keep relative/absolute paths consistent.
Copyright — redistribution assumes the intermediary complies with upstream licenses. Specify a license when publishing your own material.
Closing thoughts
The drift between an LLM's training cutoff and the current state of libraries is filled in by tools like Context7, llms.txt, and self-hosted RAG. When exposing your own library or SaaS, a single /llms.txt line is a quick starting point.
Next
- mcp-figma
- google-adk
References: Context7 · Context7 GitHub · DeepWiki · devdocs.io · the llms.txt proposal · Anthropic MCP · Awesome MCP Servers.