Nuxt Content Documentation Reference Skill
Skill that ensures Claude fetches current Nuxt Content documentation before answering questions or implementing features
- 💡 Use Case:
- Use when working with Nuxt Content queries, MDC components, or document-driven mode - especially when API accuracy is critical
- 🤖 Expected Output:
- Targeted documentation extraction (1-3k tokens) followed by accurate answers or implementations based on current API
- ✅ Success Rate:
- High - prevents stale API usage and reduces context bloat through targeted extraction
Prompt Content
name: nuxt-content
description: Use when answering questions about OR implementing features with Nuxt Content (queries, MDC components, document-driven mode), especially when uncertain about current API - Nuxt Content is frequently updated with breaking changes, so fetch current documentation from llms.txt before responding to ensure accuracy over training data
Nuxt Content Documentation Reference
Overview
Nuxt Content evolves rapidly. Training data is stale. Fetch current docs FIRST, implement/answer SECOND.
Core principle: https://content.nuxt.com/llms.txt is the source of truth for ALL Nuxt Content questions AND implementations, regardless of how simple the task seems or how confident you feel about the API.
When to Use
Use this skill for ANY question OR implementation task about:
- Nuxt Content queries (queryContent, where, operators)
- MDC (Markdown Components) syntax
- Document-driven mode
- Content rendering components
- API references
- Version-specific features
- Troubleshooting Nuxt Content issues
Especially use when:
- Implementing a feature using Nuxt Content
- Not confident about current API syntax
- Aware training data might be stale
- About to write code involving Nuxt Content
- User mentions "Nuxt Content" and you need to take action
Mandatory Workflow
Step 1: Fetch Documentation with Targeted Extraction
CRITICAL: Use targeted WebFetch prompts that extract ONLY the specific sections needed. Do NOT ask for full documentation dumps.
Use WebFetch on: https://content.nuxt.com/llms.txt
Prompt format:
"Extract ONLY the documentation for [specific API/feature/concept].
Include:
- API syntax with parameters
- Code examples
- Version-specific notes
- Common gotchas
Exclude everything else. Return a concise summary."
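The prompt format above can also be built programmatically. Here is a minimal sketch; the `buildTargetedPrompt` helper and its `topic` parameter are illustrative, not part of any real tool API:

```typescript
// Illustrative helper: fills in the targeted-extraction prompt template
// from above for one specific API/feature/concept. No real API is called.
function buildTargetedPrompt(topic: string): string {
  return [
    `Extract ONLY the documentation for ${topic}.`,
    "Include:",
    "- API syntax with parameters",
    "- Code examples",
    "- Version-specific notes",
    "- Common gotchas",
    "Exclude everything else. Return a concise summary.",
  ].join("\n");
}

// Example: a prompt scoped to query operators only, not the whole docs.
const prompt = buildTargetedPrompt("queryContent where-clause operators");
```

The point of the template is the narrow scope line at the top and the explicit "Exclude everything else" at the bottom; everything in between just tells the fetcher what a useful extraction contains.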
Why targeted extraction matters:
- Full docs are 50k+ tokens
- Targeted extraction returns 1-3k tokens
- 20-50x context savings vs full dump
- Faster responses, cleaner context
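The 20-50x figure follows from simple division. A quick sketch, using the approximate token counts from the list above (the exact numbers depend on the document and the prompt):

```typescript
// Rough arithmetic behind the "20-50x" claim: full-dump size divided by
// targeted-extraction size. Token counts are approximations, not measurements.
function savingsFactor(fullDumpTokens: number, extractedTokens: number): number {
  return Math.round(fullDumpTokens / extractedTokens);
}

savingsFactor(50_000, 1_000); // small extraction → ~50x savings
savingsFactor(50_000, 2_500); // larger extraction → ~20x savings
```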
Exception for reuse: Reuse docs from the same conversation ONLY if ALL conditions are met:
- You already fetched docs for this topic in this conversation AND
- The previous fetch explicitly covered the SAME specific API/feature/concept AND
- You can answer the new question completely with the previous result AND
- Less than 10 minutes have passed
When in doubt, fetch again with targeted prompt. Be specific about what to extract.
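The reuse conditions above are a strict conjunction: all four must hold. A sketch of that check, where `FetchRecord` and its fields are hypothetical names for illustration:

```typescript
// Hypothetical record of a previous in-conversation docs fetch.
interface FetchRecord {
  topic: string;           // the specific API/feature/concept that was extracted
  fetchedAtMs: number;     // when the fetch happened
  coversQuestion: boolean; // whether the extraction fully answers the new question
}

const TEN_MINUTES_MS = 10 * 60 * 1000;

// Reuse only when EVERY condition holds; any failure means fetch again.
function canReuse(prev: FetchRecord | null, topic: string, nowMs: number): boolean {
  return (
    prev !== null &&
    prev.topic === topic &&
    prev.coversQuestion &&
    nowMs - prev.fetchedAtMs < TEN_MINUTES_MS
  );
}
```

Note the default is to fetch: a `null` previous record, a different topic, partial coverage, or a stale timestamp all fall through to a fresh targeted fetch.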
Step 2: Answer Using Extracted Documentation
- Use extracted documentation as PRIMARY source
- Reference training data only to supplement or provide context
- Include links to relevant docs sections
- Cite what came from llms.txt vs. general knowledge
NO EXCEPTIONS to targeted fetch: Fetch docs even when:
- User says "quick question"
- Question seems simple
- You feel confident about the answer
- User mentions they already looked at docs
- It's a yes/no question
- It's a version confirmation
- You're late in the conversation
Red Flags - STOP and Dispatch Subagent First
If you're thinking ANY of these thoughts, STOP. Dispatch subagent to fetch docs FIRST:
Question-answering red flags:
- "This is a straightforward question I know"
- "User needs this fast"
- "Fetching docs would take extra time"
- "Low risk if I'm wrong"
- "I can fetch later if needed"
- "They already looked at docs"
- "This feature is mature, my knowledge is current"
- "Just need to confirm what I know"
Implementation red flags:
- "I'll implement based on what I know"
- "This seems straightforward to implement"
- "Let me write the code quickly"
- "I know how queryContent works"
- "I remember the MDC syntax"
- "I've used Nuxt Content before" (in training data)
- "I can update if the code fails"
- "Just need to write simple query code"
Context-bloat red flags (CRITICAL):
- "I'll fetch the full documentation"
- "Let me get all the docs to be thorough"
- "I'll load llms.txt completely"
- "Better to have all the info"
- "Generic WebFetch prompt is fine"
ABSOLUTELY FORBIDDEN: Fetching full documentation dump. This bloats context with 50k+ tokens.
REQUIRED: Use targeted WebFetch prompts that extract ONLY the specific section needed (1-3k tokens).
All of these mean: Use targeted extraction. Specify EXACTLY what to extract. No full dumps.
Why Training Data Fails
Nuxt Content v3 introduced breaking changes. API evolves. Features change. Your training data is from January 2025 - documentation is updated continuously.
Real examples from testing:
- Agent suggested $contains without checking current operator syntax
- Agent confirmed MDC without verifying v3-specific changes
- Agent provided query patterns without checking current API
Every case: Agent was confident. Every case: Should have fetched docs.
Common Mistakes
| Mistake | Why It Happens | Fix |
|---|---|---|
| Answer from memory | Confidence in training data | Fetch llms.txt with targeted extraction first |
| Skip docs for "simple" questions | Assume simplicity = accuracy | Simple questions need current docs too - fetch targeted |
| Postpone fetching | "Can fetch if wrong" | Wrong answers waste user time - fetch first |
| Trust version knowledge | "Feature is mature" | Mature features still change - verify with docs |
| Cite "official docs say" | User mentioned docs | Fetch to verify what docs actually say |
| Fetch full documentation | "Better to be thorough" | Targeted extraction (1-3k tokens) vs full dump (50k+ tokens) |
| Generic WebFetch prompt | Laziness | Specify EXACTLY what section to extract |
Quick Reference
Every Nuxt Content interaction:
- Use WebFetch on https://content.nuxt.com/llms.txt with TARGETED prompt
- Extract ONLY relevant sections (not full docs)
- Use extracted info as primary source
- Provide answer/implementation based on extracted docs
- Supplement with training knowledge only if needed
- Cite source (docs vs. general knowledge)
Targeted WebFetch prompt template:
"Extract ONLY the documentation for [specific API/feature/concept] from Nuxt Content.
Include:
- API syntax with parameters
- Code examples
- Version-specific notes
- Common gotchas
Exclude everything else. Return a concise summary (1-3k tokens max)."
Context savings: Targeted extraction (1-3k tokens) vs full dump (50k+ tokens) = 20-50x savings
Rationalization Table
| Excuse | Reality |
|---|---|
| "User needs fast answer/implementation" | Fast wrong answer wastes more time than slow correct answer |
| "I know this from training" | Training data is stale; Nuxt Content v3 has breaking changes |
| "Straightforward question/implementation" | Simple tasks still need current API syntax |
| "Low risk if wrong" | Wrong answer/code blocks user; high cost to be wrong |
| "They already looked at docs" | Verify what docs actually say - might have misread |
| "Feature is mature" | Mature features get updates and deprecations |
| "Can fetch later if needed" | Fetch now = right first time; fetch later = wasted round trip |
| "Just confirming what I know" | Confirmation requires checking source, not memory |
| "I'll implement then test" | Implementing wrong API = debugging time for user |
| "I can update if it fails" | Test-fix cycle wastes time; fetch docs first prevents failures |
| "I'll fetch all the docs to be thorough" | Full dump = 50k+ tokens; targeted extraction = 1-3k tokens |
| "Generic WebFetch prompt is fine" | Targeted prompt extracts only what's needed; saves 20-50x context |
| "The docs are small enough" | llms.txt is 50k+ tokens; always use targeted extraction |
Real-World Impact
Without this skill: Agents provide outdated API syntax, wrong operator names, and incorrect version information; they implement features against stale APIs from training data and blow up context with full documentation dumps.
With this skill: Agents fetch current documentation with targeted extraction before answering OR implementing, provide accurate syntax, cite correct version info, save user debugging time, keep conversation context efficient.
Context savings: Targeted extraction (1-3k tokens) vs full dump (50k+ tokens) = 20-50x savings per interaction
Time cost: Targeted fetch adds 5-10 seconds. User debugging incorrect information or broken implementations adds 5-30 minutes. Always fetch with targeted extraction first.