
What 12 Anthropic Academy Quizzes Taught Me About My Own Blind Spots
Final part. This series started because I passed all 12 Anthropic Academy certifications, then looked at my wrong answers. Over the past two weeks I've written about each misconception individually (read the full series). Here's the full picture.

The List

- Prompt caching has a 1,024-token minimum, and it fails silently. I was adding cache_control to short prompts for months, paying full price.
- Cache breakpoints go after the last tool definition, not the system prompt. I was leaving tool schemas uncached on every call.
- Extended Thinking returns two blocks, a thinking block and a text block, with cryptographic signatures to prevent tampering. My streaming parser worked by accident.
- Re-ranking is a separate LLM step, not just sorting by similarity score. I'd been skipping it entirely and sending noisy results to Claude.
- Anthropic doesn't provide an embedding model; the recommended provider is Voyage AI, which requires a separate account and API key.
- MCP Inspector exists, and it's the f
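The cache-breakpoint point is easiest to see in a request payload. Here's a minimal sketch, assuming dict-shaped tool definitions in the style of the Messages API; the tool names, schemas, and model id are placeholders, not real tools from my project:

```python
# Hypothetical tool schemas; only the placement of cache_control matters here.
tools = [
    {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "input_schema": {"type": "object", "properties": {}},
    },
    {
        "name": "get_news",
        "description": "Fetch recent headlines for a topic.",
        "input_schema": {"type": "object", "properties": {}},
    },
]

# The breakpoint goes on the LAST tool, so every tool schema above it
# lands inside the cached prefix instead of being re-sent at full price.
tools[-1]["cache_control"] = {"type": "ephemeral"}

payload = {
    "model": "claude-sonnet-4-20250514",  # placeholder model id
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "tools": tools,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
```

The mistake I was making was attaching cache_control somewhere earlier, which left the tool schemas out of the cached prefix on every call.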
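The two-block Extended Thinking shape is the kind of thing a parser should handle explicitly rather than by accident. A minimal sketch, assuming dict-shaped content blocks like the ones the Messages API returns (the signature value below is a placeholder):

```python
def split_response_blocks(content):
    """Separate Extended Thinking output into (thinking, answer) strings.

    Assumes `content` is a list of dicts: "thinking" blocks (which carry a
    cryptographic signature field) and "text" blocks with the final answer.
    """
    thinking_parts, text_parts = [], []
    for block in content:
        if block.get("type") == "thinking":
            thinking_parts.append(block.get("thinking", ""))
        elif block.get("type") == "text":
            text_parts.append(block.get("text", ""))
    return "".join(thinking_parts), "".join(text_parts)


# Mocked response content; "abc123" stands in for a real signature.
blocks = [
    {"type": "thinking", "thinking": "Check the dates first...", "signature": "abc123"},
    {"type": "text", "text": "The answer is 42."},
]
thinking, answer = split_response_blocks(blocks)
```

Keying on the block type, instead of assuming the first block is the answer, is what my streaming parser should have been doing all along.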
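On re-ranking: the shape of the fix is a second scoring pass over the retrieved candidates before anything reaches Claude. A sketch, with a toy word-overlap scorer standing in for the real LLM or re-ranker call (the scorer and document strings are hypothetical):

```python
from typing import Callable


def rerank(query: str, docs: list[str],
           score: Callable[[str, str], float], top_k: int = 3) -> list[str]:
    """Re-score retrieved docs with a separate relevance judge and keep the best.

    In practice `score` would be an LLM or dedicated re-ranker call, not the
    similarity metric that produced the candidates in the first place.
    """
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_k]


# Toy stand-in scorer: count shared words between query and document.
docs = ["tax law overview", "claude prompt caching guide", "cookie recipe"]
top = rerank(
    "how does prompt caching work",
    docs,
    lambda q, d: len(set(q.split()) & set(d.split())),
    top_k=1,
)
```

The point of the extra pass is that the retriever's similarity order and the judge's relevance order often disagree; skipping it means sending the retriever's noise straight to the model.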
Continue reading on Dev.to



