Anthropic just launched Skills for Claude—and yes, they work with ChatGPT! Skills let you capture YEARS of experience and save DAYS of work on prompts. Check out how they work, plus 10 new super prompts!
Nate
Anthropic just released Skills for Claude, and everyone’s missing what this actually is.
Skills aren’t a Claude feature. They’re the answer to the question I get asked most: “How do I stop rewriting the same massive prompts every single time?”
You know the drill. You want to create a financial presentation. Or evaluate an AI vendor. Or build a comprehensive job search strategy. And you sit down to write the prompt, and suddenly you’re explaining everything:
- How you want the analysis structured
- What frameworks matter to you
- Your preferences for formatting and style
- The domain context the AI needs
- Oh yeah, and what you actually want this time
You end up with a 500-word prompt just to get started. And next week when you need something similar? You’re doing it again. Copy-pasting from old prompts, modifying them, hoping you didn’t forget something important.
It’s exhausting. And it’s why most people never get past basic Q&A with AI—complex work requires too much setup.
Skills change this completely. They let you package up all that methodology once. Your frameworks. Your preferences. Your domain expertise. The stuff you’ve learned over years that’s way too nuanced to cram into a prompt. All of it goes into a skill, and then your actual prompt is just about what you want: “Create a Q3 financial presentation focused on customer retention, using this data.”
The skill handles everything else.
But here’s the part nobody’s talking about: these work everywhere. Not just Claude. Yes, you can use the exact same skill files in ChatGPT. In Gemini. Anywhere that reads files. Claude’s automatic invocation is slicker, but the portability is the real story. We finally have a way to package complex expertise that works across every AI platform.
Think about what that means. All those hard-won insights about how to structure a great pitch deck? How to evaluate vendors without getting burned? How to build financial models that actually work? That knowledge has been trapped in your head because it was too complex to turn into prompts. Now it has a home.
So what am I giving you? Ten super prompt skills I built that cover the complex work eating most of our time. I picked these to illustrate the sheer range of these skills.
- Prompting Pattern Library - The 25+ patterns that actually work, with examples—a huge help for building your own prompts and understanding what makes AI tick
- Pitch Deck Builder - Investor presentations that tell the right story—focused on narrative, data, and lots of best practice around deck building
- AI Vendor Evaluation - A framework to avoid the costly mistakes 95% of AI projects make—get your build-vs-buy decision right!
- Excel Editor - How to modify complex existing workbooks without breaking everything—lots of specific guidance to avoid common AI mistakes
- Complex Excel Builder - Building complex multi-tab financial models the right way
- Resume Builder - Complete system for ATS-optimized resumes that get past the robots
- Job Search Strategist - A strategist tool that treats job search like a go-to-market problem, not spray-and-pray—super thoughtful about targeting jobs
- Requirements Builder - Bridging the gap between PM docs and engineering implementation—surfaces missing technical details before they become problems
- Vibe Coding - A conversational builder to help you vibe code with tools like Cursor and Lovable (pitfalls included)
- Agentic Development - How to actually build software with AI agents without losing your mind
These represent years of expertise I’ve accumulated that was way too complex to put into a prompt before. The financial modeling approaches I learned the hard way. The vendor evaluation frameworks from watching too many bad deals. The job search strategies from helping dozens of people land roles. All of it finally has a place to live.
And the time savings? They’re real. Presentations that used to take 45-60 minutes of back-and-forth? Now 15 minutes. Financial models that ate up 90 minutes? Down to 30. Vendor evaluations that took half a day? Under an hour.
This article breaks down:
- Why this is the biggest leverage gain for AI work this year
- How Skills work in Claude (automatic) vs ChatGPT/Gemini (manual but still powerful)
- The principles for building your own Skills
- Real numbers on time saved across different types of work
- How this is completely different from Custom GPTs and Gemini Gems
- What’s actually inside each of the 10 skills
In six months, everyone doing knowledge work will have skill libraries. You’re getting a head start with 10 production-grade skills that handle the most time-intensive work.
This is how we stop rewriting the same prompts and finally get real leverage from AI on hard problems. Let’s go.
Grab the 10 Super Prompts here
These are different from regular prompts you copy-paste. Each skill is a zip file containing a structured markdown document with instructions, plus supporting resources like examples, templates, frameworks, and sometimes even executable code. Think of them as complete training manuals rather than single instructions.
In Claude, you upload skills once through Settings → Capabilities, and that’s it—Claude automatically invokes them when relevant to your conversation. You don’t think about them, you don’t tag them, they just work in the background. You’re having a normal conversation, and when you mention something about creating a presentation or evaluating a vendor, Claude loads the relevant skill and uses it.
In ChatGPT or Gemini, you attach the zip file directly to your conversation (just like uploading any other file), then explicitly tell the AI to use it: “Using the instructions in this file, help me create a financial model for X.” The model opens the zip, reads the complete instruction set and all the supporting materials, and follows them.
The knowledge transfer is identical across platforms—Claude’s automatic invocation is more elegant and invisible, but ChatGPT and Gemini get access to the same expertise when you reference the file. The key difference: in Claude you upload once and forget, in other platforms you attach per conversation and explicitly tell the AI to use it. Either way, you’re giving the AI a complete playbook instead of explaining everything from scratch every time.
NEW: How Claude’s New Release Saves You Days—Even in ChatGPT!
===============================================================
Anthropic released Skills yesterday, and this might be the most important productivity release of 2025. Not because Claude got smarter - it didn’t. But because we now have a way to do genuinely hard work with AI without carrying the crushing weight of prompt engineering every single time.
Here’s what changed: before Skills, doing complex work with AI meant writing elaborate prompts. Every. Single. Time. Need a sophisticated financial model? 500-word prompt. Comprehensive vendor evaluation? Another 500 words explaining your framework. PowerPoint with specific structure and branding? You’re writing an essay about how to make slides before you even get to what the slides should say.
The cognitive load was enormous. You were simultaneously trying to explain how to do the work and what the work should contain. Skills separate those concerns. They handle the how so you can focus entirely on the what.
Why This Is a 10x Moment
Let’s be concrete about what we’re talking about. Creating a comprehensive quarterly business review presentation used to require:
- Explaining your brand guidelines and formatting preferences (3-4 paragraphs)
- Describing your preferred presentation structure and information hierarchy (2-3 paragraphs)
- Specifying chart styles, font choices, and visual design principles (2 paragraphs)
- Detailing how you want data presented and what types of analysis to include (2-3 paragraphs)
- Then finally: what this specific presentation is about and what story it should tell
You’re spending 80% of your prompt on methodology and 20% on substance. And if the result isn’t quite right? You’re either revising that massive prompt or starting over.
With Skills, that same task becomes: “Create a Q3 business review focused on customer retention. The narrative should emphasize our success in enterprise accounts and ongoing challenges in SMB. Use the cohort analysis from Sheet 3.”
The skill already knows your brand guidelines, your structural preferences, your formatting standards, your analytical frameworks. You’re prompting only on the substance - what makes this presentation different from every other presentation. Your cognitive energy goes to the work that matters: the story you’re telling, the decisions you’re driving, the insights you want to surface.
This isn’t a small improvement. This is getting hours back on every complex task.
A financial model that used to take three rounds of prompting and revision - maybe 45 minutes of work - now takes one clear prompt and 10 minutes. A vendor evaluation that required reconstructing your entire evaluation framework each time - maybe an hour of work - now takes 15 minutes. A comprehensive job search strategy that meant multiple chats with context reconstruction between each - potentially 2-3 hours spread across days - now happens in a single 30-minute session.
Multiply that across every complex task you do regularly. Weekly presentations. Monthly reports. Quarterly analyses. Due diligence on vendors. Resume updates for job searches. Complex Excel models. Strategic planning documents.
We’re talking about days of time back per month for people doing knowledge work.
The Big Question: Does This Make Prompting Irrelevant?
No. I had to put that in as a headline, lol.
There will be bad takes about this, but the truth is that prompting as a skill continues to increase in value, and that includes this release.
I always say prompting is an evolving art as the models evolve.
In this case, the value on prompt intent is actually increased with this launch:
- You need very clear intent to build these skills
- You need to know your skill well to know what’s NOT in it
- You need to be very clear with your prompt to drive the skill correctly
- Remember a clear prompt is not always long!
- The higher value the task, the more work goes into the prompt (still)
So am I gonna stop writing and sharing prompts? No.
Do I think prompt tools are dead? Absolutely not. But I bet I'll see bad takes about that elsewhere.
The bottom line is these skills or super prompts are non-linearly valuable: they’re like a Honda Accord for ok prompters, a Ferrari for good prompters, and a rocket for excellent prompters. The payoff to prompting skill remains exponential, and this release reinforced that dynamic.
How This Actually Works: The Claude Implementation
The technical architecture is elegant. Skills are folders containing instructions, examples, and resources. At startup, Claude loads only the skill names and brief descriptions - maybe 20-30 tokens per skill. This is what Anthropic calls “progressive disclosure.”
When you start a conversation, Claude scans your query against those skill descriptions. If it detects relevance, it loads the full skill content - but only for that specific skill, and only the parts it needs. This keeps token usage efficient while giving Claude access to effectively unlimited context about how to do specific types of work.
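To make progressive disclosure concrete, here's a toy sketch of the pattern in Python. The routing here is naive keyword overlap purely for illustration—in Claude the model itself judges relevance—and the skill names, descriptions, and paths are hypothetical. The point is the structure: only names and descriptions stay resident; full instructions load on demand.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str  # always in context (cheap: a few dozen tokens)
    path: str         # full SKILL.md, loaded only when the skill is relevant

def route(query: str, skills: list[Skill]) -> list[Skill]:
    # Stand-in for the model's relevance judgment: naive word overlap.
    words = set(query.lower().split())
    return [s for s in skills if words & set(s.description.lower().split())]

skills = [
    Skill("pitch-deck", "investor pitch deck structure and narrative",
          "skills/pitch-deck/SKILL.md"),
    Skill("vendor-eval", "framework for evaluating AI vendor contracts",
          "skills/vendor-eval/SKILL.md"),
]

relevant = route("help me evaluate an AI vendor contract", skills)
# Only now would the full instructions enter the context window:
# full_text = open(relevant[0].path).read()
```

The economics follow directly: with 50 skills installed, the always-loaded index costs perhaps 1,500 tokens, while the full instruction sets—potentially hundreds of thousands of tokens—stay on disk until needed.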
The automation is what makes this powerful. You don’t tag skills or remember to invoke them. You just describe what you need, and if a relevant skill exists, Claude uses it. The model figures out which skills matter for your task and coordinates between them if multiple skills are relevant.
Say you ask for a financial presentation. Claude might invoke three skills automatically: your financial analysis framework (which explains how you want numbers analyzed and presented), your presentation guidelines (which handles structure and formatting), and your brand standards (which ensures visual consistency). You didn’t specify any of this. The model connected the dots.
This is the difference between a tool and infrastructure. A tool requires you to remember it exists and choose when to use it. Infrastructure just works in the background, making everything you do more effective.
How This Works in Other Platforms (The Part Nobody’s Talking About)
Here’s what most coverage is missing: these skills aren’t Claude-exclusive. They’re markdown files in a standard folder structure. Which means they work in ChatGPT, Gemini, or any other AI that can read files.
The implementation is different. In ChatGPT, you upload the skill zip file to a conversation and explicitly reference it: “Using the guidelines in this file, help me create a financial model for X.” ChatGPT reads the complete instruction set and follows it. Same with Gemini.
The difference is convenience versus universality. Claude’s automatic invocation is more elegant - you don’t think about skills, you just benefit from them. But ChatGPT’s approach is more flexible - you can use different skills in different conversations, mix and match based on the specific task, or try out a new skill before committing to it in your Claude setup.
Both approaches work. The knowledge transfer is identical. You’re giving the model the same detailed instructions, examples, and context - just through different interfaces.
This portability matters because it means skills are infrastructure, not a feature. They’re a pattern for working with AI that transcends any single platform. Build a skill once, use it everywhere.
The Principles of Good Skills
The question everyone asks: what should be a skill versus what should just be a prompt?
The heuristic is simple: if you would create onboarding materials for a person doing this work, make it a skill.
Think about bringing a new employee onto your team. What would you give them? Standard operating procedures. Examples of good work. Common pitfalls to avoid. Your mental frameworks for approaching problems. Organizational context they need to be effective.
That’s what a skill is. It’s onboarding materials for AI.
Good skills share common characteristics:
They’re specific enough to drive consistent behavior. Vague instructions like “write professionally” don’t help. Specific frameworks like “use the SCQA structure: Situation, Complication, Question, Answer” give the model something concrete to follow.
They’re comprehensive about methodology without being prescriptive about content. A presentation skill should specify exactly how to structure slides and format visuals. It should not specify what to say on those slides - that’s what your prompt provides.
They’re example-rich. Show what good looks like. If you have a house style for financial analysis, include example analyses. If you have preferred chart formats, include template images. Models learn better from examples than from abstract rules.
They’re opinionated. The model needs clear direction, not a menu of options. Don’t say “you could structure this several ways.” Say “structure it this way, because X.” Justified opinions are better than endless flexibility.
They handle the parts of the work that stay consistent across instances. Your brand guidelines don’t change between presentations. Your financial analysis methodology doesn’t change between models. Your vendor evaluation criteria don’t change between assessments. That’s what belongs in skills.
The boundaries matter. Skills are for recurring work that’s complex enough to benefit from structured guidance and high-value enough to justify the setup time. One-off tasks don’t need skills. Simple tasks don’t need skills. Low-stakes experimentation doesn’t need skills.
But for the work that matters - the complex, recurring, high-value work that makes up most of knowledge work - skills change the game entirely.
Building Skills: The Practical Path
You have two paths to creating skills. The first is asking Claude to build them for you. Claude has built-in skill creation capabilities - you describe what you want, it asks clarifying questions, you provide examples and context, it builds the skill structure. This takes 15-30 minutes for a solid skill.
The second is building manually, which gives you more control. The structure is straightforward: a folder containing a SKILL.md file (with YAML frontmatter specifying name and description, followed by markdown instructions) plus any supporting resources. The instructions are written like documentation - clear, specific, example-rich.
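To make that concrete, here's a minimal sketch of a SKILL.md for a folder like `pitch-deck-builder/` (which might also hold `examples/` and `templates/` subfolders). The section headings and file names are illustrative, not a spec—only the `name` and `description` frontmatter fields are essential, since they're what the model sees before loading the rest:

```markdown
---
name: pitch-deck-builder
description: Builds investor pitch decks using a narrative-first structure and house formatting standards
---

# Pitch Deck Builder

## When to use this skill
When the user asks for an investor presentation, fundraising deck, or pitch review.

## Method
1. Establish the narrative arc first (SCQA: Situation, Complication, Question, Answer).
2. Map each slide to one beat of the narrative; see examples/sample-outline.md.
3. Apply the formatting standards in templates/slide-structure.md.

## Pitfalls
- Don't open with the product; open with the problem.
```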
The real skill in building skills is understanding what to include. You want enough detail to drive consistent behavior without so much detail that the instructions become overwhelming. You want examples that illustrate the principles without being so specific that they constrain the model’s ability to adapt to different contexts.
The best skills I’ve seen balance three elements: principles (the why behind how you approach this work), frameworks (the specific methodology you follow), and examples (what good execution looks like). Principles provide grounding, frameworks provide structure, examples provide clarity.
One refinement strategy I’ve found effective: build the skill in Claude, then evaluate it in ChatGPT or another model. Upload the skill and ask: “Review this skill for completeness and quality. What’s missing? What could be clearer? What assumptions aren’t stated?” Different models notice different gaps. Take that feedback back to Claude and iterate.
What You Get: The Time Savings Are Real
Let’s be specific about the leverage gain. I’ve been using skills intensively since Anthropic started testing them, and the time savings compound quickly.
Weekly recurring tasks: Presentation creation used to take 45-60 minutes including prompt refinement. Now takes 15-20 minutes. That’s 30-40 minutes back per week, or 26-34 hours per year on just this one task type.
Monthly analyses: Financial modeling and analysis used to take 90-120 minutes including context reconstruction and multiple revision rounds. Now takes 30-40 minutes. That’s 60-80 minutes back per month, or 12-16 hours per year.
Quarterly strategic work: Comprehensive strategy documents or board presentations used to take 3-4 hours of AI-assisted work spread across multiple sessions. Now takes 90 minutes in a single focused session. That’s 6-10 hours back per quarter, or 24-40 hours per year.
One-time but complex projects: Vendor evaluations, job search strategy, major analytical projects - these used to take 2-4 hours of AI work with extensive prompt engineering. Now take 45-60 minutes. Even at just 4 of these per year, that’s 6-13 hours back.
Add it up across all your complex recurring work: you’re looking at 70-100+ hours back per year. That’s 2-2.5 work weeks of time that was previously spent on prompt engineering and context reconstruction.
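As a sanity check, the per-task estimates above do sum to roughly that range. Here's the arithmetic, using the article's own low/high figures for each task type:

```python
# (low, high) hours saved per year, from the per-task estimates above.
weekly_presentations = (30 * 52 / 60, 40 * 52 / 60)  # 30-40 min/week
monthly_models       = (60 * 12 / 60, 80 * 12 / 60)  # 60-80 min/month
quarterly_strategy   = (6 * 4, 10 * 4)               # 6-10 hours/quarter
one_off_projects     = (6, 13)                       # ~4 complex projects/year

tasks = [weekly_presentations, monthly_models, quarterly_strategy, one_off_projects]
low  = sum(t[0] for t in tasks)
high = sum(t[1] for t in tasks)
print(f"{low:.0f}-{high:.0f} hours/year ({low/40:.1f}-{high/40:.1f} work weeks)")
```

The totals land around 68-104 hours per year—on the order of two to two and a half 40-hour work weeks at the high end.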
More importantly, the work quality becomes consistent. You’re not having good days and bad days based on how well you prompted. You’re getting reliably good output because the methodology is standardized.
The Bigger Picture: This Changes How Teams Work
The individual productivity gain is significant. The team-level impact is transformative.
Before skills, AI expertise was locked in individuals. Some people figured out how to get great results for specific tasks. Others struggled. Knowledge transfer meant sharing prompts, which often didn’t work well because prompts are context-dependent. What works brilliantly for one person in their workflow might fail for someone else in a different context.
Skills change this dynamic. When someone figures out how to consistently get excellent results for a business-critical task, that knowledge becomes capturable and distributable. Marketing teams can build skills for brand-compliant content. Finance teams can build skills for analytical frameworks. Product teams can build skills for user research synthesis.
The expertise becomes institutional instead of individual. New team members don’t have to reinvent effective approaches - they inherit them through skills. Teams develop shared language and methodology around how to leverage AI for different types of work.
This compounds. A team with a library of 10-15 well-designed skills for their core workflows has effectively 10x’d their collective AI leverage. Every person on that team is working at a higher baseline of effectiveness.
Where This Goes
Skills are infrastructure. They’re not the end state - they’re the foundation for what comes next.
Right now, skills handle methodology and context. The natural evolution is skills that handle more of the orchestration for complex multi-step work. A comprehensive due diligence skill that doesn’t just explain the evaluation framework but actively coordinates data gathering, analysis, and synthesis across multiple work streams.
The portability matters for this evolution. Because skills work across platforms, the ecosystem will develop outside Anthropic’s direct control. Someone will build better skill authoring tools. Someone will create skill testing frameworks. Someone will launch skill marketplaces. The format is too simple and too useful not to spawn an entire ecosystem.
For now, the opportunity is simpler: identify your most complex, most recurring work. The tasks where you’re spending hours on prompt engineering and context reconstruction. The work where quality and consistency matter. Build skills for those tasks.
The time you invest in building a good skill - 30-60 minutes - pays back within 3-5 uses of that skill. Everything after that is pure leverage.
What’s in the Starter Pack: Ten Skills to 10x Your Leverage - Grab The Pack Now
I’m including ten custom skills with this article, each designed to handle specific types of complex work that would otherwise require extensive prompting. Here’s what each one does and why it matters:
**Agentic Development** - Conversational guidance for building software with AI agents, covering workflows, tool selection, and parallel agent management based on real-world experience building 300k+ line codebases entirely with AI. Contains battle-tested principles like “think in blast radius not complexity” and model selection guidance for GPT-5 vs Claude. Saves hours of figuring out how to effectively delegate to AI coding assistants and avoid common pitfalls like context thrashing or premature framework optimization.

**Resume Builder** - Complete resume creation system with ATS optimization, industry-specific templates, and career stage customization. Includes decision trees for whether you’re creating from scratch, reviewing, or tailoring for specific roles. Contains actual working templates and formatting standards that pass applicant tracking systems. Turns the 2-3 hour resume refinement process into a 30-minute focused session, and handles the complexity of tailoring resumes for different industries without starting from scratch each time.

**AI Vendor Evaluation** - Systematic framework for evaluating AI vendors to avoid costly mistakes, with red flag checklists, pricing model analysis, and contract term evaluation. Built on analysis of $1.2M average AI spending patterns and includes structured scorecards. Saves days of due diligence work and helps you avoid the 95% of AI projects that fail due to poor vendor selection - potentially saving tens of thousands in wasted spending.

**Vibe Coding** - Comprehensive guide for building applications through natural language using tools like Cursor, Lovable, Replit, or Bolt. Includes tool-specific guidance, architectural decision frameworks, and a strong bias toward aggressive feature cutting to ship faster. Contains pitfall awareness and when-not-to-use guidance. Accelerates prototyping from weeks to days and helps non-technical founders or rapid prototypers avoid common traps that waste time on premature optimization.

**Job Search Strategist** - Treats job searching as a go-to-market problem with research-driven company insights, conversational skills matching, and weighted prioritization models. Includes KPI tracking and an operating rhythm for daily/weekly activities. Contains frameworks for discovering non-obvious company insights through web search and creative application strategies that bypass the noise of traditional applications. Transforms the scattershot “spray and pray” approach into a targeted, measurable system that surfaces the right opportunities faster.

**Prompting Pattern Library** - Collection of 25+ proven prompting patterns with “why it works” analysis, common failure modes, and model-specific guidance for Claude, GPT-4, and Gemini. Includes orchestration patterns for agent systems and cross-referenced deep-dive documentation. Makes you immediately better at prompting by providing tested patterns for specific situations - essentially giving you the accumulated wisdom of hundreds of hours of prompt engineering in a reusable format.

**Excel Editor** - Specialized workflows for editing existing Excel files while preserving formulas, formatting, and multi-tab relationships. Emphasizes analysis before editing and includes validation frameworks to ensure completeness. Critical for anyone working with existing financial models, dashboards, or complex workbooks where breaking formulas or losing data relationships would be disastrous. Turns risky, error-prone manual Excel editing into a systematic process that maintains integrity.

**Complex Excel Builder** - Comprehensive toolkit for creating multi-tab Excel workbooks for startups and scale-ups, including financial models, operational dashboards, and board reports. Specialized for startup metrics like ARR, MRR, CAC, and LTV with data organization, pivot tables, and visualizations. Saves days of work building sophisticated financial models from scratch and ensures best practices for board-level reporting.

**Pitch Deck Builder** - Complete pitch deck creation with conversational discovery, narrative structuring, and context-aware chunking strategies. Guides through narrative development, data collection, and professional slide creation for investor presentations and fundraising decks. Takes the painful multi-week process of crafting a compelling pitch deck and compresses it into focused sessions with structured frameworks that ensure you hit all the critical elements investors expect.

**Requirements Builder** - Systematic framework for analyzing product documents (PRDs, feature specs, user stories, roadmaps) to identify gaps and generate clarifying questions for PMs and engineers. Helps bridge the gap between PM documents and implementation by surfacing missing technical details rather than making assumptions. Prevents costly rework by catching specification gaps before development starts - potentially saving weeks of build time and multiple revision cycles.
Each skill represents 20-50 hours of expertise distilled into a reusable format. They’re starting points - you’ll want to customize them for your specific context, industry, and preferences. But even as-is, they’ll dramatically reduce the cognitive load of complex AI-assisted work.
Getting Started
Skills are available to Claude Pro, Max, Team, and Enterprise users through Settings → Capabilities. Anthropic provides built-in skills for common tasks like document creation, plus example skills you can customize. You can find more in the Anthropic GitHub cookbook here.
I’m including the ten skills I’ve built with this article: Agentic Development (building software with AI agents), Vibe Coding (AI-assisted development workflows), Resume Builder (comprehensive resume creation), Prompting Pattern Library (proven prompting techniques), Job Search Strategist (complete job search methodology), Excel Editor (safe edits to existing workbooks), Complex Excel Builder (multi-tab financial models), Pitch Deck Builder (investor presentations), Requirements Builder (PRD gap analysis), and AI Vendor Evaluation (due diligence frameworks).
Use these as starting points. The real power comes from building skills for your specific domain expertise - the accumulated knowledge that makes you effective at what you do.
This is one of the biggest productivity releases of the year because it doesn’t just make AI better at specific tasks. It gives us a lever to do genuinely complex work without carrying the crushing weight of explaining how to do that work every single time.
We just 10x’d our leverage for knowledge work. Time to use it.
FAQ: How Are Skills Different from Custom GPTs?
Q: I’ve been using Custom GPTs. Why would I switch to Skills?
The fundamental difference is architectural: Custom GPTs are conversational agents you talk to. Skills are tools that enhance your main conversation.
With Custom GPTs, if you need help with brand-compliant financial presentations, you’d need three separate GPTs: one for brand guidelines, one for financial analysis, one for presentation creation. You’d have to jump between three different chat windows, copy context between them, and manually coordinate their outputs. Each GPT is its own silo.
With Skills, you have one conversation with Claude. You say “create a financial presentation following our brand guidelines” and Claude automatically loads the brand skill, the financial analysis skill, and the presentation skill. They compose together seamlessly. You’re not managing multiple agents - you’re working with one AI that has access to multiple specialized knowledge bases.
Q: Can’t I just upload my Custom GPT instructions to Claude?
Yes, and that’s the portability advantage. Your Custom GPT is locked into ChatGPT’s ecosystem. A Skill is just a zip file - you can use it in Claude with automatic invocation, upload it to ChatGPT for manual reference, use it in Gemini, or even build your own tools around it. The format is open and simple.
Q: What about the composability difference?
This is the killer feature. Custom GPTs don’t stack. If you’ve built a “Financial Analysis GPT” and a “Brand Guidelines GPT” and a “Presentation GPT,” you can’t combine them. You’re picking one and living with its limitations, or manually copying outputs between agents.
Skills stack automatically. Claude looks at your request, identifies which skills are relevant, and loads them all. A presentation request might invoke five skills: brand guidelines, financial frameworks, data visualization preferences, writing style, and presentation structure. You didn’t specify any of them - the composition happened automatically based on context.
This is the difference between tools and agents. Tools compose. Agents don’t.
FAQ: How Are Skills Different from Gemini Gems?
Q: I use Gemini Gems for specialized tasks. How is this different?
Gems, like Custom GPTs, are separate conversational agents. When you create a Gem for “Email Writer” or “Code Reviewer,” you’re creating a specialized version of Gemini you switch to when you need that capability. Each Gem is its own conversation context.
Skills are embedded context, not separate agents. You don’t switch to a “presentation skill” - you stay in your main conversation, and Claude loads the presentation expertise when needed. This matters because you can combine multiple skills in a single task without context switching.
Q: Can I use Skills in Gemini?
Yes, but not with the automatic invocation Claude provides. You’d upload the skill zip file to a Gemini conversation and reference it explicitly: “Using the guidelines in this file, help me with X.” Gemini reads the instructions and follows them. It’s manual invocation versus automatic, but the knowledge transfer works.
This portability is the key difference from Gems, which only work in Gemini’s ecosystem. Skills work everywhere because they’re just structured instruction files.
Q: Why does the tool vs. agent distinction matter?
Because tools compose and agents don’t.
Say you need to evaluate an AI vendor, create a presentation about your findings, and ensure it follows your company’s brand guidelines. With Gems, you’d need to use three different Gems sequentially, copying context and outputs between them. The “Vendor Analysis Gem” doesn’t know about your brand guidelines. The “Brand Guidelines Gem” doesn’t know your analytical frameworks.
With Skills, all three concerns are addressed in one conversation. Claude invokes the vendor evaluation skill to structure your analysis, the presentation skill to format outputs, and the brand guidelines skill to ensure visual compliance. The composition happens automatically because Skills are tools the AI can combine, not separate agents you have to coordinate.
Q: So Skills are better than Gems in every way?
Not necessarily. Gems are easier to set up initially - you just describe what you want in plain language. Skills require a bit more structure (though Claude can build them for you).
But for complex work where you need multiple types of expertise simultaneously, Skills are dramatically more powerful. The composability and portability advantages compound quickly. If you’re doing serious knowledge work where you regularly need multiple specialized capabilities in a single task, Skills change the game in ways Gems can’t match.
FAQ: How Are Skills Different from Adding Files to Projects?
This is the question I get asked most frequently, and it cuts to the core architectural difference between active tool invocation and passive knowledge retrieval.
When you add files to a Claude Project or ChatGPT, you’re creating a passive knowledge base. The AI uses one of two approaches to access this information:
In-context loading (Claude Projects with small knowledge bases): If your project knowledge fits within the context window, Claude loads everything upfront. Every file you’ve uploaded sits in active memory for every conversation. This works, but it’s token-intensive and hits limits quickly.
Retrieval-Augmented Generation, or RAG (both platforms at scale): When your knowledge base grows large, both Claude and ChatGPT switch to semantic search. They chunk your documents, create vector embeddings, and pull the passages that look most relevant to each query. ChatGPT custom GPTs always use this approach, with a 25-file limit; Claude Projects automatically switch to RAG when you exceed context limits.
The critical word here is “passive.” The AI searches these files when it thinks they might be relevant, retrieves chunks based on semantic similarity, and includes those chunks as context. It’s a retrieval system—the AI is asking “what in these files might help answer this question?” and pulling relevant snippets.
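For intuition, here’s a toy sketch of that retrieval loop. Word overlap stands in for real vector embeddings, and the documents are invented; production systems use learned embeddings and cosine similarity, but the shape of the process is the same:

```python
import re

def embed(text):
    # Crude stand-in for a vector embedding: the set of lowercase tokens.
    return set(re.findall(r"[a-z0-9%]+", text.lower()))

def similarity(a, b):
    # Jaccard overlap between two "embeddings".
    return len(a & b) / len(a | b) if a | b else 0.0

chunks = [
    "Brand colors are navy and off-white; logo goes top-left.",
    "Quarterly revenue grew 12% driven by customer retention.",
    "All decks use 16:9 slides with a summary on slide two.",
]
index = [(c, embed(c)) for c in chunks]  # built once, searched per query

def retrieve(query, k=1):
    # Passive retrieval: rank every chunk against the query, return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: similarity(q, pair[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("What were the revenue and retention numbers?"))
```

The system never decides to “use the financial document”; it just hopes the highest-scoring chunks happen to be the right ones.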
Skills work completely differently. They use what Anthropic calls “progressive disclosure” with active tool invocation.
When you enable Skills, Claude starts each conversation knowing only the name and brief description of each available skill—roughly 20-30 tokens per skill. When you describe a task, Claude actively decides which skills are relevant to that specific task based on matching the task description to the skill descriptions. Then, and only then, does Claude invoke those skills as tools—explicitly loading the specific instructions and resources needed.
This is tool calling, not retrieval. Claude isn’t searching for possibly-relevant content. It’s making a deliberate decision to invoke a specific capability, like calling a function in code.
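A minimal Python sketch of that two-stage pattern, with hypothetical skill names and simple keyword overlap standing in for the model’s judgment:

```python
# Progressive disclosure: only name + description are resident;
# full instructions load only for skills that get invoked.
SKILLS = {
    "brand-guidelines": {
        "description": "brand colors fonts logo standards for documents",
        "load": lambda: "FULL brand instructions (could be pages long)",
    },
    "financial-analysis": {
        "description": "financial modeling revenue retention analysis frameworks",
        "load": lambda: "FULL financial analysis instructions",
    },
    "vendor-evaluation": {
        "description": "evaluate ai vendors procurement risk checklist",
        "load": lambda: "FULL vendor evaluation instructions",
    },
}

def relevant_skills(task):
    # Stand-in for the model's judgment: match task words to descriptions.
    words = set(task.lower().split())
    return [name for name, s in SKILLS.items()
            if words & set(s["description"].split())]

def invoke(task):
    # Only now do the chosen skills' full instructions enter context.
    return {name: SKILLS[name]["load"]() for name in relevant_skills(task)}

context = invoke("build a revenue retention deck following brand standards")
print(sorted(context))
```

Note the deliberate function call: nothing loads until `invoke` runs, and unrelated skills never enter the conversation at all.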
The architecture enables several things RAG cannot:
Unbounded context per skill: Because Skills live in a filesystem and are loaded only when invoked, a single skill can contain effectively unlimited reference materials. A complex financial modeling skill might include dozens of example models, template files, and calculation scripts. None of this consumes tokens until Claude explicitly invokes that skill.
Executable code: Skills can bundle Python scripts or other executable code that runs in Claude’s sandboxed environment. When Claude needs to validate a form or process data, it can run the skill’s code without that code consuming any context window tokens. The code executes, Claude sees only the results, and token usage stays minimal. This is impossible with passive file uploads.
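For example, a presentation skill might bundle a small checker script along these lines (entirely hypothetical); Claude runs it in the sandbox and sees only the printed result, not the code:

```python
# Hypothetical helper a skill might ship as scripts/check_deck.py.
REQUIRED_SECTIONS = ["summary", "financials", "risks", "next steps"]

def check_deck(slide_titles):
    """Report which required sections a draft deck is missing."""
    present = {t.lower() for t in slide_titles}
    missing = [s for s in REQUIRED_SECTIONS if s not in present]
    return {"ok": not missing, "missing": missing}

print(check_deck(["Summary", "Financials", "Q3 Wins"]))
```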
Composability: This is the killer feature. Claude can invoke multiple skills simultaneously and coordinate between them. Ask for a quarterly board presentation and Claude might invoke: your financial analysis skill, your presentation guidelines skill, your brand standards skill, your data visualization preferences skill, and your board reporting requirements skill—all working together in the same task. With project files, you’re hoping RAG retrieves relevant chunks from each document. With Skills, you’re explicitly combining five different expert procedures.
Token efficiency: Project files either load everything (token-intensive) or search everything (computationally expensive, may miss relevant content). Skills load only what’s needed, only when needed. A 50-skill library might represent thousands of pages of instructions and examples, but Claude starts with just the names and descriptions—maybe 1,000 tokens total. Then it loads only the 2-3 skills relevant to your specific task.
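The arithmetic, using the figures above plus an assumed size for a fully loaded skill:

```python
# Back-of-envelope token math; the per-skill sizes are assumptions.
skills = 50
tokens_per_description = 25        # roughly 20-30 per skill, resident always
always_loaded = skills * tokens_per_description

invoked = 3                        # skills actually relevant to this task
tokens_per_full_skill = 4000       # hypothetical full instruction set
task_cost = always_loaded + invoked * tokens_per_full_skill

full_load = skills * tokens_per_full_skill  # loading everything, RAG-style
print(always_loaded, task_cost, full_load)
```

Even with generous assumptions, the resident cost stays around a thousand tokens while loading the whole library up front would run into the hundreds of thousands.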
Q: When should I use Projects versus Skills?
Use Projects for accumulated context that builds over time—a product launch with evolving plans, research that builds on previous findings, a campaign that unfolds over weeks. Projects are workspaces where context persists and grows.
Use Skills for repeatable procedures you want applied consistently—your brand guidelines, your analysis frameworks, your document templates, your coding standards. Skills are methodologies that should work the same way every time, regardless of what project you’re working on.
Use both together when your work benefits from both persistent context and standardized procedures. Your product launch project might reference brand guidelines and presentation skills, while accumulating product-specific documents and decisions.
Q: Can I use the same approach in ChatGPT?
Yes, with manual invocation instead of automatic. Upload a skill zip file to a ChatGPT conversation and explicitly reference it: “Using the financial analysis framework in this file, analyze these results.” ChatGPT reads the complete instruction set and follows it, just like Claude does when it invokes a skill.
The difference is that ChatGPT won’t automatically recognize when to use the skill—you need to explicitly tell it. Claude’s automatic skill invocation is more elegant, but ChatGPT’s manual approach gives you explicit control over which instructions to apply in each conversation.
The knowledge transfer is identical. The portability is what makes skills infrastructure rather than a feature.
Q: Why does the active tool invocation versus passive retrieval distinction matter?
Because calling a skill is a deliberate, explicit action—like bringing in an expert consultant for a specific task. Searching project files is probabilistic—like rummaging through a filing cabinet and hoping the right folder surfaces.
With RAG-based project files, you’re always wondering: did it find the right information? Did it miss something important? Is it using outdated guidance because the semantic search pulled an old document?
With Skills, you see exactly which skills Claude invoked. In Claude’s thinking, you’ll see “Reading brand guidelines skill” or “Using financial analysis skill.” The invocation is transparent and deliberate. You know precisely which procedures and frameworks Claude is applying.
This transparency matters for high-stakes work. When you’re creating a board presentation or evaluating a major vendor, you need to know the AI is using your complete methodology, not just relevant-seeming chunks it retrieved from a document search.
And that’s the TLDR on Claude Skills! What will you build with Skills today?
I make this Substack thanks to readers like you! Learn about all my Substack tiers here