I saw job search advice creating more noise—so I went to first principles, rebuilt the strategy around verification, figured out AI-native tips that work, and made 5 prompts to help you get that job!
Nate

The AI job market is a crowded room where everyone is yelling to be heard.
Frankly, so is the job market as a whole.
Maybe it’s time to change strategies.
I spent the last few months studying the job market AND the advice out there for AI and jobs.
Spoiler: most of it makes that job search room more noisy.
We’re being told to AI-customize our resumes, our cover letters, our emails to recruiters.
And I get it. It makes sense to use the power of AI to help make us more packaged and polished, yeah? It’s the idea that has launched a thousand AI job startups.
But the problem is that by October 2025, the returns on that strategy are getting lower and lower. And we’re throwing more and more resources at those diminishing returns.
It’s a giant game of speed dating: we’re all yelling, and no one can figure out where their match is.
So I went back to first principles. I looked at WHY the job market isn’t working properly anymore. I analyzed over a hundred pieces of AI advice, over 1000 AI job applications that have come across my desk in the last few months, and I looked for the hidden principles that no one is talking about.
What I found is fascinating: we’re using AI all wrong.
Everyone’s using AI to generate more signals—better resumes, shinier portfolios, more applications. But when everyone can generate perfect signals at zero cost, those signals become worthless. The real opportunity isn’t generation. It’s verification.
Companies don’t need more candidates. They need to tell which candidates are real. When hiring managers can’t distinguish between AI-generated polish and genuine capability, the winner isn’t who signals hardest—it’s who makes verification easiest.
That’s a fundamentally different game. And it requires fundamentally different tools.
Below, I break down the five principles that separate candidates who get hired from those who drown in the noise. Then I give you five prompts that help you actually implement them:
- The Process Portfolio Builder - Turns your existing projects into verification artifacts that show how you think, not just what you built (because hiring managers can’t tell if that polished output came from you or ChatGPT)
- The Adaptive Competence Assessment - Creates a self-administered test that finds your actual capability ceiling by getting progressively harder until you hit your limit (way more signal than a resume that claims “expert in X”)
- The Company Problem Analysis Framework - Helps you analyze a company’s real challenges before you apply, turning your application into bilateral value creation (most companies don’t even know what they need—help them figure it out)
- The Capability Space Mapper - Maps you to problem types instead of job titles, because “AI PM” means fifteen different things and you’re getting filtered out by keyword matching that misses your actual fit
- The Verifiable Trial Proposal - Structures a paid one-week trial offer that makes saying yes trivially easy for companies (you solve their “is this person real?” problem before they’ve even interviewed you)
These five prompts translate the article’s principles into frameworks that actually help you build the stuff you should be building anyway—just faster and better structured.
If the job market is using AI as a megaphone to yell louder, we’re gonna change the strategy entirely so we’re not stuck in a rat race. Instead of yelling, we’re going to use AI to put the megaphone down, walk up to the hiring manager, and start an actual conversation.
Let’s dive in.
Grab the AI Job Search Prompts
These five prompts translate the article’s principles into something you can actually use. But they only work if you’re willing to do real work.
Here’s what I mean: you can’t feed these prompts polished resume bullets and expect magic. You need to give them honest, detailed context about your actual projects—including where you got stuck, what failed, what you’d do differently. Then you need to spend time executing what they suggest. Building the portfolio piece. Taking the assessment. Analyzing the role. Creating the work sample.
These aren’t advice prompts that give you tips to nod along to. They’re production power tools, designed to help you shape your work into meaningful evidence, the kind that makes it much easier for hiring teams to evaluate your candidacy.
I won’t hide the ball: you can’t substitute real work here. The prompts are designed to make it vastly easier to showcase the great work you already do. By building verifiable demonstrations of capability, they’ll give you positioning that ninety-five percent of candidates won’t match.
The choice is yours.
The Job Market Broke. Here’s Why—And What Actually Works Now.
===============================================================
Everyone knows LinkedIn is dead. The problem is that most of the advice I see online is still optimizing for that dead system.
I want to step back and look at the root causes of what’s happening with the AI job market collapse. Not just tactics—principles. By the end of this piece, you’ll understand what’s actually going on at a fundamental level, you’ll have a clear sense of your actionable options, and if you’re in the hiring chair, you’ll understand how to start differentiating in how you hire.
Let’s get into it.
The Core Problem: AI Made Signals Free (And Worthless)
The job market used to work because signals were expensive to produce.
A resume took time. A good, customized resume took more time. Cover letters required genuine thought. I used to be able to read a resume and sense the effort behind it. The cost worked because it separated signal from noise. Strong candidates could afford the effort to customize applications because their returns were higher. Weak candidates faced diseconomies of scale—each additional application was nearly as costly as the first.
AI has collapsed that cost to zero.
When you can write a perfect resume in three minutes and pump out ten different custom resumes at zero marginal cost, there is no information in that signal. The fancy word for this is Shannon entropy, and it’s playing out in labor markets. The less fancy way of saying it: because it costs nothing to make information, that information loses its signal value in hiring, and we’re all in trouble.
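To put my gloss in information-theory terms (this framing is mine, a back-of-the-envelope model, not a formal result): the information a hiring signal carries about candidate quality is the mutual information

$$I(Q; S) = H(Q) - H(Q \mid S)$$

where $Q$ is candidate quality, $S$ is the signal (the resume), and $H$ is Shannon entropy. When every candidate, strong or weak, can emit the same polished signal, $S$ carries no information about $Q$: $H(Q \mid S) = H(Q)$, so $I(Q; S) = 0$. A perfect resume tells the reader exactly nothing.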
That’s what we feel, right?
But here’s what’s interesting: we mostly talk about this from the talent side. A thousand applications per job sucks for everybody. The problem is, both sides right now tend to give advice that creates more noise to cut the noise. Yell louder. Put a portfolio out there. Build a social media presence. Post more job descriptions if you’re a hiring manager. It all adds up to this cacophony of noise in the AI job market.
What I want to suggest to you is that the information equilibrium that existed before 2022 is permanently gone. It is not coming back. More noise does not fix this.
In the past, strong candidates could afford the effort to rise above the noise and break through with signal. Weak candidates were limited by their ability to actually put the effort in and generate quality work. LLMs have destroyed the value of that effort from good candidates, and they make it equally cheap for everyone to produce infinite signals.
I think we have to start by just admitting: the old game that we played before 2022 is over, and we don’t know how to play the new game yet. That’s what I’m getting to here.
Both Sides Are Drowning
Every current “solution” is adding to that noise. And I want to be honest about that. When you optimize your resume with Claude, when you optimize your portfolio website with GPT, it all adds to the noise.
But here’s what most people miss: companies face exactly the same collapse.
Job descriptions are now free to generate. So companies post roles they don’t fully understand, claiming to need “AI expertise” without defining it. Posting a role is no longer a costly signal that forced them to crystallize what they actually wanted. Both sides are creating noise trying to cut through noise.
This is not a one-sided problem. Candidates can’t tell which roles are real versus noise. Companies can’t tell which candidates are real versus noise. We keep treating this as a one-sided problem when it’s fundamentally a two-sided market failure.
From Credentialing to Verification
What I want to suggest here is that we need to move from a world where information is cheap to produce for everybody to a world where we start to see verification instead of credentialing.
Credentialing is what we used to do. Credentialing is what a resume is for; it’s what certifications are for. Verification actually shows, in a provable way, that we have the skill.
I think we’re trying to make our little baby steps that way when we talk about proving work through a portfolio. But we can go a lot farther than that if we go back to first principles and actually reason this through.
Let me walk you through five principles for what a verification world looks like. Part of what has made this hard to articulate is that a marketplace like the talent market can get stuck in a bad equilibrium where every single stakeholder has an incentive to change it, but none of us can do it alone.
I want to give you tools that work even in a difficult equilibrium like the one we’re in right now. The five principles that follow are scalable: they work in the current system, and they have teeth that can get us into a better equilibrium if we all work together as a tech ecosystem.
Principle One: Process Over Outcome
Outcomes are more easily fakeable now. LLMs generate code, they generate writeups, they generate demos at scale. Process patterns are closer to that verification world. Process patterns are hard to fake.
We look for them in interviews already. The iteration cycles you took to get something done. Where you got stuck. How you debugged some vibe code. What you would do differently.
Effective LLM use, effective LLM building, effective LLM writing: it all has a shape. You iterate, you backtrack, you override. And it’s much, much easier to distinguish the shape of good LLM co-work from blind acceptance.
One of the things that we should start thinking about is making our process the product when it comes to the talent marketplace.
This has concrete implications for your portfolio. If you’re looking at your portfolio as an outcome, maybe you want to look at it as a process or a story that you’re telling where you include the debugging and the getting stuck and what you’d do differently.
The most effective portfolio site I have ever seen told a full three-year story of a product. Every stage along the way was honest about mistakes, showed failed designs. It was absolutely compelling. The process matters more than the outcome, and you can’t fake the process the way you can fake the outcome in the age of AI.
For hiring managers, this means asking for work trails, not just outcomes. Look at commit history. Ask candidates to walk you through their decision-making, not just their final code. Ask them to record themselves solving a problem with narration about why they made specific choices.
Principle Two: Make Verification Easy, Not Signals Better
Companies don’t need better candidates, actually. Most of them have all the candidates they need sitting in the applicant pool, as the applicants will tell you. It’s that they can’t tell who’s real.
Stop optimizing for better resumes and shinier portfolios in that world, because the companies won’t be able to tell the difference. Instead, start optimizing for things that are more verifiable.
Work trials where you solved a real problem—paid, one to two weeks, time-bounded, with clear success criteria. Live problem-solving videos where you record yourself solving a meaningful problem in real-time. No script, no polish, just thinking. Decision logs that document why you made specific choices, what alternatives you considered, and what trade-offs you accepted.
By the way, as a hiring manager, you should be looking at work trials. That is actually a good way to get a sense of how people work in this world, and it gives candidates something they can show.
What about live problem-solving? You can get on with a candidate and solve a problem together. That’s a great way to make this work as well. And if you’re a candidate, you don’t have to wait. You can live-solve a meaningful problem. I’ve seen people do it in videos where they get on and they say, “I took a look at your onboarding funnel. These are the three things I think I’d change. This is why. This is how I’d change it. This is how I’d test it.”
You’re showing your process, and you’re surfacing verification for the company. Companies want to do this, but they by and large don’t know how, and they’re stuck in the existing default. The goal is to shake up the status quo a little bit and get people thinking differently, because both sides need to think differently to shake this equilibrium loose.
The winner isn’t who signals hardest. It’s who makes the hiring decision easiest.
Principle Three: Use LLMs for Verification, Not Just Generation
This one’s important and I think it’s slept on. We’re using LLMs to produce cheap text—resumes, cover letters, portfolios. This creates noise. But LLMs aren’t just content generators. They’re also judges, evaluators, researchers, and verifiers. They can assess judgment, grade the appropriateness of decisions, and generate adaptive tests that extract maximum signal. We’re ignoring this capability.
Think about it: a cryptographically signed LLM conversation shows your prompt quality and your iteration pattern. You can’t fake that at scale. Now, you may not be able to cryptographically sign it because I’m not sure I know of a startup that does that yet. But you can still, right now, show your prompt quality and iteration pattern. Again, we’re going back to that process piece.
An LLM-generated adaptive assessment finds your competence ceiling efficiently. What that means is you can get the LLM to test you progressively, asking harder and harder questions. I wrote an AI fluency assessment just a few days ago on the Substack that had some of that built in, but you can go farther.
You can actually design an LLM competence assessment that asks harder and harder questions as you go, until it finds where you top out. That’s useful not just for hiring managers hunting for signal; it’s also useful on the process side, for talent to show what you’re capable of.
If you can answer the hardest, most gnarly product management questions an LLM can throw at you, in a high-quality way, after working through fifteen easy, medium, and increasingly difficult ones, that says something. Especially if the whole process is visible, so anyone can see you’re not gaming the system.
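Here’s a minimal sketch of what that loop can look like in code, just to make it concrete. This is my illustration, not one of the prompts in the pack: it assumes the official OpenAI Python client, and the model name, the PM domain, and the pass/fail rubric are placeholders you’d adapt.

```python
# Minimal sketch of an adaptive competence assessment.
# Assumptions: the official OpenAI Python client is installed and
# OPENAI_API_KEY is set; "gpt-4o" and the PM domain are placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

def ask(prompt: str) -> str:
    """One-shot call to the model; returns the text of its reply."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

transcript = []  # the full trail is the point: it's your process artifact
for difficulty in range(1, 11):
    question = ask(
        f"Write one product management interview question at "
        f"difficulty {difficulty} on a 1-10 scale. Question only."
    )
    print(f"\n[Difficulty {difficulty}] {question}")
    answer = input("Your answer: ")

    verdict = ask(
        "Grade the answer below. First line: PASS or FAIL. "
        "Then two sentences of critique.\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    print(verdict)
    transcript.append((difficulty, question, answer, verdict))

    if verdict.strip().upper().startswith("FAIL"):
        print(f"Ceiling found at difficulty {difficulty}.")
        break
```

Publish the whole transcript alongside your answers; the visible trail is what turns this from another claim into verification.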
We are overdue for using LLMs to create signal where there just hasn’t been any signal whatsoever. We’re pouring all of this energy for AI into making noise in a crowded, noisy marketplace, but there are quiet spaces where nobody’s talking at all. Why aren’t we using AIs a little bit more creatively beyond just generating resumes and cover letters?
Here’s what this looks like practically: use semantic matching on capability spaces rather than keywords. Match on problem types they need solved, not keywords in a job description. Build adaptive competence assessments that test until failure. Show your LLM conversation trails—the prompts, the iterations, where you overrode the model.
Principle Four: Bilateral Value Creation
You want to help companies verify themselves. I know this sounds funny if you’re talent—why do the companies need the help? But trust me, most companies do not know what they really need. They don’t. They’re posting LLM-generated job descriptions for fuzzy roles, and they need help to clarify in most cases.
You can interview them about the problem space. You can write analyses of their challenges. You can offer trials that validate their needs. I know people who are doing this and taking command of the hiring process, because the company is still trying to figure out the answer, and it feels really good when a talented candidate comes along and says, “Let me help you get clarity on this role. This is what you actually need.”
If you want a cheat code: a lot of senior interviews, for director-and-up roles, already look like this, because those roles are all custom-made. You end up helping the company figure out, for both of you, what it really needs in the role, and then, secondarily, whether you’re a fit.
In that situation, you’re not just proving your capability; you’re helping them understand what capability they’re looking for. That is the kind of value an AI resume can’t deliver. It reminds them that you produce something that can’t be gotten from Claude or ChatGPT, something essential in the human-to-human connection of work, which, lest we forget, is the whole point of all of this.
Here’s what this means practically: when you get that first call with a company, don’t lead with “here’s why I’m qualified.” Lead with “let me understand what you’re actually trying to solve.” Interview them. Ask about their pain points, their constraints, what they’ve tried before, why it didn’t work. Then send them a synthesis document that captures their problem space more clearly than they articulated it themselves.
That document is worth more than any resume. It shows strategic thinking, communication skill, and genuine interest in solving their actual problem. Most companies are drowning—they need help clarifying their own needs as much as they need help finding candidates.
Principle Five: Capability Spaces Over Job Titles
An AI PM means different things at different companies. We all know that, but we lack a vocabulary for the next level.
Job titles are often noise at this stage because the roles are evolving so quickly, and that’s part of what makes the talent marketplace so noisy. Instead of looking at all AI PM roles, position yourself across capability spaces.
Think about technical communication. Maybe that’s a strength for you. Think about system design under uncertainty. Think about LLM evaluation. Is that a skill you have? Think about rapid prototyping. Build a project that works across multiple capabilities. Show your process. Match on the problem types they need solved.
One of the things I think is really slept on is that we now have semantic search that lets you match on much more than keywords. And yet our entire job ecosystem still runs on keywords like it’s the 2010s. Why is that? Why can’t we have a job semantic search that matches not on keywords but on capabilities, on themes?
It’s not that hard. You can actually build one yourself. If you wanted a project: build a RAG over a listing of jobs in a particular job family, then semantically search it to see where the right role targets are. All of the tech is on the table. That is basically a weekend project at this point.
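To show how small the core of that weekend project really is, here’s a minimal sketch of the matching step. The assumptions are mine: the sentence-transformers library, the all-MiniLM-L6-v2 model, and made-up capability statements and job snippets. A real version would add scraping and a vector store.

```python
# Minimal sketch of capability-based job matching.
# Assumptions: the sentence-transformers library; the model name and
# all capability/job text below are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Capability statements, not job titles.
capabilities = [
    "explain LLM trade-offs to non-technical stakeholders",
    "design evaluation frameworks for ambiguous product requirements",
    "rapidly prototype AI features under uncertainty",
]

# Snippets pulled from real postings would go here (placeholders).
jobs = {
    "AI PM, Acme": "own model evaluation and communicate results to execs",
    "Platform PM, Foo": "ship dashboards and coordinate release schedules",
}

cap_vecs = model.encode(capabilities, convert_to_tensor=True)
for title, snippet in jobs.items():
    job_vec = model.encode(snippet, convert_to_tensor=True)
    # Score the posting by its best cosine match against any capability.
    score = util.cos_sim(cap_vecs, job_vec).max().item()
    print(f"{score:.2f}  {title}")
```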
You can transcend the title-matching game entirely with work like that. And whether you build a RAG for your personal job search or I’m inspiring someone else to build the platform, the larger point is this: think in capability spaces. Think in terms of the capability sets you can show and how you can lay out your process transparently. Then you get into a space where you can show what you know in a way that’s provable.
Here’s what this means practically: instead of targeting “AI Product Manager” roles, create a capability map. What problems can you solve? Technical communication to non-technical stakeholders. LLM evaluation methodology. Rapid prototyping in ambiguous domains. System design under uncertainty. Then build one project that demonstrates all of these. Your portfolio becomes capability proof, not job title matching.
Why This Gets Better As The Market Gets Worse
As more LLMs create more noise, as the crowd rushes to have LLMs generate resume after resume and AI interview answer after AI interview answer, verification is only going to become more valuable, not less.
The tactics I’m laying out here are designed to have increasing returns. The more the market breaks, the bigger your advantage from making vetting easier, because that is the core problem companies are facing.
I don’t want to give you principles here that require everyone who is listening to yell louder and compete with each other. Instead, I want to give you things that let you zig when the market is zagging. And right now, the market is zagging hard toward yelling in a noisy marketplace with AI.
With these kinds of moves, you are building toward a new equilibrium while everyone else is clinging to the old one. And that gap is going to widen with time.
The LLM noise crisis is not going away. I said at the top that this is a permanently broken system. It’s broken not because of anybody’s bad intent, but because LLMs have permanently reset the cost of this kind of information to zero.
This is not really advice for navigating a broken system. It is positioning you for the future system that will replace it, and it sets you up to work well even now, in a system that is not quite ready to reach that new equilibrium. These are principles for bridging: how do we succeed now, zigging while the market zags, and how do we build toward a better equilibrium?
What This Means For You Tomorrow
If you’re looking for work in AI, here’s where to start:
Pick one project you’ve built. Add decision logs showing your thinking—what you tried first, where you got stuck, what you’d do differently. Record yourself walking through it for fifteen minutes, narrating your trade-offs. Not a polished demo. A thinking-out-loud session.
Then identify one company you want to work with. Study their product, their blog, their recent launches. Write a short analysis of their strategic challenge. Not a resume. Not a cover letter. An actual analysis that shows you understand their problem space. Send it to them with an offer: “Here’s my thinking. If this resonates, I’d be happy to do a one-week paid trial on [specific problem].”
That’s verifiable. That makes their vetting problem trivial. That positions you in Track One (high-trust) or Track Two (verification), not Track Three (chaos).
If you’re hiring in AI, here’s where to start:
Stop optimizing job descriptions. Start documenting real problems you need solved. Not “AI engineer with 5 years experience.” But “We need someone who can build LLM evaluation frameworks for ambiguous product requirements. Here’s the current situation, here’s what we’ve tried, here’s what we need.”
Then structure your hiring around verification, not credentialing. Offer paid trials—one to two weeks, clear deliverable, real problem. Ask for work trails in portfolios—commit history, decision documentation, evidence of iteration. Use live problem-solving sessions instead of whiteboard interviews. Watch how candidates think, not what they produce.
And be honest about what you don’t know. Most companies posting “AI PM” roles don’t actually know what they need. Say that. “We’re figuring this out. If you can help us clarify what this role should be, that’s valuable.” The candidates who can do that are the ones you want anyway.
The Bottom Line
Information became free in the last two years. Verification became priceless.
The job market isn’t coming back to the old equilibrium. The companies that figure out verification first will have ten-times better hiring outcomes. The candidates who make verification easy will bypass the noise everyone else is drowning in. The platforms that build verification infrastructure will capture the market.
Everyone else is still playing the old game, which is already over.
Think about that.
I make this Substack thanks to readers like you! Learn about all my Substack tiers here
