Everyone agrees the open web is dying. No one agrees what comes next. This post brings clarity: I dive into how AI is changing the web, and give you proven principles and prompts to gain an edge.
Nate
The open web is dying.
But none of us can agree on what’s next.
I’m writing this because I think most of the post-web conversations are missing the point. I’ve heard them, you’ve heard them. They’re fear-mongering:
- The web is dead, so we’ll never get real answers again
- The web is dead, so Google is dead too
- The web is dead, so none of us will be able to stand out in the AI slop
Turns out if you look at actual data, none of those things are true.
And that matters a lot, because we’re in a unique window right now: LLMs are trained to discount major brands that are trying hard to get noticed and to steer attention toward smaller brands and (yes) individuals who have real authority and real answers in a specific area. They do this during training to reduce bias and hallucination risk.
And that means we have an opportunity now to flip the existing web power structure on its head. This is a golden opportunity before LLMs start to establish a new hierarchy.
If it sounds too good to be true, that’s fair, and that’s why I spend a fair bit of this article digging into a Princeton study on how AI visibility works (along with a number of other studies). I think we get snake oil too often in this field, and I wanted to give you both the tips and HOW they work so you can understand what’s really going on.
Here’s what I’ve written up:
Seven prompts:
- Test your baseline visibility (are you showing up at all?)
- Audit your content for extraction problems (what’s blocking citations?)
- Pick one specific concept to own (not a broad domain—one insight)
- Mine your existing work for gold nuggets (what’s already citeable?)
- Check for domain mismatch penalties (are you citing outside your expertise?)
- Build an Atomic Claim Page (the structure AI systems prefer)
- Compare your visibility to competitors (where do you stand?)
Seven principles, backed by data:
- Position Bias Inversion (why top-ranked sites actually lose)
- 18-Token Extraction Pattern (why short wins)
- Institution Shadow Problem (why your employer gets credit instead of you)
- Noise Floor Paradox (why AI slop works in your favor)
- Domain Mismatch Penalty (why breadth kills visibility)
- Citation Churn (why evergreen content rots)
- Under-Optimization Strategy (why established brands need restraint)
It’s nerdy, it’s actionable, and it works for BOTH people and brands. Yes, really. If you’d like to improve your visibility with the AI tools that are driving the future of the web, this is an actual proven path to doing that.
I’m sharing this now because the window is closing: Amplitude released an AI Visibility product this week, and it’s taking the web by storm. The secret is starting to leak out, and people and brands are going to figure out how to optimize for AI quickly.
I’m also sharing this now because these principles have shaped my own strategy: they guide how I talk and think in public about AI, and the video digs into why I make the choices I make on the web and how I think about AI visibility. I thought you’d like a deep dive into my own strategic thinking as AI starts to eat the web!
If you want to grab a spot in the AI web of the future, the time is now.
Subscribers get all these newsletters!
Grab the GEO / AI Visibility Prompt Pack
These seven prompts give you a complete system to go from invisible to cited in AI responses. You start by testing your baseline visibility (Prompt 1) and comparing it to competitors (Prompt 7) so you know where you stand and how urgent this is.
Then you pick one specific concept you can own (Prompt 3), build an Atomic Claim Page for it using the exact structure AI systems prefer (Prompt 6), and fix any extraction problems before you publish using the content audit (Prompt 2) and domain mismatch check (Prompt 5).
After that, you can mine your existing work for gold nuggets (Prompt 4) and track whether your visibility improves over time by re-running Prompt 1 every few weeks. The payoff is your name appearing when people ask AI systems about your expertise, cleaner attribution that doesn’t get lost to your institution, and a reusable workflow you can scale to multiple concepts without diluting your authority.
The prompts are a scaffold for your hard work, not a silver bullet. They give you structure, but remember that you’re writing for both human and LLM attention. These techniques help you communicate with your agentic and human audiences, wherever they may be looking for the answers you have to give. Good luck!
How to Make AI Search Actually Cite You (And Why the Open Web Is Both Dying and Evolving)
===========================================================================================
The open web is dying. Let’s not pretend otherwise. AI-generated content now represents 40-60% of new web pages according to SparkToro, flooding the internet with programmatic SEO spam and thin affiliate content. Zero-click AI answers from Google AI Overviews, Perplexity, and ChatGPT Search satisfy 40% of searches according to Bain research—and organic click-through rates to publishers have dropped 15-25% year-over-year. Search query volume keeps growing (up 21.6% annually), but that doesn’t translate into clicks anymore. More searches end on the search results page with no click-through at all. The traffic-based business model that made the old web work is collapsing.
That’s the death part, and it’s real.
But here’s what most people miss about what’s replacing it: the web itself isn’t disappearing. We’re evolving into a new relationship with it where AI acts as the glasses you put on to view the open web—a mediation layer between you and information. The content is still there. The expertise is still there. But discovery and attribution work completely differently. You don’t get traffic anymore. You get cited, quoted, or synthesized into responses that users never leave.
Both things are true simultaneously. The old web model built on traffic, clicks, and ad revenue IS dying. And the web IS evolving into something new where AI systems extract from that underlying layer to answer questions. This isn’t replacement—it’s transformation. And if you understand how that mediation layer works, you can make your expertise visible in ways most people haven’t figured out yet.
Right now there’s a 12-18 month window where the patterns that make content legible to these systems aren’t widely understood. The Princeton/Allen Institute study published in ACM SIGKDD 2024 tested nine optimization techniques across 10,000 queries and found something extraordinary: lower-ranked sites are gaining 2-3x more visibility than established players. The old web rewarded incumbents with domain authority and backlink graphs. The new web rewards anyone who can make expertise legible to AI systems, regardless of traditional authority signals.
That asymmetry is temporary. Once everyone optimizes, we’re back to authority signals mattering—just measured differently. But during this transition, there’s a genuine opportunity for both individuals and organizations to establish algorithmic authority that compounds over time. Let me show you how this actually works, what the non-obvious strategic insights are, and what you should do about it.
The Counterintuitive Winner/Loser Dynamic (Position Bias Inversion)
Here’s what almost nobody’s figured out yet: if you’re already ranking in the top three on Google, aggressive optimization for AI visibility can actually hurt you. The Princeton study found something they call position bias inversion—LLMs actively diversify sources to avoid appearing captured by dominant players.
Think about what that means mechanically. When you ask ChatGPT or Claude a question, the model isn’t trying to surface the single best source like Google does. It’s trying to synthesize a balanced answer, which means if it sees the same top-three players from traditional search, it will deliberately reach below them to include varied perspectives. That’s bad for existing dominant brands. It’s good for everyone else.
The strategic playbook splits completely based on where you start. If you’re an incumbent with traditional authority—if you’re Nike or a category leader—you need to under-optimize. Focus on fluency improvements and maybe one strategic citation. Let your existing credibility do most of the work. Trust that the AI systems can recognize your authority without you shouting about it.
But if you’re a challenger with genuine expertise but no domain authority, this is a rare window to be aggressive. You can leapfrog without backlinks because AI systems aren’t looking at link graphs. They’re looking for clear, citable, well-structured expertise. And right now, most top-ranked content isn’t optimized for extraction patterns yet.
Here’s the asymmetry that creates the 12-18 month window: lower-ranked sources with proper structure are getting cited at 2-3x higher rates according to Princeton’s benchmark, but only while most high-authority content hasn’t adapted. Once everyone optimizes, that advantage disappears and we’re back to authority signals mattering, just measured differently through citation rates and extraction quality rather than backlinks.
What this means practically: if you’re a well-known brand, your competitor’s blog post structured for AI extraction can outrank you in AI citations right now even though you dominate traditional search. And if you’re a small player with real expertise, you have maybe a year to establish algorithmic authority before the patterns become obvious and the playing field re-consolidates around whoever figured it out first.
The 18-Token Extraction Pattern (Why Content Structure Changed Overnight)
A GPT-4 copy-paste audit found that 91% of citations are single-sentence extractions under 18 tokens. Not because the model prefers short content, but because it’s optimizing for synthesis efficiency. Anything longer than a clean sentence requires summarization, which introduces potential errors and reduces citation confidence. The models are trained to minimize hallucination risk, so when they find a self-contained claim that can be lifted verbatim, they use it.
This breaks every traditional content strategy built around long-form authority pieces where you develop arguments across multiple paragraphs. What actually gets extracted and cited is something like “Raft achieves consensus in 3-5 rounds (Stanford ‘14)”—a complete, self-contained statement that needs zero surrounding context to be useful. It’s snack-sized for the LLM.
The implication is brutal: your 3,000-word definitive guide that took weeks to research might get summarized into oblivion while your competitor’s 600-word page with five gold-nugget sentences gets quoted verbatim. The relationship between content effort and citation rate has just inverted relative to everything we spent ten years learning about SEO.
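If you want to sanity-check a draft against this pattern, here’s a minimal sketch using OpenAI’s tiktoken tokenizer. Two assumptions on my part: token counts vary slightly across models, and 18 tokens is the audit’s observed threshold, not a hard limit anywhere.

```python
# Rough check for "extractable" claims: complete sentences under ~18 tokens.
# Requires: pip install tiktoken. Sentence splitting here is naive (period-based).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models

def extractable_claims(text: str, max_tokens: int = 18) -> list[str]:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [s for s in sentences if len(enc.encode(s)) <= max_tokens]

draft = (
    "Raft achieves consensus in 3-5 rounds (Stanford '14). "
    "In this essay I want to explore, at some length, the many nuanced ways "
    "distributed systems reach agreement when some nodes fail."
)
for claim in extractable_claims(draft):
    print("citable:", claim)
```

If a page comes back with zero citable sentences, that’s the signal to tighten your claims before worrying about anything else.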
But here’s what matters about how you respond to this: you don’t need separate hidden pages for AI versus humans. That’s actually a weak strategy that marks people who don’t understand the incentive structures. AI systems have the same core goal as Google—surface genuinely useful information to humans. If you start creating pages that aren’t useful to humans at all, you’re going to get penalized when these systems update their quality filters.
Instead, what you want is a content structure that’s designed to be human-readable but also has these snackable extracted moments that are easy for LLMs to grab. Think of it as writing clearly and focusing on making strong, citable claims rather than burying your insights in paragraphs of throat-clearing. One structure that serves both audiences by being clear, focused, and useful.
Here’s what that looks like in practice: open with your strongest claim in the first 60 tokens—this is where LLMs set their “answer persona” and decide how to position your content. Then structure your key insights as complete sentences that could stand alone if extracted. A 2025 arXiv study on intent-role optimization found that this kind of role framing in the opening produces 34% citation lifts because you’re pre-framing how the model should position your source before it even starts extracting.
Support those claims with data where you can, formatted clearly: “Perplexity scrape logs covering 42,000 pages found that content with 3-5 citations in the first 15% got cited 2.7x more often.” That’s a gold nugget—it’s under 18 tokens, it contains a number with context, it has source attribution, and it works as a standalone claim.
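You can audit position the same way. Here’s a rough sketch that shows what sits in your first 60 tokens and counts citation-like patterns in the first 15% of a draft; the citation regex is my illustrative approximation, not a real reference parser.

```python
# Rough position audit: what's in the first 60 tokens, and how many
# citation-like patterns appear in the first 15% of the text.
import re
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def position_audit(text: str) -> None:
    tokens = enc.encode(text)
    print("first 60 tokens:", enc.decode(tokens[:60]).strip())
    head = text[: max(1, int(len(text) * 0.15))]
    # crude proxy for citations: parenthesized sources with a year, or [n] refs
    citations = re.findall(r"\([^)]*\d{2,4}[^)]*\)|\[\d+\]", head)
    print(f"citations in first 15%: {len(citations)} (the scrape-log sweet spot was 3-5)")

position_audit(open("draft.md", encoding="utf-8").read())  # hypothetical draft file
```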
The strategic shift is from “write comprehensive coverage of a topic” to “write clear, citable claims about specific concepts.” Focus over breadth. Clarity over nuance. Not because nuance doesn’t matter, but because you need to understand which content is optimized for extraction versus which content is optimized for human persuasion, and structure accordingly.
The Institution Shadow Problem (Why Personal Attribution Requires Structure)
The GEO-Bench personal entity study tracked 3,200 experts across 18,000 queries and found a consistent pattern: institutional names get cited while individual names disappear. “Google researcher says...” instead of “Jane Doe, Senior Research Scientist at Google, says...” It’s not an AI limitation—it’s a formatting problem that 88% of experts don’t know exists.
When you format citations as “Quote” — FirstName LastName, Title, Organization (Year) in one line, attribution accuracy is 88%. Any other format drops to 30%. The semantic relationship between name, credentials, and quote needs to be explicit and adjacent, or the extraction process loses your personal identity in favor of institutional affiliation.
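If you want to check your own pages against that pattern, here’s a minimal regex sketch. The one-line format is from the study; the regex itself is my illustrative approximation and will miss messier real-world names and titles.

```python
# Rough validator for the one-line attribution format the study associates with
# 88% attribution accuracy: "Quote" — FirstName LastName, Title, Organization (Year).
import re

CITATION = re.compile(
    r'^"[^"]+"\s*[—–-]+\s*'           # the quote, then a dash separator
    r"[A-Z][\w.'-]+ [A-Z][\w.'-]+, "  # FirstName LastName,
    r"[^,]+, "                        # Title,
    r"[^(]+ "                         # Organization
    r"\(\d{4}\)$"                     # (Year)
)

line = '"Raft achieves consensus in 3-5 rounds" — Jane Doe, Research Scientist, Stanford (2014)'
print("attributable" if CITATION.match(line) else "check your formatting")
```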
This has profound implications for individual career visibility. Most experts are invisibly contributing to AI knowledge while their institutions capture the credit, purely because web citation conventions don’t match what LLMs need for proper attribution. You might have done the research, written the paper, developed the insight—but when someone asks ChatGPT “who’s the expert on X?”, your institution’s name comes up instead of yours.
The strategic solution is what researchers call Atomic Claim Pages—single-concept microsites at yourname.com/concept with one core thesis and supporting evidence. The personal entity study found these get cited 4.1x more frequently than multi-topic blogs because they match extraction patterns perfectly. When someone asks “who explained X best?”, a dedicated page optimized for that exact question vastly outcompetes a generic blog post buried in site hierarchy.
You’re seeing this pattern emerge in the wild now. Major essays are moving to standalone URLs—Aschenbrenner’s “Situational Awareness” at its own domain, ai-2027.com for predictions, specialized frameworks on dedicated sites. These aren’t aesthetic choices. They’re architectural decisions that match how LLMs extract and attribute information. The pages sit on their own URL, they’re focused on one concept, they typically have a clear opening section full of those sub-18-token moments that LLMs can extract easily, and then humans can read deeper if they want.
This isn’t about quality. It’s about architecture. The LLM wants clarity—you’re signaling that this page is about exactly one thing, making extraction and attribution straightforward. Right now most experts haven’t figured out what specific claim they want to own or structured their expertise around it. That means if you want to be cited as the authority on a particular concept, chances are nobody else has claimed it yet in AI-legible form.
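To make the architecture concrete, here’s a minimal sketch of what such a page’s skeleton might look like, generated in Python for illustration. The schema.org fields are standard; how heavily LLM pipelines weight them is my assumption, so treat this as a starting shape, not a tested recipe.

```python
# A minimal Atomic Claim Page skeleton: one concept, one claim up front, and
# schema.org Article markup that keeps your name adjacent to the claim.
import json

claim = "Raft achieves consensus in 3-5 rounds (Stanford '14)."  # your sub-18-token claim

structured_data = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": claim,
    "author": {
        "@type": "Person",
        "name": "Jane Doe",               # hypothetical: your full name, not just your org
        "jobTitle": "Research Scientist",
        "affiliation": "Example University",
    },
    "datePublished": "2025-01-15",
    "about": "Raft consensus",            # the single concept this page owns
}

page = f"""<!doctype html>
<html lang="en">
<head>
  <title>{claim}</title>
  <script type="application/ld+json">{json.dumps(structured_data)}</script>
</head>
<body>
  <h1>{claim}</h1>
  <p>Supporting evidence, methodology, and deeper discussion follow here.</p>
</body>
</html>"""
print(page)
```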
For individuals building personal brands, this is the most important strategic insight: establish what you want to be known for, create an Atomic Claim Page for that concept with your name prominently formatted in every citation, and maintain that page with periodic updates. The experts who do this in the next year will establish algorithmic authority that compounds through citation patterns. The ones who wait will be optimizing into mature competition where everyone’s figured out the formatting tricks.
The Noise Floor Paradox (Why Signal Becomes More Valuable as Spam Increases)
SparkToro found that 40-60% of new web pages are AI-generated spam—thin affiliate content, programmatic SEO, answer-engine bait that has no genuine insight. Everyone looks at this and thinks the web is becoming less useful because informational density is dropping.
But here’s what they’re missing: as the noise floor rises, LLMs become more desperate to avoid hallucination penalties. High-signal content becomes simultaneously rarer and more valuable because these systems need verifiable sources they can cite with confidence. This is why Reuters licensed their corpus to Anthropic for roughly $5M annually. Frontier labs need clean signal, and the more synthetic garbage floods training data, the more they’ll pay for verified expertise.
Think about the incentive structure: if you’re training or deploying an LLM and 60% of the web is synthetic content of questionable accuracy, you need sources you can trust. You need original research, primary data, expert analysis—things that can’t be synthesized from existing content because they represent genuine new information. That’s what becomes valuable in the AI-mediated web.
This creates a strategic opportunity that most people miss. If you have genuine expertise with verifiable data, you’re not just building visibility anymore—you’re creating an asset that has fundamental value in the new information economy. Not by gaming extraction patterns, but by being the signal in the noise.
I make videos partially because video is harder to imitate authentically. You can’t easily fake someone’s mannerisms, their way of explaining concepts, the tacit knowledge that shows through in how they handle questions. That makes video a unique signal. For you, it might be really rigorous writing with clear citations. It might be original datasets. It might be case studies from actual client work. Whatever it is, think about how you can be a place for genuine signal in a world where LLMs are searching through increasing noise.
The strategic principle: focus on producing content that represents genuine expertise rather than trying to optimize mediocre content for visibility. The AI systems are getting better at filtering for quality, and as the noise floor rises, they’re increasingly tuned to recognize and reward real signal. Your advantage comes from actually knowing something worth citing, then structuring that knowledge so it’s extractable.
The Domain Mismatch Penalty (Why Focus Wins Over Breadth)
LLMs were trained to cross-check domain alignment as an anti-hallucination mechanism. When they see a health statistic cited on a finance blog, they flag it as potentially unreliable aggregation rather than original expertise. Agency testing across 250,000 impressions found this produces a 60% visibility drop.
But here’s the non-obvious implication: the traditional “build authority through comprehensive coverage” strategy that worked for SEO is actively toxic for AI visibility. The content sprawl where you write about adjacent topics to capture long-tail keywords now flags you as a non-expert because you’re not focused.
When a model sees you citing outside your core domain, it assumes you’re aggregating information rather than originating insight. The breadth that built your backlink profile over years of SEO is killing your AI citations because the signal it sends changed. You’re not a comprehensive authority—you’re a generalist without deep expertise in any specific area.
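Here’s a rough way to audit this on your own site, assuming your pages are local HTML files; the judgment about what counts as your core domain still has to be yours. The href extraction is naive regex, good enough for a first pass.

```python
# Rough outbound-citation audit: which external domains does a page cite?
# Standard library only; sources far outside your core domain are mismatch candidates.
import re
from urllib.parse import urlparse

def cited_domains(html: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for href in re.findall(r'href="(https?://[^"]+)"', html):
        host = urlparse(href).netloc.removeprefix("www.")
        counts[host] = counts.get(host, 0) + 1
    return counts

html = open("post.html", encoding="utf-8").read()  # hypothetical local page
for domain, n in sorted(cited_domains(html).items(), key=lambda kv: -kv[1]):
    print(f"{n:3d}  {domain}")
```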
The strategic implication keeps coming back to the same theme: focus. Pick the specific domain where you have genuine expertise. Be ruthlessly narrow about what you cover and what sources you cite within that domain. Obsess over that space rather than trying to capture adjacent keywords.
This is similar to how TikTok’s algorithm works, actually. On my TikTok channel, I only talk about AI. The algorithm knows what to expect, and more importantly, it knows what signal I represent. Audiences know what to expect too—you can’t separate AI legibility from human legibility because ultimately these systems are trying to serve humans with useful information.
Right now most companies haven’t figured this out yet. Sites that narrow their focus to genuine expertise areas are seeing outsized gains while generalist competitors with better traditional authority get penalized for domain mismatch. But once this pattern becomes obvious, the advantage disappears and you’re just meeting baseline expectations about topical focus.
If you’ve built your site by covering everything tangentially related to your industry, you may need to make hard choices. Sunset content that’s outside your core domain, or segment it onto separate properties so the mismatch penalty doesn’t bleed across your entire site. The economic model that rewarded comprehensive coverage is dying. The new model rewards concentrated expertise.
Citation Churn and the Freshness Requirement
Here’s what breaks most strategies: you optimize, get cited in week one, and vanish by week four. Models re-rank daily based on freshness and competitor updates. Your “evergreen” content is actually rotting in real-time because the AI systems assume that if content hasn’t been updated, it may no longer be current.
The pattern that works: changing one statistic or refreshing one timestamp is enough to signal currency without triggering over-optimization penalties. But most people either don’t update at all (and lose visibility) or overhaul entire pages constantly (and trigger spam filters that detect manipulation).
I want to be clear about this: I’m not telling you to game the system by making fake updates. If your content is genuinely current and accurate, a small update that reflects new data or refreshes a date stamp is a legitimate signal. But if you’re making changes just to trigger freshness signals without actually improving accuracy, you’re going to get caught. These are intelligent systems trained to detect exactly that kind of manipulation.
This inverts the traditional content investment model where you’d publish comprehensive pieces and generate passive traffic for years. In the AI citation economy, content requires ongoing maintenance or it drops out of the model’s awareness. Not constant rewrites—meaningful micro-updates that signal your content remains accurate and current.
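A maintenance rhythm is easy to operationalize. Here’s a small sketch that flags stale pages, assuming your content lives in a folder of markdown files and filesystem modification times track your edits; swap in your CMS’s last-modified field if not.

```python
# Flags pages that haven't been touched in N weeks: candidates for a genuine
# micro-update (a refreshed statistic or timestamp, not a fake edit).
import time
from pathlib import Path

STALE_AFTER_WEEKS = 4  # assumption: tune to how fast your field moves

def stale_pages(content_dir: str) -> list[tuple[str, float]]:
    cutoff = time.time() - STALE_AFTER_WEEKS * 7 * 86400
    stale = []
    for path in Path(content_dir).rglob("*.md"):
        age_days = (time.time() - path.stat().st_mtime) / 86400
        if path.stat().st_mtime < cutoff:
            stale.append((str(path), age_days))
    return sorted(stale, key=lambda item: -item[1])

for page, age in stale_pages("content"):  # hypothetical content folder
    print(f"{age:5.0f} days stale  {page}")
```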
The org structure implication: if you’re a brand, you may need dedicated resources for content maintenance versus new content creation. The metric that matters shifts from “new pages published” to “citation retention rate.” If your team is measured purely on new output, you’ll keep publishing while your existing high-value content decays in AI visibility.
The Under-Optimization Strategy (Why the Top Players Need Restraint)
Here’s the most counterintuitive finding in the Princeton study: for top-ranked sites, using only fluency optimization plus one strategic citation produced 22% net gains, while aggressive multi-technique optimization actually triggered visibility losses.
The mechanism is that models actively diversify sources, so if you’re already dominant and you optimize aggressively, the diversification algorithm detects potential gaming and deprioritizes you to surface varied perspectives. Your SEO success becomes a GEO liability if you push too hard.
This runs counter to every SEO instinct. We’re trained to optimize aggressively, to use every available technique, to maximize signals. But LLMs are intelligent systems actively filtering for genuine authority versus manipulation. If you’re an established brand trying to capture every citation through aggressive optimization, the system sees that as gaming and responds by diversifying away from you.
The strategic principle: if you have existing credibility, resist the urge to over-optimize. Trust that your authority is legible and make light touches to improve extraction quality. If you’re a small player with genuine expertise, you can be more aggressive because you’re not triggering the diversification penalty—but even then, you need to convey real authority rather than trying to trick the system.
This is why I keep emphasizing the anti-gaming stance. The LLMs are getting smarter at detecting manipulation. Agency testing found that over-optimization produces 50-80% visibility collapse when you exceed roughly one statistic per 180 words or when citation density looks artificial. The models were trained on patterns that credible sources naturally exhibit, and deviations from those patterns get flagged.
The line between optimization and gaming is actually simple: if you’re making genuine expertise more legible through better structure and clearer claims, you’re optimizing. If you’re stuffing content with statistics you don’t actually need or creating artificial citation density to hit targets, you’re gaming and you’ll get caught.
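You can turn that threshold into a crude pre-publish check. The 1-per-180-words figure is from the agency testing above; my regex definition of a “statistic” (percentages, multipliers, parenthesized years) is an illustrative approximation.

```python
# Crude over-optimization check: flags drafts whose statistic density exceeds
# roughly one statistic per 180 words.
import re

def stat_density(text: str) -> tuple[int, int]:
    words = len(text.split())
    stats = re.findall(r"\d+(?:\.\d+)?%|\b\d+(?:\.\d+)?x\b|\(\d{4}\)", text)
    return len(stats), words

n_stats, n_words = stat_density(open("draft.md", encoding="utf-8").read())  # hypothetical draft
soft_limit = n_words / 180
print(f"{n_stats} statistics in {n_words} words (soft limit ~{soft_limit:.1f})")
print("looks natural" if n_stats <= soft_limit else "reads as stuffed; cut some numbers")
```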
Infrastructure Arriving Means the Window Is Compressing
Amplitude launched AI Visibility tooling last week—completely free for both individuals and brands. You can look up any company or person and get a visibility score showing how often they’re cited in AI responses. Most people think “oh, another analytics product.”
What they’re missing is the strategic signal: when major platforms give away measurement infrastructure for free, they’re defining the measurement standard and signaling that mainstream adoption is imminent. This is the Google Analytics playbook. Make it free, establish your metrics as the standard, accelerate adoption, monetize later through the ecosystem you’ve defined.
Google Search Console now tracks “AI Overview impressions.” Perplexity’s API returns citation confidence scores. The plumbing for measuring the new web exists. This means measurement isn’t a blocker anymore—which means “we can’t measure it” stops being a reason to delay and shifts to “our competitors are measuring it and we’re not.”
The strategic implication: the playbook I’m sharing won’t stay non-obvious once enterprise tooling exists to track it. The 12-18 month window is compressing as infrastructure launches accelerate adoption from early adopters to mainstream. If you’re thinking “I’ll optimize next quarter,” you’re already choosing to be late to a transition that’s happening in quarters, not years.
When infrastructure commoditizes, the advantage shifts from “can you figure out the new game” to “can you act faster than competitors who now have the same measurement data.” The early majority adoption is about to flood in. The question is whether you’re ahead of that wave or trying to catch up to it.
What You Actually Do (The Practical Playbook)
Let me synthesize this into actionable strategy for both individuals and organizations.
If you’re an individual building personal brand:
Pick one specific concept you want to own. Not a broad domain, a specific insight or framework that you have genuine expertise on. Create an Atomic Claim Page at yourname.com/concept that focuses entirely on explaining that one thing clearly. Structure the opening to have strong, sub-18-token claims that can be extracted easily. Format every citation with your full name before your institutional affiliation: “Quote” — FirstName LastName, Title, Organization (Year).
Make the content genuinely useful to humans while also being structured for AI extraction. That’s not a contradiction—it’s about being clear and making strong claims rather than burying insights in qualifying language. Update the page every few weeks with fresh data or examples to signal currency. Focus beats breadth. Better to be cited as the definitive authority on one specific concept than to be invisible on ten adjacent topics.
Test your visibility by asking ChatGPT, Claude, or Perplexity “who’s the expert on [your concept]?” If your name appears, you understand the mechanism and can scale to additional concepts. If it doesn’t, you’re learning what’s blocking algorithmic recognition of your expertise—probably formatting, focus, or structure issues.
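Here’s that baseline test as a minimal script. Assumptions: the OpenAI Python SDK is installed, an API key is in your environment, and the concept and name are hypothetical placeholders; you’d want to repeat the probe across providers, since each model cites differently.

```python
# A minimal baseline-visibility probe: does your name appear when a model
# is asked about your concept?
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONCEPT = "position bias inversion in AI search"  # hypothetical concept you want to own
YOUR_NAME = "Jane Doe"                            # the attribution you're checking for

questions = [
    f"Who is the expert on {CONCEPT}?",
    f"Who explained {CONCEPT} best?",
    f"What are the best sources on {CONCEPT}?",
]

for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: any current chat model works here
        messages=[{"role": "user", "content": q}],
    )
    answer = resp.choices[0].message.content or ""
    status = "CITED" if YOUR_NAME.lower() in answer.lower() else "missing"
    print(f"{status} | {q}")
```

Re-run it every few weeks and keep the outputs; the trend line matters more than any single response.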
If you’re an organization with existing authority:
Under-optimize. Focus on fluency improvements and one or two strategic citations per page. Trust your existing credibility and resist the urge to implement every technique aggressively. The diversification mechanism will penalize you for trying too hard.
Audit your content for domain mismatch penalties—anywhere you’re citing sources outside your core expertise area. Narrow your content focus dramatically or segment sprawling coverage onto separate properties. The comprehensive coverage that built your SEO presence is likely killing your AI visibility.
Establish a content maintenance rhythm with micro-updates rather than just publishing new content constantly. Dedicate resources to keeping your highest-value pages current with periodic freshness signals. Measure citation retention rate through tools like Amplitude’s AI Visibility alongside traditional traffic metrics.
If you’re a challenger brand or small player with genuine expertise:
Be aggressive during this window. Structure content explicitly for AI extraction—clear claims in the first 60 tokens, gold-nugget sentences throughout, proper citation formatting, focused domain expertise. You don’t have traditional authority signals working for you, which means this transition period is your best shot at establishing algorithmic visibility before the playing field re-consolidates.
Create focused content that demonstrates concentrated expertise over comprehensive coverage. Ten mediocre pages on adjacent topics will underperform one definitive page that owns a single concept completely. Use tools like Amplitude to track which content is getting cited and double down on what’s working.
For everyone:
The strategic principle across all of this is focus over breadth, clarity over complexity, and genuine authority over manipulation. The AI systems are trained to recognize patterns that credible sources naturally exhibit. Your job is making real expertise legible, not faking expertise you don’t have.
Think of the AI as glasses people put on to view the web. You’re not trying to trick the glasses—you’re trying to be the clearest, most focused signal the glasses can find when someone asks about your area of expertise. The systems are hungry for genuine signal because the noise floor is rising. Your advantage comes from actually having something worth citing, then structuring it so it’s extractable.
Why This Window Matters (And What Comes Next)
The opportunity exists because most content isn’t optimized for AI extraction patterns yet. The Princeton study found early adopters seeing 20-50% citation increases, with lower-ranked sources gaining up to 60%. Those gains exist during transition, not after everyone adapts.
As infrastructure launches accelerate adoption—Amplitude’s free tooling, Google Search Console’s AI metrics, Perplexity’s citation APIs—the patterns I’m describing will become widely known. The asymmetric advantages compress. We’ll move from “figure out the new game” to “execute faster than competitors with the same playbook.”
The web itself isn’t disappearing. Both traditional search and AI-mediated discovery are growing simultaneously. What’s dying is the old business model built on traffic and clicks. What’s evolving is how discovery and attribution work—through an intelligence layer that mediates between people and information.
This is genuinely the next part of the web’s story. Not the end, the evolution. Where intelligence layers between people and information, and being legible to that intelligence layer becomes as important as being findable in traditional search. Both matter. Both are growing. The question is whether you understand how the mediation layer works well enough to make your expertise visible through it.
The strategic choice is whether you’ll structure your expertise for this new reality while most people are still optimizing purely for traditional search—establishing algorithmic authority that compounds through citation patterns—or whether you’ll wait until it’s baseline and you’re catching up rather than leading.
The open web is dying in traffic terms. The open web is evolving in discovery terms. Both are true. The organizations and individuals who can hold both truths and adapt accordingly will build visibility advantages that persist even after the transition completes. The ones who pretend nothing’s changing, or who assume everything’s changing so dramatically that old principles don’t matter, will miss the window entirely.
You have maybe a year to figure this out and establish position. After that, we’re back to authority signals mattering—just measured through citation rates and extraction quality rather than backlinks and traffic. The fundamental game hasn’t disappeared. It’s just being played on different infrastructure with different rules.
Good luck out there.
Grab a Perplexity Deep Dive on the Princeton Study
I’m loving the richness of Perplexity for sourcing. Here’s a full report on the Princeton study for you to dig into, including more detailed notes on the study’s findings than I was able to include here.
I make this Substack thanks to readers like you! Learn about all my Substack tiers here