

Five B2B thought leadership formats that earn credibility and reach, with real examples and analysis.


There's a version of this article that lists 10 thought leadership examples from McKinsey, HubSpot, and a few recognizable SaaS brands and calls it done. You've probably already read that one.
The problem with most roundups like that isn't the examples. It's that they show you what was made without explaining why it worked. You see the output: the polished PDF, the benchmark report that circulated for three years, the LinkedIn piece that got 800 reposts. But you miss the mechanism.
At Campfire Labs, we've produced enough thought leadership content (across product analytics, fleet safety, events, sales tech, health tech…) to see the patterns. Genuinely influential B2B thought leadership tends to fall into a small number of recognizable formats, each built around a specific mechanism for earning credibility or generating reach.
Here are five of those formats, with real examples and explanations of why they work.
1. The named framework
What it is: A proprietary model (a maturity curve, a diagnostic framework, or a staged progression) that gives your audience language for something they were already experiencing but couldn't articulate.
For example, Swoogo built a five-stage maturity model for events programs, running from Stage 1 (increase registration) through Stage 5 (increase customer lifetime value). Each stage came with clear criteria for where a team currently sits and what investment is required to progress. The model wasn't a product walkthrough or a veiled pitch. It was a diagnostic tool event leaders could drop into a conversation with their CMO and immediately use.

Why it works: A named framework does something almost no blog post can: it becomes a reference point. Once a reader internalizes your model, they carry it into their next presentation, their next budget request, their next hire decision. The company that named the thing occupies a permanent position in how that reader thinks about the category, a totally different outcome from "they read our article once."
What to get right: The model has to be genuinely diagnostic, not a product tour with stage names on it. If readers can use the framework before ever buying your product, you've probably built it correctly.
2. The proprietary data play
What it is: Original research based on data only you have access to. Ideally not a survey of 400 marketers that any SaaS company could commission, but your own data: from your product, your customers, your proprietary methodology.
Netradyne's ROI model, built from analysis of 1.3 billion miles of driving data, is a great example. The core finding: every 50-point improvement in a fleet's driver score correlates with a 13–15% reduction in accidents per million miles. Their customers improved driver scores by an average of 150 points in the first year, putting most fleets on track for a 30%+ accident reduction. No competitor could run the same analysis, because no competitor had the same data.
The model didn't just support sales conversations. It changed them. Instead of "our platform reduces accidents," the conversation became "here is a finding grounded in more driving data than any driver will accumulate in a lifetime."

Why it works: It's unreplicable. A competitor can write a blog post about driver safety. They cannot publish a finding derived from 1.3 billion miles of data they don't have. Proprietary data turns a sales claim into a research finding, and those are received very differently.
What to get right: Lead with the finding, not the methodology. Some research pieces bury the headline in a section labeled "Key Findings" on page four. Put the number in the title, the opening paragraph, and any executive summary. Methodology follows as credentialing, but it's not the headline story.
3. The broken playbook
What it is: A credible, specific case that the established approach in your category no longer works, along with an explanation of what replaces it.
As an example of 'broken playbook' thought leadership, we really like Mixpanel's work. Their content argues that traditional PM rituals (Figma prototyping, A/B testing, linear feature roadmaps, stuff like that) were designed for deterministic software and don't hold up when you're building AI products. The argument isn't contrarian for its own sake: AI products are probabilistic, context-dependent, and always evolving. A feature behaves differently depending on user context. A/B testing assumes a stable relationship between input and output — AI products don't have that. The content named the exact rituals that break down, explained the structural reason each one fails, and proposed what PMs need to think about instead.

Why it works: It validates something the reader already suspects. Most experienced practitioners have a nagging sense that part of their current approach is running on fumes, especially when their tools keep changing. A piece that names that feeling, explains the root cause, and offers a coherent alternative earns disproportionate trust.
What to get right: Specificity is everything. "The landscape has changed, and traditional approaches no longer apply" is a press release opener, not a thought leadership piece. Name the exact ritual — the quarterly roadmap, the Figma prototype, the NPS survey — explain the structural reason it fails for this type of product, and show the mechanism that replaces it. Vague broken-playbook pieces are worse than no opinion at all.
4. The expert roundup
What it is: You convene practitioners from your industry or adjacent ones, get them on record about a significant trend, and publish their perspectives in a structured editorial format, not a transcript dump.
Drift (acquired by Salesloft) sat down with sales leaders from three enterprise SaaS firms to share predictions about how AI would transform B2B buying. The leaders didn't endorse the product or discuss features; they just went on the record with genuine opinions: "Humans will still be at the center of buying." "Whoever adapts to AI first will win." "Sales will be a combination of technology and personal touch, the two aren't in conflict." Drift became the convener of a conversation that was already happening in private.
Why it works: Three reasons. First, it borrows the credibility of the room: practitioners trust other practitioners more than they trust brands. Second, the distribution math is favorable: the people quoted tend to share it, and their audiences are usually your audiences. Third, it positions your company as an organization that thinks seriously about the category, not just about its product.
The dynamics are similar to what the best benchmark reports in B2B content do well: give practitioners something they can point to.
What to get right: Quotes that contain a stance. The roundup lives or dies on whether contributors actually said something interesting. Push for opinions, not descriptions. "We're investing in AI tools" is a description. "The reps who treat AI as a threat to their job will be replaced by the reps who don't" is a stance.
5. The admired-company teardown
What it is: You take a company your audience admires, identify one thing they do exceptionally well, reverse-engineer the organizational logic that produces it, then translate it into principles others can apply.
A while back, Mintlify published a piece asking why Stripe has the best developer documentation in the industry. Patrick Collison's footnote-heavy, research-paper-style emails established a writing standard from the top down; templates and writing office hours removed friction for engineers who weren't confident writers; documentation requirements were embedded into engineering career ladders so shipping undocumented code had professional consequences. Mintlify answered how, not just what, and that distinction is what makes the piece useful and compelling.

Why it works: "Why does [admired company] do this so much better than everyone else?" is a question every practitioner quietly asks about their own field. A piece that answers it rigorously — with the specific mechanisms behind the excellence, not just the vibes — earns credibility, backlinks, and social sharing from the community working on the same problems. It positions the publisher as someone who thinks analytically about craft. That's a meaningful signal, and it's rare.
What to get right: Explain the mechanism, not just the outcome. The question you're answering isn't "what does this company do?" It's "why does it work — and what would have to be true for someone else to replicate it?"
These five formats are useful only insofar as they serve something worth saying. The proprietary data play only works if your data is genuinely novel. The broken playbook only lands if the conviction behind it is real. The named framework only becomes a reference point if it actually helps someone think more clearly about their work.
Start with what your company actually knows that nobody else does. That might be an observation from inside your product, a pattern visible only from your data, or a market shift you're witnessing in real time. Once you have that, the format tends to follow naturally. Proprietary data suggests one path. A genuine conviction about what's breaking in your category suggests another.
The formats in this piece aren't formulas. They're patterns that recur because each one is built around a mechanism that earns trust or generates reach. Use them to sharpen what you already have, not as a substitute for having something worth saying. If you're thinking about building a thought leadership program and want a partner to help, this guide on choosing a thought leadership agency is a good place to start.