
Smoke, Signals, and Sleepers: Know Your Sources

  • Writer: Interstate of Green
  • 4 days ago
  • 8 min read

by Chris Tomlinson


A note on how this was made: This article was built using Claude, Anthropic's AI assistant, as a research and writing tool. The analysis, source selection, context, and direction were provided by Chris Tomlinson. We are being upfront about this because we think people deserve to know how content is produced. AI helped synthesize information and structure the argument. The thinking — and the opinions — are ours.


Most mock drafts are dressed-up guesses. They recycle the same consensus information, copy each other's picks, and present the result as analysis. The goal of this series is different: to walk through exactly how to build a mock draft that is genuinely trying to be right — not one that is trying to generate engagement.

That requires two things. First, a clear-eyed framework for knowing who to actually trust and what information to throw out before you write down a single pick. Second, a working knowledge of the humans making draft decisions and the patterns they reliably repeat. This is Part 1 — the sources. Part 2, publishing tomorrow, covers the decision-makers.


1. Defining Accuracy — In This Context

Before getting into the methodology, it helps to clarify how accuracy is defined here — specifically within mock draft ranking formats like The Huddle Report, which scores analyst predictions after each draft. The common assumption is that accuracy means predicting the exact draft slot. The way it is actually scored in those systems is different, and understanding that distinction shapes everything that follows.


In mock draft scoring formats, a correct prediction is a player-to-team match — not a slot call. If a team trades up or down on draft night and still selects the player you projected for them, that registers as a hit.


This reframe changes the exercise. Slot prediction requires correctly anticipating every trade that happens on draft night — an almost impossible task. Player-to-team prediction requires deep knowledge of front office tendencies, scheme needs, and organizational philosophy. It is a more tractable problem, and a more meaningful one for fans and analysts alike.
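The player-to-team scoring rule can be sketched in a few lines of Python. The data structures, team names, and one-point-per-hit value below are illustrative assumptions, not The Huddle Report's actual system:

```python
# Score a mock draft under player-to-team matching: a pick counts
# if the projected team ended up with the projected player, no
# matter which slot the team actually picked from.

def score_mock(mock: dict[str, str], results: dict[str, str]) -> int:
    """mock and results both map team -> drafted player."""
    return sum(
        1 for team, player in mock.items()
        if results.get(team) == player
    )

mock = {"Patriots": "QB A", "Eagles": "EDGE B", "Ravens": "OT C"}
# The Ravens traded up on draft night but still took OT C:
# under player-to-team scoring, that projection is still a hit.
results = {"Patriots": "QB A", "Eagles": "WR D", "Ravens": "OT C"}
print(score_mock(mock, results))  # 2 of 3 picks hit
```

The slot never appears in the scoring function at all, which is the whole point of the reframe.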


Cascade Stability — The Hidden Variable in Overall Accuracy

Once you accept that the goal is maximizing correct player-to-team links across all 32 picks, a second insight follows: some individual picks matter much more than others. Not because they are harder to get right, but because getting them wrong cascades incorrectly through the picks that follow.


A high-leverage pick is one where the two most likely outcomes produce meaningfully different downstream boards. A low-leverage pick is one where both outcomes lead to roughly the same board. The distinction is not about confidence — it is about consequences. Before locking in any contested pick, ask: if I'm wrong about this one, how many picks downstream get disrupted?

KEY RULE:  When two picks produce nearly identical downstream boards, the higher-confidence individual pick is the right call — not the one that theoretically "stabilizes" a board that is already stable either way.
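The leverage question can be made concrete by playing out both branches of a contested pick and counting how many downstream selections differ. This is a toy sketch: the greedy best-player-available board model and the player names are invented for illustration, not a real draft simulator:

```python
# Toy leverage check: given two candidate outcomes for a contested
# pick, simulate the rest of the board under each branch and count
# how many downstream picks change. The greedy "best player
# available" model is an illustrative assumption.

def simulate(board: list[str], first_pick: str, n_picks: int) -> list[str]:
    remaining = [p for p in board if p != first_pick]
    # Every later team greedily takes the best player left.
    return [first_pick] + remaining[: n_picks - 1]

def leverage(board: list[str], option_a: str, option_b: str,
             n_picks: int) -> int:
    a = simulate(board, option_a, n_picks)
    b = simulate(board, option_b, n_picks)
    # Compare only the picks *after* the contested one.
    return sum(x != y for x, y in zip(a[1:], b[1:]))

board = ["QB1", "QB2", "WR1", "OT1", "EDGE1"]
print(leverage(board, "QB1", "QB2", 4))    # 1 downstream pick shifts
print(leverage(board, "QB1", "EDGE1", 4))  # 3 downstream picks shift
```

Two adjacent quarterbacks are a low-leverage choice: the boards converge almost immediately. A quarterback-versus-edge split disrupts everything behind it, which is exactly the high-leverage case worth extra scrutiny.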


2. Building a Trust Hierarchy for Analysts

Not all mock drafts are equal, and treating them that way is the most common mistake. There is a significant difference between an analyst recycling consensus information and one who has genuine access to NFL front offices. The tiers below separate them.


Tier 1: Insider Intel

  • Peter Schrager (ESPN)  Publishes only two mock drafts per year, both sourced from conversations with GMs and team executives at NFL events — not from tape study. His April mock, released after the annual league meetings in Phoenix, is built from direct front-office conversations. When Schrager projects a pick, he heard something from someone making the decision.

  • Daniel Jeremiah (NFL Network)  Former NFL scout with deep front-office relationships. His 3.0 mock is the most current and reliable of his versions. He consistently outperforms his peers on independent accuracy metrics.

  • Albert Breer (The Ringer / Sports Illustrated)  Does not produce full mock drafts, but his intel nuggets carry significant weight. When Breer describes a pick as a "fait accompli" on a podcast days before the draft, that is the kind of signal worth acting on.


Tier 2: Accuracy-Tracked Analysts

The Huddle Report has tracked mock draft accuracy for over two decades using a standardized scoring system. This data is the most objective measure available.

  • Brendan Donahue (Sharp Football Analysis)  Averaged 48.2 out of 64 possible points per year over the last five years — the second-highest tracked average. His process is systematic, and his mocks publish after free agency.

  • Jason Boris (Times News)  Holds the highest five-year average tracked, with a record-setting 2024 score of 59 points. He publishes exactly one mock per year — on draft morning. If it is available before your deadline, use it.

The key insight: consensus is not correlated with accuracy. The analysts consistently ranked most accurate are the ones willing to deviate from the crowd when their sourcing supports it.


Tier 3: Beat Writers — Team-Specific Only

A beat writer covering one team for an entire season has better information on that team's first-round pick than any national analyst. The key connections entering the 2026 draft:

  • Mike Reiss (ESPN, Patriots)  Covered the team for the Boston Globe and Herald before ESPN. Has maintained access through multiple regimes. The gold standard for Patriots intel.

  • Jeff McLane, Zach Berman, Dianna Russini (Eagles)  Russini in particular broke the AJ Brown trade conversations before anyone else. When these three point the same direction on a pick, it carries weight.

  • Jeff Zrebiec, Jamison Hensley (Ravens)  The Ravens are one of the most tight-lipped organizations in the league. When information comes from either of these two, it is worth taking seriously.

RULE:  When a beat writer and a national insider independently arrive at the same pick for the same team, confidence should increase significantly. Agreement between two independent data points is rare enough to be meaningful.


The Independent Scouting Layer

Insider intel tells you what teams will do. It does not tell you whether the underlying player evaluation supports it. Those are different questions, and answering only the first one leaves a meaningful gap in your framework.

The most valuable cross-check is a comprehensive independent scouting document — one built entirely from tape, without any access to front-office conversations. When insider intel and independent tape evaluation point to the same player, confidence rises substantially. When they diverge by a significant margin, that gap deserves an explanation before you finalize a pick.

Dane Brugler's The Beast, published annually through The Athletic, is the most thorough example of this kind of work in public coverage. It covers hundreds of prospects with full scouting reports built from tape, not from league sources. Using it as a secondary validation layer — not as a replacement for insider intel, but as a pressure test against it — is one of the most underused practices in public mock draft construction.

APPLICATION:  Ask whether the insider intel has a credible tape-based explanation behind it. If independent scouting has the same player ranked significantly lower, someone is wrong. Figure out who before locking in the pick.


Top-30 Visits as a Signal Layer

In the weeks before the draft, each NFL team may bring up to 30 prospects to their facility for formal pre-draft visits. These are reported publicly and collectively form one of the most useful — and most misread — data sets available.

The common mistake is treating a visit as confirmation of interest. It is not. Teams regularly invite players they have no intention of drafting — to gather information, conduct medical evaluations, or obscure their actual target. A top-30 visit alone tells you a team considered a player worth their time — nothing more.

  • Use visits as a negative signal:  If a team has a clear need at a position and has not visited any of the top available players there, that is meaningful. No visit often means no genuine interest.

  • Use visits as confirmation:  When a beat writer reports ongoing interest in a specific player AND that player received a top-30 visit with that team, the two independent signals reinforce each other.

  • Use visits as a smokescreen detector:  A team hosting several players at a position when the consensus has them going elsewhere is often deliberate misdirection. Apply the "who profits" test from Section 3.


Betting Markets as Independent Confirmation

Sharp betting markets aggregate information faster than most reporters can publish it. When a draft prop moves significantly in the days before the draft, it often means someone with genuine knowledge has acted on it. This does not make markets infallible, but it makes them a meaningful independent layer worth checking.

The most useful markets are pick-specific probabilities (which player goes at a given slot) and team-specific position props (which position does a team take first). When those markets and insider intel point the same direction, treat it as confirmation. When they diverge with a heavily sourced pick showing weak or opposing market support, that tension is worth examining before you commit.
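A quick way to read those markets is to convert posted odds into implied probabilities, strip the bookmaker's margin, and compare the result against your sourced pick. The odds and player names below are invented for illustration:

```python
# Convert American odds to implied probability, then normalize
# across the full market to remove the bookmaker's vig.

def implied_prob(american_odds: int) -> float:
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

# Hypothetical "which player does team X take first" market.
market = {"QB A": -150, "OT B": +200, "WR C": +400}
raw = {p: implied_prob(o) for p, o in market.items()}
total = sum(raw.values())  # > 1.0 because of the vig
fair = {p: v / total for p, v in raw.items()}
for player, prob in fair.items():
    print(f"{player}: {prob:.0%}")  # QB A ~53%, OT B ~29%, WR C ~18%
```

If your heavily sourced pick is WR C and the de-vigged market gives him 18%, that gap is exactly the tension worth examining before you commit.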


3. The Smokescreen Problem

The week before the NFL Draft, teams are actively planting false stories with reporters. This is standard competitive strategy. If you do not account for it, you will get burned every year.


"Who profits if people believe this?" is the single most important question to ask about any pre-draft report.


Teams leak misinformation for several specific reasons:

  • Discouraging trade competition:  If a team loves a player at pick 12, they may leak that they love him at pick 8. Other teams stop trying to trade up past them.

  • Encouraging rivals to overpay:  Starting buzz around a player you do not actually want can manufacture a bidding war and deplete a competitor's assets.

  • Suppressing a player's stock:  Creating doubt about a prospect ahead of the draft can prevent teams higher on the board from selecting him — leaving him available for you.


Apply extra scrutiny to:

  • Jerry Jones:  The Cowboys owner is notably open with the press, but his openness is selective and strategic. His quotes are frequently designed to generate headlines rather than convey genuine intent. Discount heavily.

  • Unsourced "trading up" stories:  Any report that a team is considering trading up, without a specific named source, should be treated as noise. These are usually planted to create leverage in trade negotiations.

  • Late-breaking buzz on a player absent from earlier coverage:  Almost always agent-driven or a manufactured market.


The Analyst Vehicle Problem

Smokescreens are usually planted through reporters. But there is a more specific version of this problem that targets the highest-trust analysts — and it is more effective precisely because those analysts' credibility makes the misinformation more believable.

Consider how Schrager's mock draft process works: he publishes exactly two mocks per year. The first arrives after the annual league meetings, sourced from direct conversations with GMs and executives. The second comes in the final week before the draft. His accuracy record — and reputation — is built on the second mock.


That structure creates a window between the first publication and the second. A front office that plants a story with a trusted analyst at the league meetings shapes the national narrative for two weeks. If the corrected information then surfaces in the analyst's second mock, the record stays clean. The team got useful misdirection at no reputational cost to the analyst.


This does not mean first mocks are unreliable. It means the second mock — released after teams have completed their final pre-draft preparations — carries meaningfully more weight. When both versions of a high-trust analyst's mock agree on a pick, confidence is high. When they diverge, the second version reflects the information that survived the most scrutiny.

RULE:  Treat any analyst's first mock as directionally useful but subject to revision. Their final published version is what matters for accuracy. Build your framework around the most recent iteration of each source, not the most prominent one.


COMING IN PART 2

All of this — the source hierarchy, the smokescreen filters, the analyst pipeline problem — is about separating reliable information from noise. But filtered information still has to go somewhere. It has to be applied to actual picks, by actual teams, run by actual people with documented habits and entirely predictable patterns. That is what Part 2 covers: the GMs, the coaches, the sleepers, and the contrarian pick done right.

