![](https://cdn.prod.website-files.com/67a0e0ea7b04f593ce5257de/67a37e022946e956cc0b625e_Why%20Your%20SLED%20Sales%20Team%20Is%20Working%20Too%20Hard.webp)
Most SLED sales teams are doing it wrong. They're chasing deals based on gut feel, relationships, and the latest lead from marketing. This is like trying to win at poker by memorizing your opponents' facial expressions while ignoring the math behind the cards.
The math is simple: your win rate in public sector sales probably hovers around 30%. That means 70% of your effort is wasted. The real question is: how can you identify the winning 30% before you spray and pray?
The Machine Knows
We’ve been building predictive machine learning models for public sector sales teams for years now. The secret is data - lots of it. First, we pull everything from Salesforce or HubSpot: wins, losses, deal timelines, products sold, close reasons, and more. Then we layer in Pursuit’s proprietary entity intelligence: IT spend, departmental headcounts, whether they have certain roles like a CISO, fiscal year timing, subscription software spend, procurement rules, and more. Finally, we add data published by federal agencies and other public sources, including Census, NCES, IPEDS, FBI, HUD, FEMA, BLS, EMMA, and more. As of early 2025, we're processing over 500 distinct data columns at Pursuit.
Here's the beautiful part: the model gets smarter every single day. Every new win or loss from your Salesforce or HubSpot integration automatically feeds back in. Don't worry about information overload - the model figures out what matters. Irrelevant columns? It ignores them. The more data you feed it - both rows of deals and columns of entity characteristics - the more accurate it gets at spotting winners.
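To see the "irrelevant columns get ignored" behavior concretely, here is a small synthetic sketch (the column meanings and effect sizes are invented for illustration): a gradient-boosted model is trained on two informative columns and three pure-noise columns, and its feature importances show the noise contributing almost nothing.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000

# Two informative columns (stand-ins for, say, IT spend and a CISO flag)
# plus three columns of pure noise.
it_spend = rng.normal(size=n)
has_ciso = rng.integers(0, 2, size=n)
noise = rng.normal(size=(n, 3))
X = np.column_stack([it_spend, has_ciso, noise])

# The outcome depends only on the first two columns.
logits = 1.5 * it_spend + 1.0 * has_ciso - 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = GradientBoostingClassifier(n_estimators=100).fit(X, y)
# Importances sum to 1; the three noise columns end up near zero.
print(model.feature_importances_.round(3))
```

The model was never told which columns were noise; it learned to ignore them because splitting on them never reduced its prediction error.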
The Gold Standard Metric of Prediction: AUROC
AUROC (Area Under the Receiver Operating Characteristic curve) is just a fancy way of saying "can your model tell a future winner from a future loser?" A score of 0.5 means your model is guessing randomly. In most business contexts, 0.7 is a meaningful advantage, 0.8 means it's time to realign your GTM motion around it, and 0.9+ means you've struck gold.
The beauty of AUROC is that it works even when your data is messy - and in public sector sales, your data is always messy. You'll have 3-4 losses for every win. AUROC doesn't care. It just wants to know: can you rank the good prospects above the bad ones? Pursuit kicks out an A, B, C, or D ranking for each account. If you have enough win/loss data to generate a reliable score by product, we do that too.
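As a concrete illustration, scikit-learn's `roc_auc_score` effectively checks every (win, loss) pair and asks how often the win was scored higher. The deal outcomes and model scores below are made up, including the roughly 3-to-1 loss-to-win imbalance the text describes:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical historical deals: 1 = closed-won, 0 = closed-lost.
# Note the realistic imbalance: nine losses for three wins.
outcomes = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
# The model's win-propensity score for each deal, in the same order.
scores = [0.90, 0.40, 0.30, 0.60, 0.70, 0.20, 0.45, 0.10, 0.30, 0.50, 0.20, 0.40]

# AUROC = fraction of (win, loss) pairs where the win outscored the loss.
auroc = roc_auc_score(outcomes, scores)
print(round(auroc, 3))  # → 0.963
```

Here 26 of the 27 win/loss pairs are ranked correctly (one loss at 0.60 outscores the win at 0.50), giving 26/27 ≈ 0.963 - the imbalance itself never enters the calculation.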
Your GTM Is Probably Backwards
Most teams get this wrong. They hire experienced AEs with relationships and let those relationships drive the pipeline. To be fair, that was the old game. But it leaves AEs spread thin across a large number of accounts while their BDRs spray and pray. Here's what actually works:
- Focus your AEs’ demand generation efforts only on A-tier accounts
- BDRs focus on As and Bs
- Let marketing handle C and D accounts with automation — finding hand-raisers and building awareness
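That routing logic is simple enough to sketch in a few lines of Python. The score thresholds and account names here are invented for illustration, not Pursuit's actual cutoffs:

```python
def tier(score: float) -> str:
    """Map a win-propensity score to an account tier.
    Thresholds are illustrative, not Pursuit's actual cutoffs."""
    if score >= 0.75:
        return "A"
    if score >= 0.50:
        return "B"
    if score >= 0.25:
        return "C"
    return "D"

# Who works each tier, per the playbook above.
ROUTING = {
    "A": "AE demand gen + BDR",
    "B": "BDR outbound",
    "C": "marketing automation",
    "D": "marketing automation",
}

# Hypothetical accounts with model scores.
accounts = {"Example County": 0.82, "Example Unified SD": 0.61, "Example City": 0.12}
for name, score in accounts.items():
    print(f"{name}: tier {tier(score)} -> {ROUTING[tier(score)]}")
```

The point of writing it down: the allocation is deterministic and auditable, so nobody argues about who owns which account.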
Technical Approach (That Actually Matters)
Here's how the model works, in simple terms. Imagine you have thousands of really smart salespeople, each looking at your data in a slightly different way. One might focus on budget, another on location, and another on past buying patterns.
Each of these "digital salespeople" is actually a decision tree. Think of a tree as a series of yes/no questions: "Is the budget over $1 million? Are they near an existing customer? Have they had flood damage recently?"
The clever part is how these trees work together. The first tree makes its best guess on whether the deal would be closed won or lost. Then the second tree focuses on fixing the first tree's mistakes. The third tree fixes what the first two got wrong, and so on. It's like having a team where each person learns from everyone else's mistakes.
Why is this better than having one super-smart tree? Because some patterns only appear when you look at multiple things together. Maybe having a big budget only matters if you're also in a flood-prone area. Or being near another customer matters more in some states than others. Our thousands of trees can spot these hidden patterns.
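The residual-fixing idea can be sketched in a few lines with scikit-learn. The data below is synthetic, with a deliberately hidden interaction of the kind just described (budget only matters in flood-prone areas); each shallow tree is fit to the mistakes left by the trees before it:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 1000

# Hypothetical features: budget (in $M) and a flood-risk flag.
budget = rng.uniform(0, 3, n)
flood = rng.integers(0, 2, n)
X = np.column_stack([budget, flood])

# Hidden interaction: a big budget only predicts a win in flood-prone areas.
y = ((budget > 1.0) & (flood == 1)).astype(float)

# Manual gradient boosting with squared loss: each shallow tree
# fits the residuals (mistakes) left by the ensemble so far.
pred = np.full(n, y.mean())  # "tree zero": the best constant guess
learning_rate = 0.5
for _ in range(20):
    residuals = y - pred
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    pred += learning_rate * tree.predict(X)

mse = np.mean((y - pred) ** 2)
print(round(mse, 4))  # far below the baseline variance of y
```

No single feature predicts the outcome here, yet the ensemble drives its error to nearly zero, because a depth-2 tree can split on budget and then on flood risk, and each subsequent tree cleans up what remains.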
Every night, the model takes all this learning and updates your Salesforce or HubSpot with fresh predictions. When your team starts work the next morning, they know exactly which prospects to call first. No guesswork required.
The Power of Counterintuitive Insights
Let's talk about something that happens with every client: the "that can't be right" moment. Here are some real examples of what our models have discovered:
A school district's total enrollment turned out to be less important than their special education population percentage. Districts with higher special ed populations were actually 3x more likely to buy, even when they had smaller overall student counts. The model spotted this pattern because these districts often had more complex administrative needs and larger federal funding pools.
Annual financial reports showed operating expenses mattered less than predicted. The combination of having both an active capital improvement plan and unfilled IT positions listed on their jobs page was 3x more predictive of purchases than total budget alone.
While total population seemed important, the model found that municipalities with both an updated hazard mitigation plan (within 2 years) and regular city council technology committee meetings were 2.5x more likely to buy than population size alone would predict.
Many assumed bond rating was key. But the model revealed that having both a published digital transformation roadmap and an active request for information (RFI) page was 4x more predictive than credit rating alone.
How The Model Gets Better
We often hear questions like "Can we emphasize IT budget more? Population size?"
Here's the counterintuitive truth: letting the model find its own weights works better than forcing our intuitions onto it. That 0.75 AUROC score? It means the model is already finding patterns that work.
But the model does get better. Every new piece of data helps:
- New wins and losses automatically feed in from Salesforce and HubSpot, including customer proximity patterns
- Fresh data from public sector websites and newly posted documents
- Updated federal government data streams, from new natural disaster reports to the latest NCES and IPEDS educational performance metrics
The beauty is that you don't have to decide what matters. Feed in the data, and the model will tell you.
What This Means For You
Account targeting isn't just important - it's existential. Here's the math: There are 109,000 accounts in the SLED TAM. Your rep selects the top 25 target accounts. If 10 of those accounts have no real propensity to buy - maybe they're locked in contracts, out of budget cycle, or just not ready - your rep is effectively working part-time. Full time payroll, part time production.
Every hour spent on a low-propensity account is an hour not spent on an account that's ready to buy. In B2G sales, where cycles are long and complex, you can't afford to guess wrong.
If you're running a B2G sales team without ML-driven targeting, you're fighting with one hand tied behind your back. Your competitors are probably already doing this. Every quarter you wait is a quarter you're probably focusing on the wrong prospects.
The future of B2G sales isn't about having the best relationships or the biggest team. It's about knowing exactly where to point them. The math is clear.