Part Three of Beyond the Count: the science, the stories, and the stakes
In my last post, I explored why the way we count wolves in Minnesota might not be telling us what we think it is—and why that matters more than ever as conversations about delisting and hunting re-emerge.
Since then, I’ve received thoughtful responses from readers asking the same thing I’ve been asking for years:
“If this method is flawed, what’s being done to fix it?”
So today, I want to share the questions I’ve asked, formally and informally, of those responsible for Minnesota’s wolf population estimate. Many of these were submitted directly to the Minnesota DNR or raised in meetings, reports, and proposals over the past eight years.
To date, these questions have largely gone unanswered.
And yet, these are the kinds of questions that should be at the center of any effort to understand, improve, or defend the accuracy of our wolf monitoring methods.
Key Methodological Questions
1. Sample Size Adequacy Relative to Population Size
- Each year, only ~39–58 wolves are collared across ~38–51 packs, representing just 1–2% of the estimated 2,700–2,900 wolves statewide.
- That’s less than 10% of the estimated ~500 total packs.
Questions:
• Has any power analysis been conducted to determine whether this sample size is statistically sufficient to detect real trends in population size or pack behavior? (A rough simulation sketch follows this list.)
• Has MNDNR explored expanding the dataset using camera traps, acoustic monitoring, or genetic sampling?
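To make the power-analysis question concrete, here is a rough simulation of the kind of check it implies: if only ~45 packs are sampled out of roughly 500, how often would we actually detect a real 15% drop in mean pack size? Every number in the sketch (the 4.8 wolves per pack, the spread, the size of the decline) is a placeholder I chose for illustration, not a DNR value.

```python
# Illustrative power check, not MNDNR's method. All parameter values are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_to_detect(n_packs=45, mean_before=4.8, decline=0.15,
                    sd=1.5, n_sims=5000, alpha=0.05):
    """Share of simulated surveys in which a one-sided t-test flags the decline."""
    hits = 0
    for _ in range(n_sims):
        before = rng.normal(mean_before, sd, n_packs)                 # pack sizes, year 1
        after = rng.normal(mean_before * (1 - decline), sd, n_packs)  # pack sizes after a real decline
        hits += stats.ttest_ind(before, after, alternative="greater").pvalue < alpha
    return hits / n_sims

print(f"approximate power with 45 sampled packs: {power_to_detect():.2f}")
```

If a sketch like this comes back with low power, the sample simply cannot answer the trend question it is being asked to answer, no matter how carefully the collars are deployed.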
2. Infrequent Updates to Occupied Range Estimates
- The estimated occupied range (73,972 km²) hasn’t been updated since 2017—even as habitat, prey availability, and development shift.
Questions:
• Has MNDNR considered using remote sensing or GPS collar data to reassess wolf range more frequently? (A rough sketch of the GPS-based approach follows this list.)
• Could prey modeling or habitat connectivity tools support a more dynamic understanding of range?
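On the first of those questions, the raw ingredients already exist. Below is a minimal sketch, using made-up collar fixes and an arbitrary 10 × 10 km grid, of one crude way GPS data could be rasterized into an updated occupied-range figure each year. A real analysis would also need to handle detection probability and areas with no collared packs, so treat this as illustration only.

```python
# Toy occupied-range calculation: count grid cells containing at least one GPS fix.
# The fixes, study-area extent, and cell size are all invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
fixes_km = rng.uniform(0, 270, size=(5_000, 2))   # pretend collar locations (km east/north)

cell_km = 10
cells = np.unique(np.floor(fixes_km / cell_km).astype(int), axis=0)  # unique occupied cells
occupied_km2 = len(cells) * cell_km ** 2
print(f"occupied cells: {len(cells)}, implied range: {occupied_km2:,} km^2")
```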
3. Use of Territory Estimates Instead of Home Range
- The estimate relies on Minimum Convex Polygon (MCP) territory mapping, which is known to overestimate true space use.
- MCPs are not considered biologically accurate representations of home range in modern ecological studies.
Questions:
• Has any comparison been done between MCP and more accurate models like Kernel Density Estimation (KDE) or Brownian Bridge Movement Models (BBMM)? (See the sketch after this list.)
• Could resource selection functions (RSFs) be used to generate habitat-informed home range estimates?
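That comparison is not hard to run. Here is a minimal sketch, on simulated GPS fixes, of how an MCP area and a 95% kernel density (KDE) isopleth area can be computed side by side. The locations, smoothing, and grid resolution are all invented, and real analyses typically use dedicated home-range packages, but the sketch shows how directly the two measures of space use can be contrasted.

```python
# Compare MCP area with a 95% KDE isopleth area on the same simulated fixes.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
fixes = rng.normal(loc=[0, 0], scale=[3.0, 2.0], size=(200, 2))  # simulated GPS fixes (km)

# Minimum Convex Polygon: the hull around every fix (ConvexHull.volume is area in 2-D).
mcp_area = ConvexHull(fixes).volume

# 95% KDE isopleth, approximated on a grid.
kde = gaussian_kde(fixes.T)
xs, ys = np.meshgrid(np.linspace(-12, 12, 300), np.linspace(-12, 12, 300))
dens = kde(np.vstack([xs.ravel(), ys.ravel()]))
cell = (24 / 299) ** 2                                  # area of one grid cell (km^2)
order = np.sort(dens)[::-1]
cum = np.cumsum(order) * cell                           # cumulative probability, densest cells first
threshold = order[np.searchsorted(cum, 0.95 * cum[-1])]
kde_area = (dens >= threshold).sum() * cell

print(f"MCP area:     {mcp_area:6.1f} km^2")
print(f"95% KDE area: {kde_area:6.1f} km^2")
```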
4. Assumption That Lone Wolves Represent Exactly 15% of the Population
- Every year, 15% is added to the estimate to account for lone wolves—with no variation and no field-based validation.
Questions:
• What is the empirical basis for this fixed 15% figure?
• Has any effort been made to verify lone wolf prevalence using collar data, sightings, or genetic signatures?
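Even before any new field data are collected, the sensitivity of the final number to this assumption is easy to demonstrate. In the toy arithmetic below, the pack-associated figure of 2,500 wolves is hypothetical, and I'm assuming the adjustment simply adds the lone-wolf share on top of the pack estimate, per the reports' wording.

```python
# Back-of-the-envelope sensitivity check; 2,500 pack-associated wolves is a made-up figure.
pack_wolves = 2500
for lone_share in (0.10, 0.15, 0.20):
    total = pack_wolves * (1 + lone_share)
    print(f"lone-wolf share {lone_share:.0%}: statewide total ~ {total:,.0f}")
```

Moving the assumed share from 10% to 20% shifts this toy total by about 250 wolves, which is exactly why the empirical basis for the fixed 15% matters.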
5. Inconsistent Application of the 1.37 Territory Scaling Factor
- Packs with <100 GPS points have their territory size multiplied by 1.37—a correction factor applied inconsistently year to year.
- For example: 17–24% of packs were scaled in 2016–2021; only 7.7% were scaled in 2023.
Questions:
• What determines when the 1.37 correction is applied or not?
• Has this correction factor been validated against updated GPS datasets or movement models?
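To see why the share of packs being scaled matters, here is a toy calculation. The territory sizes are invented, and reducing the estimate to "occupied range divided by mean territory size" is a deliberate simplification for illustration, not the DNR's exact procedure.

```python
# Illustrative only: how the fraction of packs receiving the 1.37 correction
# shifts the implied statewide pack count. All inputs are placeholders.
import numpy as np

rng = np.random.default_rng(3)
occupied_range_km2 = 73_972
territories = rng.normal(150, 40, 45).clip(60)     # hypothetical observed territories (km^2)

def packs_statewide(frac_scaled):
    t = territories.copy()
    n_scaled = int(round(frac_scaled * len(t)))
    t[:n_scaled] *= 1.37                            # apply the correction to a subset of packs
    return occupied_range_km2 / t.mean()

for frac in (0.077, 0.17, 0.24):
    print(f"{frac:.1%} of packs scaled -> ~{packs_statewide(frac):.0f} packs statewide")
```

In this made-up example, going from 8% of packs scaled to 24% changes the implied statewide pack count by dozens of packs, before a single wolf is counted differently in the field.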
6. Mortality and Missing Wolves Not Accounted for in Estimates
- Many collared wolves are recorded as mortalities or go missing each year, sometimes more than 40% of those being tracked.
- Yet population estimates only include active collared wolves and do not adjust for known or probable deaths.
Questions:
• Why are missing or dead wolves excluded from adjustments to the total estimate?
• Has MNDNR considered building a mortality model to estimate total loss due to dispersal, poaching, disease, or human conflict?
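Here is a very simple sketch of the bookkeeping that last question implies: bounding annual collar survival using known deaths and missing animals, then applying that range to a hypothetical point estimate. The collar counts are invented, and a real mortality model would also have to handle recruitment, dispersal, and detection, so treat this purely as an illustration of the logic.

```python
# Toy survival bounds from collar fates; every count here is invented.
collared_start = 50
died = 12
went_missing = 9           # fate unknown, so it bounds the survival estimate

s_high = (collared_start - died) / collared_start                  # missing wolves all survived
s_low = (collared_start - died - went_missing) / collared_start    # none of them survived

midyear_estimate = 2_900   # hypothetical point estimate
print(f"annual collar survival between {s_low:.0%} and {s_high:.0%}")
print(f"implied year-end range: {midyear_estimate * s_low:,.0f} to {midyear_estimate * s_high:,.0f}")
```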
7. Increasing Confidence Intervals Suggest Widening Uncertainty
- In 2022–23, the margin of error reached ±800 wolves, or ~27% of the entire population estimate.
- Yet the report conclusion still reads: “The population is stable.”
Questions:
• What is driving the increase in margins of error over time?
• Could Bayesian modeling or Monte Carlo simulations help capture uncertainty more robustly?
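On the second of those questions, here is a minimal Monte Carlo sketch of what propagating uncertainty through the estimate could look like, assuming the estimate is roughly (occupied range ÷ mean territory size) × mean pack size × (1 + lone-wolf share). Every distribution in it is a placeholder I picked for illustration, not a fitted value.

```python
# Minimal Monte Carlo propagation of uncertainty; all distributions are placeholders.
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
occupied = rng.normal(73_972, 3_000, n)       # occupied range (km^2)
territory = rng.normal(150, 20, n)            # mean pack territory (km^2)
pack_size = rng.normal(4.8, 0.6, n)           # wolves per pack
lone_share = rng.uniform(0.10, 0.20, n)       # lone-wolf share, instead of a fixed 15%

total = occupied / territory * pack_size * (1 + lone_share)
lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
print(f"median ~ {med:,.0f}, 95% interval ~ {lo:,.0f} to {hi:,.0f}")
```

Even this crude version makes the sources of a widening interval visible, because each input's contribution can be turned up or down and the effect on the interval read off directly.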
8. Exploring Better, Cheaper, and More Ethical Methods
There are numerous non-invasive and cost-effective tools now available to support more accurate estimates.
Questions:
• Has MNDNR explored genetic sampling from scat to assess both population size and genetic diversity? (A simple capture-recapture sketch follows this list.)
• Could a camera trap grid system validate pack numbers and locations more objectively?
• Has MNDNR tested howling surveys to estimate pack presence?
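To give a flavor of the first option, here is a hedged sketch of the simplest possible genetic capture-recapture calculation (Chapman's version of the Lincoln-Petersen estimator) applied to two scat-sampling sessions. The counts are invented, and real studies use richer spatial capture-recapture models and correct for genotyping error.

```python
# Two-session capture-recapture from genotyped scats; the counts are invented.
def chapman_estimate(n1, n2, m):
    """n1: wolves genotyped in session 1; n2: in session 2; m: seen in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(f"estimated wolves in the sampled area: {chapman_estimate(120, 110, 35):.0f}")
```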
Why These Questions Matter
I completely understand the challenges of conducting large-scale wildlife monitoring, especially with limited budgets and competing priorities. But when population estimates are used to justify lethal control, delisting, and the removal of federal protections, we have an obligation to get it right—or at least to acknowledge where our models fall short.
I continue to believe that scientific transparency and methodological improvements will only strengthen Minnesota’s wolf monitoring efforts and build public trust in the decisions we make about these animals.
I share these questions not to discredit, but to engage.
Because wolves deserve better than guesswork.
And so do we.
If you’d like to read the full breakdown of how Minnesota’s current method works—and why it may no longer be enough—you can find last week’s piece here.
Thanks for being here,
Devon

