Measurably Better: Issue #1
Our inaugural newsletter! 🎉

🌟 Editor's Note
Hello from Budapest! Today’s a special day as it marks the first issue of the Measurably Better newsletter! Maybe one day we’ll all look back and it’ll feel like looking at Airbnb’s website from March 2009. These are our humble beginnings!
‼️ USAID prevented 92 million deaths in 20 years
What's new: Researchers analysed 133 countries over two decades and found that USAID funding prevented an estimated 91.8 million deaths, in one of the most comprehensive evaluations of development aid effectiveness to date.
Why it matters: This methodology offers a blueprint for demonstrating program impact at scale, while the funding cuts create both challenges and opportunities for M&E careers.
By the numbers: The Lancet study applied fixed-effects models to 21 years of data and found that USAID funding was associated with:
15% reduction in overall mortality
32% reduction in under-five child mortality
65% drop in HIV/AIDS deaths
51% drop in malaria deaths
The methodology matters more than the results. The researchers didn't just evaluate health programs; they measured total impact across all sectors and used multiple analytical approaches (a minimal sketch follows the list):
Panel data with robust economic and demographic controls
Negative controls (injury mortality) to test specificity
Triangulation with difference-in-difference models
Integration of retrospective evaluation with forecasting
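To make the design concrete, here is a minimal sketch of a two-way fixed-effects regression with an injury-mortality negative control, in the spirit of the study's approach rather than a reproduction of it. The file name, column names, and control variable are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel with USAID funding per capita, all-cause
# mortality, injury mortality (the negative-control outcome), and a GDP control.
# File and column names are illustrative assumptions, not the study's data.
df = pd.read_csv("usaid_panel.csv")

def fit_two_way_fe(outcome):
    """Two-way fixed-effects model: country dummies absorb time-invariant
    country traits, year dummies absorb common shocks, and standard errors
    are clustered by country."""
    formula = f"{outcome} ~ usaid_per_capita + log_gdp_per_capita + C(country) + C(year)"
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["country"]}
    )

main = fit_two_way_fe("log_all_cause_mortality")   # expect a negative coefficient
placebo = fit_two_way_fe("log_injury_mortality")   # negative control: expect ~0

print(main.params["usaid_per_capita"], placebo.params["usaid_per_capita"])
```

If the funding coefficient is clearly negative for all-cause mortality but near zero for injury mortality, that pattern supports a specific health effect rather than a spurious correlation, which is the logic behind the study's negative-control check.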
Between the lines: This shows how to strengthen causal inference in complex development contexts where attribution is hard.
The urgent part
The 83% USAID funding cuts in 2025 could cause 14+ million additional deaths by 2030, according to the study's forecasting models. That's one death every 13 seconds.
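A quick back-of-the-envelope check of that per-second figure (the six-year window is our assumption for illustration, not the study's exact horizon):

```python
# Spread 14 million projected excess deaths evenly over 2025-2030 (assumed ~6 years).
excess_deaths = 14_000_000
window_seconds = 6 * 365.25 * 24 * 3600
print(f"one death every {window_seconds / excess_deaths:.1f} seconds")  # ~13.5
```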
What to watch: How other donors respond. The study notes this funding shock would be "similar in scale to a global pandemic"—except it's a policy choice.
💼 Jobs & Opportunities
Jobs & Opportunities board coming soon!
Impact Evaluation Manager, Lead Exposure Elimination Project (LEEP)
LEEP is an impact-driven, evidence-based nonprofit that aims to eliminate childhood lead poisoning, which affects 1 in 3 children worldwide. Primarily focused on lead paint, they seek an Impact Evaluation Manager with excellent analytical skills to manage the evaluation of LEEP programs’ impact and cost-effectiveness.
Request for Proposals, Independent Age
The Boosting Advice Grants programme funds organisations to provide advice to older people experiencing financial hardship, supporting them with income maximisation and cost reduction. Independent Age is seeking an experienced evaluator to develop and execute an impact evaluation of the programme.
🔍️ Cheaper survey sampling that actually works
What's new: IDinsight tested four alternatives to expensive conventional household sampling and found three that deliver reliable results at lower cost.
Why it matters: Most M&E practitioners face the brutal choice between rigorous-but-expensive conventional sampling or cheap-but-questionable methods like random walks. Now there are proven middle-ground options.
The big picture: Traditional household surveys force you to pick between two bad options: spend big on conventional two-stage sampling (select areas, list all households, then sample) or use sketchy "random walk" methods that may bias your results.
What works:
Voter roll sampling — 96% household coverage in rural India, though only 78% in urban areas
Rooftop sampling — Uses Google/Microsoft building datasets to select households near randomly chosen buildings
Grid-based sampling — Like conventional sampling but uses population grids instead of census areas
What doesn't: The popular "right-hand rule" method excludes many households, creates variable selection probability, and can't be replicated reliably.
How it works: Each method trades some precision for significant cost savings. Rooftop sampling works best for smaller surveys where you're optimizing quality and cost. Grid-based sampling shines when granular census data isn't available.
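As a flavour of how the grid-based first stage could look in code, here is a minimal sketch using probability-proportional-to-size selection of population grid cells. The cell counts are simulated placeholders, not IDinsight's data, and the second stage (household listing within sampled cells) is only described in comments.

```python
import numpy as np

rng = np.random.default_rng(2025)

# Stage 1: hypothetical population grid summarised to one estimated count per cell
# (e.g. from a WorldPop-style raster). These numbers are simulated for illustration.
cell_ids = np.arange(1_000)
cell_pop = rng.integers(50, 2_000, size=cell_ids.size)

# Select primary sampling units with probability proportional to estimated size,
# mirroring how conventional two-stage designs pick census enumeration areas.
selection_probs = cell_pop / cell_pop.sum()
sampled_cells = rng.choice(cell_ids, size=30, replace=False, p=selection_probs)

# Stage 2 (not shown): list or geo-locate households only within the sampled cells,
# draw a fixed number per cell, and weight each interview by its inverse
# selection probability so estimates stay representative.
print(sampled_cells)
```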
The bottom line: You don't have to choose between broke and biased anymore. Test these methods in your context, but ditch the right-hand rule completely.
Go deeper: Contact [email protected] for implementation guidance on any of these approaches.
The Learning and Evaluation for Australian Funders (LEAF) network is hosting an exclusive roundtable with Professor Patricia Rogers, a global evaluation leader, on July 22 (Melbourne) and July 24 (Sydney).
With traditional randomized controlled trials often impractical for philanthropy, this workshop introduces rigorous alternatives like process tracing and qualitative comparative analysis that can still assess real impact without control groups.
The full-day session costs AUD $499 and targets intermediate to expert M&E practitioners at funding organizations. Rogers brings 30+ years of experience as RMIT's former Professor of Public Sector Evaluation and BetterEvaluation's founder.
🤩 Coming Soon: Your Global Guide to M&E Master's Programs
Thinking about leveling up your M&E career with a master's degree? We've been digging deep into universities worldwide to find the graduate programs that go the extra mile and focus specifically on monitoring and evaluation practice.
From Oxford's prestigious EBSIPE program to specialised degrees in Australia, Germany, Kenya, and beyond, we're mapping out the landscape of quality M&E education globally.
What's coming: A comprehensive breakdown of 20+ programs across 6 continents, with the real details on curriculum, delivery formats and costs.
Stay tuned—this lands in your inbox in the coming weeks.
Did You Know? The first computer bug was literally a bug: in 1947, Grace Hopper's team found a moth trapped in a relay of the Harvard Mark II and taped it into the logbook as the "first actual case of bug being found", popularising the term "debugging".
Thanks for reading!
Sophie
How satisfied are you with this issue of the newsletter?