Data Collection (and Analysis) as Development

Here’s a paragraph from a borderline scathing short essay on “Data Driven Development Decisions” posted by The Springfield Centre, a self-proclaimed “leader in the market systems approach to development in low and middle-income economies – also referred to as making markets work for the poor”. The central point of the essay is this: setting the monitoring and evaluation function of your program apart from the rest is antithetical to the purpose of the development project.

Essentially, the job of a development programme aiming to stimulate systemic change, is to get one over on the system. Systems are pretty knowledgeable. Of all the things there are to know, they know most of them. You have to find out (or try to find out) something that it doesn’t know in a way that benefits the poor. That might be a business model that the system didn’t know was profitable, a function that the system didn’t know was needed to improve efficiency, or an input that the system didn’t know was more cost effective. One thing that systems – particularly in developing countries – aren’t that great at is producing and aggregating information into nice little digestible packages. So, in order to find the little nuggets of information that might allow you to change the system, you need to put in the effort – and that effort is in data collection.

A few things:

First, yes. I happen to agree that the key effort of a development project (for those who aren’t poor, at least) is to collect and analyze data. However, it’s not just any data collection. The necessary qualifier is collecting good data that actually tells us about the things we think it is telling us. A huge problem is collecting data that is of poor quality and doesn’t actually answer any pertinent questions.

Second, what is data collection? Data collection need not always be quantitative. Lots of really good work has been done with careful qualitative data collection and analysis. I cut my teeth in development studies in the IDS program at Calvin College. It was (and still is) a great program and has been integral to my education and career, but most folks in the program seemed to have a bias against quantitative data collection and analysis. This might be because reading and understanding quantitative evaluations and studies is more difficult (at least when one is first learning) than reading and understanding qualitative ones. However, once I actually got out into the world and started collecting my own data, I realized that careful qualitative data collection is much more difficult and time intensive than careful quantitative data collection.

Third, it seems to me that many who are either (a) averse to working in monitoring and evaluation or (b) resistant to making it the key function of their program (and not just a function to appease donors) hold these feelings because they believe that monitoring and evaluation is boring. They were first interested in development work because they wanted to help people directly, and that (somehow) certainly doesn’t include sitting behind a spreadsheet all day.

I have two responses to this: (a) Well, good development is actually (strictly speaking) boring, not very glamorous, and doesn’t photograph well. (b) That said, if one can get past the mundane tasks (which are part of every job), the work of development is super exciting. Being at the forefront of (and able to witness) perhaps the most dramatic reduction in human suffering of all time is anything but boring.

 

Bad Research is Bad Research.

I want to highlight a recent post by AidLeap on Why Programme Monitoring is so Bad. It brings up an important point and sheds light on an issue I’ve personally experienced.

A manager in a field programme that I evaluated recently showed me the glowing findings from his latest monitoring trip – based on a total sample size of two farmers. When I queried the small sample size, he looked shocked that I was asking. “It’s OK”, he explained, “We’re not aiming for scientific rigour in our monitoring.”

I regularly hear variants of this phrase, ranging from the whiny (“We’re not trying to prove anything”), to the pseudo-scientific (“We don’t need to achieve 95% confidence level!”) It’s typically used as an excuse for poor monitoring practices; justifying anything from miniscule samples, to biased questions, to only interviewing male community leaders.

I’ll second hearing the statement “We don’t need to achieve 95% confidence level”. I’ve also heard “Well, when we really want to do a deeper evaluation we’ll use a comparison group”.
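To make the “sample of two” problem concrete, here is a minimal sketch of my own (not from either post, and with made-up numbers) showing what a 95% confidence interval actually looks like for a tiny sample versus a modest one, using the exact Clopper-Pearson interval for a proportion:

```python
# Illustrative sketch only: hypothetical numbers, not data from either post.
# Exact (Clopper-Pearson) 95% confidence interval for a binomial proportion.
from scipy.stats import beta

def exact_ci(successes, n, level=0.95):
    """Return the Clopper-Pearson interval for `successes` out of `n` trials."""
    alpha = 1 - level
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

# Two farmers interviewed, both say the intervention worked.
print(exact_ci(2, 2))      # roughly (0.16, 1.00): consistent with anything from 16% to 100%
# Two hundred farmers interviewed, 160 say it worked.
print(exact_ci(160, 200))  # roughly (0.74, 0.85): narrow enough to actually learn something
```

Even when both farmers report success, the data are consistent with a “true” success rate anywhere from roughly one in six to everyone. That is the entire statistical content of a glowing monitoring trip with a sample of two.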

There seems to be a bias among folks who work for organizations and agencies that actually do stuff against spending resources (mostly money and time) on finding out whether their work actually works. In principle this has changed, as even the smallest of organizations now have a “Data Analysis Intern” or an “M&E Fellow”. In practice, however, the monitoring and evaluation is typically pretty terrible.

This is likely due to the lack of the rigorous (usually double-blind) peer review that is the norm in academia, as well as to the differing standards and goals of the various institutions. But that doesn’t change the reality.

This is unfortunate because if we think the work of these organizations is worth doing, then it is certainly worth doing well. Most of the time we just think the work is worth doing… and then stop there. Nobody tests whether the reality actually matches the theory.

Evaluating the effectiveness of something, especially when it comes to human livelihoods, is important no matter who you are. As such, performing evaluations as a veteran practitioner, a data analysis intern, or an M&E fellow is no different from performing evaluations as a professional scientist (economist, sociologist, anthropologist, agronomist, etc.) or tenured university professor. Bad research is just bad research. Full stop.

[If you’ve read this far and haven’t already, please read the original AidLeap blog post.]