Explaining the (negligible) Impacts of Microfinance

In a recent article over on FiveThirtyEight, Ben Casselman writes about why microloans don’t solve poverty. It is an excellent summary of the recent rigorous evaluations of the impacts of microfinance. He notes that microfinance is a $60 billion industry spanning 6 continents and won a Nobel Peace Prize back in 2006, and yet we are only just now understanding whether it actually works.

I’ve written before (links here and here) on the 6 independent randomized controlled studies finding that microfinance fails to make the average participant better off, yet also doesn’t make anyone worse off. Since these studies were published, the pertinent task has been to explain these findings. Here again, Ben does an excellent job summarizing this research.
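For readers who like to see the mechanics: here is a minimal sketch, using made-up numbers rather than data from any of the six studies, of what these evaluations boil down to, comparing average outcomes between a randomly assigned treatment group and a control group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical monthly household profits (USD). Loans were offered to a
# randomly chosen treatment group; the control group got no offer.
control = rng.normal(loc=100, scale=40, size=500)
treatment = rng.normal(loc=102, scale=40, size=500)  # tiny true effect

# Because assignment is random, the difference in means is an unbiased
# estimate of the average treatment effect (ATE) of the loan offer.
ate = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Estimated ATE: {ate:.2f} USD (t = {t_stat:.2f}, p = {p_value:.2f})")
# A near-zero estimate with a large p-value is the pattern the six
# studies report: no detectable gain for the *average* borrower.
```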

The ‘not everyone is an entrepreneur’ explanation: 

One popular explanation is that many participants in microfinance don’t use the loans to start or grow a business. Instead they use the extra cash to cover expenses or smooth consumption. When loans are used in this manner they are unlikely to cause any measurable changes in the long run. This explanation suggests that microloans may only be truly valuable to a small group of people with a specific set of attributes and characteristics. The typical business owner in a developing country has not chosen to be an entrepreneur; they are simply participating in this activity by default. Expecting the poorest around the world to ‘entrepreneur themselves’ out of poverty is beginning to seem like nothing more than an idealist’s pipe dream. It seems quite obvious that only those who truly desire and aspire to grow their business will have a chance of benefiting from microloans.

What is challenging about this explanation is that we still don’t know how to predict, before loans are disbursed, who will benefit from them. This lack of knowledge causes confusion about the explicit goals of microcredit and hinders positive iteration. The issue is further complicated by some of my (and others’) research examining whether living under conditions of poverty for years (and perhaps generations) can squelch aspirations and other essential elements of hope. If this is the case, then microfinance programs could benefit from aiming to boost aspirations, rebuild personal agency, and diminish internalized perceived constraints. (In fact, a study of such a program is ongoing as I type.)

The ‘early-adopter vs. late-adopter’ explanation:

This second explanation has a lot of nuance to it. Maybe the negligible and small impacts of microloan programs (measured in the 6 RCTs) are driven by the fact that these studies evaluated microloan programs in places where similar programs already existed in substantial numbers. Perhaps way back in the 1990s and early 2000s, when microfinance was first being rolled out, the impacts were positive and maybe even large (for those who had the skills and desire to invest in their business). We actually don’t know, but given the exuberance of many of the primary actors in the early days of microfinance (Muhammad Yunus et al.), it may be safe to assume that the impacts in the early days were not as small or negligible as they are today.

This actually is not a new idea. Ever since Zvi Griliches’ work way back in 1957, the idea that technology adoption within a given population follows an S-shaped curve has been standard. The figure to the right shows that first there are “innovators”, then the “early adopters”, followed by the “early majority”, until finally the technology reaches “saturation”. (If you’re interested, here is a figure showing the adoption and diffusion of technologies such as the TV, electricity, cars, etc.) What is important to note here is that this behavior occurs over and over for almost any technology because the benefit of adopting is greatest for those who adopt first. Other than a “strategic delay” for purposes of social learning, the benefits of a technology diminish as more and more of the given population adopts it. As a technology nears saturation, the “laggards”, as they are sometimes called, adopt simply to “keep up with the Joneses” and receive little to no benefit.
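To make the S-curve concrete: the textbook diffusion model is a logistic curve, where the yearly flow of new adopters peaks in the middle and dwindles near saturation. Here is a quick sketch (the parameters are made up for illustration, not Griliches’ estimates):

```python
import numpy as np

def adoption_share(t, ceiling=1.0, rate=0.6, midpoint=10.0):
    """Logistic (S-shaped) diffusion: share of the population that has
    adopted by time t. Parameters are illustrative, not estimated."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

years = np.arange(0, 21)
shares = adoption_share(years)
new_adopters = np.diff(shares)  # yearly flow of new adopters

# The flow peaks at the midpoint (the "early majority") and shrinks
# toward zero as the "laggards" trickle in near saturation.
for t, flow in zip(years[1:], new_adopters):
    print(f"year {t:2d}: {flow:5.1%} of the population adopts")
```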

For example: my grandparents just got iPhones. Prior to a couple months ago they owned flip phones that were not even enabled to send or receive text messages. They didn’t spring for the iPhone 1 through 5 because they didn’t have the skills or desire to use those phones to their potential. Now they both have iPhone 6s. The technology is no doubt better than their old flip phones. But do the fancy “smart” capabilities benefit their day-to-day lives very much? I’d venture to guess the benefits are very small or negligible.

Now, it’d be wrong and misleading to evaluate the social and economic impact of the iPhone based on those who adopted it in 2015. The iPhone has transformed the way the world runs, specifically for those who adopted it five to six years ago. This might be what is going on with microloans. The “early borrowers” of microloans first took loans back in the late 1990s and early 2000s. The 6 randomized studies above evaluated the impact of microloans several years later, probably on the “late borrowers”, and found negligible and small benefits.

(For nerdy readers, Bruce Wydick has a note forthcoming in the Journal of Development Effectiveness specifically on this explanation.)

The Future of Microfinance

What does this mean for the future of microfinance? I think it is instructive to consider how Apple has handled iPhone technology. They’ve continued to innovate and improve. Better cameras, longer lasting batteries, increased functionality, etc. I think, when these two explanations are taken in conjunction, it is clear what microfinance organizations need to do. Innovate. Make the product better. Apply insights from behavioral science. Inspire increased aspirations. Encourage personal agency. Break down internalized constraints. Don’t just stand pat and continue to offer the plain vanilla “microfinance 1”.


Data Collection (and Analysis) as Development

Here’s a paragraph from a borderline-scathing short essay on “Data Driven Development Decisions” posted by The Springfield Centre, a self-proclaimed “leader in the market systems approach to development in low and middle-income economies – also referred to as making markets work for the poor”. The central point of the essay is this: setting the monitoring and evaluation function of your program apart from the rest is antithetical to the purpose of the development project.

Essentially, the job of a development programme aiming to stimulate systemic change, is to get one over on the system. Systems are pretty knowledgeable. Of all the things there are to know, they know most of them. You have to find out (or try to find out) something that it doesn’t know in a way that benefits the poor. That might be a business model that the system didn’t know was profitable, a function that the system didn’t know was needed to improve efficiency, or an input that the system didn’t know was more cost effective. One thing that systems – particularly in developing countries – aren’t that great at is producing and aggregating information into nice little digestible packages. So, in order to find the little nuggets of information that might allow you to change the system, you need to put in the effort – and that effort is in data collection.

A few things:

First, yes. I happen to agree that the key effort of a development project (for those who aren’t poor, at least) is to collect and analyze data. However, it’s not just any data collection. The necessary qualifier is the collection of good data that actually measures what we think it is measuring. A huge problem is collecting data that is of poor quality and doesn’t actually inform any pertinent questions.
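To make “good data” slightly less abstract, here is a minimal sketch of the kind of basic sanity checks I have in mind. The survey fields and thresholds are hypothetical, not from any particular project:

```python
import pandas as pd

# Hypothetical household survey with the kinds of errors that slip
# through in practice: duplicates, missing and impossible values.
df = pd.DataFrame({
    "household_id": [1, 2, 2, 4, 5],
    "monthly_income": [120.0, None, 95.0, -40.0, 210.0],
    "household_size": [4, 6, 6, 55, 3],
})

checks = {
    "duplicate household ids": int(df["household_id"].duplicated().sum()),
    "missing income values": int(df["monthly_income"].isna().sum()),
    "negative incomes": int((df["monthly_income"] < 0).sum()),
    "implausible household sizes": int((df["household_size"] > 30).sum()),
}

for name, n_flagged in checks.items():
    print(f"{name}: {n_flagged}")
# Data that fails these basic checks can't inform the questions we care
# about, no matter how sophisticated the later analysis is.
```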

Second, what is data collection? Data collection need not always be quantitative. Lots of really good work has been done with careful qualitative data collection and analysis. I cut my teeth in development studies in the IDS program at Calvin College. It was (and still is) a great program and has been integral in my education and career, but most folks in the program seemed to have a bias against quantitative data collection and analysis. This might be because reading and understanding quantitative evaluations and studies is more difficult (when anyone is first learning) than reading and understanding qualitative evaluations and studies. However, once I actually got out into the world and started collecting my own data I realized that careful qualitative data collection is much more difficult and time intensive than careful quantitative data collection.

Third, it seems to me that many who are either (a) averse to working in monitoring and evaluation or (b) resistant to making it the key function of their program (and not just a function to appease donors) hold these feelings because they believe that monitoring and evaluation is boring. They were first interested in development work because they wanted to help people directly, and that (somehow) certainly doesn’t include sitting behind a spreadsheet all day.

I have two responses to this: (a) Well, good development is actually (strictly speaking) boring, not very glamorous, and doesn’t photograph well. (b) But if one can get past the mundane tasks (which are actually part of every job), the work of development is super exciting. Being at the forefront, able to witness (perhaps) the most dramatic reduction in human suffering of all time, is certainly anything but boring.


Bad Research is Bad Research.

I want to highlight a recent post by AidLeap on Why Programme Monitoring is so Bad. It brings up an important point and sheds light on an issue I’ve personally experienced.

A manager in a field programme that I evaluated recently showed me the glowing findings from his latest monitoring trip – based on a total sample size of two farmers. When I queried the small sample size, he looked shocked that I was asking. “It’s OK”, he explained, “We’re not aiming for scientific rigour in our monitoring.”

I regularly hear variants of this phrase, ranging from the whiny (“We’re not trying to prove anything”), to the pseudo-scientific (“We don’t need to achieve 95% confidence level!”) It’s typically used as an excuse for poor monitoring practices; justifying anything from miniscule samples, to biased questions, to only interviewing male community leaders.

I’ll second hearing the statement “We don’t need to achieve 95% confidence level”. I’ve also heard “Well, when we really want to do a deeper evaluation we’ll use a comparison group”.
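A quick back-of-the-envelope illustration of why a sample of two farmers can’t support glowing findings: the uncertainty around a sample mean shrinks only with the square root of the sample size, so tiny samples leave enormous confidence intervals. A minimal sketch:

```python
import numpy as np
from scipy import stats

def ci_halfwidth(n, sd=1.0, confidence=0.95):
    """Half-width of a t-based confidence interval for a mean, in units
    of the (assumed) standard deviation of the outcome."""
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return t_crit * sd / np.sqrt(n)

for n in (2, 10, 50, 200):
    print(f"n = {n:3d}: mean +/- {ci_halfwidth(n):.2f} sd")
# With n = 2 the 95% interval spans roughly +/- 9 standard deviations;
# a "glowing finding" from two farmers is statistically meaningless.
```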

There seems to be a bias among folks who work for organizations and agencies that actually do stuff against spending resources (mostly money and time) on finding out whether their work actually works. In principle this has changed, as even the smallest of organizations now have a “Data Analysis Intern” or an “M&E Fellow”. In practice, however, this monitoring and evaluation is typically pretty terrible.

This is likely due to the lack of the rigorous (usually double-blind) peer review that is the norm in academia, as well as to the differing standards and goals of these institutions. But that doesn’t change the reality.

This is unfortunate because if we think the work of these organizations is worth doing, then it is certainly worth doing well. Most of the time we just think the work is worth doing… and then stop there. Nobody tests whether the reality actually matches the theory.

Evaluating the effectiveness of something, especially when it comes to human livelihoods, is important no matter who you are. As such, performing evaluations as a veteran practitioner, a data analysis intern, or an M&E fellow is no different than performing evaluations as a professional scientist (economist, sociologist, anthropologist, agronomist, etc.) or tenured university professor. Bad research is just bad research. Full stop.

[If you’ve read this far and haven’t already, please read the original AidLeap blog post.]

Captivated by Elite Capture in Development

Community-based and -driven Development projects have become an important form of development assistance, with the World Bank’s portfolio alone approximating $7 billion. A review of their conceptual foundations and evidence on their effectiveness shows that projects that rely on community participation have not been particularly effective at targeting the poor. There is some evidence that such projects create effective community infrastructure, but not a single study establishes a causal relationship between any outcome and participatory elements of a community-based development project. Most such projects are dominated by elites, and both targeting and project quality tend to be markedly worse in more unequal communities. A distinction between potentially “benevolent” forms of elite domination and more pernicious types of capture is likely to be important for understanding project dynamics and outcomes. Several qualitative studies indicate that the sustainability of community-based initiatives depends crucially on an enabling institutional environment, which requires government commitment, and on accountability of leaders to their community to avoid “supply-driven demand-driven” development. External agents strongly influence project success, but facilitators are often poorly trained, particularly in rapidly scaled-up programs. The naive application of complex contextual concepts like participation, social capital, and empowerment is endemic among project implementers and contributes to poor design and implementation. The evidence suggest that community-based and -driven development projects are best undertaken in a context-specific manner, with a long time horizon and with careful and well-designed monitoring and evaluation systems.

This is the abstract of Mansuri and Rao’s paper “Community-based and -driven Development: A Critical Review”.

My time in Kenya got me thinking a lot about (what I learned is called) “elite capture” in development program implementation. It’s the idea that decentralized and localized development projects may suffer from local implementers or politicians steering the program’s benefits toward themselves rather than toward the people the project is designed to help. Reading about this stuff is fascinating! Two excellent papers I’ve read so far on elite capture are:

Pan and Christiaensen (2012) “Who is Vouching for the Input Voucher? Decentralized Targeting and Elite Capture in Tanzania” Do the political elite get greater access to “pro-poor” agricultural input vouchers in Tanzania? Yes.

Sheely (2015) “Mobilization, Participatory Planning Institutions, and Elite Capture: Evidence from a Field Experiment in Rural Kenya” Does encouraging ordinary citizens to attend participatory local government planning meetings reduce the influence of the elite on local politics? No.

Sorry for the paywalls on the links, but if this has piqued your interest, The Economist has an excellent summary article: Targeting Social Spending.