Evaluating for Unintended Consequences – Small Farmer Cooperatives in Guatemala


Economic development is not an exact science. Far from being predictable, the outcomes of our development assistance may differ from what we plan or expect. We may formulate a solid development hypothesis or theory of change, but many variables enter into the implementation process, and we cannot control for all of them or anticipate the impact they may have. Surprises often result. Or, as we say in the business, there are often unintended consequences.

A recent example from an evaluation of agricultural micro-enterprise activities in Guatemala illustrates how important it is to watch for unintended consequences.

A short time back, USAID provided technical support to help Guatemalan Mayan Indian farmers form small agricultural marketing cooperatives. For some cooperatives, the US government funded the import of corn silos, large round metal ‘Butler Bins,’ so that Guatemalan farmers could store their grain at harvest. The USG also provided technical guidance to the cooperatives on how to buy and handle the grain their members wanted to sell.

The objective of the program was to help the new cooperatives purchase grain at better prices than the truckers (coyotes) or middlemen were offering member farmers at harvest, when prices were depressed by abundant supplies. The goal was to provide farmers with better, lower-cost marketing services than the truckers offered. The marketing cooperative would, it was hoped, take over the truckers’ business of buying, storing, and reselling their members’ food grain crops. A successful, financially sustainable cooperative meant a successful program. That was the development hypothesis.

When evaluators assessed how well the farmer cooperatives were doing a few years after the corn silos were installed and project assistance had ended, they were surprised to find that the silos of a number of cooperatives were empty at times when they should have been full. In some cases, the cooperatives were only nominally functioning, with little or no trading going on.

Had the evaluators used the original project metrics of how much corn the cooperatives traded with their members or how financially viable the cooperatives were from their buying and selling transactions, they would have given the cooperative program a very poor rating.

But when the evaluators began to probe deeper and ask the cooperative farmers why the corn silos were empty and why the cooperatives were not buying and selling their members’ grain, they came away with a different impression of the program.

“No. We no longer buy grain from our members,” the cooperative leaders told the evaluators. “But we don’t want you to take away the corn silos from us,” they pleaded.

“We did not come to take away your silos,” replied the evaluators. “But we would like to know why they are empty and why you still want them if you don’t use them to trade grain.”

“You see,” said the farmers, “the first few years we had the silos we filled them with grain and we were able to buy and sell at good prices for our members. But then the truckers returned and they began to offer our members better prices than they had before we had the silos; prices so good that our cooperative members decided to sell to them and not the cooperative.

“With the silos, the truckers know our members can always go back to trading with the cooperative. We need the silos. Empty or full they help us get better prices from the truckers.”

The lesson of this evaluation is that the performance criteria laid out when a project is designed – in this case, how successful the cooperatives were at using their corn silos to buy and store grain – may not be the best or even the correct metrics to use. A more relevant criterion, in this instance, would have been how effective the corn silos were at helping farmers bargain for better prices in the marketplace. Greater farmer marketing power was an unintended consequence and, in this case, the primary impact of the program, and it was far more relevant to measure than what was originally intended.

The example underscores that those evaluating projects need to be aware of – and on the lookout for – unintended consequences of the activities they examine.
