Electronic versus paper-based data collection: reviewing the debate


This post was co-authored by Sacha Dray, Felipe Dunsch, and Marcus Holmlund.

Impact evaluation needs data, and often research teams collect this from scratch. Raw data fresh from the field is a bit like dirty laundry: it needs cleaning. Some stains are unavoidable – we all spill wine/sauce/coffee on ourselves from time to time, which is mildly frustrating but easy to dismiss as a fact of life, a random occurrence. But as these occurrences become regular, we might begin to ask ourselves whether something is systematically wrong.

So it is with survey data, which is produced by humans and will contain errors. As long as these are randomly distributed, we usually need not be overly concerned. When such errors become systematic, however, we have a problem.
 
Data for field experiments can be collected either through traditional paper-based interviews (PAPI, for paper-assisted personal interviewing) or using electronic devices such as tablets, phones and computers (CAPI, for computer-assisted personal interviewing). There are some settings where CAPI may not be appropriate, but it seems safe to say that CAPI has emerged as a new industry standard. Even so, we sometimes encounter resistance to this no-longer-so-new way of doing things.
 
The next time you are planning a survey and your partners consider CAPI impractical or inferior to PAPI, refer to the following cheat sheet of some of the advantages of electronic data collection. These are based on some of the things we have found most useful in various surveys carried out as part of DIME impact evaluations.

1. Avoid mistakes before they happen. Did you hear the one about the 6-year-old with two children who was also his own grandmother? With a long and complex paper questionnaire, such errors can easily occur. CAPI allows us to build in constraints to, for example, create automatic skip patterns or restrict the range of possible responses (a sketch of this kind of constraint logic appears after this list).

2. Ask the right questions. As part of an experiment to test the effectiveness of different forms of incentives on midwife attrition in Nigeria, we used our survey to check whether midwives received the type of incentive they were assigned to. Here, enumerator error could contaminate the experiment by informing the interviewee of the different forms of incentives potentially received by their peers, so it was vital that exactly the right questions be asked of each midwife. To ensure this, the question was automatically filtered depending on answers to a set of simple questions early in the questionnaire, each of them with its own restrictions (the sketch after the list also illustrates this kind of filtering). This level of assurance cannot be achieved with a paper survey.

3. Change your mind. Despite our best efforts, including extensive piloting, we sometimes ask questions in the wrong way. In a recent survey of Nigerian health workers, we soon learned that what we were calling a “uniform” is more commonly considered a “cover” or “coverall”. We were able to correct this early in the survey and to include a picture of what we meant directly in the questionnaire just to be sure, all without the logistics of re-printing or distributing paper questionnaires.

4. Get better answers. Getting reliable information about socially undesirable attitudes or behaviors is difficult. In Honduras, we are testing an employment readiness and labor market insertion program for at-risk youth and are interested in participation in violence and criminality, among other things. These are difficult questions to ask directly, but using a tablet allows us to easily isolate this set of questions for self-administration without worrying about having to recompile separate sheets for respondents later on. This also provides better protection for respondents, as their answers to these and other questions are not easily accessible to outsiders. And where we are concerned with question-order effects (i.e., where the order of questions might affect the responses), with a tablet we can easily randomize the order of questions (see the randomization sketch after this list). On paper, this is difficult and becomes increasingly complex as the number of questions to randomize increases. Finally, there are some things that are best captured not by asking questions but by observing, and here we may build simple games or tasks into the tablet and directly record behavior.

5. Know how work is progressing. The graph below shows the number of surveys (y-axis) completed by each enumerator (x-axis) for a survey on a flood risk prevention intervention in Senegal. Team 1 (enumerators 1-3) stands out for being relatively more “productive”, but that outlier status triggered further analysis of their data, revealing several anomalies: ultimately it was agreed that the team’s work would be repeated from scratch. A similar analysis could have been done with paper-based surveys, but it would rely on accurate reporting by the survey firm, which may not have the incentive to tell the full truth about situations like this one (the graph shows completed interviews uploaded to the server, a more objective metric). It also would have taken longer. (A minimal sketch of this kind of check appears near the end of the post.)



6. Eliminate data entry error. Data collected on paper forms eventually needs to be digitized. While this process can be done concurrently with data collection, it is another source of error, both because of potential errors in the data entry program itself and because of mistakes by the people entering the data. We have not directly compared the quality of raw data collected through CAPI vs. PAPI ourselves, but perhaps someone else has?
 
7. Experimentally test survey design: CAPI makes it much easier to run survey experiments, for example randomizing the order or framing of questions to see whether this has an effect on the answers received (a sketch of how such randomization can be drawn per respondent appears after this list). We did this for a survey in Nigeria to measure the effect of framing statements about the conditions at primary healthcare facilities positively or negatively on reported satisfaction. We find that patients report extremely high satisfaction with positively framed statements – above 90% for 8 of 11 statements (e.g. “The staff at this facility is courteous and respectful”). However, this satisfaction drops significantly – by between 2 and 22 percent, depending on the statement – when the same statements are posed with negative framing (e.g. “The staff at this facility is rude and disrespectful”).
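
To make points 1 and 2 more concrete, here is a minimal Python sketch of the kind of constraint and skip logic that CAPI software lets you attach to each question. It is illustrative only: the question names, ranges and routing rules are hypothetical, not taken from any of our questionnaires or from a particular CAPI package.

    # Illustrative only: a toy version of the constraint and skip logic that
    # CAPI tools let you attach to each question. Question names, ranges and
    # routing rules are hypothetical.
    QUESTIONS = {
        "age": {
            "text": "How old are you?",
            "constraint": lambda v: 0 <= v <= 110,  # reject impossible ages outright
            # a young child is never routed to the fertility module, which rules
            # out the 6-year-old with two children (point 1)
            "next": lambda v: "num_children" if v >= 12 else "school_enrolled",
        },
        "num_children": {
            "text": "How many children do you have?",
            "constraint": lambda v: 0 <= v <= 20,
            "next": lambda v: "end",
        },
        "school_enrolled": {
            "text": "Is the child enrolled in school? (0 = no, 1 = yes)",
            "constraint": lambda v: v in (0, 1),
            "next": lambda v: "end",
        },
        # The same mechanism drives point 2: a follow-up about incentives can be
        # keyed off the treatment arm recorded earlier in the interview, so the
        # enumerator only ever sees the question for the assigned incentive.
    }

    def record_answer(question_id, value, answers):
        """Validate one answer and return the id of the next question to ask."""
        question = QUESTIONS[question_id]
        if not question["constraint"](value):
            raise ValueError(f"{value!r} is out of range for {question_id}; re-ask the question")
        answers[question_id] = value
        return question["next"](value)

    answers = {}
    print(record_answer("age", 6, answers))  # -> school_enrolled, never num_children

A paper form can print the same skip instructions, but it cannot refuse an out-of-range answer at the moment it is written down.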
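
Points 4 and 7 both rely on randomization at the moment of the interview. The sketch below shows one way a tablet-based form could draw a question order and a positive or negative framing for each respondent; the statements, the seeding on the respondent ID and the 50/50 framing split are assumptions made for illustration.

    import random

    # Illustrative only: per-respondent randomization of question order (point 4)
    # and of positive vs. negative framing (point 7). Statements are hypothetical.
    SATISFACTION_ITEMS = {
        "courtesy": ("The staff at this facility is courteous and respectful.",
                     "The staff at this facility is rude and disrespectful."),
        "waiting": ("Waiting times at this facility are reasonable.",
                    "Waiting times at this facility are excessive."),
        "cleanliness": ("This facility is kept clean.",
                        "This facility is kept dirty."),
    }

    def build_module(respondent_id):
        """Return one respondent's statements in random order, each with a random framing."""
        rng = random.Random(respondent_id)  # seed on the respondent ID so the draw is reproducible
        item_ids = list(SATISFACTION_ITEMS)
        rng.shuffle(item_ids)               # randomize the order of questions
        module = []
        for item in item_ids:
            positive = rng.random() < 0.5   # randomize the framing of each statement
            positive_text, negative_text = SATISFACTION_ITEMS[item]
            module.append({
                "item": item,
                "framing": "positive" if positive else "negative",
                "text": positive_text if positive else negative_text,
            })
        return module

    for q in build_module(respondent_id=1017):
        print(q["framing"], "-", q["text"])

Because the framing assignment is recorded alongside each answer, analyzing order or framing effects is then simply a matter of comparing responses across the randomly assigned versions.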
 
There are, of course, other important advantages of electronic data collection, such as shorter survey times (as documented in Caeyers, Chalmers, and De Weerdt 2010), reduced waste and clutter, and an easier life for enumerators (less to carry, less to pay attention to during interviews). That said, you do need tablets or smartphones (though these can be reused in other surveys), there is a bit of a learning curve in programming the first questionnaires, and there is always the potential that technical glitches can lead to data loss. But such issues are outweighed by the considerable advantages that electronic data collection yields in terms of data quality.
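
Finally, to make the field monitoring in point 5 concrete: because completed interviews reach the server as they are uploaded, a simple tally per enumerator is enough to flag outlier teams for follow-up. Below is a minimal pandas sketch; the file name, the column names (enumerator_id, submitted_on) and the flagging rule are all hypothetical.

    import pandas as pd

    # Illustrative only: flag enumerators whose average daily output is far above
    # the median. Assumes a CSV export of server submissions with hypothetical
    # columns 'enumerator_id' and 'submitted_on'.
    submissions = pd.read_csv("submissions.csv", parse_dates=["submitted_on"])

    daily = (submissions
             .groupby(["enumerator_id", submissions["submitted_on"].dt.date])
             .size()
             .rename("interviews_per_day")
             .reset_index())

    per_enumerator = daily.groupby("enumerator_id")["interviews_per_day"].mean()
    cutoff = per_enumerator.median() * 1.5          # arbitrary rule, for illustration only
    suspicious = per_enumerator[per_enumerator > cutoff]
    print(suspicious.sort_values(ascending=False))  # candidates for back-checks

In practice a rule like this is only a trigger: as in the Senegal example above, the flagged data still need to be examined before deciding that a team's work has to be redone.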
 
What are your experiences with electronic vs. paper surveys? Any must-dos or don’ts?
 
Sarah Hughes
May 25, 2016

I recently spoke to your Development Data Group (the Statistical Seminar series) about managing fieldwork while incorporating newer technologies and addressed some additional issues that arise when shifting from PAPI to CAPI. Resistance to CAPI also includes 1) sponsors' concerns about cost and 2) data collection teams' concerns about changing field processes.

Regarding the former, sponsors may focus on programming costs and may not consider the time spent cleaning PAPI data in their cost calculations. Empirical studies comparing costs are rare, and quickly become out of date given the rapidly dropping costs of hardware and the growing number of free, easy-to-program CAPI software packages that don’t require advanced programming skills. The World Bank’s own Survey Solutions is an example.

Regarding the latter, changes in the tasks of locally-based data collection teams can engender resistance to new technology, new methods, new training needs and a reduction in control over how data are handled. When data go directly from the field interviewer to an analyst in Washington, the field supervisor, central office data manager and other locally-based professionals perceive and experience a loss in their value to the survey project. In addition, costs for new technology represent an unknown for many local data collection teams. When organizations negotiate fixed-price contracts with local data collectors, the local teams often haven’t received the final sample, the questionnaire is not finalized and a final decision on the mode to use (CAPI or PAPI) may still be pending. When the local team does not include staff with strong programming skills, choosing the familiarity of PAPI is a logical risk-reducing strategy.
Finally, your experience with the interviewer data anomalies shown in the figure (item #5) is not unique. Per-completed-case pay structures, common in many countries, are notorious for emphasizing speed and may contribute to corner-cutting or even faked cases. Sponsors should encourage proposals for data collection that have per-day pay, incentives for quality, or other quality-enhancing systems that work alongside the monitoring that CAPI provides.

David Evans
May 25, 2016

These are great additions, Sarah. Thank you!

Segun Oguntoyinbo
May 25, 2016

I have thoroughly enjoyed your contribution. I work for a data firm in Nigeria and I confirm all the examples are intimately familiar - the icing on the cake being the 6-year-old with 2 kids who was also his own grandmother. Despite the rigorous training of enumerators for a coverage survey in Nigeria, we encountered several children who were immunized before they were born. PAPI's main disadvantage is that errors are discovered too late, and revisits stretch timelines to the limit. CAPI produces more accurate results, is more efficient, significantly less costly and, perhaps most important, faster - given these days of very tight timelines. For us as a firm, it's goodbye to PAPI - forward ever, backward never.

Felipe Dunsch
May 26, 2016

Thank you Segun for sharing this perspective from a data firm.

Anne Solon
May 26, 2016

This is a great introduction to some of the benefits and concerns with using CAPI. Another factor to consider is that with PAPI there is often a lag between the end of fieldwork and getting clean, usable data, which is often frustrating to researchers who are under time constraints to ‘get going’. The use of CAPI allows early access to files, even during fieldwork, that have already been through a cleaning/consistency check in the field in front of the respondent. On a practical note, at Young Lives we also considered the logistics of charging our tablets during the course of fieldwork, especially in rural settings. This is directly linked to the concerns around the cost of the equipment needed to use CAPI: users must remember they’ll possibly need extra batteries or external battery packs, USB sticks and encryption software. We did find a market for selling our tablets post fieldwork to recoup some of the costs. This blog is written from a researcher’s perspective and I very much look forward to reading more about the data management experiences in your work.

Felipe Dunsch
May 26, 2016

Thank you Anne for these additions. Good point on logistics. And thank you for the idea for a follow-up post.

Bassey Archibong
October 25, 2018

CAPI is very well suited to cities, apart from battery issues for tablets and phones and their use in remote areas with no regular electricity for charging. It does appear CAPI is most suited to coded quantitative surveys; its use in participatory research might be limited. Very importantly, are those commissioning a given piece of research or a survey ready to invest in the purchase of appropriate phones and to give time for training data collectors in the use of such phones?

Faizan Diwan
May 26, 2016

Thanks for the great blog post. A related post that readers will find useful is about measurement and issues around the quality of collected data on IPA's blog: http://bit.ly/1sRIwcd
And really interesting comments too. Sarah, your point about teams on the ground being risk-averse and concerned about changing from PAPI to CAPI is well taken.
I work at Dobility, which created SurveyCTO, and we're always also trying to figure out how to address these constraints to change. It's definitely true that the nature of the work for field managers/supervisors ends up changing and I think a good approach is to invest a little bit in training them to adopt new roles: they may no longer need to be scrutinizing individual paper surveys for skip pattern errors but digital data collection software can also make it much easier for them to review aggregated data, generate reports of surveyor performance and errors (even without advanced statistical knowledge), and act on that data at a higher level to manage the team and improve its performance. So their work could actually shift to something that builds a broader skill set and is as or more productive, even if there is an initial learning curve.

Felipe Dunsch
May 31, 2016

Thanks Faizan for the pointer to the IPA blog post. Very useful indeed!
There is definitely a learning curve for field staff, especially those who have experience with both techniques, as they might have to "un-" or "re-learn" things they were trained to do in the past.

Christopher Robert
May 26, 2016

Errors without bias
Thanks for this post, guys! Great stuff. Just wanted to push gently back on one statement here about errors in the data: "As long as these are randomly distributed, we usually need not be overly concerned." This is a common sentiment that I think arises in part from our training in statistics, classical measurement error vs. bias, etc. -- but I've come to think that it might be a bit dangerous/misleading in practice. Samples are generally limited, statistical power is nearly always less than one would like, and in the end noisy zeros and other imprecise results have plagued nearly every impact evaluation I've seen up-close. The damage done by imprecision is, I think, massive. One strategy is just to invest in larger samples, but another is to do more to systematically reduce even mean-zero errors so that, for a given sample, results are more precise. I suspect that many investments in greater data quality can be much more cost-effective, per unit of standard-error reduction, than increases in sample size. Sorry -- I know that I'm preaching to the choir here, but I just wanted to add this comment to the discussion! Thanks again for sharing this!

Felipe Dunsch
May 31, 2016

Thanks, Chris, for pointing this out. It is of course a very valid point and our formulation might have been a bit sloppy there.
I think you are indeed preaching to the choir as we are all pulling in the same direction to reduce errors (of all kinds) and to improve precision when we collect data.

Alison Connor
June 09, 2016

I agree with the authors but I’d like to caution that electronic surveys and data collection systems are only as good as they are programmed to be, and they require significant support infrastructure to facilitate a smooth and efficient data collection process.
At IDinsight, we did a head-to-head comparison of electronic and paper-based enumeration to compare time and error rates between the two processes for the Zambian Ministry of Community Development, Mother and Child Health (MCDMCH). They were preparing for a large-scale household survey for their innovative Social Cash Transfer program and wanted to know which data collection method would maximize quality while reducing costs (more info here: bit.ly/24kgNMo).
Our study found that electronic surveys had several important advantages:
1. More survey questions were asked aloud. The questions appear on the screen one by one and often cannot be skipped.
2. Programmed quality checks were effective. Where checks were built-in, electronic surveys had fewer out-of-range value errors, missing values, etc.
3. Enumerators were happier. Enumerators enjoyed using the modern technology and felt it made their work more efficient while contributing to less physical burden (compared to carrying paper surveys).
But electronic surveys were not a magic bullet:
1. Enumerators still made mistakes (sometimes more mistakes). Compared to paper-based surveys, electronic surveys had more errors on the complex components of the survey, especially the household roster. Electronic surveys often constrain enumerators to a pre-programmed order whereas on paper, they have more flexibility to fill out a table of information as they wish (i.e. ask all variables for one person at a time or ask about values for each person one variable at a time). On paper, enumerators can also record information as it comes, allowing for a more organic data collection process.
2. Not much time was saved. Enumerators still spent much of their time walking between houses, which is a challenge an electronic survey cannot fix!
3. Visual aids in the electronic survey may have caused more confusion. The electronic surveys included pictures to help the enumerator ask about household assets. The images, however, were sometimes incongruous with the survey population. This may have caused respondents to under-report their assets because theirs did not resemble the one in the picture.
Electronic surveys are also only as good as the technical infrastructure that supports them. Key bottlenecks include the charging technology, accessing mobile networks to sync surveys, and maintaining an updated case management interface so that households are not accidentally skipped.
As a result of IDinsight’s findings, MCDMCH changed their plan to scale up the Social Cash Transfer program with an electronic data collection system. They will continue with the paper-based system until the challenges with the electronic system can be worked out, saving thousands of dollars and avoiding a lot of headaches.
So YES, technology can make our lives infinitely easier and can solve many of our problems, BUT it still requires thought and resources to reach its full potential! Other organizations can follow MCDMCH’s lead and pilot electronic systems to identify challenges upfront and maximize the true benefits of an electronic system.

Nagraj Rao
July 01, 2016

Dear Alison,
The bottlenecks you point out are not a CAPI issue but a survey protocol issue. Here are the reasons why:
1) You can program the questionnaire however you want (asking one person all the questions at one time, or asking different persons the same question). It depends on the survey protocol and not on CAPI. CAPI is flexible as long as you program it that way. However, there are very good reasons why survey protocols are set a certain way and why being too flexible is not recommended: a) to avoid information from one participant accidentally being filled in the wrong row (not all enumerators are excellent at handling this on complex questionnaires), and b) to minimize enumerator heterogeneity (impacts) in survey results/outcomes.
2) CAPI cannot magically transport you to a household (that's not the objective) and both PAPI and CAPI require walking! If your sample had a high cellphone/internet penetration rate you wouldn't need to walk but could administer email/cell phone surveys. The fact of the matter is that these are challenges in some countries, which is why you need to walk household to household. Despite the walking, the Caeyers, Chalmers and De Weerdt study referenced in the blog showed a 10% reduction in data collection time (and this was 2010, when CAPI was in its nascent stages), which is still quite a cost saving, not to mention that this figure did not account for the data coding and processing time after data collection that PAPI requires. I am confident that, overall, the savings outweigh the total time and costs.
3) You state that "The images, however, were sometimes incongruous with the survey population". This again is a survey protocol issue. You can customize the picture that will appear on the screen for any product by the region where the survey is being administered, if prior research has been done on such regional variations. You could also choose not to have visual aids. There is complete flexibility in CAPI! For example, in consumption-based surveys, visual aids tremendously help in distinguishing between a small, medium and large pile of bananas or packets of salt.
In short - I'd recommend investment in survey methodology/protocol prior to conducting surveys (CAPI or PAPI). There is lots of innovation going on in CAPI as well, and I agree it is not perfect, but not for the reasons you listed above.

Atew
September 18, 2018

I really enjoyed the content of this blog regarding CAPI or PAPI. One question I have: apart from the seven practical points you have mentioned, is there any theoretical foundation or framework that can explain why CAPI generates better data quality compared to PAPI?
