Despite increasing attention to the impact of ICT on teaching and learning, the ICT/education field remains littered with examples of poor evaluation work. A few of them arrive in my in-box every week.
Many potential reasons have been advanced for the generally poor quality of much of this work. One is simple bias -- many evaluations are done and/or financed by groups heavily invested in the success of a particular initiative, and in such cases findings of positive impact are almost foregone conclusions. Many (too many, some will argue) evaluations are restricted to gauging perceptions of impact, as opposed to actual impact. Some studies are dogged by sloppy science (poor methodologies, questionable data collection techniques); others attempt to extrapolate findings from carefully nurtured, hothouse-flower pilot projects in ways that are rather dubious. (The list of potential explanations is long; we'll stop here for now.)