Submitted by Ed Gaible on
Mike, hi. You're pointing to a very large number of evaluations of a single, dedicated ed-tech solution. While these evaluations might be a bit late to the party, especially given the investments that are now being planned, there's a terrific opportunity to start quantifying what works (essential factors for success), what doesn't work, and what might not be so essential in ed-tech projects. Results would be germane to the OLPC, of course, but could also establish baseline inputs and expectations for other large-scale PC-based projects.

What would it take for the evaluators contracted by the larger agencies (WB, IDRC, et al.) to compare approaches and develop a set of common indicators or a minimal shared framework? Such a framework might enable comparison of data on, say, teacher-development inputs (on a per-teacher basis), maintenance/repair inputs, content inputs, and a few other factors; the impact of these inputs could then be assessed against simple outcomes over a given period, such as the number of messages/emails/blog posts per kid, or time spent using the XO per kid. Now's the time.
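
To make the idea concrete, here's a minimal sketch in Python of what a record in such a shared indicator framework might look like. Every field name and number below is a hypothetical placeholder of my own, not an indicator from any actual WB or IDRC evaluation; the point is only that normalizing inputs per teacher or per student makes projects of different sizes comparable.

    from dataclasses import dataclass

    # Hypothetical shared indicator schema; all fields are illustrative only.
    @dataclass
    class ProjectIndicators:
        project_name: str
        period_months: int
        # Inputs, normalized so deployments of different sizes can be compared
        teacher_dev_hours_per_teacher: float   # training time per teacher
        maintenance_cost_per_device: float     # repair/maintenance spend per device
        content_cost_per_student: float        # content spend per kid
        # Simple outcomes over the same period, normalized per student
        messages_per_kid: float                # messages/emails/blog posts sent
        usage_hours_per_kid: float             # time spent using the XO or PC

        def outcome_per_input_dollar(self) -> float:
            """Crude ratio: usage hours per dollar of per-student input spend."""
            spend = self.maintenance_cost_per_device + self.content_cost_per_student
            return self.usage_hours_per_kid / spend if spend else 0.0

    # Example comparison of two hypothetical deployments (made-up numbers)
    olpc_pilot = ProjectIndicators("OLPC pilot", 12, 40.0, 25.0, 10.0, 30.0, 120.0)
    pc_lab = ProjectIndicators("PC lab project", 12, 60.0, 15.0, 20.0, 12.0, 80.0)
    for p in (olpc_pilot, pc_lab):
        print(p.project_name, round(p.outcome_per_input_dollar(), 2))

Even a crude ratio like the one above would let evaluators line up an OLPC deployment next to a PC-lab project on the same axes, which is exactly what incompatible evaluation designs currently prevent.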