A regular complaint among practitioners
and academics alike is that we do not really know how effective technology
interventions have actually been. All too often, high-quality monitoring and
evaluation are ignored, underfunded, or left as an afterthought. Moreover, even
when evaluation does take place, its design often means that it amounts to an
expression of wishful thinking rather than a rigorous review of why different
elements of a program might or might not have been successful.
Three particular problems are pertinent
when evaluating Digital Citizen Engagement: first, identifying the
extent to which it is the technology, rather than anything else, that has had
the impact; second, the use of generalised ‘official’ statistics, be they from
governments or operators, which may not sufficiently differentiate between
ownership of a device and actual usage of it; and third, getting the balance
right between expected and unexpected outcomes. Digital engagement need not
always be a positive outcome!

Professor Tim Unwin