An overview of the Higher Education ICT Literacy Assessment

7 January 2007

David M. Williamson, Irvin R. Katz, and Irwin Kirsch (2005) [online PDF] available at http://www7.nationalacademies.org/bose/ICT%20Fluency_Assessment_Overview_Article.pdf

This paper, originally presented at the 2005 AERA conference, contains a wealth of argument about the validity of ICT assessment. Here the topic is framed as ICT literacy. The word ‘literacy’ is laden with other connotations for me – to do with natural, fluent use of (in this case) ICT. Saying someone is ICT literate means so much more than saying they are ICT competent: one is about understandings and internalisations, the other, I believe, about surface skills. The paper’s context is HE, specifically an assessment that measures an HE student’s ability to use technology to research, organize and communicate information. What it says, though, goes well beyond this context and speaks to my interest in assessment at 16.

The authors start from the findings of a 2001 panel looking at ICT assessment. The panel identified a number of key issues of concern to policy makers and practitioners in the education community:

  • ICT is changing the very nature and relevance of knowledge and information.
  • ICT literacy, in its highest form, has the potential to change the way we live, learn and work.
  • ICT literacy cannot be defined primarily as the mastery of technical skills.
  • There is a lack of information about the current levels of ICT literacy both within and among countries.

In amplifying the second bullet point, they state: “The transformative nature of information and communication technologies might similarly influence and change not only the kinds of activities we perform at school, at home and in our communities but also how we engage in those activities.” (ibid, p5)

They then go on to distinguish between issues of access and of proficiency, arguing that research into the Digital Divide is insufficient for addressing the measurement of ICT literacy. Providing access is not enough – many schools found this with the introduction of Regional Broadband Consortia (or maybe the RBCs found this – I suspect schools knew already!).

The paper then discusses evidence-centred design of assessments. Again, the context for this paper is different to mine, as they are trying to design an Internet-delivered test, an approach which may be running into difficulty in English schools. Nevertheless they provide a concise overview of this field, drawing on validity theory (Messick, 1989), psychometrics (Mislevy, 1994), philosophy (Toulmin, 1958) and jurisprudence (Wigmore, 1937). The process of assessment design they identify consists of four key questions:

  • Purpose: Who is being measured and why are we measuring them? What types of decisions will we be making about people on the basis of this assessment?
  • Proficiencies: What proficiencies of people do we want to measure to make appropriate claims from the assessment?
  • Evidence: How will we recognize and interpret observable evidence of these proficiencies so that we can make these claims?
  • Tasks: Given limitations on test design, how can we design situations that will elicit the observable evidence needed?

These issues again seem central to my thinking at this stage.

Later they break down ICT literacy into seven key evidences – things that are to be measured (or assessed).

  • Define: The ability to use ICT tools to identify and appropriately represent an information need.
  • Access: The ability to collect and/or retrieve information in digital environments.
  • Manage: The ability to apply an existing organizational or classification scheme for digital information.
  • Integrate: The ability to interpret and represent digital information.
  • Evaluate: The ability to determine the degree to which digital information satisfies the needs of the task in ICT environments.
  • Create: The ability to generate information by adapting, applying, designing, or inventing information in ICT environments.
  • Communicate: The ability to communicate information properly in context in ICT environments.

This model seems rather too close to a skills taxonomy for my liking, but it may be useful as one model among many for trying to look at how learners construct their knowledge.


Futurelab lit review on e-assessment (2004)

7 January 2007

Futurelab, a UK technology and learning ‘thinktank’, commissioned this 2004 literature review (1) on e-assessment. In compiling the report, the authors (Jim Ridgway and Sean McCusker, School of Education, University of Durham and Daniel Pead, School of Education, University of Nottingham) have, not surprisingly, covered a lot of ground to do with assessment per se and not just its technologically-enabled version.

In talking about the use of e-portfolios, the report concludes that “Reliable teacher assessment is enabled. There is likely to be extensive use of teacher assessment of those aspects of performance best judged by humans (including extended pieces of work assembled into portfolios)” (ibid, p2). For me, hidden in this is the validity argument: it comes through the reliability of teacher assessment and through extended pieces of work, both of which should, I believe, help validity.

Section 1 of the report then discusses the nature of assessment – formative and summative. Throughout, the authors continually refer back to the purpose and validity of assessment, and the learner is placed at the centre of the described processes. Of particular note for me is the ‘mendacity quotient’, whereby summative assessment often encourages students to actively hide what they don’t know.

Section 2 discusses how and where assessment should be driven, with the focus also on technology, since the report is on e-assessment. There are some more generally applicable points covered here, though. “Metcalfe’s Law” – the observation that the value of a network grows roughly with the square of the number of people connected to it – is used to underpin the need to tie assessment into rapidly expanding, technologically enabled social networks. More simply, perhaps, the use of peer networks for assessment might also be part of this… My work aims to look at the validity of external assessment by using self and peer viewpoints as comparators. In addition to social changes, the report identifies globalisation, mass education, defending democracy and government-led policies as other drivers of change in assessment. Here there is, disappointingly for me, relatively little focus on the needs of the learner, although the demands of lifelong learning are brought out in the section summary (ibid, p9).

Section 3 discusses developments in e-assessment. Or so its section heading states. Actually there is much here about assessment in general and the need to make it valid and relevant to learners’ needs: “… some [developments] reflect a desire to improve the technical quality of assessment (such as increased scoring reliability), and to make the assessment process more convenient and more useful to users (by the introduction of on-demand testing, and fast reporting of results, for example).” (ibid, p15)

Under the heading “Opportunities and challenges for e-assessment”, section 4 is a rich vein of resources and opinion on using assessment to assess deeper-level skills and understandings. While the section summary is rather parsimonious compared with what has been written, sub-section 4.1 is full of ideas about how assessment should enable learning.

Finally, the appendix on page 24 is a good overview of, to use its title, “The fundamentals of assessment”.

(1) Ridgway J, McCusker S and Pead D (2004) Literature Review of E-assessment: A Report for Futurelab [online PDF] available at http://www.futurelab.org.uk/download/pdfs/research/lit_reviews/futurelab_review_10.pdf