David M. Williamson, Irvin R. Katz, and Irwin Kirsch (2005) [online PDF] available at http://www7.nationalacademies.org/bose/
This paper, originally presented at the 2005 AERA conference, contains a wealth of argument about the validity of ICT assessment. Here the topic is presented as ICT literacy. The word ‘literacy’ is laden with other connotations for me – to do with natural use of, in this case, ICT. If someone is ICT literate, that means so much more than saying they are ICT competent: one is about understandings and internalisations, the other, I believe, about surface skills. The paper’s context is HE, specifically an assessment that measures a HE student’s ability to use technology to research, organize and communicate information. What it says, though, goes well beyond this context and speaks to my interest in assessment at 16.
The authors start from the findings of a 2001 panel looking at ICT assessment. The panel identified a number of key issues of concern to policy makers and practitioners in the education community:
- ICT is changing the very nature and relevance of knowledge and information.
- ICT literacy, in its highest form, has the potential to change the way we live, learn and work.
- ICT literacy cannot be defined primarily as the mastery of technical skills.
- There is a lack of information about the current levels of ICT literacy both within and among countries.
In the amplification of the second bullet point they state “The transformative nature of information and communication technologies might similarly influence and change not only the kinds of activities we perform at school, at home and in our communities but also how we engage in those activities.” (ibid, p5)
They then go on to distinguish between issues of access and of proficiency, stating that research into the Digital Divide is insufficient for addressing issues of measuring ICT literacy. Providing access is not enough – many schools found this with the introduction of Regional Broadband Consortia (or maybe the RBCs found this – I suspect schools knew already!).
The paper then discusses evidence-centred design of assessments. Again, the context for this paper is different from mine, as they are trying to design an Internet-delivered test, an approach which may be running into difficulty in English schools. Nevertheless, they provide a concise overview of this field, drawing on validity theory (Messick, 1989), psychometrics (Mislevy, 1994), philosophy (Toulmin, 1958), and jurisprudence (Wigmore, 1937). The process of assessment design they identify consists of four key questions:
- Purpose: Who is being measured and why are we measuring them? What types of decisions will we be making about people on the basis of this assessment?
- Proficiencies: What proficiencies of people do we want to measure to make appropriate claims from the assessment?
- Evidence: How will we recognize and interpret observable evidence of these proficiencies so that we can make these claims?
- Tasks: Given limitations on test design, how can we design situations that will elicit the observable evidence needed?
These issues again seem central to my thinking at this stage.
Later they break down ICT literacy into seven key components – kinds of evidence that are to be measured (or assessed).
- Define: The ability to use ICT tools to identify and appropriately represent information need.
- Access: The ability to collect and/or retrieve information in digital environments.
- Manage: The ability to apply an existing organizational or classification scheme for digital information.
- Integrate: The ability to interpret and represent digital information.
- Evaluate: The ability to determine the degree to which digital information satisfies the needs of the task in ICT environments.
- Create: The ability to generate information by adapting, applying, designing, or inventing information in ICT environments.
- Communicate: The ability to communicate information properly in context in ICT environments.
This model seems rather too close to a skills taxonomy for my liking, but it may be useful as one model among many when trying to look at how learners construct their knowledge.