1 May 2008
Taking the ideas from the previous post and putting them into a diagram, I get this:
Some assessment uses ICT (or technology) – this is e-assessment (x axis).
Some assessment is designed to assess ICT capability (y axis).
Elliott’s Assessment 2.0 seems to use ICT not as e-assessment, but as a medium for allowing judgements to be made about ICT capability (z axis).
Now, of course, analysing any one particular assessment methodology, one could locate it in this three-dimensional space. For example:
A traditional written paper would sit on the y-axis. The NAA online assessment activities designed for KS3 would sit in the space between all three axes (with perhaps lower y- and z-values than x-value). Coursework would have an x-value of 0 but would have some components of y and z. Online assessments such as the driving test would sit on the x-axis.
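The placements above can be sketched in code. This is purely my own illustration, not anything from Elliott's paper: each methodology becomes a point (x, y, z) in the three-dimensional space, and all the coordinate values here are invented for the sake of the example.

```python
# Illustrative sketch only: assessment methodologies as points in the
# three-axis space described above. Coordinate values are invented.

from typing import NamedTuple


class Assessment(NamedTuple):
    name: str
    x: float  # extent to which ICT is the delivery medium (e-assessment)
    y: float  # extent to which ICT capability is what is being assessed
    z: float  # extent to which ICT is the medium for judging ICT capability


examples = [
    Assessment("Traditional written paper", 0.0, 1.0, 0.0),
    Assessment("NAA KS3 online activities", 0.8, 0.4, 0.3),
    Assessment("Coursework", 0.0, 0.5, 0.4),
    Assessment("Online driving test", 1.0, 0.0, 0.0),
]


def on_single_axis(a: Assessment) -> bool:
    """True if the assessment sits on exactly one axis of the space."""
    return sum(v > 0 for v in (a.x, a.y, a.z)) == 1


for a in examples:
    print(f"{a.name}: ({a.x}, {a.y}, {a.z}) single-axis={on_single_axis(a)}")
```

The written paper and the driving test come out as single-axis cases, while the KS3 activities and coursework occupy the space between axes, which is where the validity and reliability questions below start to bite.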
My questions here are “Where is the highest validity?” and “Where is the highest reliability?” How does one use Elliott’s Assessment 2.0 to determine success in a certificated qualification?
1 May 2008
Much as I dislike the nomenclature (Assessment 2.0), I found this paper by Bobby Elliott (thanks to my colleague Bruce Nightingale and the ALT newsletter for bringing it to my attention) illuminating on many levels. Firstly, here was someone making the links between the ways in which technology is reportedly used by young people and the ways it could be used for assessment. Secondly, the author works for a government agency, the Scottish Qualifications Authority (SQA). Is this evidence that policymakers’ thinking is changing to embrace the various ways in which evidence of learning can be presented through technological opportunity?
My thoughts return, though, to the Macfarlane distinction between assessment of technology (eg the ICT curriculum) and assessment through technology (ie the methodology). This paper by Elliott seems to be moving a little away from the latter and perhaps towards the former. But perhaps, also, it is defining a third axis: assessment of technological capability through evidence presented via that technology. Maybe it is asking ‘What should we be assessing?’ (ie the curriculum) rather than ‘How should we assess it?’ (the methodology). But more than that, it is asking whether we can assess the ‘what’ through the ‘how’.
The impressive list of tools that may be used for presenting evidence (and for assessment) in Elliott’s paper also underlines my scepticism about a one-size-fits-all technological solution to assessment. And when I look down that list I am reminded of surveys presented by Terry Freedman (at a TDA conference in Nov 07) and others which show that young people’s use of these tools is very diverse and very thinly spread. It is also very transient: MySpace, here today, gone tomorrow.
The very tool that Elliott uses to present his iPaper may well be a case in point. What if an awarding body decided Scribd was the thing to use? How long before it goes from sliced bread to superseded by the next best thing? How do we build agility into assessment so that it does not become an exercise in rewarding the fashionable (as opposed to the current system, which rewards the old-fashioned)?
PS Yes, it’s been a long time… Higher Education management (ie my temporary role for 07/08) and PhDs don’t easily mix… but I know that’s just my excuse… and I’m sticking to it…