I came across this blog via George Siemens’ elearnspace ERN newsletter. It provides a balance (if not a balanced view) to counter the oft-made assertions about the generation of younger learners who are supposedly completely net-savvy, digital natives, etc. I also came across similar balancing statements in a dissertation I have just examined… I’ll need to get the references!
Taking the ideas from the previous post and putting them into a diagram, I get this:
Some assessment uses ICT (or technology) – this is e-assessment (x-axis).
Some assessment is designed to assess ICT capability (y-axis).
Elliott’s Assessment 2.0 seems to be using ICT not as e-assessment, but as a medium for allowing judgement to be made about ICT capability (z-axis).
Now, of course, analysing any one particular assessment methodology, one could locate it in this three-dimensional space. For example:
A traditional written paper would lie on the y-axis. The NAA online assessment activities designed for KS3 would sit in the space between all three axes (with perhaps lower y- and z-values than x-value). Coursework would have an x-value of 0 but would have some y and z components. Online assessments such as the driving test would lie on the x-axis.
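The placements above can be sketched as points in the three-dimensional space. A minimal illustration in Python follows; the coordinates are purely illustrative guesses on a 0–1 scale (not measured values), and the `dominant_axis` helper is my own hypothetical addition for reading off which dimension an assessment leans towards:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """One assessment located in the three-axis space:
    x - extent to which it uses ICT (e-assessment)
    y - extent to which it assesses ICT capability
    z - extent to which ICT is the medium for judging ICT capability
    """
    name: str
    x: float
    y: float
    z: float

    def dominant_axis(self) -> str:
        # Return the axis with the largest value for this assessment.
        values = {"x": self.x, "y": self.y, "z": self.z}
        return max(values, key=values.get)

# Illustrative coordinates only, echoing the examples in the text.
examples = [
    Assessment("Traditional written paper", 0.0, 1.0, 0.0),
    Assessment("NAA KS3 online activities", 0.8, 0.5, 0.4),
    Assessment("Coursework", 0.0, 0.6, 0.5),
    Assessment("Online driving test", 1.0, 0.0, 0.0),
]

for a in examples:
    print(f"{a.name}: dominant axis = {a.dominant_axis()}")
```

Nothing here decides the validity or reliability questions, of course; it simply makes the geometry of the model concrete.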
My questions here are “Where is the highest validity?” and “Where is the highest reliability?” How does one use Elliott’s Assessment 2.0 to determine success in a certificated qualification?
I hadn’t come across this paper until today…
To quote from the executive summary…
“This paper is focused on exploring the inter-relationship between two key trends in the field of educational technologies. In the educational arena, we are increasingly witnessing a change in the view of what education is for, with a growing emphasis on the need to support young people not only to acquire knowledge and information, but to develop the resources and skills necessary to engage with social and technical change, and to continue learning throughout the rest of their lives. In the technological arena, we are witnessing the rapid proliferation of technologies which are less about ‘narrowcasting’ to individuals, than the creation of communities and resources in which individuals come together to learn, collaborate and build knowledge (social software). It is the intersection of these two trends which, we believe, offers significant potential for the development of new approaches to education. ”
These new approaches include those encapsulated in the following quotes:
“Today, the use of social software in education is still in its infancy and many actions will be required across policy, practice and developer communities before it becomes widespread and effective. From a policy perspective, we need to encourage the evolution of the National Curriculum to one which takes account of new relationships with knowledge, and we need to develop assessment practices which respond to new approaches to learning and new competencies we expect learners to develop. ”
“A rigid curriculum inhibits the development of the knowledge and skills that may be useful in the 21st century. If we are to promote the benefits of problem solving and collaboration then they need to be validated and legitimated by the assessment system. This is the greatest challenge for education policy.”
There is some sort of mapping in my mind between the two trends identified in the first paragraph and the two aspects of my research.
The report pairs a change in the purpose of education with a change in technologies, and examines the interface between them. I have the two views of ICT/assessment of ICT (and maybe also views of the purpose of ICT in education). Putting these four onto a diagram, I see that two are to do with systems and theories, and two are to do with learners, users and practice.