Too much technology in the classroom?

28 January 2007

This BBC report asks whether there is “too much technology in the classroom”. A fairly light piece, it does make reference to the interface between students’ use of technology outside school and inside it.


EPPI review (2005) Motivation and assessment

27 January 2007

Smith C, Dakers J, Dow W, Head G, Sutherland M, Irwin R (2005) A systematic review of what pupils, aged 11–16, believe impacts on their motivation to learn in the classroom. In: Research Evidence in Education Library. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

This EPPI review, cited by Gilbert, focuses on the motivation of 11-16 year olds. Its main findings identify six themes as key to motivation. Each theme may have some relevance here. Italics represent direct quotes from the summary of the review.

  • The role of self: how are the learner’s own constructs represented in their view of learning? How does the role of the ‘group’ affect this?
  • Utility: Students are more motivated by activities they perceive to be useful or relevant.
  • Pedagogical issues: Pupils prefer activities that are fun, collaborative, informal and active.
  • Influence of peers: linked to the role of self.
  • Learning: Pupils believe that effort is important and can make a difference; they are influenced by the expectations of teachers and the wider community.
  • Curriculum: A curriculum can isolate pupils from their peers and from the subject matter. Some pupils believe it is restricted in what it recognises as achievement; assessment influences how pupils see themselves as learners and social beings. The way that the curriculum is mediated can send messages that it is not accessible at all.

In this last point, the role of assessment is raised. So what does the review have to say about assessment in general?

The way that assessment of the curriculum is constructed and practised in school appears to influence how pupils see themselves as learners and social beings. (Summary, page 4)

… assessment [has a role] in nurturing or negatively influencing motivation (page 6 and page 63)

…the recent systematic review of the impact of summative assessment and tests on students’ motivation for learning acknowledges that ‘motivation is a complex concept’ that ‘embraces… self efficacy, self regulation, interest, locus of control, self esteem, goal orientation and learning disposition’ (Harlen and Deakin Crick, 2002:1) (page 8 of the EPPI review)

Students’ motivation is influenced by their ‘affective assessment’ (Rychlak, 1988) of events, premises and actions which are perceived as meaningful to their existence. (page 35, and linked to ‘logical learning theory’ (uncited))

Student satisfaction with their ‘academic performance tended to be influenced both by grouping, curricular and assessment practices and by its relationship to perceived vocational opportunities’ (Hufton et al., 2002:282). (page 45)

…learning situations that were authentic – in other words, appeared real and relevant to the pupils – could positively influence pupil motivation… ‘Sharing the assessment process with students is another way to capture students’ motivation…When students and teachers analyse pieces of writing together in an exchange of views, students can retain a sense of individual authority as authors and teachers convey standards of writing in an authentic context’ (Potter et al. 2001:53) (page 47 of EPPI)

Harlen W, Deakin Crick R (2002) A systematic review of the impact of summative assessment and tests on students’ motivation for learning. Version 1.1. In: Research Evidence in Education Library. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Hufton NR, Elliott JG, Illushin L (2002) Educational motivation and engagement: qualitative accounts from three countries. British Educational Research Journal 28: 265–289.

Potter EF, McCormick CB, Busching BA (2001) Academic and life goals: insights from adolescent writers. High School Journal 85: 45–55.


Education 2020, the Gilbert Report (2006)

26 January 2007

The Gilbert report on Education 2020 contains a wealth of findings (or sentiments anyway) that have relevance to my research.

Personalisation, it begins, means assessment-centred, learner-centred and knowledge-centred… “Close attention is paid to learners’ knowledge, skills, understanding and attitudes. Learning is connected to what they already know (including from outside the classroom).”… “Sufficient time is always given for learners’ reflection.” (page 8, citing Bransford et al., 2000) – this ties in well with the meta-learning findings of Demos (2007).

“…schools therefore need increasingly to respond to: […] far greater access to, and reliance on, technology as a means of conducting daily interactions and transactions” (page 10, with references in Annex B). “The pace of technological change will continue to increase exponentially. Increases in ‘bandwidth’ will lead to a rise in internet-based services, particularly access to video and television. Costs associated with hardware, software and data storage will decrease further. This is likely to result in near-universal access to personal, multi-functional devices, smarter software integrated with global standards and increasing amounts of information being available to search online (with faster search engines). Using ICT will be natural for most pupils and for an increasing majority of teachers.” (page 11)

“strengthening the relationship between learning and teaching through: … dialogue between teachers and pupils, encouraging pupils to explore their ideas through talk, to ask and answer questions, to listen to their teachers and peers, to build on the ideas of others and to reflect on what they have learnt” (page 15)

“Pupils are more likely to be engaged with the curriculum they are offered if they believe it is relevant and if they are given opportunities to take ownership of their learning. Learning, clearly, is not confined to the time they spend in school” (page 22, citing EPPI, 2005)

[Image: gilbertfig4.gif]

Figure 4 – Ways in which technology might contribute to personalising learning (page 29)

The recommendations on page 30 stop some way short of recognising the relationship between technology inside and outside formal classroom use, however. There is a nod towards it in this extract: “We recommend that… all local authorities should develop plans for engaging all schools in their area on how personalising learning could and should influence the way they approach capital projects… Alongside the design of school buildings, schools will need to consider: – what kind of ICT investment and infrastructure will support desired new ways of working – how the school site and environment beyond the buildings can promote learning and pupils’ engagement… government should set standards for software, tools and services commonly used by schools to facilitate exchange and collaboration within and between schools… software packages from home.”

Bransford, J.D., Brown, A.L. and Cocking, R. (eds.) (2000) How people learn: brain, mind, experience and school. Washington, DC: National Academy Press.

EPPI-Centre Review (2005) A systematic review of what pupils, aged 11-16, believe impacts on their motivation to learn in the classroom. Available at: http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=304


Wiliam (2000) on reliability and validity

25 January 2007

Wiliam’s paper, referenced by Mike Baker in his BBC summary, is not actually about the validity of National Curriculum (or any other) formal tests per se. It is about the inherent issues of validity and reliability in testing. The reduction in reliability comes from students’ inability to perform in exactly the same way across tests. If they were to take the same test several times, they would expect to get different scores, argues Wiliam. This seems intuitively sensible, if impossible to prove: you can never take a test again without it being a different test, or without having learnt from your first attempt. The position is a theoretical one. Wiliam uses a simple statistical model to arrive at the figures used in the BBC report. It is not that a test is 32% inaccurate, but that 32% is the proportion of misclassifications that might be expected given the nature of testing and quantitative scoring. The statistics used by Baker are themselves theoretical and should not be used as ‘headline figures’.
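To make the misclassification point concrete, here is a minimal simulation of the kind of classical test theory argument Wiliam makes. The reliability coefficient, score distribution and level boundaries below are illustrative assumptions of mine, not figures taken from the paper.

```python
# A sketch of how grade misclassification arises from imperfect reliability.
# All parameters (reliability 0.85, N(50, 10) true scores, level boundaries)
# are illustrative assumptions, not values from Wiliam (2000).
import random

random.seed(1)

N = 100_000                  # simulated pupils
TRUE_MEAN, TRUE_SD = 50, 10  # assumed distribution of 'true' scores
RELIABILITY = 0.85           # assumed reliability coefficient

# Classical test theory: observed = true + error, and
# reliability = var(true) / (var(true) + var(error)),
# so the error standard deviation follows from the assumed reliability.
error_sd = TRUE_SD * ((1 - RELIABILITY) / RELIABILITY) ** 0.5

BOUNDARIES = [35, 45, 55, 65]  # illustrative level boundaries

def level(score):
    """Assign a level by counting the boundaries a score clears."""
    return sum(score >= b for b in BOUNDARIES)

misclassified = 0
for _ in range(N):
    true_score = random.gauss(TRUE_MEAN, TRUE_SD)
    observed = true_score + random.gauss(0, error_sd)  # one noisy sitting
    if level(observed) != level(true_score):
        misclassified += 1

print(f"Misclassified: {100 * misclassified / N:.1f}%")
```

Under these assumptions the simulation reports a misclassification rate somewhere in the region of a quarter to a third of pupils: the test itself is quite reliable, yet the level awarded is wrong for a substantial minority. That is the flavour of Wiliam’s argument, not a reproduction of his calculation.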

Wiliam then goes on to look at the reliability of grades. He points out that we might intuitively know it would be unreliable to say a student who scores 75% must be ‘better’ than one who scores 74%. But if the results are reported as grades, we are more likely to confer reliability on the statement ‘the student achieving the higher level is better’.

On validity, Wiliam says little in this paper but does point out the tension between validity and reliability: sometimes making a test more reliable makes it less valid. He cites the example of the divergent thinker who comes up with an alternative good answer that is not on the markscheme and who therefore receives no credit. This is a standard response by examining teams, designed to eliminate differences between markers. While contingencies are always in place to consider exceptional answers, if they are not spotted until the end of the marking period then they cannot be accommodated. If several thousand scripts have already been marked, they cannot be gone back over because one examiner feels that an alternative answer discovered late on should be rewarded. You either reward all those who came up with it or none; usually it is none, for pragmatic reasons rather than reasons of validity.

Wiliam, D. (2000). Reliability, validity, and all that jazz. Education 3-13, 29(3), 9-13. Available online at http://www.aaia.org.uk/pdf/2001DYLANPAPER3.PDF

and citing

Wiliam, D. (1992). Some technical issues in assessment: a user’s guide. British Journal for Curriculum and Assessment, 2(3), 11-20.

Wiliam, D. (1996). National curriculum assessments and programmes of study: validity and impact. British Educational Research Journal, 22(1), 129-141.


Openquals is now NDAQ

24 January 2007

QCA’s website of accredited qualifications, Openquals, is now known as the National Database of Accredited Qualifications (NDAQ). It carries the logos of three of the UK’s qualifications authorities – QCA (England), CCEA (Northern Ireland) and ACCAC (Wales/Cymru). The SQA in Scotland is notable by its absence.

NDAQ is easier to ‘pronounce’, harder to find on Google and slightly easier on the eye. The myriad options available at school level in ICT * are still bewildering. Maybe they will help with ‘personalisation’, but will they more validly represent learners’ abilities, achievements and capabilities?

* NDAQ has ICT, Openquals had IT… the nomenclature confusion continues…


BBC: Testing times for school assessment

24 January 2007

The BBC’s Education correspondent Mike Baker gives a very readable account of the changes ahead in the assessment system in his report of 6 January 2007 – Testing times for school assessment.

His main thrust is that changes to the system are coming. Some of these are reflected in subsequent events that I have written about, like the revamping of league tables and the possible scrapping of the online ICT test… although the latter would presumably have helped personalisation had it been an on-demand test.

The changes, concludes Baker, are due to the growing clamour for that most voguish of educational shibboleths – personalisation.

In the article, he reflects on the Gilbert report from HM Chief Inspector into personalisation, and on how the recommendations of the report might necessarily lead to a greater role for teacher assessment. He ties this in with an IPPR study into the tensions between the dimensions of validity and accountability in assessment. Again, teacher assessment is recommended by the authors as a way of enhancing both dimensions. Finally, he cites Dylan Wiliam’s research into the ‘shockingly’ (Baker’s word) inaccurate methods of formal assessment.

A very useful summary.

Miles Berry also summarises the Gilbert Report in his blog, again very useful.


Assessment: ‘for learning’, formative and summative

19 January 2007

One of the features of WordPress (and many other blogs) is the reporting of search terms that have been used, which then result in the blog being found.

Yesterday, the search terms reported included assessment for learning and inclusion.

This got me thinking that I hadn’t really made any use of the simple taxonomy of assessment. Assessment for learning is formative, as it informs further learning (Black and Wiliam, 1998). My focus is really on summative assessment.

A study of pupil perceptions of assessment for learning (years 7 to 10) was carried out by Cowie (2005). I guess part of my research will be looking at pupil (or student) perceptions of summative assessment. It will be interesting to compare results to those found by Cowie.

The Cowie paper was cited on the DfES Standards Site, in a section called the Research Informed Practice Site. I’m not sure about the initials this provides, but the site may well be a useful one, both for this research and for my teaching. I hadn’t come across it before. It is useful not just for its own sake, but because it provides digests of articles…

Black, P. and Wiliam, D. (1998) Assessment and classroom learning. Assessment in Education, 5(1), 7-74.

Cowie, B. (2005) Pupil commentary on assessment for learning. The Curriculum Journal, 16(2), 137-151.

DfES (2007), TRIPS – the Research Informed Practice Site, London: DfES [online] available at http://www.standards.dfes.gov.uk/research/ accessed 19/01/07