Rhizomatic Education

31 October 2008

In this article Cormier discusses the ways in which the curriculum may need to change to reflect the new ways in which knowledge is constructed – through networks and communities of learners, as opposed to the traditional model of content transmitted by teachers. There is one reference to assessment, where Cormier looks at how traditional assessment is set against that content.

To quote the article:

“Information is the foundation of knowledge. The information in any given field consists of facts and figures, such as may be found in the technical reference manuals of learning; in a nonrhizomatic model, individual experts translate information into knowledge through the application of checks and balances involving peer review and rigorous assessment against a preexisting body of knowledge. The peers and experts are themselves vetted through a similar sanctioning process that is the purview, largely, of degree-granting institutions. This process carries the prestige of a thousand-year history, and the canon of what has traditionally been considered knowledge is grounded in this historicity as a self-referential set of comparative valuations that ensure the growth of knowledge by incremental, verified, and institutionally authorized steps. In this model, the experts are the arbiters of the canon. The expert translation of data into verified knowledge is the central process guiding traditional curriculum development.”

The other side of the coin is not discussed, but I suspect that the new ‘rhizomatic’ (I’m no biologist, so the metaphor is lost on me) model still has peer and expert assessment. It’s the old question of how we know something is good – often because it is valued by those who need to use it (predictive validity, as per the discussion at Cambridge – value <=> validity).

Linked from ERN (George Siemens)

Cormier, D (2008) Rhizomatic Education: Community as Curriculum. Innovate, 4(5), July 2008 [online] available at http://innovateonline.info/index.php?view=article&id=550&action=article, accessed 31/10/08


Digital Disconnect

29 October 2008

This from Ray Tolley on the Naace mailing list http://www.eschoolnews.com/news/top-news/index.cfm?i=55665

The ‘digital disconnect’ is alive and well… “kids tell us they power down to come to school.”


More on methodology – the marketing approach

26 October 2008

I wrote recently about the methodology of triple hermeneutics as described by Alvesson and Sköldberg and how it might be relevant to my work. The trail that led to this started with my director of studies’ suggestion that I look at the world of marketing in respect of how it deals with perceptions. This has now led me to Chisnall (2005). Sure enough, in the chapter on “Basic Techniques” there is a discussion of the place of reliability and validity in qualitative and attitude research. I quite like this word ‘attitude’. It helps frame a question: what is the attitude of 16-year-olds to ICT capability and its assessment? Chisnall says:

“The measurement of behavioural factors such as attitudes… has been attempted by a variety of techniques… the ones that are the most reliable and valid from a technical viewpoint generally being the most difficult… to apply” (p234).

Oh well!

Validity for Chisnall consists of content, concurrent and construct validity – fairly conventional, then. One might have expected face validity to be mentioned too, perhaps. He also cites a pamphlet (sic) by Bearden et al (1993) that describes some 124 scales for measuring such things in the fields of marketing, consumer behaviour and social research.

Bearden, W, Netemeyer, R & Mobley, M (1993), Handbook of marketing scales: Multi-item measures for marketing and consumer behaviour research. Newbury Park, CA: Sage (in conjunction with the ACR).

Chisnall, P (2005), Marketing research (7th ed). New York: McGraw-Hill.


Sir Mike Tomlinson lecture

22 October 2008

Sir Mike Tomlinson, chair of the working group on 14-19 reform that led to the 2003 Tomlinson Report, came to NTU today to give the RSA Shipley Lecture. This year the lecture was also a memorial to former NTU and RSA stalwart Anne Bloomfield. The subject, dear to her heart and to Sir Mike’s, was “Vocational education should be a rounded education“.

With the backdrop of the history of attempted introductions of vocational education (the 1944 Butler Education Act with its tripartite system, TVEI, GNVQs, Curriculum 2000 and Diplomas), Tomlinson argued for the move away from debates about ‘parity of esteem’ towards a view of the ‘credibility and value’ of qualifications. Echoes here of the value and validity arguments of Monday’s seminar at Cambridge.

It was also notable that the lecture included what ‘true’ vocational education must have:

  • relevance to 16-year-olds (face validity),
  • a knowledge base that is used in, and applied to, occupational areas – however broadly drawn (validity determined by use, not by the test itself), and
  • theoretical content balanced with sector-based skills (content validity).

Again this echoes Monday’s seminar. Another thread running through was the role of assessment (systems) in undermining vocational education initiatives: TVEI assessment becoming ‘traditional’, GNVQ assessment being changed to equate to GCSE/A level, key skills being decoupled from GNVQs, and Curriculum 2000’s insistence on equal numbers of units per qualification with a convergence of assessment types.

Also mentioned, although not in the same sense of ‘undermining’, were the persistence of the BTEC model and the way that NVQs were never envisaged as anything other than accreditation mechanisms for on-the-job training.

The BTEC model of occupational credibility combined with general education was paramount in his description of vocational education – with the caveat: what is ‘general education’?

Throughout, I was wondering where ICT fits into all this. Never mentioned as a ‘subject’, nor even as a ‘skill’, it was conspicuous by its absence. It is, of course, present in the specialised diplomas and as a functional skill, although the former may be bedevilled, I fear, by the wide diversity of the sector it serves.

Tomlinson was upbeat about the Diplomas but focused especially on the need to get a true progression from level 1 through to level 3. The custom of level 1 being what you get if you fail level 2 (GCSE grades D–G rather than A*–C) must not be repeated, he urged. He also argued for level 2 threshold systems so that learners who do not reach that threshold at the magical (and, I would say, arbitrary) age of 16 could do so by subsequent accumulation of credit points – rather than by ‘repeating GCSEs’, a model that doesn’t serve well.

Another hour of useful insights.


Cambridge Assessment seminar

21 October 2008

I attended a seminar on the subject of validity, one of a series of events run by Cambridge Assessment (CA). It was led by Andrew Watts from CA.

This was extremely informative and useful, challenging my notions of assessment. As the basis for his theoretical standpoint Andrew used these texts:

  • Brennan, R (2004), Educational Measurement (4th edition). Westport, CT: Greenwood
  • Downing, S (2006) Twelve Steps for Effective Test Development in Downing, S and Haladyna, T (2006) Handbook of Test Development. NY: Routledge
  • Gronlund, N (2005), Assessment of Student Achievement (8th edition). NY: Allyn and Bacon [NB 9th edition (2008) now available by Gronlund and Waugh]

He also referred to articles published in CA’s Research Matters and used some of the IELTS materials as exemplars.

The main premise, after Gronlund, is that there is no such thing as a valid test/assessment per se. Validity is driven by the purposes of the test: a test that is valid in one context may not be valid in another. The validity, he argued, is driven by the uses to which the assessment is put. In this respect, he gave an analogy with money. Money only has value when it is put to some use; the notes themselves are fairly worthless (except in the esoteric world of the numismatist). Assessments, analogously, have no validity until they are put to use.

Thus a test of English for entrance to a UK university (IELTS) is valid if the UK university system validates it. Here, then, is the concept of consequential validity. It is also only valid if it fits the context of those taking it. Here is the concept of face validity – the assessment must be ‘appealing’ to those taking it.

Despite these different facets of validity (and others were covered – predictive validity, concurrent validity, construct validity, content validity), Gronlund argues that validity is a unitary concept. This echoes Cronbach and Messick, as discussed earlier. One way of looking at this is that there is no validity without all of these facets.

Gronlund also argues that validity cannot itself be determined – it can only be inferred. In particular, inferred from statements that are made about, and uses that are made of, the assessment.

The full list of characteristics cited from Gronlund is that validity:

  • is inferred from available evidence and not measured itself
  • depends on many different types of evidence
  • is expressed by degree (high, moderate, low)
  • is specific to a particular use
  • refers to the inferences drawn, not the instrument
  • is a unitary concept
  • is concerned with the consequences of using an assessment

Some issues arising for me here are that the purposes of ICT assessment at 16 are sometimes, perhaps, far from clear. Is it to certify someone’s capability in ICT so that they may do a particular type of job, or have a level of skills for employment generally, or have an underpinning for further study, or have general life skills, or something else, or all of these? Is ‘success’ in assessment of ICT at 16 a necessary prerequisite for A level study? For entrance to college? For employment?

In particular, I think the issue that hit me hardest was: is there face validity? Do the students perceive it as a valid assessment (whatever ‘it’ is)?

One final point – reliability was considered to be an aspect of validity (scoring validity in the ESOL framework of CA).


KS3 SATS scrapped in England

16 October 2008

This somewhat unexpected announcement was made this week. Tests for 14-year-olds in maths, English and science have been scrapped. Given that many schools now start their GCSE/level 2 courses at 13, especially in ICT, this might radically change the way the middle years of secondary education are organised. It may also affect students’ perceptions of assessment, as they will not have had those high-stakes external tests at 14.


Tutorial part 2

16 October 2008

I had a tutorial (by telephone) today with the other part of my supervisory team. An interesting model emerges that develops the earlier one:

What emerged was a clarity of vision: I am looking at

A: how year 11 students perceive ICT capability, and
B: how the assessment system (at 16) perceives it.

My project is to define the difference between A and B and to suggest ways in which the two may be aligned.

What now emerges is the more sophisticated notion of a number of views of what ICT capability is, with some sort of Heideggerian absolute at the intersection. Thus there may be four views of what ICT capability is:

  • the view of the awarding bodies
  • the view of the students
  • the view of the education system (policy)
  • the observed view from research

Is there also, then, a Heideggerian absolute, autonomous view somewhere in the intersection of all these?

We also talked about the notions of perception and interpretation of the students’ view and came down to the question: how authentic and relevant does assessment feel to students? This, of course, has limitations, precisely because of the hermeneutical considerations of the students’ view.

Building on the notion of the abstract view that would define assessment of ICT in absolute terms (and my stance, which rejects this in favour of the diversity of views listed above), we then talked about the importance of the socio-cultural view in which students’ interpretations are coloured by their class, peer groups, families, etc.

One final point is the emerging literature on assessment as learning and how ‘teaching to the test’ means that students are spoon-fed and do not learn beyond the framework of assessment.