NTU research seminar

6 November 2008

Seminar presentation 6/11/08

I was invited to speak in the School of Education’s research seminar series. Planning the slides linked here has told me that I am far from certain about my research questions!


More on methodology – the marketing approach

26 October 2008

I wrote recently about the methodology of triple hermeneutics as described by Alvesson and Sköldberg and how it might be relevant to my work. The trail that led to this started with my director of studies’ suggestion that I look at the world of marketing in respect of how it deals with perceptions. This has now led me to the writing of Chisnall (2005). Sure enough, in the chapter on “Basic Techniques” there is a discussion of the place of reliability and validity in qualitative and attitude research. I quite like this word ‘attitude’. It helps frame a question: ‘What is the attitude of 16-year-olds to ICT capability and its assessment?’ Chisnall says

“The measurement of behavioural factors such as attitudes… has been attempted by a variety of techniques… the ones that are the most reliable and valid from a technical viewpoint generally being the most difficult… to apply” (p234).

Oh well!

Validity for Chisnall consists of content, concurrent and construct validity – so fairly conventional there. One would have expected face validity to be mentioned too, perhaps. He also cites a pamphlet (sic) by Bearden et al (1993) that describes some 124 scales for measuring such things in the fields of marketing, consumer behaviour and social research.

Bearden, W, Netemeyer, R & Mobley, M (1993), Handbook of marketing scales: Multi-item measures for marketing and consumer behaviour research. Newbury Park, CA: Sage (in conjunction with the ACR).

Chisnall, P (2005), Marketing research (7th ed). NY: McGraw Hill.


Cambridge Assessment seminar

21 October 2008

I attended a seminar on the subject of validity, one of a series of events run by Cambridge Assessment (CA). It was led by Andrew Watts from CA.

This was extremely informative and useful, challenging my notions of assessment. As the basis for his theoretical standpoint, Andrew used these texts:

  • Brennan, R (2004), Educational Measurement (4th edition). Westport, CT: Greenwood
  • Downing, S (2006), Twelve Steps for Effective Test Development, in Downing, S and Haladyna, T (eds), Handbook of Test Development. NY: Routledge
  • Gronlund, N (2005), Assessment of Student Achievement (8th edition). NY: Allyn and Bacon [NB 9th edition (2008) now available by Gronlund and Waugh]

He also referred to articles published in CA’s Research Matters and used some of the IELTS materials as exemplars.

The main premise, after Gronlund, is that there is no such thing as a valid test/assessment per se. The validity is driven by the purposes of the test. Thus a test that may well be valid in one context may not be in another. The validity, he argued, is driven by the uses to which the assessment is put. In this respect, he gave an analogy with money. Money only has value when it is put to some use. The notes themselves are fairly worthless (except in the esoteric world of the numismatist). Assessments, analogously, have no validity until they are put to use.

Thus a test of English for entrance to a UK university (IELTS) is valid if the UK university system validates it. Here, then, is the concept of consequential validity. It is also only valid if it fits the context of those taking it. Here is the concept of face validity – the assessment must be ‘appealing’ to those taking it.

Despite these different facets of validity (and others were covered – predictive validity, concurrent validity, construct validity, content validity), Gronlund argues that validity is a unitary concept. This echoes Cronbach and Messick as discussed earlier. One way of looking at this, I suppose, is that there is no validity without all of these facets.

Gronlund also argues that validity cannot itself be determined – it can only be inferred. In particular, inferred from statements that are made about, and uses that are made of, the assessment.

The full list of characteristics cited from Gronlund is that validity

  • is inferred from available evidence and not measured itself
  • depends on many different types of evidence
  • is expressed by degree (high, moderate, low)
  • is specific to a particular use
  • refers to the inferences drawn, not the instrument
  • is a unitary concept
  • is concerned with the consequences of using an assessment

Some issues arising for me here are that the purposes of ICT assessment at 16 are sometimes, perhaps, far from clear. Is it to certify someone’s capability in ICT so that they may do a particular type of job, or have a level of skills for employment generally, or have an underpinning for further study, or have general life skills, or something else, or all of these? Is ‘success’ in assessment of ICT at 16 a necessary prerequisite for A level study? For entrance to college? For employment?

In particular, I think the issue that hit me hardest was this: is there face validity? Do the students perceive it as a valid assessment (whatever ‘it’ is)?

One final point – reliability was considered to be an aspect of validity (scoring validity in the ESOL framework of CA).


Tutorial part 2

16 October 2008

I had a tutorial (by telephone) today with the other part of my supervisory team. An interesting model emerges that develops the earlier one:

What emerged was a clarity of vision: I am looking at

A: how year 11 students perceive ICT capability, and
B: how the assessment system (at 16) perceives it.

My project is to define the difference between A and B and to suggest ways in which the two may be aligned.

What now emerges is the more sophisticated notion of a number of views of what ICT capability is, with some sort of Heideggerian absolute at the intersection. Thus there may be four views of what ICT capability is:

  • the view of the awarding bodies
  • the view of the students
  • the view of the education system (policy)
  • the observed view from research

Is there also then a Heideggerian absolute, autonomous view somewhere in the intersection of all these?

We also talked about the notions of perception and interpretation of the students’ view and came down to the question: how authentic and relevant does assessment feel to students? This, of course, has limitations, precisely because of the hermeneutical considerations involved in interpreting the students’ view.

Building on the notion of the abstract view that would define assessment of ICT in absolute terms (and my stance, which rejects this in favour of the diversity of views listed above), we then talked about the importance of the socio-cultural view, in which students’ interpretations are coloured by their class, peer groups, families etc.

One final concept is the emergence of literature on assessment as learning and how ‘teaching to the test’ means that students are spoon-fed and do not learn beyond the framework of assessment.


Connectivism and serendipity

14 October 2008

In looking around for thoughts on Husserl I came across the WordPress blog ‘Between Husserl and Heidegger’ – a blog as an adjunct to a taught face-to-face course. On clicking on one of the tags (Husserl) I was surprised to see a link to a post in another blog about connectivism. This is the theory of learning espoused by two of the leading lights in the technology and learning arena – George Siemens and Stephen Downes.

The surprise was not that this should turn up in a search (although the link to Husserl is pretty tenuous, coming through quoted marginalia). Rather, it is the subject of an online course that one of my colleagues is attending and blogging about at this very time. Is that serendipity, coincidence or reticular activation?


Is that a milestone?

14 October 2008

So another day’s study leave – another 3000 words or so committed to ‘paper’… at least when it is printed out it makes a thud on the desk!

This month I have restarted the PhD; many things have happened to enable this… a colleague reported that she will finish during this year… a research cluster meeting has signed me up to present a paper at a seminar (maybe I need to sign up for the one at ITTE in Cambridge too!)… I have booked the aforementioned study leave and set some targets (pretty basic ones like ‘write something proper’ but I now have 17000 proper words – although I am under no illusion: about half of them will probably go before I’m done)… I have restarted tutorials… I have built this PhD into my performance development review targets… I’m doing a different job, which seems to suit better… I am supervising three other candidates and see the process from the other side… all good… for now!


Putting the Ph into the PhD?

14 October 2008

I have been struggling this week with the concept of ‘perception’. After a tutorial my focus was on how I might approach the capture and analysis of students’ ‘perceptions’ about assessment. This word has been quite fundamental to my description of the research. In the tutorial, we got talking about marketing theories and perceptual analysis as a method in that discipline.

Needless to say, I know little about marketing. So what is perceptual analysis? It has proved an elusive hunt, but I have travelled over some interesting territory, one laden with ontological considerations and debate.

First there is the hermeneutics of Husserl and Heidegger. For one, the objects of consciousness exist ONLY in the way in which they are perceived by the consciousness; for the other, such objects are autonomous irrespective of the sense we bestow on them. This perception is then reported linguistically, and Wittgenstein’s concept of the language game filters any such sense.

Then there is the triple hermeneutics of Alvesson and Sköldberg (2000). Hermeneutics – the analysis of interpretation; double hermeneutics – the analysis of interpreted interpretation (the dual lenses of researcher and respondent); triple hermeneutics – the analysis of interpreted interpretation and the context behind that interpretation (the three lenses of researcher, respondent and context).

Lowe et al (2005) propose a 4th hermeneutic in the context of marketing (and that was how I came in) but I am not sure yet how this applies!

Finally, and most pragmatically (*), Conroy (2003) examines interpretive phenomenology (or rather re-examines it) and develops a methodology and methods for doing something fairly similar to what I am proposing, albeit in the context of nursing (the usual context for this approach, it seems). Here is a model that I need to examine and critique for its use in my study as I move towards the primary research phase.

And in this phrase – interpretive phenomenology – hides the word I have been meaning when I have said perception: it is interpretation. So not ‘what is a student’s perception of…’ but ‘what is a student’s interpretation of…’. The problem is, you see, that perception has a particular meaning in this philosophical arena. I had to go down the false road of Arnold Berleant to realise it… Thanks are due here to my colleague Kev Flint…

I have a fair chunk of literature review written, I have many ideas about methodology and method. Now is the time to crystallize this and move on to ‘action’. My director of studies agrees and this ‘permission’ is what I have been waiting for…

(*) pragmatic in the sense of being related to action… but actually very philosophical in nature