NTU research seminar

6 November 2008

Seminar presentation 6/11/08

I was invited to speak in the School of Education’s research seminar series. Planning the slides linked here has shown me that I am far from certain about my research questions!


More on methodology – the marketing approach

26 October 2008

I wrote recently about the methodology of triple hermeneutics as described by Alvesson and Sköldberg and how it might be relevant to my work. The trail that led to this started with my director of studies’ suggestion that I look at the world of marketing in respect of how it deals with perceptions. This has now led to the work of Chisnall (2005). Sure enough, in the chapter on “Basic Techniques” there is a discussion of the place of reliability and validity in qualitative and attitude research. I quite like this word ‘attitude’. It helps frame a question: ‘What is the attitude of 16-year-olds to ICT capability and its assessment?’ Chisnall says:

“The measurement of behavioural factors such as attitudes… has been attempted by a variety of techniques… the ones that are the most reliable and valid from a technical viewpoint generally being the most difficult… to apply” (p. 234).

Oh well!

Validity for Chisnall consists of content, concurrent and construct validity – so fairly conventional there. One would perhaps have expected face validity to be mentioned too. He also cites a pamphlet (sic) by Bearden et al (1993) that describes some 124 scales for measuring such things in the fields of marketing, consumer behaviour and social research.
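As an aside, it helps me to see what a multi-item measure actually involves. Here is a minimal sketch (my own illustration in Python, not one of the 124 scales in the handbook): a handful of five-point Likert items, with negatively worded items reverse-coded, averaged into a single attitude score.

# Minimal sketch of a multi-item attitude measure (illustrative only,
# not one of Bearden et al's scales): five-point Likert items,
# with negatively worded items reverse-coded, averaged into one score.

LIKERT_MAX = 5

def attitude_score(responses, reverse_coded=frozenset()):
    """responses maps item id -> rating on a 1..5 scale."""
    adjusted = [
        LIKERT_MAX + 1 - rating if item in reverse_coded else rating
        for item, rating in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

# Hypothetical answers to four items about ICT and its assessment;
# q4 is worded negatively, so it is reverse-coded.
answers = {"q1": 4, "q2": 2, "q3": 5, "q4": 1}
print(attitude_score(answers, reverse_coded={"q4"}))  # -> 4.0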

Bearden, W., Netemeyer, R. & Mobley, M. (1993), Handbook of marketing scales: Multi-item measures for marketing and consumer behaviour research. Newbury Park, CA: Sage (in conjunction with the ACR).

Chisnall, P. (2005), Marketing research (7th ed). NY: McGraw Hill.


Cambridge Assessment seminar

21 October 2008

I attended a seminar on the subject of validity, one of a series of events run by Cambridge Assessment (CA). It was led by Andrew Watts from CA.

This was extremely informative and useful, challenging my notions of assessment. As the basis for his theoretical standpoint Andrew used these texts:

  • Brennan, R. (2004), Educational Measurement (4th edition). Westport, CT: Greenwood
  • Downing, S. (2006), ‘Twelve steps for effective test development’ in Downing, S. & Haladyna, T. (eds), Handbook of Test Development. NY: Routledge
  • Gronlund, N. (2005), Assessment of Student Achievement (8th edition). NY: Allyn and Bacon [NB 9th edition (2008) now available, by Gronlund and Waugh]

He also referred to articles published in CA’s Research Matters and used some of the IELTS materials as exemplars.

The main premise, after Gronlund, is that there is no such thing as a valid test/assessment per se. The validity is driven by the purposes of the test. Thus a test that may well be valid in one context may not be in another. The validity, he argued, is driven by the uses to which the assessment is put. In this respect, he gave an analogy with money. Money only has value when it is put to some use. The notes themselves are fairly worthless (except in the esoteric world of the numismatist). Assessments, analogously, have no validity until they are put to use.

Thus a test of English for entrance to a UK university (IELTS) is valid if the UK university system validates it. Here then is the concept of consequential validity. It is also only valid if it fits the context of those taking it. Here is the concept of face validity – the assessment must be ‘appealing’ to those taking it.

Despite these different facets of validity (and others were covered – predictive validity, concurrent validity, construct validity, content validity), Gronlund argues that validity is a unitary concept. This echoes Cronbach and Messick as discussed earlier. One way of looking at this, I suppose, is that there is no validity without all of these facets.

Gronlund also argues that validity cannot itself be determined – it can only be inferred. In particular, inferred from statements that are made about, and uses that are made of, the assessment.

The full list of characteristics cited from Gronlund is that validity

  • is inferred from available evidence and not measured itself
  • depends on many different types of evidence
  • is expressed by degree (high, moderate, low)
  • is specific to a particular use
  • refers to the inferences drawn, not the instrument
  • is a unitary concept
  • is concerned with the consequences of using an assessment

Some issues arising for me here are that the purposes of ICT assessment at 16 are sometimes, perhaps, far from clear. Is it to certify someone’s capability in ICT so that they may do a particular type of job, or have a level of skills for employment generally, or have an underpinning for further study, or have general life skills, or something else, or all of these? Is ‘success’ in assessment of ICT at 16 a necessary prerequisite for A level study? For entrance to college? For employment?

In particular, I think the issue that hit me hardest was: is there face validity? Do the students perceive it as a valid assessment (whatever ‘it’ is)?

One final point – reliability was considered to be an aspect of validity (scoring validity in the ESOL framework of CA).


Tutorial part 2

16 October 2008

I had a tutorial (by telephone) today with the other part of my supervisory team. An interesting model emerges that develops the earlier one:

What emerged was a clarity of vision: I am looking at

A how year 11 students perceive ICT capability and
B how the assessment system (at 16) perceives it.

My project is to define the difference between A and B and to suggest ways in which the two may be aligned.

What now emerges is the more sophisticated notion of a number of views of what ICT capability is, with some sort of Heideggerian absolute at the intersection. Thus there may be four views of what ICT capability is:

  • the view of the awarding bodies
  • the view of the students
  • the view of the education system (policy)
  • the observed view from research

Is there also then a Heideggerian absolute, autonomous view somewhere in the intersection of all these?

We also talked about the notions of perception and interpretation of the students’ view and came down to the question: how authentic and relevant does assessment feel to students? This, of course, has limitations precisely because of the hermeneutical considerations of the students’ view.

Building on the notion of the abstract view that would define assessment of ICT in absolute terms (and my stance, which rejects this in favour of the diversity of views listed above), we then talked about the importance of the socio-cultural view, in which students’ interpretations are coloured by their class, peer groups, families etc.

One final concept is the emergence of literature on assessment as learning and how ‘teaching to the test’ means that students are spoon-fed and do not learn beyond the framework of assessment.


Connectivism and serendipity

14 October 2008

In looking around for thoughts on Husserl I came across the WordPress blog ‘Between Husserl and Heidegger’ – a blog as an adjunct to a taught face-to-face course. On clicking on one of the tags (Husserl) I was surprised to see a link to a post in another blog about connectivism. This is the theory of learning espoused by two of the leading lights in the technology and learning arena – George Siemens and Stephen Downes.

The surprise was not that this should turn up in a search (although the link to Husserl is pretty tenuous, through quoted marginalia). Rather, it is the subject of an online course that one of my colleagues is attending and blogging about at this very time. Is that serendipity, coincidence or reticular activation?


Is that a milestone?

14 October 2008

So another day’s study leave – another 3000 words or so committed to ‘paper’… at least when it is printed out it makes a thud on the desk!

This month I have restarted the PhD, and many things have happened to enable this… a colleague reported that she will finish during this year… a research cluster meeting has signed me up to present a paper at a seminar (maybe I need to sign up for the one at ITTE in Cambridge too!)… I have booked the aforementioned study leave and set some targets (pretty basic ones like ‘write something proper’ but I now have 17,000 proper words – although I am under no illusion: about half of them will probably go before I’m done)… I have restarted tutorials… I have built this PhD into my performance development review targets… I’m doing a different job, which seems to suit better… I am supervising three other candidates and see the process from the other side… all good… for now!


Putting the Ph into the PhD?

14 October 2008

I have been struggling this week with the concept of ‘perception’. After a tutorial my focus was on how I might approach the capture and analysis of students’ ‘perceptions’ about assessment. This word has been quite fundamental to my description of the research. In the tutorial, we got talking about marketing theories and perceptual analysis as a method in that discipline.

Needless to say, I know little about marketing. So what is perceptual analysis? It has proved an elusive hunt, but I have travelled over some interesting territory, one laden with ontological considerations and debate.

First there is the hermeneutics of Husserl and Heidegger. For one, the objects of consciousness exist ONLY in the way in which they are perceived by the consciousness; for the other, such objects are autonomous irrespective of the sense we bestow on them. This perception is then reported linguistically, and Wittgenstein’s concept of the language game filters any such sense.

Then there is the triple hermeneutics of Alvesson and Sköldberg (2000). Hermeneutics – the analysis of interpretation; double hermeneutics – the analysis of interpreted interpretation (the dual lenses of researcher and respondent); triple hermeneutics – the analysis of interpreted interpretation and the context behind that interpretation (the three lenses of researcher, respondent and context).

Lowe et al (2005) propose a 4th hermeneutic in the context of marketing (and that was how I came in) but I am not sure yet how this applies!

Finally, and most pragmatically (*), Conroy (2003) re-examines interpretive phenomenology and develops a methodology and methods for doing something fairly similar to what I am proposing, albeit in the context of nursing (the usual context for this approach, it seems). Here is a model that I need to examine and critique for its use in my study as I move towards the primary research phase.

And in this phrase – interpretive phenomenology – hides the word I have been meaning when I have said perception: it is interpretation. So not ‘what is a student’s perception of…’ but ‘what is a student’s interpretation of…’. The problem is, you see, that perception has a particular meaning in this philosophical arena. I had to go down the false road of Arnold Berleant to realise it… Thanks are due here to my colleague Kev Flint…

I have a fair chunk of literature review written, I have many ideas about methodology and method. Now is the time to crystallize this and move on to ‘action’. My director of studies agrees and this ‘permission’ is what I have been waiting for…

(*) pragmatic in the sense of being related to action… but actually very philosophical in nature


Reframing and a timeline!

26 June 2007

Following on from my tutorial at NTU I took the landscape to my ‘external advisor’, Peter Twining (of Schome fame). We spent an hour and a half in heated discussion. Heated to the extent that my brain fried but all very amicable! The outcomes were firstly a re-framing of my thoughts – and probably of my aims although that can wait for a while, and secondly a timeline for the project.

What emerged was a clarity of vision: I am looking at

A how year 11 students perceive ICT capability and
B how the assessment system (at 16) perceives it.

My project is to define the difference between A and B and to suggest ways in which the two may be aligned. This latter point, of a PhD thesis making recommendations, is one of the doctoral-level learning outcomes that I hadn’t really paid attention to. Actually I hadn’t come across any of these outcomes before this month… I’ll post something about them if I can find an electronic copy or time to type them up!

I also came away with a timeline. The literature review that I have embarked on will need to give way to finding my way to a suitable methodology. This will require a change of focus of reading to look more at the methodology and methods I wish to adopt, so that I may collect data in the coming academic year. Part of this discussion will be to look at the literature around ascertaining students’ perceptions and gathering the student voice.

I will also need to consider the impact of eliciting views from students in school situations as opposed to outside school. The choice of data collection instruments will also be subject to discussion – will interviews suffice, or will observation of their capability be necessary? It is likely that piloting a range of tools will be needed, with fuller data collection in 2008/09.

This data collection, together with the literature review, will yield information about A above. Further review of the literature, this time on policy, together with examination of assessment materials (exams, coursework assignments), will yield information on B and reveal the differences between them. This will then lead to the recommendation phase.

A rough timeline has been developed (click image to see it full size):

[Image: timelineatjune07.gif]

Whither my landscape in this simplified model? The landscape had four features – assessment, learning, policy and technology. These may be seen in the model, I believe:

  • assessment is in A and B
  • learning is in A
  • policy is in B
  • technology is in A and B

PDPs, training and support for research degree students – the Anglia model

19 June 2007

My previous post was with Anglia Polytechnic (now Anglia Ruskin) University. While there I served on the Education Faculty’s research degrees committee (RDC) and also attended the University RDC. One of the things that I was involved in was early steps to develop the use of personal development planning (PDP) tools. It is interesting to revisit this two years later on their website, “Planning your research training”.

I was reminded of this by discussions at the PhD supervisors’ course at my current employer (Nottingham Trent University). We felt there is a need for us to look at this aspect of PhD support and guidance.

The APU (ARU) materials were stimulated by papers from UKGrad. Their website contains a PDP database that lists many other case studies on the development of such support and training.


Tutorial

31 May 2007

Had the tutorial this morning, and very helpful it was too. What emerges is a landscape.

It would seem that I have four key concepts – assessment, learning, policy and technology. Each of these informs the landscape. We talked about the need to paint this landscape and then draw out the salient features of it that inform my research questions. In the foreground of all of this is the learner perception/construct of their learning in ICT and the way in which it is assessed. Lurking over the landscape like some cloud is the thorny question: what is ICT anyway? This provides another theme which filters the light and colours the landscape.

Maybe I need to paint a picture.

We also talked about what the research is not about, and how that needs to be explained in my writing. In particular e-assessment – while a hot topic, it is not especially relevant to my aims, and less relevant still to the students I’ll be researching, as they won’t have had any e-assessment (probably).

Then there is the nature of ICT (the cloud above) and of assessment itself. We talked a lot about the so-called problem of ICT assessment at 16 being too easy, in that it just assesses what people know rather than what was learnt in school. Actually I don’t see this as a problem. I think we need to look at our assessment and accreditation system to ensure it is fit for purpose (and valid). Why shouldn’t we give accreditation to students who can demonstrate the four pillars of knowledge, understanding, attributes and skills at the appropriate level? Does it have to be only accreditation of the value added by schools?

This then led to another picture – a continuum going from the individual at one end, through family, friends, peers, teachers, schools to the education system itself. Each of my four features might have dimensions in each of these.

And the landscape metaphor has broken down… no picture needed perhaps!


14000 words

29 May 2007

I’m minded of the thermometers you get outside churches, promoting their tower repair funds. You know, the ones that show how much has been raised by a red blob creeping up a scale in the style of old-style temperature measuring devices.

I’ve a tutorial this week and so felt obliged to get my literature review into some semblance of order. Currently there are 14,000 words. Unlike the fund-raising gauge, though, I suspect this will fall before long as I cut out the bits that are not contributing to the thesis.

Still, it is good to look at that little word count at the bottom of Word and see five digits.

One thing that is taking shape is the concept map – now turned into chapter titles.

1. Personal reflections
2. Students’ construction of their learning
3. Policy
4. Technologies for learning
5. Assessment
6. Learning

That is how they emerged. I guess a better sequence might be:

1. Learning
2. Assessment
3. Policy
4. Technologies for learning
5. Students’ construction of their learning
6. Personal reflections

This way the story builds up to chapter 5. After this will come the definitive statement of research questions and then the primary research with the students themselves. So far three schools have come on board and would be willing to have me interview students in the next academic year.

Once this is all done (!) I see the thesis structure developing like this:

1. Introduction, rationale etc
2. Learning
3. Assessment
4. Policy
5. Technologies for learning
6. Students’ construction of their learning
7. Statement of research questions
8. Methodology
9. Methods
10. Analysis of data
11. Findings
12. Conclusion: the thesis
13. Personal reflections

This will give me something to talk about on Thursday at the tutorial!


Article for ITTE Newsletter

11 April 2007

I wrote an article for the latest edition of the ITTE newsletter. Entitled ‘Weblogs, PhDs and Google-generated concept lists’, it reflects on the process of doing a PhD, in particular the issues of using a weblog and the way in which links to the log might help generate a concept map of my reading.


Project Approval

6 February 2007

I found out today that the university has accepted my proposal. This means that I am now officially started! Good news!

I am minded of the informal findings of Vernon Trafford, who ran the Anglia PhD training programme. There seemed to be little correlation between titles of approved projects and the final theses resulting from them. Oh well… we’ll see!


Developing a concept map

1 February 2007

Why haven’t I posted so much recently? There are two answers to this question. One is that my teaching load is at its peak and time has got squeezed for a few weeks. The other is that perhaps things are starting to clarify and maybe beg for some more extended writing. Is it time to start thinking about the structure of the thesis – or at least the preliminary chapters?

So after a couple of months of reading, where am I? Have I been able to refine my concepts and approach any? Does my research still make the same sense to the person in the pub? I acknowledge the lack of reading on personal constructs but, that aside, what I have read seems to be ready for some meta-reflection and organisation. Mind you, that might just be procrastination. But I think I need to dig out my ‘How to write your thesis’ books.

One thing that has happened is that this blog has been hit by people alighting from search engines. The software (WordPress) keeps a tally of what they typed into the search to reach the blog. This list makes for interesting reading and may help inform the structure. Is this some sort of concept list collection tool? (A minimal sketch of what I mean follows the list below.)

Terms typed into search engines to reach this blog (Dec and Jan, most frequent at the top – although most frequent is very small, the maximum is seven occurrences for any term):

informal learning bebo
\”roger distill\” school
Demos 2007
validation ict
assessment validity Ict level 2
construct validity messick
ICT and authentic assessment
non-formal learning
Tombari & Borich portfolio
2006 5 GCSE passes school league table
assessing ict capability
assessment for learning and inclusion
assessment in ICT lesson
Assessment of informal learning
authentic assessment validity reliability
characteristics of a valid website
construct representation
construct validity model
contextual value added
define assessment validity
define formal informal assessment
demos ict report
dfes 2007 gcse league tables
Empirical Research Report on Assessment
evidence of validation in a test of creativity
formal and informal learning opportunities
formal learning
formal, informal, and non-formal
How does ICT affects Young people
ICT ASSESSMENT mouse
ict school league tables
Literature review of E-assessment Ridgway
meaning of valid in assessments
messick’s model of validity
nomothetic messick
pamela moss shifting conceptions of validity
peer assessment ict
recording and retrieving information and
Roger Distill ICT
school league tables 2007
school league tables telford
validity in ict
validity knowledge transfer vignette
validity on parole and cronbach
validity on parole: how can we go straight
What Does \”Bias\” mean and ICT
what does formal and informal mean ict
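And here is the minimal sketch promised above (my own illustration, assuming the logged terms were exported to a plain text file, one per line – the filename is hypothetical): counting the individual words would give a crude ranked concept list.

# Minimal sketch: turn the logged search terms into a ranked list of
# candidate concepts by counting individual words (stopwords removed).
from collections import Counter

STOPWORDS = {"and", "of", "the", "in", "a", "an", "to", "for", "on",
             "what", "does", "mean", "how", "is"}

def concept_counts(search_terms):
    """search_terms: an iterable of raw search strings."""
    words = (
        word
        for term in search_terms
        for word in term.lower().split()
        if word not in STOPWORDS
    )
    return Counter(words)

# "search_terms.txt" is a hypothetical export of the WordPress tally.
with open("search_terms.txt") as f:
    for word, count in concept_counts(f).most_common(10):
        print(count, word)  # e.g. validity, assessment, ict at the top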


Aims revisited

16 January 2007

My original aims were

1. To critically analyse the ways in which students aged 16 construct their learning of ICT capability in formal and informal contexts.
2. To explore the relationship between formal and informal learning within the field of ICT.
3. To explore the methodologies of assessment of ICT capability at 16 and how this affects student perceptions of their capability.
4. To develop a theoretical base to evaluate the construct validity of assessment of ICT at 16.

In looking especially at numbers 2 and 4 a concept map (or at least a list) appears to be emerging. In addition to the concepts contained in these aims – formal and informal learning, validity of assessment, methodologies of assessment, personal constructs of learning – two others are emerging. One is about young people’s appropriation of technology for learning, the other is the policy agenda.

The former is the subject of the reports and books I seem to be drawn to. Maybe it is this topic that will allow me a way into the theory of aim 1 – personal constructs. I have yet to touch on this, but much of the literature on young people’s use of technology seems to be based on it, if implicitly.

So in looking at the Demos report, there is much about how and what young people have learnt. The assumption seems to be that they are controlling the learning, choosing what to learn. Maybe it is also that they are constructing what they have learnt. Certainly if there is to be reverse-ICT then this construct of learning would need to be articulated or manifested in some way. It would be made explicit through the act of learners teaching adults. This is outside the scope of my research here. On the other hand, the making of the learning explicit through examination of learners’ perceptions and constructs of their learning is at the heart of aim 1.

I had started to be concerned about the neglect of this aim and the associated theory. The reflection in this post is reassuring me somewhat – and is an example of not knowing what I thought until I wrote it.

The second emerging new concept (or issue) – the policy agenda – must not be forgotten but is probably best considered as part of aim 3. I guess the next step is to start to build a concept map of ideas and authors to help ‘design’ the literature review section/s of my thesis.


Gap in Knowledge: Assessment of ICT v Assessment with ICT

2 January 2007

I have it ingrained in my psyche that one of the key things about doctoral work is the need to prove that one is inquiring into a ‘gap in the knowledge’. This has always been problematic for me. What is knowledge? How do I prove that the gap exists? Simply because I don’t know of something doesn’t mean it doesn’t exist (three negatives there…). I might think there is a gap, only to be blissfully unaware that someone else has filled it (or, worse, is filling it as I speak/procrastinate).

Notwithstanding this, the gap in the knowledge that I identify is located somewhere in the aridity that is the apparent dearth of writing on the assessment of ICT. Put assessment and ICT into Google and you get 1.42m results. Most of these appear to be about using ICT in the assessment process. Assessment with ICT.

McCormick (2004), writing for the ERNIST project and elsewhere, cites Macfarlane (2001) and Thelwall (2000) in defining a taxonomy for the relationships between ICT and assessment. While his first category is ‘Assessing ICT skills and understanding’, it would seem that this is ignored in the rest of his paper. There is, instead, a focus on the use of ICT for assessment and the affordances provided by use of ICT in other subjects for the assessment of those subjects. Indeed, Thelwall’s work is solely on computer-aided assessment.

Similarly the EPPI studies on ICT and assessment deal with how it is used in assessment or how it can help assess creative and thinking skills in different ways to other media.

So is there a gap in knowledge? Like Popper, I cannot prove that there is; but if there is, it is somewhere in all of this mist. How do you know when you’ve found a gap anyway? What does the edge of a gap look like?


Thinking and writing and publishing

1 January 2007

Last month the NTU research seminar programme (it’s not called this but I forget its title) held a session that was led by Anthony Haynes of P&H. The objective was to look at the ways in which ‘academics’ get published. Two themes emerged that were, in some ways, both parallel and tangential to this.

  • Does writing precede or follow publishing?
  • Does writing precede or follow thinking?

To this end the notion of regular writing was discussed. The oft-described (and observed) image of the researcher with daybook, recording thoughts, observations, references. Writing little and often. This was where the decision to keep this blog came from. Writing a little each day, building up patterns of thought.

How do I know what I am thinking until I see what I am writing?

The concept of writing up is one that is often cited as filling PhD candidates with dread; the concept of the blank page does the same for authors. But if you take the starting point that you are writing for a purpose (a thesis, or a publication), and if you take the viewpoint that thinking and writing are indivisible, then maybe these dreaded inertias may be avoided. I don’t know, but it seems to be a reasonable premise at this stage…


Towards a PhD, starting the long journey

3 December 2006

So… after several false starts I have just this week made a presentation of my PhD proposal, and am now truly on the road. I’ll use this space to explore ideas…

TITLE

Assessment of ICT at 16: its validity and relationship to students’ formal and informal learning.

AIMS

1. To critically analyse the ways in which students aged 16 construct their learning of ICT capability in formal and informal contexts.
2. To explore the relationship between formal and informal learning within the field of ICT.
3. To explore the methodologies of assessment of ICT capability at 16 and how this affects student perceptions of their capability.
4. To develop a theoretical base to evaluate the construct validity of assessment of ICT at 16.