So, taking the model from the Futurelab literature review, how might the dimensions of construct validity manifest themselves in assessment of ICT at 16 – the domain of my study?
Content validity: are items fully representative of the topic being measured?
Here might be included a study of what is included in assessments and an analysis of that against the stated assessment objectives, the content of specifications and, coming back to my specific focus, the topic (ICT learning) as constructed by the learners. What do 16-year-olds identify as ICT?
Convergent validity: given the domain definition, are constructs which should be related to each other actually observed to be related to each other?
Here, I think, there is something about the relationships between the things above. Is there convergence among the assessment objectives, among learners’ constructs, and between the two sets? I think there is more to explore here but haven’t quite got my head around it yet…
Discriminant validity: given the domain definition, are constructs which should not be related to each other actually observed to be unrelated?
This is more tricky. Why would there be “constructs which should not be related to each other”? Is this to do with identifying things that are mutually exclusive? Are formal and informal learning ever like this?
Concurrent validity: does the test correlate highly with other tests which supposedly measure the same things?
This too is tricky, but there is something here for me, I think, about the relationship between teacher assessment and test results.
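The correlation at the heart of concurrent validity can be made concrete with a short sketch. This is purely illustrative: the function is a standard Pearson correlation, and the learner data below are invented, not drawn from any real assessment.

```python
# Concurrent validity is often quantified as the correlation between two
# measures of the same construct - here, hypothetically, teacher-assessed
# grades and external test scores for the same group of learners.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented marks for ten learners (0-100 scale)
teacher = [54, 61, 70, 45, 80, 66, 58, 73, 49, 62]
test    = [50, 65, 72, 40, 78, 60, 55, 75, 52, 58]

# A value near 1 would suggest the two measures rank learners similarly;
# a low value would raise a concurrent-validity question about one of them.
print(round(pearson_r(teacher, test), 2))
```

Of course, a high correlation alone would not settle the question: both measures could be consistently missing the same aspects of learners’ ICT learning, which is where the content-validity questions above come back in.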