Is an assessment valid? What does this mean? Gipps and Murphy (1994) discuss this semantic issue. They relate validity to bias (or the lack of it): if a test, or assessment, is valid it is free from bias (although the converse does not necessarily follow). They cite Cronbach's and Messick's notions of a unitary model of validity based on the construct. How is the assessment constructed? Does it measure what it intends to measure, and is it free from bias? This construct validity is regarded as one of three dimensions, the others being content and criterion validity. They argue that no content or criteria can ever be free from bias, and hence these are less dominant aspects when looking for validity.
On the other hand, validity, or at least validation, has a very different meaning in another context. It is used to mean the process of recognising (as valid) that which has been learnt in non-formal settings. See, for example, the ECOTEC project. In higher education this might be equated to the process of Accreditation of Prior Experiential Learning (APEL). In APEL, non-certificated learning is validated against assessment criteria that were designed to assess formal learning. A judgement (assessment) is made as to whether the learning claimed through APEL equates to that which might have been learnt formally. It is used to exempt learners from parts of programmes.
Overarching these two processes, ensuring validity and validating non-formal learning, is a more thorny concept: that of peer or community validation of skills, knowledge and understanding. Someone regarded as an expert in a field by his or her peers has probably had a more valid assessment of their capability than someone who simply holds the piece of paper, even if that piece of paper has been awarded through scrutiny and validation of some APEL portfolio.