Ensure that assessment in each unit of study is powerful, integrated and 'fit-for-purpose'
- What is ‘powerful’ assessment?
- How can we assess less but better?
- How can we use powerful assessment in large courses and groups?
- How do we know the assessment tasks selected for each unit of study and the program overall are ‘powerful’ and valid?
It was widely emphasised in the workshops that, if we want to know what capabilities and competencies are being developed in our graduates, the best place to look is at the assessment tasks they are asked to undertake and at the specific capabilities and competencies each task gives focus to.
When seeking to ensure that assessment tasks are fit-for-purpose, powerful, engaging and valid, it is important to be clear on the tests you and your team will apply to make such judgements. Box Six summarises the overall tests that were identified, developed and refined during the Fellowship workshops and checked against the guidelines produced by key assessment leaders and the outcomes of earlier OLT projects:
Key tests for powerful assessment
The assessment task or tool under consideration:
In the search for ‘powerful’ assessment tools, workshop participants suggest that a key strategy is to ‘assess less but assess better’. They suggest, for example, getting students to self-teach and self-test basic skills and knowledge in their own time using recent developments in online, interactive learning such as MOOCs. A real-world assessment task can then test their ability to draw appropriately and successfully on these basic competencies in the context of a unique practice situation. Romy Lawson in her Fellowship Report (pgs 22-26) emphasises the importance of using whole-of-course rubrics and whole-of-course assessment that is ‘scaffolded’ across each degree to foster integrated learning and assessment, and notes the potential to make greater use of a portfolio approach to show achievement of key course (program) level outcomes in combination.
It is equally important, say participants, to use an overall classification system to make sense of the many different types of assessment that might be relevant, and to undertake regular stocktakes with colleagues using this system to identify and share assessment tasks that meet the above tests.
Below in Box Seven are some initial ideas for a classification system for ‘powerful’, ‘authentic’ assessment (Wiggins, 1993; UNSW, 2015) identified during the Fellowship workshops. It has much in common with the classification systems already produced by many Australian universities, including those of the University of Adelaide and the University of Melbourne, and with work in OLT projects by higher educators such as Professor Simon Barrie and colleagues on assessing and assuring graduate learning outcomes and Professor Geoff Crisp on Transforming Assessment and the effective use of eAssessment. The more traditional university forms of higher education assessment, such as essays, examinations, tests of core skills and knowledge, class presentations, lab work and so on, are acknowledged but are not included in Box Seven.
Types of ‘powerful’ assessment
Examples of powerful assessment
Some 240 practical examples of ‘powerful’ assessment tasks consistent with the tests in Box Six and the classification system in Box Seven were identified by Fellowship participants during the workshops. These are available, sorted by field of education and type of assessment, under Resources & Further Reading.
During the Fellowship there has been, as noted earlier, particular interest in how ‘authentic’, dilemma-based assessment tasks that give focus to the real world ‘wicked problems’ of daily practice (Rittel & Webber, 1973; UNSW 2013) might most productively be developed and used. Box Eight brings together the key suggestions on how this might best be done. It is important to note that this form of assessment, like many of the types identified in Box Seven, needs to be scalable. In this regard there is particular potential to use recent developments in high-speed interactive online tools to address this challenge.
Some participants suggested that a focus on dilemma-based assessment could be facilitated by introducing, as a first step, a unit of study called ‘dilemmas of professional practice’ in which students study actual dilemmas of early career practice in the profession concerned, identified by successful early career practitioners in that area, discuss how they would handle them, and are then assessed on their responses.
Developing and using dilemma-based assessment tasks
Developing dilemma-based assessment tasks
• Identify successful early career graduates (e.g. people identified by their supervisors, colleagues and clients as performing effectively);
Using dilemma-based assessment
• When you have a pool of key dilemmas some can be used as a tool for learning – for formative assessment - and others (unseen by students) for summative assessment;
Below are four examples of how the above guidelines can be applied – the first is in medicine and uses ICT to enable scale-up, the second and third are in teacher education and the fourth comes from engineering.
Examples of ‘authentic’, dilemma-based assessment
A group of 100 final year medical students are asked to watch, on their laptops, a ‘trigger’ video in which a real-life dilemma unfolds. This is based on an actual case identified by a successful early career doctor and is re-enacted by actors. First the fledgling doctors see a young mother and two children in the doctor’s waiting room. She is in a positive mood and is about to get the results of her regular, routine mammography check.
The scene cuts to the experienced doctor; on the screen are the results of the young mother’s most recent mammography and her associated blood tests. Each student doctor must interpret what these results are saying. In this way, generic and role-specific skills and knowledge (for example the ability to read and interpret blood test and mammography results) are tested in context. If this is done correctly they will see that the results are very bad news indeed for the young mother, with secondaries already spreading. Each student is told that the mother is about to come in and is asked to say how they would break the news to her. This is recorded. They then watch how the experienced practitioner does this, and each medical student has to compare and contrast their own approach to breaking the news with the practitioner’s, using the top 12 professional capabilities identified by successful early medical practitioners as a reflection and evaluation framework. The case then proceeds by asking the student doctors what they would do next.
An ‘interactive examination’ (see Johnsson et al) attempts to improve the professional validity of an examination. Using a computer, students view three short films showing different classroom contexts. They can also access background information and transcripts of the dialogue. They are asked to describe and analyse the situations and recommend how the teachers should act. Once the students have submitted this first stage, they are presented with ‘expert’ solutions. They then have a week to compare their own responses against the ‘expert’ approach, comment on the differences and use that to identify any future learning needs that have emerged from the exercise.
Practicum in Teaching
The supervisor is briefed on the top 12 ranked capabilities from studies of successful early career teachers and asked to identify a time when the student being supervised is confronted with a dilemma – a forked-road situation where there is no clear, ‘right’ way to respond. The supervisor notes what happened and how well the person being supervised handled the situation, using the top 12 capabilities as an assessment framework. The student teacher is then asked to take the supervisor’s feedback, compare it with their own perception of what happened and how well they handled it in the light of the key capabilities, and write a comparative essay which is submitted for assessment against a rubric discussed in class before the practicum period got underway (see, for example: Bloxham, S, 2007).
An early career engineer – Rosemary (not her real name) – has been working successfully in a large construction firm for the three years since graduation. On this day she is to accompany a senior partner to a public meeting about a by-pass the company is building around a regional town. They know in advance that there is considerable public opposition, and they are greeted by a very angry audience. The senior partner presents a series of slides on the proposed construction showing that all that is proposed is fully compliant with the regulations. However, this does not placate the audience.
Engineering students undertaking the assessment task are asked to say what, if they were Rosemary, they would do to resolve the situation. They are then told what Rosemary did: at a tea break she quietly approaches some of the most vociferous members of the audience, gives them her card and says it would be great to talk privately after the meeting so she could hear directly from them what is going on. In this way she establishes that the mayor is a keen ornithologist and that there is a colony of endangered local birds nesting in one of the small patches of forest to be felled to make way for the by-pass. A diversion around this is negotiated and the by-pass project proceeds. Again, students compare and contrast their strategy with Rosemary’s, making reference to the top 12 key capabilities identified in studies of successful early career engineering graduates.
Select one assessment task from a unit of study you teach which you see as being ‘powerful’ and which has been well received by students:
- Identify what sort of assessment task it is using the classification system discussed in Box Seven or by adding a new category.
- Identify the extent to which it assesses the five dimensions of capability in combination (see Using the Guide & Getting Started, Section 3.2).
- Identify if it is addressing any aspects of the plus in work ready plus (see Using the Guide & Getting Started, Section 3.3) – for example, does it confirm whether the graduate is sustainability literate, change implementation savvy, creative and inventive, and has a clear position on the tacit assumptions driving the 21st century agenda?
References & further guidelines on developing ‘powerful’ assessment tasks
- Benckendorff, P et al (2015): Online business simulations – a good practice guide, OLT, Sydney (see also bizsims.edu.au)
- Griffith University (2015): Assessment matters! This site has guidelines on designing effective assessment, assessment methods, consensus moderation, a glossary of terms and a range of further references and resources.
- Annie Holdsworth, Kim Watty and Martin Davies (2009) from the Centre for the Study of Higher Education at the University of Melbourne provide a very helpful and practical overview of developing and using capstone assessment tasks in their Guide on developing capstone experiences. On pgs 12ff they summarise some very practical guidelines on the assessment of capstones and suggest reference to the Griffith University Graduate Attributes Toolkit on assessing professional skills (pgs 12ff) and the Griffith toolkit on creativity and innovation (see pgs 29ff for guidelines on assessing creativity).
- Barrie et al (2012): Assessing and assuring Australian graduate learning outcomes, OLT, Sydney give specific exemplars as follows:
- Tasks seen as being relevant for assessing graduate learning outcomes across a range of disciplines including Business, Chemistry, Drama, English, History, Law and Vet Science are identified on pgs 34ff. The assessment task types identified include: reports, critical reviews/essays, oral presentations, tutorial/rehearsal, reflective piece, examinations, performance, work-placement, working demonstrations and multi-component tasks.
- The characteristics of tasks identified as being effective in the assessment of graduate learning outcomes are listed and discussed on pgs 36ff. They include: assessment for learning; relevance to professional practice; authenticity of role and audience; active student engagement and roles; careful design and management of group assessment tasks; explicit task relationships; and a focus on reflection: turning experience into learning.
- Selected examples of assessment tasks in Engineering, Veterinary Science, Law, Business, History, Archaeology and Drama, along with eAssessment options that meet these tests, are included on pgs 40ff.
- Boud, D & Dawson, P (2015): Best estimates of knowledge about assessment in higher education? Foundations for course design and leading assessment in the academy, Centre for Research in Assessment and Digital Learning, Deakin University, Session E8, pg 126, ISSOTL 2015.
- Crisp, Geoff: Transforming Assessment website (ALTC Fellowship 2009-11). The website Rethinking assessment in a participatory digital world – assessment 2.0 and beyond gives eAssessment examples across a wide range of fields of education, types of eAssessment and tools. This site also hosts a series of webinars in this area.
- Freeman, M & Ewan, C (2014): Good Practice Report: Assuring learning outcomes and standards, OLT, Sydney discuss a range of experiential learning and capstone options on pgs 40-42.
- The Padagogy Wheel – on this site Allan Carrington from Adelaide brings together the applications available on tablets and relates them to the key capabilities identified in this Fellowship.
- Many universities are now providing guidelines on how to make assessment ‘authentic’. A good example is the UNSW (2013) Assessment Toolkit on Assessing Authentically. This toolkit provides a clear outline of the distinguishing characteristics of authentic assessment tasks (pgs 2-3), along with excellent guidelines on how best to design them and lists examples including problem-based tasks (pg 4), structured clinical examinations, scenario based assessment, portfolios, solution focused tasks, forensic problem solving and video triggers (pgs 5-7). See also UNSW (2015): Assessing authentically, UNSW Assessment Toolkit. This site provides guidelines on authentic assessment and case studies of its use in art teacher education, Engineering, Medical Sciences and in social sciences and international studies.
- Lee, Nicolette (2015): Capstone curriculum across disciplines: Synthesising theory, practice and policy to provide practical tools for curriculum design, National Senior Teaching Fellowship Report, OLT, Sydney.
- Romy Lawson's Assuring Learning website at: http://www.assuringlearning.com/
This site, developed as part of Romy Lawson's OLT Fellowship on the area, has a wide range of practical tips and resources on mapping graduate attributes in higher education and leadership strategies for engaging staff in these processes. It includes practical resources covering writing and embedding course (program) level outcomes, constructing whole of course rubrics, designing course level outcome assessments, productive learning activities and leading the way along with quality enhancement resources and a curriculum design workbench (tool).