Assessment Criteria List
Contributors Joseph Bergin, Christian Kohls, Christian Köppe, Yishay Mor, Michel Portier, Till Schümmer, Steven Warburton
Last modification May 3, 2017
Source Bergin et al. (2015)[1]; Warburton et al. (2016)[2][3]
Pattern formats OPR Alexandrian
Usability
Learning domain
Stakeholders


Also Known As: Assessment Contract (Assessment Contract)


Clearly communicate to the students what the criteria for assessment are.


Context

This works best for projects, essays, and theses where you have multiple components/perspectives that need to be assessed: produced artefacts, learning achievements, documentation, presentation, innovation, social interaction etc.

Problem

If students do not know what you expect from them, they may run in the wrong direction, perform poorly, or do the wrong things. As a teacher, you may also have no way of knowing whether the students are performing well and as intended.

Forces

You may implicitly know what you expect from students, but how can the students tell? Your implicit expectations may also shift into different criteria when it comes to the actual grading, and grading risks being unfair if you apply different assessment criteria each time. Even if you try to find the right individual criteria for each group, students may feel treated unfairly because they cannot understand what went wrong. Students like consistency and want to understand what you expect, but how do you account for unexpectedly high or low performance? Students should be able to self-assess their current performance, but how can they know what counts as good or bad performance? Students need some safety and guidance, but they should also be able to explore new fields and alternative paths.

Solution

Therefore, have a clear list of criteria that are accounted for in grading. Clearly communicate the criteria your students should aim for to have Transparent Assessment (Transparent Assessment). The criteria should be derived from the Learning Outcomes (Learning Outcomes) and further explain what will be assessed.

Solution Details

Work on Criteria Refinement (Criteria Refinement) to provide more requirement details and state explicitly what you consider good and bad performance. Use a Performance Sheet (Performance Sheet) to mark each criterion conveniently. Aggregate the single performance grades into one overall grade. Clearly state to your students the criteria they will be rated on. Decide and communicate whether all criteria have to be fulfilled and whether they count equally. If some criteria are weighted more heavily than others, you should define sub-criteria.
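Aggregating weighted criterion marks into one overall grade can be sketched as follows. This is a minimal illustration, not part of the pattern itself: the criterion names, weights, and marking scale are invented for the example.

```python
def overall_grade(marks, weights):
    """Weighted average of per-criterion marks.

    marks:   criterion -> mark (here: 1.0 = best, 5.0 = worst, an assumed scale)
    weights: criterion -> relative weight in the overall grade
    """
    total_weight = sum(weights[c] for c in marks)
    return sum(marks[c] * weights[c] for c in marks) / total_weight


# Hypothetical performance sheet for one student group
weights = {"research": 1, "architecture": 2, "documentation": 1}
marks = {"research": 2.0, "architecture": 1.5, "documentation": 3.0}

print(overall_grade(marks, weights))  # 2.0
```

Publishing the weights alongside the criteria list keeps the aggregation itself transparent, not just the criteria.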


Some of the criteria might be knockout criteria, that is, criteria in which students have to perform well. On the other hand, to account for diversity and special cases, you may define that only the best-evaluated criteria count toward the overall grade.


Optionally have Hidden Criteria (Hidden Criteria) that are coherent with the goals but would confuse or mislead students if stated explicitly. Knockout criteria should never be Hidden Criteria (Hidden Criteria).

Positive Consequences

Students know what is expected of them and how they will be rated. The transparent criteria list makes grading fairer. It also simplifies the evaluation if you have clear criteria that you can check off with a checklist or performance sheet. All groups are assessed in the same way. A standardized criteria list also makes it easier for two independent reviewers to grade the results.

Negative Consequences

Students may focus too much on the criteria and lose sight of the big picture. They may concentrate only on the criteria, take the easiest path, and never exceed expectations. The criteria may be too static to account for special performances of students. Too many details may overwhelm students. That is why you should only refer to the high-level criteria, but also have a Criteria Refinement (Criteria Refinement) when needed.

Example

In an introductory course on media and computer science (“media informatics”), students had to design a system that helps users reduce their CO2 footprint. They had one week to work on the problem, fulfill several tasks, and produce a self-running demonstration (a video). At the end of the week they had to give a presentation, and two weeks later they had to submit their project documentation. They were rated on ten criteria: research, requirement analysis, system architecture, pseudo code, self-running presentation video, team presentation, structure of documentation, writing quality of documentation, reflection in documentation, and degree of innovation. Each of the criteria had to be met at a minimum level, but the grading was based only on the seven criteria in which the students performed best, in order to account for diversity.
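The grading scheme from this example, a minimum level on every criterion combined with a best-seven-of-ten average, can be sketched like this. The point scale (0 to 10, higher is better) and the minimum threshold are assumptions; the course description does not specify them.

```python
def project_grade(marks, minimum=5, best_n=7):
    """Combined knockout check and best-n aggregation.

    marks: criterion -> points (assumed scale: 0-10, higher is better).
    Every criterion must reach the minimum level; if one falls below,
    the project fails (None). Otherwise only the best_n marks are averaged.
    """
    if any(m < minimum for m in marks.values()):
        return None  # minimum level missed on at least one criterion
    best = sorted(marks.values(), reverse=True)[:best_n]
    return sum(best) / len(best)


# The ten criteria from the example, with invented marks
marks = {
    "research": 8, "requirement_analysis": 7, "system_architecture": 9,
    "pseudo_code": 6, "video": 8, "team_presentation": 7,
    "doc_structure": 6, "doc_writing": 5, "doc_reflection": 7,
    "innovation": 9,
}

print(project_grade(marks))  # averages the seven best marks
```

Note that the knockout check runs over all criteria, not only the best seven: a student cannot compensate for a failed criterion by excelling elsewhere.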

Related patterns

Instead of prescribing criteria, let Students Define Criteria (Students Define Criteria) or let Students Guess Criteria (Students Guess Criteria).

Instead of a fixed set of criteria, one can have Flexible Criteria (Flexible Criteria) and allow Multiple Paths (Multiple Paths) (allowing different achievements to reach the goal). In team work, a Performance Matrix (Performance Matrix) can make explicit who was responsible for what. This pattern is one of the Open Instruments of Assessment (Open Instruments of Assessment) and can be used within Constructive Alignment (Constructive Alignment).

References

  1. Pattern published in Bergin, J., Kohls, C., Köppe, C., Mor, Y., Portier, M., Schümmer, T., & Warburton, S. (2015). Assessment-driven course design foundational patterns. In Proceedings of the 20th European Conference on Pattern Languages of Programs, EuroPLoP 2015 (p. 31). New York: ACM.
  2. Patlet published in Warburton, S., Mor, Y., Kohls, C., Köppe, C., & Bergin, J. (2016). Assessment driven course design: a pattern validation workshop. Presented at 8th Biennial Conference of EARLI SIG 1: Assessment & Evaluation. Munich, Germany.
  3. Patlet also published in Warburton, S., Bergin, J., Kohls, C., Köppe, C., & Mor, Y. (2016). Dialogical Assessment Patterns for Learning from Others. In Proceedings of the 10th Travelling Conference on Pattern Languages of Programs (VikingPLoP 2016). ACM.
