Bones, boys, bombs and booze: an exploratory study of the reliability of marking dissertations across disciplines

Hdl Handle:
http://hdl.handle.net/10149/92346
Title:
Bones, boys, bombs and booze: an exploratory study of the reliability of marking dissertations across disciplines
Authors:
Bettany-Saltikov, J. A. (Josette); Kilinç, S. (Stephanie); Stow, K. (Karen)
Affiliation:
University of Teesside. School of Health and Social Care; University of Teesside. School of Social Sciences and Law.
Citation:
Bettany-Saltikov, J., Kilinç, S. and Stow, K. (2009) 'Bones, boys, bombs and booze: an exploratory study of the reliability of marking dissertations across disciplines', Assessment & Evaluation in Higher Education, 34 (6), pp. 621-639.
Publisher:
Taylor & Francis
Journal:
Assessment & Evaluation in Higher Education
Issue Date:
Dec-2009
URI:
http://hdl.handle.net/10149/92346
DOI:
10.1080/02602930802302196
Abstract:
The primary aim of this study was to evaluate the reliability of the University's Masters' level (M-level) generic assessment criteria when used by lecturers from different disciplines. A further aim was to evaluate whether subject-specific knowledge was essential to marking these dissertations. Four senior lecturers from diverse disciplines participated in this study. The University of Teesside's generic M-level assessment criteria were used and formatted into a grid. The assessment criteria related to the learning outcomes, the depth of understanding, the complexity of analysis and synthesis, and the structure and academic presentation of the work. As well as a quantitative mark, a qualitative statement giving the reason behind the judgement was required. Each lecturer provided a dissertation that had previously been marked. All participants then marked each of the four projects using the M-level grid and comments sheet. The study found very good inter-rater reliability: for any one project, the variation in marks from the original mark was no more than 6% on average. The study also found that, in terms of the reliability of marks, subject-specific knowledge was not essential when marking with generic assessment criteria. The authors acknowledge the exploratory nature of these results and hope other lecturers will join in the exploration to test the robustness of generic assessment criteria across disciplines.
Type:
Article
Language:
en
Keywords:
reliability; subject-specific knowledge; dissertations; second marking; subject disciplines; assessment criteria
ISSN:
0260-2938; 1469-297X
Rights:
Subject to restrictions, the author can archive the post-print (i.e. the final draft post-refereeing). For full details see http://www.sherpa.ac.uk/romeo/ [Accessed 17/02/2010]
Citation Count:
0 [Web of Science, 17/02/2010]

Full metadata record

DC Field | Value | Language
dc.contributor.author | Bettany-Saltikov, J. A. (Josette) | en
dc.contributor.author | Kilinç, S. (Stephanie) | en
dc.contributor.author | Stow, K. (Karen) | en
dc.date.accessioned | 2010-02-17T12:08:08Z | -
dc.date.available | 2010-02-17T12:08:08Z | -
dc.date.issued | 2009-12 | -
dc.identifier.citation | Assessment & Evaluation in Higher Education; 34 (6): 621-639 | en
dc.identifier.issn | 0260-2938 | -
dc.identifier.issn | 1469-297X | -
dc.identifier.doi | 10.1080/02602930802302196 | -
dc.identifier.uri | http://hdl.handle.net/10149/92346 | -
dc.description.abstract | The primary aim of this study was to evaluate the reliability of the University's Masters' level (M-level) generic assessment criteria when used by lecturers from different disciplines. A further aim was to evaluate whether subject-specific knowledge was essential to marking these dissertations. Four senior lecturers from diverse disciplines participated in this study. The University of Teesside's generic M-level assessment criteria were used and formatted into a grid. The assessment criteria related to the learning outcomes, the depth of understanding, the complexity of analysis and synthesis, and the structure and academic presentation of the work. As well as a quantitative mark, a qualitative statement giving the reason behind the judgement was required. Each lecturer provided a dissertation that had previously been marked. All participants then marked each of the four projects using the M-level grid and comments sheet. The study found very good inter-rater reliability: for any one project, the variation in marks from the original mark was no more than 6% on average. The study also found that, in terms of the reliability of marks, subject-specific knowledge was not essential when marking with generic assessment criteria. The authors acknowledge the exploratory nature of these results and hope other lecturers will join in the exploration to test the robustness of generic assessment criteria across disciplines. | en
dc.language.iso | en | en
dc.publisher | Taylor & Francis | en
dc.rights | Subject to restrictions, the author can archive the post-print (i.e. the final draft post-refereeing). For full details see http://www.sherpa.ac.uk/romeo/ [Accessed 17/02/2010] | en
dc.subject | reliability | en
dc.subject | subject-specific knowledge | en
dc.subject | dissertations | en
dc.subject | second marking | en
dc.subject | subject disciplines | en
dc.subject | assessment criteria | en
dc.title | Bones, boys, bombs and booze: an exploratory study of the reliability of marking dissertations across disciplines | en
dc.type | Article | en
dc.contributor.department | University of Teesside. School of Health and Social Care; University of Teesside. School of Social Sciences and Law. | en
dc.identifier.journal | Assessment & Evaluation in Higher Education | en
ref.citationcount | 0 [Web of Science, 17/02/2010] | en
or.citation.harvard | Bettany-Saltikov, J., Kilinç, S. and Stow, K. (2009) 'Bones, boys, bombs and booze: an exploratory study of the reliability of marking dissertations across disciplines', Assessment & Evaluation in Higher Education, 34 (6), pp. 621-639. | -
All Items in TeesRep are protected by copyright, with all rights reserved, unless otherwise indicated.