Adding time and propedeuticity dependencies to the OpenAnswer Bayesian model of peer-assessment

Proceedings article published in 2014 by M. De Marsico, A. Sterbini, M. Temperini
This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

Peer-assessment can be used to evaluate the knowledge level achieved by learners, while exposing them to a significant leaning activity at the same time. Here we see an approach to semi-automatic grading of school works. It is based on peer-assessment of answers to open ended questions (“open answers”), supported by the teacher grading activity performed on a portion of the answers. The methodology we present is based on a representation of student model and answers through Bayesian networks. It supports grading in a six-values scale (the widely used “A” to “F” scale). The experiments we present test the possibility to model the fact that knowledge required to perform some work is preparatory to that required for a subsequent one. The experiments have been conducted using the web-based system OpenAnswer and were meant to collect datasets, to be exploited in order to evaluate the various algorithms, settings and termination conditions better to be applied in order to have a reliable set of grading out of the learners' peer-assessment process and the teacher's grading work (with the latter limited to a significantly limited percentage of the answers to be graded).