Abstract: Context: Student evaluations of teaching and subjects are used systematically at many educational institutions to gather student feedback. They usually consist of a number of questions to which students respond anonymously on 5- or 7-point Likert scales, with provision for free-text comments. There are several ways of evaluating student satisfaction (Elliott, 2002), and the numerical scores are often taken as the main indicator of student satisfaction, although much insight can be gained from analysing the free-text comments.
Purpose: The purpose of this study is to utilise students' free comments to gain better insight into problem areas, enabling targeted actions in curriculum design and teaching. Specifically, this study seeks to answer two related research questions.
1. What is the correlation between numerical scores and students' comments?
2. Which teaching activities influence the selection of the numerical scores, and how can these be used to give pointers for teaching enhancement?
Approach: We analyse students' comments, using machine learning techniques to extract sentiment information. The data comprise students' comments from surveys conducted at QUT over three years in almost 20 engineering subjects, with two surveys per subject per year: one in the middle of semester (the Pulse survey) and the other towards the end of semester (the Insight survey). A total of 2254 text responses have been analysed. Links between sentiment information and the numerical survey scores are used to provide pointers for curriculum enhancement and for the design and selection of appropriate teaching strategies.
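The paper does not specify the sentiment-extraction method in the abstract, but the general idea of scoring free-text comments and comparing them with Likert scores can be sketched as follows. This is a minimal, hypothetical lexicon-based illustration; the word lists and example comments are invented for demonstration and are not from the survey data.

```python
# Hypothetical sketch: score each free-text comment against small positive
# and negative word lists, producing a number that can be compared with the
# Likert-scale score the same student selected. Real systems would use a
# trained model rather than a hand-built lexicon.

POSITIVE = {"great", "helpful", "clear", "engaging", "useful"}
NEGATIVE = {"confusing", "boring", "unclear", "difficult", "poor"}

def sentiment_score(comment: str) -> int:
    """Count positive words minus negative words in a comment."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "The tutorials were great and very helpful",
    "Lectures were confusing and the slides unclear",
]
print([sentiment_score(c) for c in comments])  # [2, -2]
```

Correlating such scores with the numerical survey responses is what allows the comments to validate, or question, the Likert scores as a feedback mechanism.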
Results: The analysis conducted so far indicates a clear link between students' free comments and the numerical scores they select. There is a clear trend in the types of comments and phrases used when comparing positive and negative feedback. This trend provides confidence in the validity of the survey scores as a useful feedback mechanism. General areas for feedback have also been identified and extracted from the data. This can be applied on a subject-by-subject basis, or used at the whole-of-institution level.
Conclusions: A machine learning model has been designed which can predict student satisfaction from students' free comments about a subject. This model gives a quantifiable score based on these free comments. Important recommendations will also be extracted from the data by searching for key words (e.g. tutorials, assignments, lectures), to address problem areas or to further enhance those judged by students to be positive and valuable. This initial model does not take negation words into account.
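The keyword-based recommendation step described above can be sketched in a simple form: group comments by teaching-activity keyword, with a basic negation check illustrating the limitation the authors note. The keyword and negation lists here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: bucket comments by teaching-activity keyword so
# recommendations can be tied to specific areas (tutorials, assignments,
# lectures). contains_negation() illustrates the case the initial model
# misses: "not helpful" would be scored the same as "helpful".

KEYWORDS = ("tutorials", "assignments", "lectures")
NEGATIONS = {"not", "no", "never"}

def comments_by_keyword(comments):
    """Group comments under each teaching-activity keyword they mention."""
    groups = {k: [] for k in KEYWORDS}
    for comment in comments:
        lowered = comment.lower()
        for keyword in KEYWORDS:
            if keyword in lowered:
                groups[keyword].append(comment)
    return groups

def contains_negation(comment):
    """Flag comments whose sentiment a negation-blind model may invert."""
    return any(word in NEGATIONS for word in comment.lower().split())

groups = comments_by_keyword(
    ["Tutorials were not helpful", "More lectures please"]
)
print(groups["tutorials"])  # ['Tutorials were not helpful']
print(contains_negation("Tutorials were not helpful"))  # True
```

A negation-aware extension would invert the polarity of sentiment words within the scope of such negation terms.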
To cite this article: Cunningham-Nelson, Samuel; Baktashmotlagh, Mahsa and Boles, Wageeh. Linking numerical scores with sentiment analysis of students' teaching and subject evaluation surveys: Pointers to teaching enhancements [online]. In: 27th Annual Conference of the Australasian Association for Engineering Education : AAEE 2016. Lismore, NSW: Southern Cross University, 2016: 187-194.