By Ben Harkin, Aspasia Eleni Paltoglou, Khizra Tariq, Maggie Watkin, Shokaib Ashfaq, Alan Yates, and Carly Jacobs
Manchester Metropolitan University
Cite as: Harkin, B., Paltoglou, A.E., Tariq, K., Watkin, M., Ashfaq, S., Yates, A. and Jacobs, C. (2022), "Student Experiences of Assessment and Feedback in the National Student Survey: An Analysis of Student Written Responses with Pedagogical Implications", International Journal of Management and Applied Research, Vol. 9, No. 2, pp. 115-139. https://doi.org/10.18646/2056.92.22-006
Abstract
The National Student Survey (NSS) indicates that students are less satisfied with Assessment and Feedback than with other dimensions of the NSS in Higher Education Institutions (HEIs) across the United Kingdom (UK). HEIs generally rely on the quantitative Likert responses within the NSS to assess the quality of their provision while ignoring written comments. However, we propose that analysis of written comments is essential to understand students’ lived and multidimensional experiences. We therefore used a Framework Analysis to investigate students' written responses on assessment and feedback in the 2020 NSS. We identified high-scoring (n = 4) and low-scoring (n = 4) departments as those that scored the highest and lowest on assessment and feedback at an HEI in the UK. These groups scored above and below the NSS national average for assessment and feedback of 72.6% (Office for Students, 2020), at 84.6% and 66%, respectively. Our analysis of 10,628 words revealed five main themes (interaction and experience, assessment clarity, assessment fairness, timing, and inspiration for the present and future) and eleven sub-themes. We used the frequency of words concurrently with these themes to identify areas of good pedagogical practice. For example, high-scoring departments provided easy-to-follow lectures (Theme 1) and assessment guidance (Theme 2), students perceived feedback as fair (Theme 3), tutors were appropriately responsive to students' attempts at communication (Theme 4), and assessments had clear applicability to future employability (Theme 5). Our findings highlight the suitability of our approach for academics and HEIs seeking to improve their understanding and provision of assessment and feedback. We provide recommendations to improve assessment and feedback at a unit, program, and HEI level.
1. Introduction
The National Student Survey (NSS) indicates that students are less satisfied with Assessment and Feedback than with other dimensions of the NSS in Higher Education Institutions (HEIs) across the United Kingdom (UK) (Office for Students, 2022). The NSS has become a marker of institutional quality, with the results said to influence the course choices of prospective students, facilitate public accountability, and help institutions improve the student experience (Office for Students, 2022). However, research suggests that since the first publication of results in 2006, the NSS has demonstrated consistently lower levels of satisfaction with assessment and feedback than with other areas (Blair et al., 2013; Maggs, 2014; O’Donovan et al., 2021). HEIs have, therefore, responded to concerns raised in the NSS, such as teaching quality, which has seen a marked improvement in student ratings over the years (Office for Students, 2019). However, as noted above, student perceptions of the quality of feedback and assessment remain relatively negative and resistant to change (Langan & Harris, 2019). These points indicate variability in the perceptions and quality of assessment and feedback across HEIs, faculties, departments, and programs. In addition, HEIs generally rely on the quantitative Likert responses within the NSS to assess the quality of their provision while ignoring written comments. Therefore, for the first time in the literature, we will conduct a qualitative analysis of written responses in the NSS of students in high- versus low-scoring departments, with the aim of identifying recommendations for good pedagogical practice.
The investigation of feedback has revealed numerous significant experiences and factors. For example, a corpus of academic literature has highlighted that timely and constructive feedback can facilitate improvements in assessment performance, increase motivation, and can encourage self-regulated learning by enhancing cognitive engagement (Butler & Winne, 1995; Chur-Hansen & McLean, 2006; Hattie & Timperley, 2007; Hounsell, 1987, 2003; Hyland, 2000; Kulik & Kulik, 1988; Nicol & Macfarlane‐Dick, 2006; Sadler, 1989; Tian & Zhou, 2020). However, for feedback to be effective, it requires a two-way process: where students perceive it to be of value and actively engage with it to help improve future assessment performance (Blair et al., 2013).
Research from a range of countries and disciplines has examined student views and identified concerns about the quality and quantity of feedback and the length of time between submission and the return of feedback (Blair et al., 2013; Blair & McGinty, 2012; Denton et al., 2008; Gould & Day, 2012; Li & De Luca, 2014; O’Donovan et al., 2021; Parkes & Fletcher, 2017; Vattøy et al., 2021; Weaver, 2006). This research also identified issues stemming from inconsistencies between tutors, for example, an inability to understand the language used by tutors, differing feedback between tutors on the same topic (e.g., how to reference appropriately), and a lack of feedback on exams. Blair and McGinty (2012) identified issues of cultural insensitivity, wherein extensive comments on English language skills caused distress to students for whom English is not their first language.
Difficulty in completing assignments is often attributed to poorly designed or ambiguous criteria, which leads students to hold less favourable attitudes toward subsequent feedback (Graham et al., 2022). These points highlight the importance of constructive alignment (Biggs, 1996), that is, a linear and logical relationship between assessment criteria, the design of the assignment, assessment methods, and attitudes toward feedback, and an acknowledgement that this alignment influences both objective (i.e., formative performance) and subjective (i.e., the emotions felt during the assessment process) metrics (Graham et al., 2022). Blair and McGinty (2012) identified issues with the method of delivery of summative feedback, with some students preferring face-to-face feedback as it allowed them to seek clarification and improve their understanding. Others suggested a hybrid model, with feedback provided in both written and verbal forms. However, whilst this may be a favoured method for some students, the workloads and time constraints experienced by tutors may make this approach unviable.
Contemporary studies often focus solely on student perspectives (Dawson et al., 2019), possibly reflecting the drive to improve the student experience in recent years. In contrast, research that solicited the views of academic tutors indicates that students and tutors often hold contrary views about feedback. In an extensive qualitative study, Dawson et al. (2019) reported broad agreement about the purpose of feedback, namely to help students improve their work, although the means of how to improve was often absent. However, students and tutors differed more in their opinions about what counted as effective feedback, with tutors more likely than students to mention the design of the assessment and less likely to comment on the quality of feedback. Mulliner and Tucker’s (2017) survey of tutors and students also noted divergent responses between the groups, in particular concerning students’ engagement with feedback and the quality of feedback. Tutors were considerably more satisfied than students with the quality and fairness of their feedback and its relationship to the marking criteria. These authors also observed that the ideal timeframe for work to be returned was considerably shorter for students than for tutors, which suggests that students may not appreciate or be aware of tutor workloads.
O’Donovan, Den Outer and Price (2021) concur with the extant literature in suggesting that many factors may contribute to students' perceived lack of satisfaction with feedback. They also highlighted an understandable lack of knowledge among students about the processes involved in assessment and feedback. The authors suggested that both the student and the tutor need to be assessment literate, that is, to have: “Shared understandings of the nature and role of assessment and feedback” (p. 4). Failing to address the mismatch between student and tutor perceptions is likely to be reflected in continued dissatisfaction with feedback and assessment in NSS responses.
It is important to note that the term feedback covers a range of pedagogical approaches, such as formative peer and tutor feedback and summative feedback on assignments. However, we identify two limitations of how the NSS measures feedback and assessment. First, questions within the NSS focus on summative feedback and fail to tap into diverse types of assessment delivery. Second, students respond on a 6-point Likert scale to four closed questions relating to marking criteria, fair marking, timely feedback, and helpful comments. We identify a limitation to this reliance on quantitative aggregated responses as they do not provide much meaningful information to help us fully understand the students’ perspectives on feedback and assessment. Fortunately, and of relevance to the present study, the NSS does allow students to give more detailed written responses, providing the opportunity for rich qualitative analysis.
Therefore, the present study will utilise previously untapped NSS student written response data to identify what differentiates high- from low-scoring departments and identify markers of good practice in assessment and feedback. The use of NSS data in this manner will provide a means for HEIs to understand student responses in more detail (i.e., beyond aggregated quantitative responses) and to traverse the communication gap that often exists between students and tutors in this area (Dawson et al., 2019; Maggs, 2014; Mulliner & Tucker, 2017).
2. Methodology
2.1. Data Extraction
The NSS gathers data on undergraduate students’ opinions, attitudes, and experiences of their courses (Office for Students, 2022). The NSS probes students on the following nine domains: (a) the teaching on my course, (b) learning opportunities, (c) assessment and feedback, (d) academic support, (e) organisation and management, (f) learning resources, (g) learning community, (h) student voice and (i) students’ union. We focus here on the written responses for the third dimension, assessment and feedback. The four questions specific to assessment and feedback in the NSS are: (a) “The criteria used in marking have been clear in advance.” (b) “Marking and assessment has been fair.” (c) “Feedback on my work has been timely.” (d) “I have received helpful comments on my work”. Students are then given the option to write a positive and/or negative comment for each of these questions. Therefore, we conducted a simple in-text search of Excel spreadsheets (positive and negative comments separately) using the terms “feedback”, “assessment”, and “marking” (a simple sketch of this search follows the definitions below) and included text that satisfied the following three definitions:
“Feedback”
“Feedback in educational contexts is information provided to a learner to reduce the gap between current performance and a desired goal” (Sadler, 1989).
“Assessment”
“Assessment refers to a related series of measures used to determine a complex attribute of an individual or group of individuals. This involves gathering and interpreting information about student level of attainment of learning goals” (Brown, 1990, p. 1)
“Marking” (Criteria)
“Marking schemes play an important role in criterion … They explicitly explain how a student is graded and every mark is accounted for. This helps the students to recognize and match teachers’ expectations and encourages student autonomy prompting deep learning” (Koshy, 2008, p. 5)
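For illustration only, the following is a minimal sketch of the kind of in-text keyword search described above, written in Python with pandas; the file name and column names are hypothetical and not taken from the study.

```python
# Illustrative sketch only: filter NSS free-text comments for the search terms
# used in the extraction step. File and column names are hypothetical.
import pandas as pd

SEARCH_TERMS = ("feedback", "assessment", "marking")

def extract_relevant_comments(path: str, column: str) -> pd.DataFrame:
    """Return rows whose comment text contains any of the search terms."""
    df = pd.read_excel(path)
    text = df[column].fillna("").str.lower()
    mask = text.str.contains("|".join(SEARCH_TERMS))
    return df[mask]

# Hypothetical usage, with positive and negative comments searched separately:
# positives = extract_relevant_comments("nss_2020_comments.xlsx", "positive_comment")
# negatives = extract_relevant_comments("nss_2020_comments.xlsx", "negative_comment")
```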
As a result of this process, we identified 10,628 words that satisfied these definitions within the positive and negative comment columns. Following this, we categorised these words into high- and low-scoring departments with respect to the NSS dimension of assessment and feedback at this HEI. We used the following criteria to create these two groups. First, we used the national sector average of 72.6% for assessment and feedback to identify departments that scored above and below this threshold (Office for Students, 2020). We justify using this high-low cut-off as it is an aggregate of the responses of ~310,000 students from 396 universities in the 2020 NSS. Second, we identified the departments at our institution that scored above and below the national average and then chose those with the highest and lowest satisfaction scores. Therefore, we ended up with two groups of departments: high-scoring (n = 4) and low-scoring (n = 4), with aggregated scores of 85.6% and 66% for assessment and feedback on the NSS, respectively. We summarise the categorisation of the 10,628 words into high- and low-scoring departments for negative and positive comments in Table 1.
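As a further illustration only, here is a minimal sketch of the grouping step; the 72.6% threshold and the selection of four departments per group follow the study, but the department names and scores are hypothetical.

```python
# Illustrative sketch only: group departments around the 2020 national sector
# average for assessment and feedback. Department names/scores are hypothetical.
NATIONAL_AVERAGE = 72.6  # Office for Students, 2020

dept_scores = {
    "Dept A": 86.0, "Dept B": 85.2, "Dept C": 84.8, "Dept D": 83.9,
    "Dept E": 71.0, "Dept F": 68.3, "Dept G": 66.5, "Dept H": 62.0,
}

above = sorted((d for d, s in dept_scores.items() if s > NATIONAL_AVERAGE),
               key=dept_scores.get, reverse=True)
below = sorted((d for d, s in dept_scores.items() if s < NATIONAL_AVERAGE),
               key=dept_scores.get)

high_scoring = above[:4]  # the four highest-scoring departments above the average
low_scoring = below[:4]   # the four lowest-scoring departments below the average
print(high_scoring, low_scoring)
```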
Table 1: Total word counts by comment type and departmental grouping

| Departmental Grouping | Negative: Low Scoring | Negative: High Scoring | Positive: Low Scoring | Positive: High Scoring |
|---|---|---|---|---|
| Total Word Count | 7283 | 1539 | 585 | 1221 |
2.1.1. The Framework Analysis Method
Thematic analysis using a Framework Approach was then used to analyse the positive and negative comments of high- and low-scoring departments for assessment, feedback and marking. Framework Analysis adopts a thematic approach to compare data in a structured manner and to develop themes within a deductive and inductive framework (Gale et al., 2013; Goldsmith, 2021). It is a suitable method for applied qualitative research such as ours as it allows the researcher(s) to provide: “Targeted answers about specific populations” (Goldsmith, 2021, p. 2061), i.e., high- and low-scoring departments. It adopts a pragmatic epistemology, which focuses on practical understandings and acknowledges that, whilst reality is socially constructed, experiences of world issues are individual; as such: “World views can be individually unique and socially shared” (Kaushik & Walsh, 2019, p. 3). Although considered a qualitative method, Framework Analysis applies a mixed-methods approach as it allows quantitative categorisations of the data to inform theme development, as we outline in the Data Extraction section and summarise in Table 1. Therefore, Framework Analysis matches the demands of our research aims and the nature of the data we analysed. We outline our data analysis process below.
2.2. Data Analysis
Framework Analysis follows five sequential stages: (a) familiarisation with the data, (b) identifying a thematic framework, (c) indexing all study data against the framework, (d) reviewing indexed data, and (e) mapping and interpreting patterns found within the tables (Goldsmith, 2021). We now summarise who did what in each of these interconnected stages.
Familiarisation with the Data
Researcher AP read and re-read comments within the positive and negative columns.
Identifying a Thematic Framework
Researchers AP and BH identified recurring themes and then categorised these into a smaller number of higher-order main themes. Following numerous iterations, feedback, and discussions, we identified five main themes and eleven associated subthemes, which we summarise below in Figure 1.
Indexing
AP then coded all the relevant extracts from the students’ comments using the agreed coding framework. To ensure validity, BH then independently second-coded these extracts, with any issues and disagreements discussed and resolved.
Reviewing Data Extracts
AP then synthesised and rearranged the data for positive and negative comments for high- and low-scoring departments according to the appropriate aspect of the thematic framework. This allowed us to compare themes and subthemes between high- and low-scoring departments, as we outline in Table 2.
Mapping and Interpreting
BH then summarised relevant extracts that students provided for each of the main themes and subthemes. The research team then gave input on the appropriateness and interpretation of these summaries and provided further refinement where necessary.
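To illustrate the quantitative categorisation that sits alongside the thematic coding, the following is a minimal sketch of how word counts per theme, departmental grouping, and comment type (as reported in Table 2 below) could be tallied; the coded records shown are hypothetical examples of the structure such data might take, not data from the study.

```python
# Illustrative sketch only: tally word counts of coded extracts by theme,
# departmental grouping, and comment type. The records are hypothetical.
from collections import defaultdict

coded_extracts = [
    {"theme": "Clarity", "group": "Low Scoring", "type": "Negative",
     "text": "Unclear marking criteria."},
    {"theme": "Timing", "group": "High Scoring", "type": "Positive",
     "text": "On-time feedback, lecturers available any time."},
]

counts = defaultdict(int)
for extract in coded_extracts:
    key = (extract["theme"], extract["group"], extract["type"])
    counts[key] += len(extract["text"].split())  # word count of the extract

for (theme, group, comment_type), total in sorted(counts.items()):
    print(f"{theme} | {group} | {comment_type}: {total} words")
```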
Table 2: Word counts by main theme, departmental grouping, and comment type

| Main Themes | Negative: Low Scoring | Negative: High Scoring | Positive: Low Scoring | Positive: High Scoring |
|---|---|---|---|---|
| Interaction & experience | 2534 | 390 | 249 | 381 |
| Clarity | 2590 | 377 | 158 | 131 |
| Fairness | 490 | 170 | 0 | 28 |
| Timing | 934 | 138 | 55 | 91 |
| Inspiration for present & future | 551 | 284 | 9 | 513 |
| Total Word Count | 7283 | 1539 | 585 | 1221 |
Ethics
Published student responses in the NSS cannot be linked to individuals, thereby providing anonymity for the students who responded. In terms of the departments included in the data analysis, it is not the intention of this paper to ‘name and shame’ or, conversely, to ‘name and praise’ individual staff or departments at the institution, and therefore all identifiable information has been redacted from the present research.
3. Results
As a result of our Framework Analysis, we identified five major themes: (a) interaction and experience, (b) assessment clarity, (c) assessment fairness, (d) timing, and (e) inspiration for present and future, and eleven associated subthemes, which we outline in Figure 1. For ease, we discuss each of these themes (T) and subthemes (ST) and exemplar extracts from students' comments within the following structure:
- Low Scoring Departments – Negative Comments
- High Scoring Departments – Negative Comments
- Low Scoring Departments – Positive Comments
- High Scoring Departments – Positive Comments
It is important to note that the differences in frequency counts across the cells of Table 2 are reflected in the number of quotes that we discuss under each theme, departmental grouping (low vs. high) and comment type (negative vs. positive). Therefore, we include quotes that are good exemplars of students' comments for each theme and subtheme. As we summarised in Figure 1, we used these frequency counts concurrently with the themes/subthemes to identify examples of good pedagogical practice for assessment and feedback.
Figure 1: The five main themes and eleven subthemes identified in the Framework Analysis.
Theme 1: Interaction and Experience
For the theme of interaction and experience, we identified three subthemes of tutors (ST1), equipment and environment (ST2), and university and course organization (ST3) across low- and high-scoring departments for negative and positive comments.
Low Scoring Departments - Negative Comments
In low-scoring departments, negative comments for tutors (ST1) often focussed on the quality of lectures, poor communication, lack of professionalism, and lack of support for assessments.
“Tutors sometimes don’t make the lectures/tutorials interesting or relevant to assessment. Don’t connect/communicate properly with the students, they aren’t approachable. Sometimes [tutors] are rude and unprofessional. They cancel teaching sessions and don’t make changes according to students’ feedback. They don’t give enough support with assignment. Don’t reply promptly to emails. They pick on students in workshops. They need to be more nurturing.”
Equipment and environment (ST2) issues highlighted computer and software problems, with an interesting insight into the poor standard of the buildings in which students find themselves.
“Issues with PCs and specialized software, not enough PCs and printers for students. A lot of construction on campus. Trouble with e-books and Moodle, [and] receiving results.”
“Dark and depressing buildings. Quiet zones in library are noisy.”
Negative comments for low-scoring departments for university and course organization (ST3) highlighted that poor organization existed concerning the assessments and that communication within the departments was poor or perceived as such by students. This lack of connection between students, tutors and assessment materials makes students feel separate from the university.
“Not much association with services as promised. Same deadline for [different] assessments, different courses should talk to each other. No follow up on student feedback. [I]don’t feel part of the university.”
Across these subthemes, we see classic issues typical of assessments, for example, lecture content that did not map onto the demands of the assessment. However, students also identified factors that are perhaps less obvious and more phenomenological, regarding the influence of the emotional connection to the tutor (ST1) and the use of space within the university (ST2).
High Scoring Departments - Negative Comments
In high-scoring departments, negative comments mirrored those observed for low-scoring departments. For the subtheme of tutors (ST1), students mentioned several negative issues that we also observed in low-scoring departments:
“Majority [of tutors/lectures] disorganized (e.g., no PowerPoint).”
“[Tutors] lecture during seminars, don’t allow discussion.”
“Don’t listen to student feedback, stuck in their ways (3 times).”
“Not helpful when wanting to change unit. Modules filled up too quickly.”
“Give support only when received negative feedback from students.”
“Don’t limit students, let them try and achieve what they believe. Less sarcasm when giving feedback.”
“Number to call better than just email.”
“Inexperienced lecturer, talked about their own experiences, rather than teach students.”
As with low-scoring departments, students in high-scoring departments identified similar negative issues with respect to equipment/environment (ST2) and university/course organisation (ST3):
“Overcrowded rooms.”
“Supportive services: Counselling services not helpful.”
“No sense of community.”
Low Scoring Departments - Positive Comments
Low scoring departments provided a range of positive comments across the subthemes of tutors (ST1) and university/course organisation (ST3). For the subtheme of tutors, we see the importance of emotional connection, passion, quality of communication and teaching from the tutors.
“Passionate lecturers [and] some very good lecturers.”
“… in third year, I noticed the teaching was overwhelmingly better than the past two years and that was a common thought amongst all [the] year group.”
“Good communication.”
In contrast to what we observed for negative comments, we see that students can hold diametrically opposed views on the same programs within the same institution.
“The university provides many opportunities, from careers, work experience, and socialising with other students and staff.”
“Privilege to use all the services and facilities of both [universities names redacted]. I am very grateful about it especially when using the libraries of both universities. Sense of community. the environment is a … safe space to learn and grow. Management responded well to feedback and arranged additional support for part-time students.”
High Scoring Departments – Positive Comments
An interesting pattern for high-scoring departments was that students expressed positive comments similar to those in low-scoring departments for the subtheme of tutors (ST1); however, no comments were provided for equipment/environment (ST2) or university/course organisation (ST3):
“Support on assessment offered many times by certain tutors.”
“Interested and passionate tutors.”
“Great atmosphere, students encouraged to discuss.”
“Everyone helpful, tutors try their best to guide students.”
“Incredible staff, incredibly devoted and supportive. They gave 100%.”
“Feedback understanding and supportive.”
“Good quality lectures [and] lecturers.”
In summary of our findings for interaction and experience (Theme 1), we highlight the following. Students within high-scoring departments report that tutors provide high-quality lecture content, convey the information to students in an expert manner, and provide a positive atmosphere for students to learn. In contrast, students reported some negative issues that appear generic to assessments and likely reflect shortcomings at an institutional level, for example, the availability of computers and the suitability of study areas. When we refer to Table 2, we see that while the nature of negative responses is similar across low- and high-scoring departments, there is a marked difference in the total number of negative words: 2534 versus 390, respectively. We take this to suggest that lower-scoring departments have systemic issues with assessment, feedback and marking, which operate above the background noise of general assessment-related complaints and environment-related issues.
Theme 2: Assessment Clarity
For the theme of assessment clarity (T2), the importance of clear assessment guidance, marking criteria and feedback was evident. In addition, we identified three distinct yet overlapping subthemes in terms of temporal domains for assessment, feedback and marking: before assessment submission (ST1), after receiving feedback (ST2), and between assessments (ST3).
Low Scoring Departments – Negative Comments
Before Assessment Submission
“Unclear marking criteria.”
“No consistency in marking criteria.”
“Lack of clarity regarding what the students have to do for the assessment.”
“Having a formative assessment doesn’t always make summative assessment clearer.”
“Not enough explanation of coursework in lectures and workshops.”
“Lecturers sometimes unable to convey what students need to do.”
After Receiving Feedback
“Subjective, too brief, shallow, generic and unclear feedback.”
“Lack of clarity in feedback.”
Between Assessments
“No consistency in … advice [on assessments] between units.”
“Students don’t know how to improve [in their] next [assessment].”
High Scoring Departments – Negative Comments
Before Assessment Submission
“Misleading information about assessment in live sessions from some tutors. Unclear advice and not enough guidance often on assessment.”
“Marking rubric vague and unclear.”
After Receiving Feedback
“Not enough detail on feedback. Repetitive feedback, but not clear how to improve.”
“Feedback doesn’t tell you how to improve. Pointless feedback. Feedback not representative of the grade. Unclear structure of unit, unclear communication.”
“Inconsistency between feedback and marking sometimes.”
Between Assessments
“It is not enough to have online resources; you also need to explain these.”
Low Scoring Departments – Positive Comments
Before Assessment Submission
"Really good, extremely helpful, and clear guidance on assessments from the majority of staff."
"Exact templates questions on … projects, very straightforward and easy to understand/handle.
"Give a lot of practice coursework and good feedback on it."
After Receiving Feedback
“Good feedback. Great one-to-one feedback from tutors on coursework and exams. They provide a lot of materials and information, workshops, etc.”
“Helpful feedback on how to improve.”
“Frequent feedback (weekly in one case)”
Between Assessments
No relevant extracts satisfied this temporal domain.
High Scoring Departments - Positive Comments
Before Assessment Submission
No relevant extracts satisfied this domain.
After Receiving Feedback
“Helpful, diligent, detailed, explained well, well-balanced and in-depth feedback.”
Between Assessments
“Easy to get support [and] to understand how to improve next time.”
In summary, we highlight that for the theme of assessment clarity (T2), we have identified sequential and temporal points in the assessment process that tutors can target, i.e., before assessment submission (ST1), after receiving feedback (ST2) and between assessments (ST3). In addition, in Table 2, we observe a marked difference between low-scoring and high-scoring departments for total negative words of 2590 versus 377, whereas positive words were relatively similar at 158 versus 131, respectively. We interpret this to suggest that lower-scoring departments systemically fail to provide adequate assessment-related provision at all temporal points of the assessment process.
Theme 3: Assessment Fairness
Related to the previous theme of assessment clarity was the theme of assessment fairness (T3). Generally, students’ responses with respect to assessment fairness focused on feedback and the perception of fairness around feedback.
Low Scoring Departments – Negative Comments
“Conflicting guidance, inconsistent marking. Variation in marking between tutors. Too harsh marking.”
“Unfair marking due to lecturers not being clear. Unfair marking. Lack of continuity. One essay tutorial is not enough.”
“Group work issues – lower mark because members of the group not pulling their weight.”
“Should have online material for those that can’t make the lectures.”
“Give second chances to do better.”
Low Scoring Departments – Positive Comments
No relevant extracts were found for this domain.
High Scoring Departments – Negative Comments
“Harsh marking can have negative effect on future.”
“Inconsistent marking and feedback.”
“Inconsistency between lecturers and in marking criteria.”
“Unfair when some students don’t pull their weight in group work, that is not reflected on the mark.”
“Favouritism, nothing done when complained.”
“Step marking penalizes borderline students.”
High Scoring Departments – Positive Comments
No relevant extracts were found for this domain.
In summary of our findings for assessment fairness (Theme 3), we found that negative words were more frequent in low-scoring (490) versus high-scoring (170) departments. For positive comments, low-scoring departments returned none, whereas high-scoring departments had 28 positive words (see Table 2). We interpret this to suggest that in low-scoring departments students perceive fairness negatively, with little or no positive upside. The small number of positive comments for high-scoring departments also indicates that, across all departments (low and high), assessment fairness is not explicitly promoted or discussed by tutors with students. Instead, when assessment and feedback are generally perceived as inadequate by students, they are likely to fill communication gaps with discussions with other students and conclude that the assessment process is unfair. In addition, the relative scarcity of positive comments even for high-scoring departments potentially suggests that they have fewer negative experiences rather than more positive experiences.
Theme 4: Timing
Timing (T4) was a strong theme, especially within the negative comments of students in low-scoring departments. Specifically, within timing we identified two subthemes: timetabling and assessment conflicts (ST1) and tutor responsiveness and availability (ST2).
Low Scoring Departments – Negative Comments
Timetabling and Assessment Conflicts
“Problems with timetabling. Lectures far apart during the day, so have to wait hours between them. Breaks too long.”
“Assessments too close together, or nothing to do at certain points in the year.”
“Tutorials before lectures is a problem.”
“Material uploaded the day before session on Moodle.”
“Some students are commuting from different cities, so they prefer to have the face-to-face lessons concentrated in certain days.”
Tutor Responsiveness and Availability
“Feedback should come quickly, so that you can use it in the next assignment.”
“Turnover for feedback is too long, and sometimes it comes later than it should.”
“Sessions cancelled last minute.”
“Tutors take time off which can lead to delays.”
“Not enough contact with personal tutor. Not enough office hours.”
“Not quick enough response to emails.”
“Staff absences are an issue.”
“Problems for part-time students, full-timers come late and disrupt the lesson.”
High Scoring Departments – Negative Comments
Timetabling & Assessment Conflicts
“Problems with having assessment for different units too close in time. Need enough support around assessment time.”
Tutor Responsiveness and Availability
“Quicker marking is needed. Respond quickly to email. Make office hours more often and at more convenient times.”
Low Scoring Departments – Positive Comments
Timetabling & Assessment Conflicts
“Good timetable, teaching and contact time.”
Tutor Responsiveness and Availability
“Response to emails at any point.”
High Scoring Departments – Positive Comments
Timetabling & Assessment Conflicts
“Timetable helpful to part-time work. Timing of assignments good, doesn’t leave students stressed.”
Tutor Responsiveness and Availability
“On-time feedback, lecturers available any time.”
To summarise the theme of timing (T4), we observed marked differences between low- and high-scoring departments in terms of negative (934 vs. 138) and positive comments (55 vs. 91) (see Table 2). We take this to suggest that students perceive lower-scoring departments as consistently failing to address issues related to timetabling and assessment conflicts (ST1) and believe that tutors are unresponsive to attempts at communication (ST2). The subtheme of timetabling and assessment conflicts highlights a relationship between organisation and provision at an institutional level and the subsequent impact on departments and student NSS responses. Specifically, the centralisation of assessments and timetabling at this HEI often leaves departments dealing with a problem they cannot fix, and so they bear the brunt of student complaints. Said differently, the lack of provision at a top-down organisational level impacts students' experiences, which students then reflect negatively back onto departments in the NSS. However, assessment conflicts are generally consistent across departments, suggesting that lower-scoring departments may fail to help students manage their workloads and plan accordingly.
Theme 5: Inspiration for Present and Future
Inspiration for present and future (T5) was an interesting theme where students identified inspiration, challenge, motivation, encouragement to explore and develop ideas, and critical thinking as key to the assessment process. We identified the following three sub-themes: assessment novelty (ST1), motivation from tutors and course content (ST2), and employability (ST3).
Low Scoring Departments – Negative Comments
Assessment Novelty
“More knowledge testing than coursework needed for technologies, several methods of testing should be used.”
“Not a lot of hands-on actives or projects.”
“Repetitive workshops and assessment.”
“Single essay a year in [course name redacted].”
Motivation from Tutors and Course Content
“Tutors should encourage exploration and development of students’ ideas, rather than telling the students what they should do.”
“Lecturers should motivate students to do better, not limit them and demotivate them.”
“There is no excitement and joy for teaching - some of the staff do not look like they enjoy their job. It is really demotivating to learn something from someone who does not like what they do.”
“Disengaging content.”
“Not engaging material/teaching methods.”
Employability
“Feeling like a temporary part of the university, and you have to put minimal effort to pass.”
“No support for choosing a career other than the subject taught.”
High Scoring Departments – Negative Comments
Assessment Novelty
“Not much variability in assessment types and units. The content was a bit repetitive.”
Motivation from Tutors and Course Content
“The course should be more challenging academically.”
Employability
“Not much opportunity for group work, groupwork is important for employability.”
“Not much careers information and preparation for life after university.”
“Questioning whether skills learned will lead to a job.”
“Not enough industry talks for some courses.”
Low Scoring Departments – Positive Comments
Motivation from Tutors and Course Content
“The course has provided challenging and intellectually stimulating content.”
High Scoring Departments – Positive Comments
Assessment Novelty
“Good crossover between various modules. Mostly interesting and varied reading lists”
“Good mix of exam and coursework.”
Motivation from Tutors and Course Content
“Critical thinking skills.”
“Incredible staff who are incredibly devoted to supporting their students. Explanatory, enabling us to create better work in the future.”
Employability
“My course gave me a wide variety of insights into different subjects. A lot of extracurricular encouragement. Encouragement to develop individual voice and confidence. Push to do your best work. In touch with industry professionals. Learned about myself and the educational system.”
In summary, the theme of inspiration for present and future (T5) revealed stark differences between low- and high-scoring departments in terms of negative (551 vs. 284) and positive words (9 vs. 513) (see Table 2). Specifically, we draw attention to the fact that high-scoring departments had 513 positive words versus only 9 in low-scoring departments. This difference suggests that departments that scored higher on assessment and feedback on the NSS went beyond merely delivering assessments to test learning: they created interesting assessments (ST1), were motivational (ST2), and, perhaps most revealingly, tied assessments to future employability (ST3). These findings highlight the multidimensional representations and expectations that students hold regarding assessments, with high-performing departments bringing these to the fore.
4. Discussion
The present study provides a novel analysis of students' written responses in the NSS to understand what differentiates high- from low-scoring departments for assessment and feedback at an HEI in the UK. As we employed a Framework Analysis, we used frequency word counts (see Tables 1 and 2) concurrently with our themes to identify areas of good pedagogical practice and areas of concern in high- versus lower-scoring departments, respectively. To this end, we identified five themes and eleven subthemes, which we summarised in Figure 1.
Word counts revealed marked differences in the total number of positive and negative words between the two groups of departments. Specifically, low- and high-scoring departments had 7283 versus 1539 negative words and 585 versus 1221 positive words, respectively. These differences further justify our grouping method (i.e., the highest- and lowest-scoring departments in the institution for assessment and feedback, above and below the national sector average of 72.6%) and demonstrate the sensitivity of our Framework Analysis in identifying relevant student written responses within the positive and negative comments of the NSS. Across our themes, word counts also revealed distinct patterns, which we discuss below in relation to the specific themes and subthemes identified in our analysis. Within this, we aim to identify areas of good practice that tutors, unit leads, program leads, departments and HEIs can implement to improve the delivery of assessment, feedback and marking.
Student responses revealed a division of interaction and experience (Theme 1) into two principal areas: tutors within the departments (Subtheme 1) and more generic institutional issues, namely equipment and environment (Subtheme 2) and university and course organization (Subtheme 3). This is consistent with the view that students experience educational spaces and learning as multi-dimensional, comprising physical, mental, social, and emotional factors (Harkin & Nerantzi, 2021; Harkin et al., 2021; Harkin et al., 2022; Lefebvre, 1991; see Harkin, Yates, Riach, Clowes, Cole & Cummings, 2021 for an original exposition of this model). Students’ assessment exists within this space, so when one (or more) of these dimensions is compromised (e.g., poor access to computers or unsuitable study areas), this likely compromises them all and impacts their assessment experience. This finding is consistent with Langan and Harris (2019), who reported that positive student satisfaction is generally related to smooth-running programs and stimulating course content. From this, we infer that HEIs, departments, and tutors can improve their provision of assessment and feedback by viewing it as a multi-dimensional concept. In addition, students may differ in their demands within each of these dimensions and require specific provisions and interventions matched to their unique academic and assessment needs.
Assessment clarity (Theme 2) and fairness (Theme 3) were overlapping constructs identified in students’ responses. Specifically, we identified three distinct yet overlapping subthemes in terms of temporal phases for assessment, feedback and marking: (a) before assessment submission (Subtheme 1), (b) after receiving feedback (Subtheme 2), and (c) between assessments (Subtheme 3). We suggest that good pedagogical practice in this area identifies the need to make each of these phases explicit within and between taught units. Students will benefit from knowing what phase of the assessment and feedback process they are in and how to use information from one phase to improve in the next. For example, after receiving feedback: debriefing students on what feedback meant, potentially providing 1-1 meetings, clarifying confusion over feedback, and ensuring that students know how to improve (e.g., accessing relevant student support services) on future assessments.
As a potential solution, we draw attention to research which indicates that the modality of feedback can affect students' perception of the feedback they receive. Henderson and Phillips (2015) found that students preferred video feedback over text due to its specificity to the student, clarity, and level of detail. However, students indicated that video feedback sometimes causes difficulties in matching the feedback within videos to their written assessment (Henderson & Phillips, 2015). Positively, tutors identified that video feedback was more time-efficient and allowed them to provide better quality feedback (Denton et al., 2008). Similarly, Crook et al. (2012) found that students preferred video feedback as it addressed shortcomings of written feedback (clarity and quality) and improved student engagement with the feedback process. A literature review of 67 studies similarly reported that students preferred video feedback, again due to its degree of detail and clarity (Bahula & Kay, 2021). To address the limitations of video feedback (i.e., matching it against the assessment), we highlight the potential use of screen capture, as research employing this method found that it improved students' ability to relate the feedback within the video to the assessment itself and to implement corrections (Denton, 2014). We appreciate that these suggestions require student engagement and consideration of tutor training, staffing numbers and appropriate workloads.
To help address issues of fairness (Theme 3), we suggest that peer feedback exercises will potentially negate the tutor-student barrier that we noted exists in this area (e.g., perceptions of unfair marking), give students an opportunity to explore feedback in a safer environment, and hopefully provide them with an opportunity to carry this feedback into future assessments. For example, Huisman, Saab, van Driel and van den Broek (2018) found that writing and receiving written peer feedback improved performance on subsequent assessments. Detailed peer feedback was perceived positively by students and encouraged them to improve their work. Anonymity when providing peer feedback led to more critical feedback and an increase in formative assessment performance (Panadero & Alqassab, 2019). The benefit of using anonymous peer feedback is that it allows students to be more honest, giving accurate and critical feedback without the fear of criticising those in their peer group. This approach benefits those students receiving negative feedback, outlining the areas they may need to improve upon and highlighting a likely overlap with tutor feedback. The benefits of peer feedback are described further in a meta-analysis, whose findings indicated that engagement in peer feedback led to greater improvements in writing than controls and self-assessment; importantly, feedback from teachers and peers led to similar improvements in writing (Huisman et al., 2019).
Timing (Theme 4) was a strong theme with two underlying subthemes of timetabling and assessment conflicts (Subtheme 1) and tutor responsiveness and availability (Subtheme 2). First, conflicting assessment deadlines were an identified issue. However, due to the structure of assessments and concurrently running units, clashes are sometimes an unavoidable logistical fact. We propose empowering students via workshops to make them aware of the need to manage their workloads and time. For example, tutors can make students aware that submission dates are just the final point in the process, and there is nothing to stop them from planning ahead to foresee clashes in submission dates (Agormedah et al., 2021). This approach requires students to reflect upon what they have done, which induces a form of task-related incubation, which has been observed to have a number of positive effects on learning-associated factors such as creativity and problem-solving (for a review see Ritter & Dijksterhuis, 2014). Second, and understandably, students expect tutors to be open to and respond positively to attempts at communication. Suzić, Ćirković-Miladinović and Dabic (2013) identified that first contact shapes students' perceptions of the quality of subsequent communication, as outlined in Heider’s Attribution Theory (Heider, 1958). We suggest this identifies the need to improve communication via positive tutor-student interactions and learning activities early in students' engagement with a unit.
Inspiration for the present and future (Theme 5) was an interesting theme with three subthemes of assessment novelty (Subtheme 1), motivation from tutors and course content (Subtheme 2), and employability (Subtheme 3). Students want to engage in novel, challenging and inspirational assessments. Perhaps one of the most revealing subthemes was that of employability: students want to see the application of the assessments they complete and the feedback they receive to future employment. Making this relationship explicit to students will improve student engagement in the assessment and feedback process. These findings concur with Control Theory, which suggests that students regulate their behaviours when task goals are clearly outlined (Gregory & Levy, 2010). Therefore, students are more likely to regulate their behaviour (i.e., assessment performance) when they have a specific goal (i.e., future employment within a relevant field) and can then evaluate their performance towards, and distance from, that goal. Johnson and Lord (2006) highlighted that when there is a large gap between a set goal and the feedback received, students increase their effort to close that gap. However, some students may develop lower expectations and less desire to reach a set goal if current performance is not up to the mark (Campion & Lord, 1982) or if the goal is absent or unknown (i.e., relevance to future employment is unclear).
5. Recommendations
5.1. Recommendations for Units and Programs
Based on our findings, we suggest the following key recommendations to improve the student experience of assessment and feedback at a program and unit level. Specifically, consistent with our finding of three distinct periods related to assessment and feedback (i.e., before assessment submission, after receiving feedback, and between assessments), we propose the need to make students explicitly aware of where they are and what they must do in each phase of the assessment and feedback process. To this end, programs and units must embed each of these phases within the design of programs/units and tutor-student dialogue (see Heidegger’s notion of temporality; Lewis, 2007). As such, we propose recommendations within three temporally distinct yet interrelated phases of the assessment-feedback process.
Before the Assessment. In the induction phase of a unit, one of the first requirements is to develop clear lines of tutor-tutor, student-tutor, and student-student communication, as this will serve as the foundation for future assessment-focussed exercises and discussions. Specifically, this phase will focus on establishing effective dialogue via three routes: (a) tutors who run sequential units communicating with each other about areas of general strength and development across student cohorts; (b) providing the time, space, and environment (e.g., appropriate workspaces) to set up open, dynamic, and honest tutor-student communication (e.g., use of Padlet to provide weekly updates on areas of enjoyment and challenge) and peer-to-peer communication; and (c) effective use of constructive alignment to establish the link between learning outcomes, assessment requirements and employability (Biggs, 1996).
Preparation of the Assessment(s). We identified that it is essential that students can use feedback (both positive comments and areas for development) from previous assessments to inform the focus of their preparation for the next assessment. A means to achieve this is to use the peer-to-peer workgroups established previously, where students work together, go through previous feedback, and propose appropriate solutions. Lastly, we need to coach students to manage their time effectively, for example, working with students to identify clashes in assessment submission dates and conflicts with everyday life (e.g., work, childcare) and then providing solutions on how to manage these appropriately. In extreme cases, it may be beneficial to identify students who previously struggled in this area (i.e., submitted late), with tutors and student services tasked with providing preventive interventions.
Receiving and Using Feedback. At this point, tutors must provide clear and succinct feedback that students can implement in their following assessments. For example, we identified that students are not always aware of how to use feedback to improve, and they commented positively when they could use feedback to improve the quality of subsequent assessments. We propose face-to-face meetings and group debriefings to potentially facilitate the effective use of feedback, a suggestion that can take advantage of the peer-to-peer groups and tutor-student dialogue established earlier in the unit. As we proposed in Figure 1, these three temporal domains are interconnected, creating an iterative loop between receiving feedback on one assessment, preparing the next submission, and then receiving feedback again. However, we reiterate that the impact of these suggestions on logistics and tutor workloads is apparent and by no means a trivial issue; our recommendations require sufficient consideration of staff workloads, student engagement, and provision at a financial and organizational level.
5.2. Recommendations for Higher Education Institutions
Our Framework Analysis of NSS written responses revealed differences between high- and lower-scoring departments concerning Assessment and Feedback at an HEI in the UK. Therefore, we use the themes and suggestions of good pedagogical practice (see Figure 1) to call for improvements to be pursued by HEIs. First, the quality of interactions between tutors and students, and between students themselves, is key to the assessment process; assessments mean more to students than the mere completion of the assessment per se. This suggestion will require HEIs to invest in relevant tutor training and hiring to reduce tutor-to-student ratios. We propose that this will potentially improve: (1) the relationships that students experience with their tutors, (2) the content of the lecture materials tutors provide, (3) the quality of the assessments that students submit, and (4) the quality of the experiences they report in the NSS. Second, we have argued that across HEIs, courses need to delineate specific temporal points in the assessment and feedback process. Students need to know what phase of the assessment process they are in, what they are to do in that phase, and what they need to do to transition into the next phase. Such an approach will require HEIs to provide opportunities for tutors on different units within a given semester to communicate general trends in each cohort’s assessment performance, identifying common areas of feedback and development. This identifies the need (where logistically possible) to stagger assessment submission dates, giving students time to digest the feedback they received on one assessment and then deploy it in the next. Lastly, as we have seen consistently in the written responses of students, assessment and feedback are about more than mere assessment and feedback; students want to see novel assessments with clear and direct links to future employment. Several relevant approaches are available to HEIs, for example, the deployment of Problem-Based Learning, where students achieve learning outcomes via the presentation of trigger materials (i.e., a problem from a potential workplace) for which students then have to find an empirically justified solution (Wood, 2003). We are, of course, aware that implementing such suggestions takes place against a background of increasing financial constraints within HEIs and reduced time within tutor workloads.
6. Conclusion
In sum, we propose that HEIs would benefit from the close reading and analysis of students' written responses across high- and poorer-performing departments on a given dimension of the NSS. HEIs can then use this data to design interventions for poorer-performing departments, with the effectiveness of these determined via year-on-year comparisons and changes in quantitative and qualitative responses on the NSS. Such an approach will shift the use of NSS data from one of reacting to changes in numbers produced by the NSS to one where HEIs use NSS data to inform empirically based responses and interventions. Therefore, we hope to inform future research and HEIs on the validity, insightfulness, and applicability of using experientially rich written responses within the NSS to improve our understanding and delivery of assessment and feedback specifically and all domains of the NSS generally.
7. References
- Agormedah, E., Britwum, F., Amoah, S., Acheampong, H., Adjei, E., & Nyamekye, F. (2021), “Assessment of Time Management Practices and Students' Academic Achievement: The Moderating Role of Gender”, International Journal of Social Sciences & Educational Studies, Vol. 8, pp. 171-188. https://doi.org/10.23918/ijsses.v8i4p171
- Bahula, T. and Kay, R. (2021), “Exploring Student Perceptions of Video-Based Feedback in Higher Education: A Systematic Review of the Literature”, Journal of Higher Education Theory and Practice, Vol. 21, No. 4. https://doi.org/10.33423/jhetp.v21i4.4224
- Biggs, J. (1996), “Enhancing teaching through constructive alignment”, Higher Education, Vol. 32, No. 3, pp. 347-364. https://doi.org/10.1007/BF00138871
- Blair, A., Curtis, S., Goodwin, M. and Shields, S. (2013), “What Feedback do Students Want?”, Politics, Vol. 33, No. 1, pp. 66-79. https://doi.org/10.1111/j.1467-9256.2012.01446.x
- Blair, A. and McGinty, S. (2012), “Feedback-Dialogues: Exploring the Student Perspective”, Assessment & Evaluation in Higher Education, Vol. 2012. https://doi.org/10.1080/02602938.2011.649244
- Butler, D. L. and Winne, P. H. (1995), “Feedback and Self-Regulated Learning: A Theoretical Synthesis”, Review of Educational Research, Vol. 65, No. 3, pp. 245-281. https://doi.org/10.3102/00346543065003245
- Campion, M. A. and Lord, R. G. (1982), “A control systems conceptualization of the goal-setting and changing process”, Organizational Behavior and Human Performance, Vol. 30, No. 2, pp. 265-287. https://doi.org/10.1016/0030-5073(82)90221-5
- Chur-Hansen, A. and McLean, S. (2006), “On being a supervisor: the importance of feedback and how to give it”, Australas Psychiatry, Vol. 14, No.1, pp. 67-71. https://doi.org/10.1080/j.1440-1665.2006.02248.x
- Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., Orsmond, P., Gomez, S., and Park, J. (2012), “The Use of Video Technology for Providing Feedback to Students: Can It Enhance the Feedback Experience for Staff and Students?”, Computers & Education, Vol. 58, pp. 386-396. https://doi.org/10.1016/j.compedu.2011.08.025
- Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D. and Molloy, E. (2019), “What makes for effective feedback: staff and student perspectives”, Assessment & Evaluation in Higher Education, Vol. 44, No. 1, pp. 25-36. https://doi.org/10.1080/02602938.2018.1467877
- Denton, D. (2014), “Using screen capture feedback to improve academic performance”, TechTrends, Vol. 58. https://doi.org/10.1007/s11528-014-0803-0
- Denton, P., Madden, J., Roberts, M. and Rowe, P. (2008), “Students' response to traditional and computer-assisted formative feedback: A comparative case study”, British Journal of Educational Technology, Vol. 39, No. 3, pp. 486-500. https://doi.org/10.1111/j.1467-8535.2007.00745.x
- Gale, N. K., Heath, G., Cameron, E., Rashid, S. and Redwood, S. (2013), “Using the framework method for the analysis of qualitative data in multi-disciplinary health research”, BMC Medical Research Methodology, Vol. 13, Article No. 117. https://doi.org/10.1186/1471-2288-13-117
- Goldsmith, L. J. (2021), “Using Framework Analysis in Applied Qualitative Research”, Qualitative Report, Vol. 26, No. 6, pp. 2061-2076. https://doi.org/10.46743/2160-3715/2021.5011
- Gould, J. and Day, P. (2012), “Hearing you loud and clear: student perspectives of audio feedback in higher education”, Assessment & Evaluation in Higher Education, Vol. 38, pp. 1-13. https://doi.org/10.1080/02602938.2012.660131
- Graham, A. I., Harner, C. and Marsham, S. (2022), “Can assessment-specific marking criteria and electronic comment libraries increase student engagement with assessment and feedback?”, Assessment & Evaluation in Higher Education, Vol. 47, No. 7, pp. 1071-1086. https://doi.org/10.1080/02602938.2021.1986468
- Gregory, J. and Levy, P. (2010), “Employee coaching relationships: Enhancing construct clarity and measurement”, Coaching: An International Journal of Theory, Research and Practice, Vol. 3, No. 2, pp. 109-123. https://doi.org/10.1080/17521882.2010.502901
- Harkin, B. and Nerantzi, C. (2021), “It Helps if You Think of Yourself as a Radio Presenter! A Lefebvrian Commentary on the Concerns, Conflicts and Opportunities of Online Block Teaching”, International Journal of Management and Applied Research, Vol. 8, No. 1, pp. 18-35. https://doi.org/10.18646/2056.81.21-002
- Harkin, B., Yates, A., Riach, M., Clowes, A., Cole, S. and Cummings, C. (2021), “‘I Want to See People’s Reactions to the Selfies’: A Lefebvrian Analysis of the Impact of Social Networking Sites on Physical, Mental, and Emotional Functioning”, Social Science Computer Review, Vol. 40, No. 3, pp. 788-808. https://doi.org/10.1177/0894439321994222
- Harkin, B., Yates, A., Wright, L. and Nerantzi, C. (2022), “The Impact of Physical, Mental, Social and Emotional Dimensions of Digital Learning Spaces on Student’s Depth of Learning: The Quantification of an Extended Lefebvrian Model”, International Journal of Management and Applied Research, Vol. 9, No. 1, pp. 50-73. https://doi.org/10.18646/2056.91.22-003
- Hattie, J. and Timperley, H. (2007), “The Power of Feedback”, Review of Educational Research, Vol. 77, No. 1, pp. 81–112. https://doi.org/10.3102/003465430298487
- Heider, F. (1958), The Psychology of Interpersonal Relations. John Wiley & Sons.
- Henderson, M. and Phillips, M. (2015), “Video-based feedback on student assessment: Scarily personal”, Australasian Journal of Educational Technology, Vol. 31, No. 1, pp. 51-66. https://doi.org/10.14742/ajet.1878
- Hounsell, D. (1987), “Chapter 10: Essay writing and the quality of feedback”, in Richardson, J. T. E., Eysenck, M. W. and Warren Piper, D. (eds), Student Learning: Research in Education and Cognitive Psychology, Milton Keynes: Society for Research into Higher Education & Open University Press, pp. 109-119
- Hounsell, D. (2003), “Student feedback, learning and development”, in Slowey, M. and Watson, D. (eds), Higher Education and the Lifecourse, SRHE & Open University Press, pp. 67-78
- Huisman, B., Saab, N., van den Broek, P. and van Driel, J. (2019), “The impact of formative peer feedback on higher education students’ academic writing: a Meta-Analysis”, Assessment & Evaluation in Higher Education, Vol. 44, No. 6, pp. 863-880. https://doi.org/10.1080/02602938.2018.1545896
- Huisman, B., Saab, N., van Driel, J. and van den Broek, P. (2018), “Peer feedback on academic writing: undergraduate students’ peer feedback role, peer feedback perceptions and essay performance”, Assessment & Evaluation in Higher Education, Vol. 43, No. 6, pp. 955-968. https://doi.org/10.1080/02602938.2018.1424318
- Hyland, F. (2000), “ESL writers and feedback: giving more autonomy to students”, Language Teaching Research, Vol. 4, No. 1, pp. 33-54. https://doi.org/10.1177/136216880000400103
- Johnson, R. E., Chang, C. H. and Lord, R. G. (2006), “Moving from cognition to behavior: What the research says”, Psychological Bulletin, Vol. 132, No. 3, pp. 381-415. https://doi.org/10.1037/0033-2909.132.3.381
- Kaushik, V. and Walsh, C. A. (2019), “Pragmatism as a Research Paradigm and Its Implications for Social Work Research”, Social Sciences, Vol. 8, No. 9, Article 255. https://doi.org/10.3390/socsci8090255
- Kulik, J. A. and Kulik, C.-L. C. (1988), “Timing of Feedback and Verbal Learning”, Review of Educational Research, Vol. 58, No. 1, pp. 79-97. https://doi.org/10.3102/00346543058001079
- Langan, A. M. and Harris, W. E. (2019), “National student survey metrics: where is the room for improvement?”, Higher Education, Vol. 78, No. 6, pp. 1075-1089. https://doi.org/10.1007/s10734-019-00389-1
- Lefebvre, H. (1991), The Production of Space (D. Nicholson-Smith, Trans.). Wiley-Blackwell.
- Lewis, M. (2007), “Individuation in Levinas and Heidegger: The One and the Incompleteness of Beings”, Philosophy Today, Vol. 51, No. 2, pp. 198-215. https://doi.org/10.5840/philtoday200751249
- Li, J. and De Luca, R. (2014), “Review of assessment feedback”, Studies in Higher Education, Vol. 39, No. 2, pp. 378-393. https://doi.org/10.1080/03075079.2012.709494
- Maggs, L. A. (2014), “A case study of staff and student satisfaction with assessment feedback at a small specialised higher education institution”, Journal of Further and Higher Education, Vol. 38, No. 1, pp. 1-18. https://doi.org/10.1080/0309877X.2012.699512
- Mulliner, E. and Tucker, M. (2017), “Feedback on feedback practice: perceptions of students and academics”, Assessment & Evaluation in Higher Education, Vol. 42, No. 2, pp. 266-288. https://doi.org/10.1080/02602938.2015.1103365
- Nicol, D. J. and Macfarlane‐Dick, D. (2006), “Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice”, Studies in Higher Education, Vol. 31, No. 2, pp. 199-218. https://doi.org/10.1080/03075070600572090
- O’Donovan, B. M., den Outer, B., Price, M. and Lloyd, A. (2021), “What makes good feedback good?”, Studies in Higher Education, Vol. 46, No. 2, pp. 318-329. https://doi.org/10.1080/03075079.2019.1630812
- Office for Students (2019), National Student Survey 2019 Results. Available from: https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/nss-2019-results/ [Accessed on 9 November 2022]
- Office for Students (2020), National Student Survey. Available from: https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/ [Accessed on 9 November 2022]
- Office for Students (2022), National Student Survey: NSS. Available from: https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/review-of-the-nss/ [Accessed on 9 November 2022]
- Panadero, E. and Alqassab, M. (2019), “An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading”, Assessment & Evaluation in Higher Education, Vol. 44, No. 8, pp. 1253-1278. https://doi.org/10.1080/02602938.2019.1600186
- Parkes, M. and Fletcher, P. (2017), “A longitudinal, quantitative study of student attitudes towards audio feedback for assessment”, Assessment & Evaluation in Higher Education, Vol. 42, No. 7, pp. 1046-1053. https://doi.org/10.1080/02602938.2016.1224810
- Ritter, S. M. and Dijksterhuis, A. (2014), “Creativity: the unconscious foundations of the incubation period”, Frontiers in Human Neuroscience, Vol. 8, Article 215. https://doi.org/10.3389/fnhum.2014.00215
- Sadler, D. R. (1989), “Formative assessment and the design of instructional systems”, Instructional Science, Vol. 18, pp. 119-144. https://doi.org/10.1007/BF00117714
- Suzić, R., Ćirković-Miladinović, I. and Dabic, T. (2013), “Student-Teacher Communication in University Teaching”, Sino-US English Teaching, Vol. 10, No. 1, pp. 65-71
- Tian, L. and Zhou, Y. (2020), “Learner engagement with automated feedback, peer feedback and teacher feedback in an online EFL writing context”, System, Vol. 91, Article 102247. https://doi.org/10.1016/j.system.2020.102247
- Vattøy, K.-D., Gamlem, S. M. and Rogne, W. M. (2021), “Examining students’ feedback engagement and assessment experiences: a mixed study”, Studies in Higher Education, Vol. 46, No. 11, pp. 2325-2337. https://doi.org/10.1080/03075079.2020.1723523
- Weaver, M. (2006), “Do Students Value Feedback? Student Perceptions of Tutors’ Written Responses”, Assessment & Evaluation in Higher Education, Vol. 31, No. 3, pp. 379-394. https://doi.org/10.1080/02602930500353061
- Wood, D. F. (2003), “Problem based learning”, British Medical Journal, Vol. 326, No. 7384, pp. 328-330. https://doi.org/10.1136/bmj.326.7384.328