Rethinking student experience
Part One – Where did it go so wrong?
The National Student Survey (NSS), launched 15 years ago, set out to give the higher education sector insight into students’ satisfaction with their experience at university. Its website states that the NSS’s purpose is to gather “opinions from students about their time in higher education, asking them to provide honest feedback on what it has been like to study on their course at their university/college”. Since its inception, the NSS has become one of the key benchmarks on which universities are judged, also playing a significant role as a metric in the Teaching Excellence Framework (TEF) and in sector league tables.
It won’t be news to anybody that the NSS has had a controversial reception over the years, most notably when TEF ratings became linked to an institution’s ability to charge higher fees, prompting a national NSS boycott by students’ unions. Despite this, universities have continued to give it credence, largely because of the financial risk of poor scores in a competitive market.
And yet, as the sector becomes more reliant on NSS results, questions about the survey’s legitimacy and reliability haven’t disappeared, not least because the 2020 results suspiciously showed no impact from two major events: prolonged industrial action and a global pandemic. This raises the question: if the complete absence of physical lectures doesn’t shift the scores, what will? And what else might the NSS be failing to highlight?
There are three key reasons why the NSS is not fit for purpose:
Firstly, the NSS takes an extremely narrow view of the student experience. All 26 core questions relate directly to the academic, on-course experience. But student satisfaction with academic provision does not equal satisfaction with a broad and holistic student experience. You might strongly agree that “the course is well-organised and running smoothly” (Q15) yet be unable to access mental health support because of long wait times, struggle to pay expensive rent, or be priced out of the extra-curricular activities that were sold to you on the open day. The key student issues aren’t whether “staff are good at explaining things” (Q1); they are the mental health crisis, the quality and cost of accommodation, and sexual assault on campus. The data the NSS produces therefore gives a superficial snapshot of one narrow element of the student experience. The danger for individual institutions is that this view can stop their senior leadership and governing body from digging deeper into what’s happening on the ground, outside the lecture theatres.
Secondly, the NSS results do not show active dissatisfaction. The percentages you see are the proportion of students who either agreed or strongly agreed with the statements, which risks us seeing only the views of a homogeneous majority. The higher education experience is often brilliant for those with enough money, time and social capital to navigate all it has to offer. But we also know that underrepresented student groups have a disproportionately less satisfying and enjoyable experience at university; in many cases, the system is simply not built to support them. Perhaps, then, good NSS scores reflect the type of students an institution attracts rather than offering an accurate portrayal of the student experience. A university relying only on its NSS scores will gloss over active dissatisfaction and won’t be focusing on the students who are falling through the gaps.
Finally, like most blunt measurement tools, the NSS can bring unintended consequences. In this case, some universities might mistakenly focus their time and attention on improving their NSS scores rather than on improving the student experience itself. The NSS has been put on a sector pedestal, influencing league table positions, TEF ratings and, in turn, student recruitment. It’s easy to fall into the trap of focusing on external perception rather than internal experience. By creating so-called “measurables” in the academic experience, we begin to overvalue what we can measure and undervalue what we can’t.
The NSS’s narrow view of the student experience, its blindness to active dissatisfaction and the culture of prioritising metrics over experiences make it clearer than ever that the NSS is not fit for purpose, especially when the next academic year will look so vastly different from any student experience that has come before. So what do we do next?
See: Part Two – How should we put it right?
Eve Alcock is Former President of the University of Bath’s Students’ Union and an HE policy enthusiast.