Are league tables a useful measurement of the success (or otherwise) of university senior leadership teams? The truth is that ‘it depends’. And, frustratingly, ‘it’ probably ‘depends’ rather more than we might first think.
Let’s start at ‘the top’.
At least 20 institutions will have the goal of ‘being in the top 10 in the UK’ in the objectives of their senior leadership teams.
If you are on the senior leadership of one of the half a dozen or so institutions which score so highly that they are likely to maintain their top 10 placement no matter what (Oxford, Cambridge, UCL, etc) then UK league table position probably isn’t the best measure of your performance. Your focus would be firmly on international league table position or setting the agenda in a particular key performance area.
Then there are the ‘top 10 candidates’ all seeking to fill the remaining 3 or 4 slots. There are over a dozen in this pack and they are very tightly bunched. This means that slight variations in performance can put you in, or out of, the top 10 pretty swiftly. Notice Exeter’s rise from languishing in the 30s to a steady top 10 position or thereabouts in most of the league tables, and then note this year’s drop to 14 in the Times league table. Has Exeter’s performance declined or have others outpaced it? Consider Loughborough’s rise from the teens to top 6 in the THE table of tables and “top 11” in all three league tables. What lies behind Loughborough’s journey, and can it sustain this position over the coming years? Or consider the University of Glasgow, which has seen a steady rise in performance in both UK and international tables and has just broken into the top 20 for the first time (117th in 2014 to 88th in 2017 in THE; 29th to 20th in the Sunday Times). Each institution’s position will tell a complex story of strategy, investment, focus, competition and reputation.
Governing bodies of the ‘top 10 candidates’ are hungry for metrics by which to judge the performance of their senior leadership teams (and are now under media and political scrutiny to justify the remuneration of those leaders). These governing bodies will take a keen interest as to whether they get in, stay in or fall out of the top 10. In this part of the sector UK league tables matter, a lot.
And it’s the same elsewhere in the sector, but the measure won’t be ‘top 10’. Perhaps it’s ‘top 50’ or best in your class or region or best for a particular measure. It would be an unusual governing body that paid no attention to league table performance as they consider whether their senior leadership team is doing a good job (and what they should be paid for that job).
But is UK league table position really a robust measure of senior leadership performance?
Surely the answer must be yes? The measures the league tables use are undoubtedly key performance indicators for university performance. Taken together these measures give us a pretty rounded view of performance (although one must note the lag, as data in some areas may be a couple of years old). They tell us how hard it is to get into an institution, how satisfied students are with their course, the quality of degrees they get, the quality of research being undertaken in the institution, how the institution is investing in infrastructure, and the employability of graduates. If this were a business we would have a pretty clear picture of the quality of products and services, the demand from customers and how happy those customers are. It also tells us how each institution compares to others on each and every measure and for each subject area. The frustrating truth, however, is that ‘the devil is in the detail’.
The problem isn’t in the individual measures. Leadership should undoubtedly be judged for successfully increasing research capacity or quality, or increasing student satisfaction. The individual measures are robust. The problem comes in how the measures are combined and compared to produce a league table position. One might assume that if you are genuinely doing well in these measures then the league table position will follow. However, the weightings, methods of aggregation, z-scoring and so on make the league tables complex and all slightly different, so trying to predict or replicate a position is virtually impossible.
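To see why the aggregation step matters so much, consider a minimal sketch (entirely hypothetical institutions, scores and weightings — real league table methodologies are far more elaborate) of how z-scoring and weighting can reorder the very same underlying data:

```python
# Illustrative only: three hypothetical institutions scored on two metrics.
# Real league tables use many more measures and different methodologies.
from statistics import mean, pstdev

institutions = {
    "Alpha": {"satisfaction": 85, "research": 60},
    "Beta":  {"satisfaction": 80, "research": 75},
    "Gamma": {"satisfaction": 78, "research": 90},
}

def z_scores(metric):
    """Standardise one metric across institutions (z-scoring)."""
    values = [scores[metric] for scores in institutions.values()]
    mu, sigma = mean(values), pstdev(values)
    return {name: (scores[metric] - mu) / sigma
            for name, scores in institutions.items()}

def rank(weights):
    """Combine weighted z-scores into an ordered 'league table'."""
    zs = {metric: z_scores(metric) for metric in weights}
    totals = {name: sum(w * zs[m][name] for m, w in weights.items())
              for name in institutions}
    return sorted(totals, key=totals.get, reverse=True)

# The same raw data yields different tables under different weightings.
print(rank({"satisfaction": 0.7, "research": 0.3}))
print(rank({"satisfaction": 0.3, "research": 0.7}))
```

With satisfaction weighted heavily, Alpha tops the table; tilt the weights towards research and the order reverses — no institution’s performance changed, only the compilers’ choices.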
Take TEF for example – this should be used by students (and parents!) to inform choice – but do students realise that the data they are looking at is old (4-5 years in some cases) and based on a complex benchmarking methodology? Surely it would have been sensible to weight the most recent of the three years more heavily, to reflect how an institution is improving? Do we need new measures that are more a reflection of ‘value for money’ (not that easy of course, but perhaps a combination of value added – tariff vs output – and contact time)?
Despite these inevitable vagaries, what is important is the influence the tables have, both in the UK and internationally, on the reputation of UK HE as a whole, and their impact on the attractiveness of the UK as a place to study or work – surely leaders have a ‘responsibility’ to do well for their own institutions and for the sector as a whole?
Surely senior leaders are responsible for either a rise (or fall) in league table position? Or does it depend on what metrics drove that rise or fall?
In some areas, one might question their accountability. Can they be accountable for changes in the marketplace? They clearly cannot control the decline in the number of 18-year-olds in the population, or the increase in the number of students entering without traditional A levels (e.g. BTECs), which is leading to a reduction in entry tariff. However, they are accountable for their business planning and market intelligence. It’s their job to use market intelligence and demographic information to plan how student numbers may rise or fall, whether they will increase their numbers and how they will maintain quality. If they haven’t used market intelligence to inform their growth plans then they have failed in their duties. They are not responsible for external drivers, but they are responsible for horizon scanning and effective planning. They should think about their risk appetite (what risks are they prepared to take?), how they will mitigate those risks, and what is going to make them stand out from the competition.
They can also ensure they are paying close attention to how their academic subject areas are performing as this is where the teaching and research take place (the reason they exist, if you like), and certain subjects with large student cohorts can have a significant impact on the overall league table performance. Good leaders will plan effectively and prioritise resources to areas needing help to improve, or to continue to support those achieving excellence.
But what if a drop in position isn’t due to a fall in any of the measures, but rather an increase in a competitor’s performance? Can the senior leadership be held accountable for that? Perhaps. Could they have done more to understand what the competition is doing? What they are investing in and how they are focusing their energy? Could they be benchmarking performance throughout the year and monitoring their competitors closely? Certainly most of the ‘top 10 candidates’ will be doing this on a regular basis, and this intelligence and benchmarking should be part of senior leadership work in all parts of the sector.
An interesting question is whether leaders in an institution could be doing all the right things for the long-term future of their institution but experience a drop in position in a particular year whilst they grow or invest. Clearly, the answer could be yes. In this scenario the role of leaders is to anticipate that this fall will happen (due to significant growth in numbers, major development work taking place on the campus, a merger or significant structural change, significant curriculum change, and so on), communicate the position and show what they are doing about it long term to address and improve it. Too often the league table result leads to a knee-jerk reaction on the day it is published.
A wise governing body won’t just judge performance on the basis of one year’s results (especially given the lag in data) but will focus on the direction of travel and will understand how change can negatively impact on league table position in the short term. If you are on the governing body of an institution which had been steadily rising up the league tables but has experienced a drop over the past 2-3 years, you would be rightly concerned as to the longer-term health of the institution and whether your leader is focussing on the right things and keeping up with the competition. Assurance should be found in the clarity of the institution’s strategy, the performance of individual metrics and the market intelligence of the leaders – together these should enable a richer picture as to whether your senior leaders have done a good job (and warrant their salary package) than the league tables alone can provide.
One might liken it to corporate shareholders who look at a company’s longer-term market position, strategy and performance in key areas rather than the vagaries of one year. Is Tesco worthy of investment despite a tumultuous year or two? Clearly so, given its size, scale and market position. Is Exeter still an excellent university despite its drop in position? Clearly so, given its Gold TEF, high NSS scores and growing research capacity.
In summary, the frustrating truth is ‘it depends’. League table position, over time, can be an indicator of leadership success but ‘the devil’ is in the detailed picture found in each institution’s strategy, individual metrics and market intelligence.