We aren’t the only ones to notice the Augar Review’s attempt to redraw the lines over institutional autonomy. There is now an expectation that universities will no longer have complete freedom over their own portfolios, but will instead focus on courses that create valuable outcomes. Given that much of the sector’s income is funded, one way or another, by taxpayers, this is probably inevitable.
However, it is the definition of ‘valuable outcomes’ where things become tricky, for three main reasons:
i) There are multiple outcomes for graduates that go way beyond financial return, including the development of higher-level skills and specialist knowledge, overall employability and increased career options. Some of these outcomes, such as the development of social and cultural capital or delivering meaningful work, are intangible and difficult to track and measure. And of course, outcomes aren’t limited to the graduates themselves; there are also other beneficiaries of higher education, including employers, the economy and wider society.
ii) There’s such variation between academic subjects – e.g. humanities vs sciences – that applying the same metrics and weightings to each is illogical and unhelpful. In the worst case, where this approach is used to make direct comparisons between different subjects, it becomes dangerous.
iii) The UK HE sector isn’t homogeneous and so shouldn’t be measured as such. Whilst many institutions lack distinctiveness, universities generally operate in different competitive environments, serve different target markets and pursue different purposes. So each needs customised metrics that fit its own mission, culture and strategy.
It’s quite possible that a significant number of universities will be prompted to review their UG and PG portfolios as a result of Augar. At the very least they will be mindful of the published recommendations; more likely, their portfolio review will be directly shaped by them. So it is all the more important that such a project takes a balanced scorecard approach to measuring value – one that doesn’t rely on financial data alone, but also considers other measures.
The most obvious addition alongside financial data is a market perspective, looking at market share and position, student feedback, NSS scores, brand awareness, league table performance and so on. These will reveal whether the university is offering courses that attract students and whether it understands what they need from their learning experience. However, this comes with a caveat: like financial data, most of these metrics look backwards, focusing on the outcomes of past actions rather than giving insight into the chances of future success.
Equally important, then, are leading indicators – measures that can be used to predict future outcomes. They might include:
- Internal process, covering portfolio architecture (the size and shape of the portfolio, as well as course structures and delivery routes) and quality measures such as accreditation, teaching and research quality, and capacity utilisation. These determine how you do something, and how well you do it.
- People, including the level of stakeholder engagement and team knowledge and skills. There’s little point in planning a new course to meet employer or market need if you can’t attract or retain the academic talent to develop and lead it.
By giving equal attention to market, process and people indicators, your portfolio review project will give you a complete – and more useful – view of your portfolio’s performance.
If you are preparing for your institution’s future and would like a conversation about reviewing your portfolio, contact email@example.com.