Indirect measures

Indirect measures of student learning are evidence that has been shown to correlate with student learning, while not measuring it directly.

While direct measures provide explicit evidence of student learning, indirect measures provide evidence that suggests or implies that student learning has taken place. Most indirect measures capture evidence at a single point in time and therefore do not necessarily offer insight into the ‘value added’ by the education or intervention. However, they have the advantage of being relatively straightforward to collect in a standardised form that can enable comparisons across and between cohorts.

Indirect measures typically relate either to institutional measures of student progression (e.g. pass rates, attrition rates) or to the perspectives of students and other stakeholders (e.g. unsolicited student feedback, student evaluation scores, employer feedback). Examples of indirect measures of student learning are listed below. Where possible, links to relevant measurement tools are provided.
Alternative student evaluation surveys
Institutional student evaluation questionnaires are widely used by universities across the world as key indicators of academic teaching achievement. However, many such questionnaires have been designed ‘in house’ and some are reported to “lack any evidence of reliability or validity, include variables known not to be linked to student performance, and do not distinguish well or consistently between teachers and courses” (Gibbs, 2014). Summarised below are details of two alternative and highly regarded survey instruments that could be used by candidates to collect student evaluations in relation to a specific programme, course or activity:
  • Student Evaluation of Educational Quality (SEEQ) captures student evaluations of 35 aspects of effective teaching in relation to their course or teacher. A version of the SEEQ questionnaire is reproduced in the appendices of Nash (2012).
  • Student Assessment of Learning Gains (SALG) is a survey tool which, according to its authors (Seymour et al., 2000), “avoids critiques of the teacher, the teacher’s performance, and of teaching methods that are unrelated to student estimates of what they have gained from them”, focusing instead on “the learning gains that students perceive they have made” in terms of the learning outcomes of the course or activity.
Gibbs, G. (2014). You can measure and judge teaching. SEDA: 53 Powerful Ideas All Teachers Should Know About, September 2014. [link]
Nash, J. L. (2012) Using Student Evaluations at a Cambodian University to Improve Teaching Effectiveness, Lehigh University, Theses and Dissertations. Paper 1384. [link]
Seymour, E., Wiese, D., Hunter, A., & Daffinrud, S. M. (2000). Creating a better mousetrap: On-line student assessment of their learning gains. In National Meeting of the American Chemical Society. [link]

Self-reported student learning gains
Self-efficacy, or a student’s self-belief in their own abilities, has been shown to be a strong predictor of student learning and motivation (Zimmerman, 2000). Pre/post survey data that demonstrate improvements in student self-efficacy can be used within a promotion case to demonstrate, for example, the impact of a course or new pedagogy. A generic self-efficacy questionnaire (the Motivated Strategies for Learning Questionnaire) is available from Pintrich and DeGroot (1990).

Targeted self-efficacy questionnaires are also available which often focus on specific skills and attitudes, such as entrepreneurship (Lucas, 2014), or within specific disciplines, such as engineering design (Carberry et al., 2010).
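As a purely illustrative sketch of how pre/post survey data of this kind might be summarised, the short Python example below computes cohort averages and per-student gains from hypothetical Likert-scale responses; the data, the 1–5 scale and the variable names are assumptions made for illustration and are not drawn from any of the instruments cited here.

```python
# Illustrative sketch only: summarising hypothetical pre/post self-efficacy
# survey scores (1-5 Likert scale assumed). Items and figures are invented.
from statistics import mean

def mean_score(responses):
    """Average one student's Likert responses across all survey items."""
    return mean(responses)

# Each entry pairs one student's pre-course and post-course item responses.
pre_post = [
    ([2, 3, 2, 3], [4, 4, 3, 4]),   # student A
    ([3, 3, 4, 3], [4, 5, 4, 4]),   # student B
    ([2, 2, 3, 2], [3, 3, 3, 3]),   # student C
]

pre_means = [mean_score(pre) for pre, _ in pre_post]
post_means = [mean_score(post) for _, post in pre_post]
gains = [post - pre for pre, post in zip(pre_means, post_means)]

print(f"Cohort mean (pre):  {mean(pre_means):.2f}")
print(f"Cohort mean (post): {mean(post_means):.2f}")
print(f"Mean per-student gain: {mean(gains):.2f}")
```

In practice the same pre/post comparison would be reported for whichever validated instrument (e.g. MSLQ or a discipline-specific questionnaire) the candidate has used.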
Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary educational psychology, 25(1), 82-91. [link]
Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40. [link]
Lucas, W. A. (2014, June). Using the CDIO syllabus 2.0 to assess leadership self-efficacy. Paper presented at the 10th International CDIO Conference, Barcelona, Spain. [link]
Carberry, A. R., Lee, H. S. & Ohland, M. W. (2010). Measuring engineering design self-efficacy. Journal of Engineering Education, 99(1), 71-79. [link]
Unsolicited/solicited student feedback
As a complement to student evaluation survey data, solicited or unsolicited feedback from students/graduates – for example an email from a student describing the positive impact on their learning, progress and/or engagement made by the candidate – can be used to support the teaching element of promotion cases.
Student prizes and achievements
Indirect evidence of student learning can also include the achievements of students and graduates. Although, in most cases, it is very difficult to attribute such achievements to the learning opportunities and/or support provided by a particular academic, some exceptions may exist. For example, a promotion candidate could include details of the number of student teams from an entrepreneurship course who have since established a successful startup business.
Measures of student progression and learning typically collected by the university
Most universities across the world routinely collect indirect measures of student learning at an institutional level. Where disaggregated at the course or programme level, these data can be used to support a candidate’s promotion case. However, it is often difficult to attribute positive changes in such institutional measures directly to one particular individual, particularly where they do not hold a leadership position in a course or programme. Typical measures include the following (an illustrative computation from cohort data is sketched after the list):
  • student attrition/retention rates
  • student satisfaction in relation to specific courses, collected via survey and written feedback
  • pass rates and degree classifications
  • employer assessment of graduate capabilities, collected via survey
  • post-graduation employment rates and salary scales
  • graduate feedback about their educational experience, collected via survey
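As referenced above, the short Python sketch below illustrates one way progression rates of this kind might be computed from enrolment records disaggregated by programme and cohort year; the record structure, programme names and figures are invented for illustration and do not describe any particular institution’s data.

```python
# Illustrative sketch only: computing year-on-year progression rates from
# hypothetical enrolment records. Field names and figures are invented.
from collections import defaultdict

# Each record: (programme, cohort_year, students_enrolled, students_progressed)
records = [
    ("Mechanical Engineering", 2010, 120, 100),
    ("Mechanical Engineering", 2011, 125, 116),
    ("Marine Technology",      2010,  80,  66),
    ("Marine Technology",      2011,  82,  75),
]

progression = defaultdict(dict)
for programme, year, enrolled, progressed in records:
    progression[programme][year] = progressed / enrolled

for programme, by_year in progression.items():
    for year, rate in sorted(by_year.items()):
        print(f"{programme} ({year}): {rate:.0%} progressed")
```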
Case study
Professor Tom Joyce, Newcastle University, UK
In 2011, Dr Tom Joyce submitted a successful case for promotion to full professorship at Newcastle University in the UK, on the basis of a balanced teaching and research portfolio. His evidence for research achievement included high-impact publications, research grant income and distinguished awards in his research field of orthopaedic engineering.
Dr Joyce’s teaching achievements were demonstrated through a blend of two sources of evidence:
  • peer-reviewed evidence (such as institutional and national teaching awards, peer-reviewed pedagogical articles and the inclusion of his teaching activities in published case studies of good practice) as indicators of scholarly teaching and pedagogical influence beyond his institution.
  • details of a major curricular innovation, with associated improvements in student progression following its implementation, as indirect measures of student learning; these are described below.
One element of his promotion case focused on the design and impact of Engineering Teams, a scheme implemented and evaluated by Dr Joyce in response to concerns about attrition rates among first-year undergraduate students in the engineering school. Engineering Teams sought to develop a culture of peer learning and support across the student cohort during the first year of study, thereby improving engagement, the quality of learning, and (ultimately) student progression. As Dr Joyce explained, “we put [all incoming] students into pre-assigned teams of five and we gave them tasks to do over the course of their first year which meant that they had to work together and from this they helped each other to learn and developed friendships which often lasted for the whole of their degrees”.

Using both survey and focus-group data, he conducted (i) an analysis of the design and delivery of Engineering Teams, identifying a number of constraints in the scheme that were subsequently addressed in the years that followed, and (ii) a review of the impact of Engineering Teams on the student cohort. A major indicator of the impact of Engineering Teams, as highlighted in the promotion case, was the significant improvement in student progression rates following its introduction: from 83% to 93%. As Dr Joyce noted, “Going from a situation where we were ‘losing’ almost 1 in 5 students to one in which we were only ‘losing’ 1 in 11 conveyed a very strong message I thought, particularly when there was no additional financial expenditure by the School. These numbers were also backed up by positive student feedback which we gathered over the first year and at the beginning of second year”.
Other indirect measures relating to programme/institutional impact
Other ‘indirect measures’ can be used to demonstrate both programme- and institutional-level impact in teaching and learning. Examples could include:
  • Assessments by industry partners and/or graduate employers, such as (i) surveys capturing the perceived capabilities of graduates from particular programmes/universities compared to peer institutions or to previous generations of graduates, or (ii) qualitative assessments of student performance on industry-linked curricular experiences or placements.