Measures of student learning

Measures of student learning can be either direct or indirect. Details and examples of both types of measures are given below.

Direct measures of student learning
Direct measures of student learning capture the knowledge/skills/attitudes of the student cohort, enabling evaluation of student performance, either against a defined benchmark or through changes over time.
While ‘direct measures’ can provide robust evidence of teaching achievement within particular courses or programmes, they are typically resource intensive, requiring time and expertise to design and collect. Such measures are therefore not routinely used within promotion cases. Where included, however, it is important for candidates to contextualise the data by describing the design and goals of the activity. As summarised below, direct measures of student learning that could be presented as part of an academic promotion case tend to fall into two categories:

Direct measures of learning over time

Measures which assess learning over time, often using before/after testing of student knowledge/abilities.
The direct measures of learning over time most likely to be appropriate for inclusion in a promotion case are those involving pre/post testing of students, for example of their conceptual understanding. One well-documented example of such pre/post testing comes from the Massachusetts Institute of Technology (MIT), where a new active learning approach was adopted within an electromagnetics course with the aim of improving students’ conceptual understanding and reducing failure rates. Conceptual questions from standardised tests were administered to students both before and after the new course, and the results were compared with control group data from students taught under the previous, more traditional, delivery style. The results demonstrated that the new course significantly improved students’ conceptual understanding (Dori and Belcher, 2005). The questionnaire used in this example can be accessed from Dori et al. (2007).
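If pre/post data of this kind are included in a promotion case, one widely used way of summarising them – drawn from the physics education literature rather than from the MIT study itself – is the class-average normalised gain, which expresses the improvement in average test scores as a fraction of the maximum improvement available:

\[ \langle g \rangle = \frac{\overline{S}_{\text{post}} - \overline{S}_{\text{pre}}}{100 - \overline{S}_{\text{pre}}} \]

where \(\overline{S}_{\text{pre}}\) and \(\overline{S}_{\text{post}}\) are the class-average pre- and post-test scores expressed as percentages. For example, a cohort averaging 40% before a course and 70% afterwards would show a normalised gain of (70 − 40)/(100 − 40) = 0.5.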

Concept tests, such as the Force Concept Inventory (Hestenes and Halloun, 1995) – available from Mazur (1997) – are widely used in engineering and physics schools across the world to evaluate students’ conceptual understanding. Sample concept tests covering a wide range of science, engineering and mathematics topics are also widely available online.
An alternative direct measure of student learning is the student learning journal, in which students are asked to reflect on the course and their learning on a weekly basis. An approach to designing and evaluating student learning journals is provided in Shiel and Jones (2003).
Dori, Y. J. & Belcher, J. (2005). How does technology-enabled active learning affect undergraduate students' understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243-279. [link]
Dori, Y. J., Hult, E., Breslow, L. & Belcher, J. W. (2007). How much have they retained? Making unseen concepts seen in a freshman electromagnetism course at MIT. Journal of Science Education and Technology, 16(4), 299-323. [link]
Hestenes, D., & Halloun, I. (1995). Interpreting the force concept inventory. The Physics Teacher, 33(8), 502-506. [link]
Mazur, E. (1997). Peer instruction: A user’s manual. Upper Saddle River, NJ: Prentice Hall. [link]
Shiel, C., & Jones, D. (2003). Reflective Learning and Assessment: a Systematic Study of Reflective Learning as Evidenced in Student Learning Journals. HEAC, 1-32. [link]

Direct measures of learning at a single point in time

Measures which assess learning at a single point in time, typically through comparisons against a control group, norm or benchmark. The validity of these techniques rests on the assessment instrument capturing the relevant learning outcomes. Most also require a benchmark against which to compare the data collected for the student cohort, such as national average scores or performance of students in control groups.
Suitable techniques for measuring learning at a single point in time include:
  • student performance in institutional examinations and assignments, which can be used, in particular, to demonstrate the positive impact of a pedagogical or curricular change as part of a promotion case (a brief sketch of how such a comparison might be made follows this list);
  • products/outputs produced by students during a course or programme, such as final-year projects, concept maps or oral exams;
  • student performance in standardised tests, capturing either generic learning outcomes, through tools such as the Collegiate Learning Assessment (Klein et al., 2007), or discipline-specific capabilities, through tools such as AHELO (OECD, 2009). Although such tools are primarily designed for comparisons between institutions and countries, the data could also be disaggregated by programme to support a candidate’s case for promotion.
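As a minimal sketch of the kind of comparison described above – with entirely hypothetical scores, and assuming the open-source SciPy library is available – the snippet below compares the examination scores of a cohort taught with a new approach against those of a control cohort using a two-sample t-test. In practice, the appropriate statistical test depends on the design of the study and the nature of the data.

```python
# Minimal sketch: comparing one cohort's scores against a control group.
# All scores below are hypothetical; in practice they would come from
# institutional records or a standardised test.
from statistics import mean
from scipy import stats

new_course_scores = [68, 72, 75, 81, 64, 77, 70, 83, 69, 74]  # cohort taught with the new approach
control_scores = [61, 66, 70, 58, 73, 64, 67, 60, 71, 63]     # comparison/control cohort

print(f"New course mean: {mean(new_course_scores):.1f}")
print(f"Control mean:    {mean(control_scores):.1f}")

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(new_course_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```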
Klein, S., Benjamin, R., Shavelson, R. & Bolus, R. (2007). The Collegiate Learning Assessment: Facts and Fantasies. Evaluation Review, 31(5), 415-439. [link]
Organisation for Economic Co-operation and Development (OECD). (2013). Synergies for better learning: An international perspective on evaluation and assessment. [link]
Case study
Professor Craig Forest, Georgia Institute of Technology, US
In 2015, Dr Forest submitted a successful case for promotion to Associate Professor at Georgia Tech. Of the five ‘noteworthy accomplishments’ listed in his application, four related to research achievements within his field of biomolecular science and one related to achievements in education. Dr Forest noted that, as an academic on a tenure track at a research-led institution, he considered carefully the decision to include an educational component in his promotion case.

A wide range of evidence sources was used to demonstrate Dr Forest’s institutional impact and influence in teaching and learning, including:
  • Professional activities: the educational portion of the promotion case centred on a description of three activities: (i) the co-founding of the ‘InVenture Prize’, a university invention competition, (ii) the establishment of the ‘Invention Studio’, an open-access space for student creativity, innovation and design, and (iii) the redesign of an engineering capstone design course.
  • Peer assessments: including national press coverage of the educational activities developed by Dr Forest, a peer-reviewed pedagogical publication and details of the funds raised for the establishment of the ‘Invention Studio’.
  • Indirect measures of student learning: including estimates of the number of companies founded by students engaged in the entrepreneurial and innovation activities established by Dr Forest.
  • Direct measures of student learning: including an evaluation of the quality of student projects from the multi-disciplinary final year design course established by Dr Forest, as described below.
Building on an existing capstone design experience within the engineering school – where teams of students from a single discipline were tasked with solving authentic industry problems – Dr Forest led the creation of a new multi-disciplinary capstone experience, bringing together mechanical and biomedical engineering students to work on these ‘real world’ problems. An evaluation, based on scores awarded by a judging panel of industry partners, compared the quality of the projects developed by these multi-disciplinary teams with those of their mono-disciplinary peers. The evaluation (Hotaling et al., 2012) concluded that “the [multi-disciplinary] teams’ holistic performance in innovation, utility, analysis, proof of concept, and communications skills was superior to that of the mono-disciplinary counterparts”.
Hotaling, N., Fasse, B. B., Bost, L. F., Hermann, C. D., & Forest, C. R. (2012). A quantitative analysis of the effects of a multidisciplinary engineering capstone design course. Journal of Engineering Education, 101(4), 630-656. [link]

Indirect measures of student learning
Indirect measures of student learning provide evidence that has been shown to correlate with student learning, while not measuring it directly.
While direct measures provide explicit evidence of student learning, indirect measures provide evidence that suggests or implies that student learning has taken place. Most indirect measures capture evidence at a single point in time and therefore do not necessarily offer insight into the ‘value added’ by the education or intervention. However, they have the advantage of being relatively straightforward to collect in a standardised form that can enable comparisons across and between cohorts.

Indirect measures typically relate either to institutional measures of student progression (e.g. pass rates, attrition rates) or to the perspectives of students and other stakeholders (e.g. unsolicited student feedback, student evaluation scores, employer feedback). Examples of indirect measures of student learning are listed below. Where possible, links to relevant measurement tools are provided.
Alternative student evaluation surveys
Institutional student evaluation questionnaires are widely used by universities across the world as key indicators of academic teaching achievement. However, many such questionnaires have been designed ‘in house’ and some are reported to “lack any evidence of reliability or validity, include variables known not to be linked to student performance, and do not distinguish well or consistently between teachers and courses” (Gibbs, 2014). Summarised below are details of two alternative and highly-regarded survey instruments that could be used by candidates to collect student evaluations in relation to a specific programme, course or activity:
  • Student Evaluation of Educational Quality (SEEQ) captures student evaluations of 35 aspects of effective teaching in relation to their course or teacher. A version of the SEEQ questionnaire is reproduced in the appendices of Nash (2012).
  • Student Assessment of Learning Gains (SALG) is a survey tool which, according to its authors (Seymour et al., 2000), “avoids critiques of the teacher, the teacher’s performance, and of teaching methods that are unrelated to student estimates of what they have gained from them”, focusing instead on “the learning gains that students perceive they have made” in terms of the learning outcomes of the course or activity.
Gibbs, G. (2014). You can measure and judge teaching. From SEDA: 53 Powerful Ideas All Teachers Should Know About (September 2014). [link]
Nash, J. L. (2012). Using Student Evaluations at a Cambodian University to Improve Teaching Effectiveness. Lehigh University Theses and Dissertations, Paper 1384. [link]
Seymour, E., Wiese, D., Hunter, A., & Daffinrud, S. M. (2000). Creating a better mousetrap: On-line student assessment of their learning gains. In National Meeting of the American Chemical Society. [link]

Self-reported student learning gains
Self-efficacy, or a student’s self-belief in their own abilities, has been shown to be a strong predictor of student learning and motivation (Zimmerman, 2000). Pre/post survey data that demonstrate improvements in student self-efficacy can be used within a promotion case to demonstrate, for example, the impact of a course or new pedagogy. A generic self-efficacy questionnaire (the Motivated Strategies for Learning Questionnaire) is available from Pintrich and DeGroot (1990).

Targeted self-efficacy questionnaires are also available which often focus on specific skills and attitudes, such as entrepreneurship (Lucas, 2014), or within specific disciplines, such as engineering design (Carberry et al., 2010).
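Where the same self-efficacy instrument is administered to the same students before and after a course, a paired analysis of the responses is generally more informative than treating the two rounds as independent samples. The sketch below is illustrative only – the ratings are hypothetical, the 1–7 scale is an assumption, and nothing here is taken from the instruments cited above – and again assumes the SciPy library.

```python
# Minimal sketch: paired pre/post comparison of self-efficacy ratings.
# Each position in the two lists refers to the same (hypothetical) student,
# with one averaged rating per student on an assumed 1-7 scale.
from statistics import mean
from scipy import stats

pre = [3.2, 4.1, 2.8, 3.9, 4.4, 3.1, 3.6, 4.0]
post = [4.0, 4.6, 3.5, 4.2, 4.9, 3.8, 4.1, 4.7]

gains = [after - before for before, after in zip(pre, post)]
print(f"Mean self-reported gain: {mean(gains):.2f} points")

# Paired t-test: are post-course ratings systematically higher than pre-course ratings?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```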
Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25(1), 82-91. [link]
Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33-40. [link]
Lucas, W. A. (2014, June). Using the CDIO syllabus 2.0 to assess leadership self-efficacy. Paper presented at the 10th International CDIO Conference, Barcelona, Spain. [link]
Carberry, A. R., Lee, H. S. & Ohland, M. W. (2010). Measuring engineering design self-efficacy. Journal of Engineering Education, 99(1), 71-79. [link]
Unsolicited/solicited student feedback
As a complement to student evaluation survey data, solicited or unsolicited feedback from students/graduates – for example, an email from a student describing the positive impact the candidate has made on their learning, progress and/or engagement – can be used to support the teaching element of promotion cases.
Student prizes and achievements
Indirect evidence of student learning can also include the achievements of students and graduates. Although, in most cases, it is very difficult to attribute such achievements to the learning opportunities and/or support provided by a particular academic, some exceptions may exist. For example, a promotion candidate could include details of the number of student teams from an entrepreneurship course who have since established a successful startup business.
Measures of student progression and learning typically collected by the university
Most universities across the world routinely collect indirect measures of student learning at institutional level. Where disaggregated at course or programme level, these data can be used to support a candidate’s promotion case. However, it is often difficult to attribute positive changes in such institutional measures directly to one particular individual, particularly where they do not hold a leadership position in the course or programme concerned. Measures of this kind typically include:
  • student attrition/retention rates
  • student satisfaction in relation to specific courses, collected via survey and written feedback
  • pass rates and degree classifications
  • employer assessment of graduate capabilities, collected via survey
  • post-graduation employment rates and salary scales
  • graduate feedback about their educational experience, collected via survey
Case study
Professor Tom Joyce, Newcastle University, UK
In 2011, Dr Tom Joyce submitted a successful case for promotion to full professorship at Newcastle University in the UK, on the basis of a balanced teaching and research portfolio. His evidence for research achievement included high-impact publications, research grant income and distinguished awards in his research field of orthopaedic engineering.
Dr Joyce’s teaching achievements were demonstrated by a blend of two sources:
  • peer-reviewed evidence (such as institutional and national teaching awards, peer-reviewed pedagogical articles and the inclusion of his teaching activities in published case studies of good practice) as indicators of scholarly teaching and pedagogical influence beyond his institution.
  • details of a major curricular innovation, and the associated improvements in student progression following its implementation, as indirect measures of student learning; these are described below.
One element of his promotion case focused on the design and impact of Engineering Teams, a scheme implemented and evaluated by Dr Joyce in response to concerns about attrition rates among first-year undergraduate students in the engineering school. Engineering Teams sought to develop a culture of peer learning and support across the student cohort during the first year of study, thereby improving engagement, the quality of learning, and (ultimately) student progression. As Dr Joyce explained, “we put [all incoming] students into pre-assigned teams of five and we gave them tasks to do over the course of their first year which meant that they had to work together and from this they helped each other to learn and developed friendships which often lasted for the whole of their degrees”.

Using both survey and focus-group data, he conducted (i) an analysis of the design and delivery of Engineering Teams, identifying a number of limitations in the scheme that were subsequently addressed in the years that followed, and (ii) a review of the impact of Engineering Teams on the student cohort. A major indicator of the impact of Engineering Teams, as highlighted in the promotion case, was the significant improvement in student progression rates following its introduction: from 83% to 93%. As Dr Joyce noted, “Going from a situation where we were ‘losing’ almost 1 in 5 students to one in which we were only ‘losing’ 1 in 11 conveyed a very strong message I thought, particularly when there was no additional financial expenditure by the School. These numbers were also backed up by positive student feedback which we gathered over the first year and at the beginning of second year”.
Other indirect measures relating to programme/institutional impact
Other ‘indirect measures’ can be used to demonstrate both programme- and institutional-level impact in teaching and learning. Examples could include:
  • Assessments by industry partners and/or graduate employers, such as (i) surveys capturing the perceived capabilities of graduates from particular programmes/universities compared to those from peer institutions or previous generations of graduates, or (ii) qualitative assessments of student performance on industry-linked curricular experiences or placements.