Direct measures

Direct measures of student learning capture the knowledge/skills/attitudes of the student cohort, enabling evaluation of student performance, either against a defined benchmark or through changes over time.

While ‘direct measures’ can provide robust evidence of teaching achievement within particular courses or programmes, they are typically resource intensive, requiring time and expertise to design and collect. Such measures are therefore not routinely used within promotion cases. Where included, however, it is important for candidates to contextualise the data by describing the design and goals of the activity. As summarised below, direct measures of student learning that could be presented as part of an academic promotion case tend to fall into two categories:

Direct measures of learning over time

Measures which assess learning over time, often using before/after testing of student knowledge/abilities.
The direct measures of learning over time likely to be most appropriate for inclusion in a promotion case are those involving pre/post testing of students, for example, on the basis of their conceptual understanding. One well-documented example of such pre/post testing is from the Massachusetts Institute of Technology (MIT), where a new active learning approach was adopted within an electromagnetics course with the aim of improving students’ conceptual understanding and reducing failure rates. Conceptual questions from standardised tests were administered to students both before and after the new course, and the results were compared with control-group data from students taught under the previous, more traditional, delivery style. The test outcomes demonstrated that the new course delivered significantly improved conceptual understanding among students (Dori and Belcher, 2005). The test instrument used in this example is available in Dori et al. (2007).
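
To illustrate how pre/post data of this kind is commonly analysed, the sketch below computes the mean normalised gain (the fraction of the available improvement actually achieved, a measure widely used in physics education research) for a treatment cohort and a control cohort. All scores are hypothetical, and the normalised-gain calculation is a standard technique offered for illustration, not the specific analysis reported by Dori and Belcher (2005).

```python
# Minimal sketch: compare normalised learning gains for a cohort taught
# with a new approach against a control cohort. All data are invented.
from statistics import mean

def normalised_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of the available improvement achieved between pre- and post-test."""
    return (post - pre) / (max_score - pre)

# Hypothetical (pre, post) percentage scores on the same concept test.
treatment = [(35, 75), (40, 80), (30, 65), (45, 85)]  # new course
control = [(38, 52), (42, 60), (33, 48), (47, 61)]    # traditional course

g_treatment = mean(normalised_gain(pre, post) for pre, post in treatment)
g_control = mean(normalised_gain(pre, post) for pre, post in control)

print(f"Mean normalised gain, treatment: {g_treatment:.2f}")
print(f"Mean normalised gain, control:   {g_control:.2f}")
```

A markedly higher mean gain for the treatment cohort, as in the MIT study, is the kind of result that can then be reported, alongside appropriate significance testing, within a promotion case.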

Concept tests, such as the Force Concept Inventory (Hestenes and Halloun, 1995) – available from Mazur (1997) – are widely used in engineering and physics schools across the world to evaluate students’ conceptual understanding. Sample concept tests covering a wide range of science, engineering and mathematics topics are available online.
An alternative direct measure of student learning is the student learning journal, in which students are asked to reflect on the course and their learning on a weekly basis. An approach to designing and evaluating student learning journals is provided in Shiel and Jones (2003).
Dori, Y. J., & Belcher, J. (2005). How does technology-enabled active learning affect undergraduate students' understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243-279.
Dori, Y. J., Hult, E., Breslow, L., & Belcher, J. W. (2007). How much have they retained? Making unseen concepts seen in a freshman electromagnetism course at MIT. Journal of Science Education and Technology, 16(4), 299-323.
Hestenes, D., & Halloun, I. (1995). Interpreting the force concept inventory. The Physics Teacher, 33(8), 502-506.
Mazur, E. (1997). Peer instruction: A user’s manual. Upper Saddle River, NJ: Prentice Hall.
Shiel, C., & Jones, D. (2003). Reflective learning and assessment: A systematic study of reflective learning as evidenced in student learning journals. HEAC, 1-32.

Direct measures of learning at a single point in time

Measures which assess learning at a single point in time, typically through comparisons against a control group, norm or benchmark. The validity of these techniques rests on the assessment instrument capturing the relevant learning outcomes. Most also require a benchmark against which to compare the data collected for the student cohort, such as national average scores or the performance of students in control groups; a minimal sketch of such a benchmark comparison is given after the list below.
Suitable techniques for measuring learning at a single point in time include:
  • student performance in institutional examinations and assignments, which can be used, in particular, to demonstrate the positive impact of pedagogical or curricular change as part of a promotion case;
  • products/outputs delivered by students as part of a course or programme, such as final-year projects, concept maps or oral examinations;
  • student performance in standardised tests, capturing either generic learning outcomes, through tools such as the Collegiate Learning Assessment (Klein et al., 2007), or discipline-specific capabilities, through tools such as AHELO (OECD, 2013). Although such tools are primarily designed for comparisons between institutions and countries, the data could also be disaggregated by programme to support a candidate’s case for promotion.
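
As a minimal sketch of a single-point-in-time comparison, the snippet below tests whether a programme cohort’s standardised-test scores differ from a published national mean using a one-sample t-test. The scores, the benchmark value and the choice of test are illustrative assumptions rather than details drawn from the sources above.

```python
# Minimal sketch: compare a cohort's standardised-test scores against a
# published national benchmark. All values are hypothetical.
from scipy import stats

cohort_scores = [68, 72, 75, 61, 70, 74, 66, 71, 69, 73]  # hypothetical cohort
national_mean = 65.0  # hypothetical published benchmark

t_stat, p_value = stats.ttest_1samp(cohort_scores, national_mean)
print(f"Cohort mean: {sum(cohort_scores) / len(cohort_scores):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Where no published norm exists, the same comparison can be run as a two-sample test against a control group taught under the previous course design.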
Klein, S., Benjamin, R., Shavelson, R., & Bolus, R. (2007). The Collegiate Learning Assessment: Facts and fantasies. Evaluation Review, 31(5), 415-439.
Organisation for Economic Co-operation and Development (OECD). (2013). Synergies for better learning: An international perspective on evaluation and assessment.
Case study
Professor Craig Forest, Georgia Institute of Technology, US
In 2015, Dr Forest submitted a successful case for promotion to Associate Professor at Georgia Tech. Of the five ‘noteworthy accomplishments’ listed in his application, four related to research achievements within his field of biomolecular science and one related to achievements in education. Dr Forest noted that, as an academic on a tenure track at a research-led institution, he had considered carefully the decision to include an educational component in his promotion case.

A wide range of evidence sources was used to demonstrate Dr Forest’s institutional impact and influence in teaching and learning, including:
  • Professional activities: the educational portion of the promotion case centred on three activities: (i) co-founding the ‘InVenture Prize’, a university invention competition; (ii) establishing the ‘Invention Studio’, an open-access space for student creativity, innovation and design; and (iii) redesigning an engineering capstone design course.
  • Peer assessments: including national press coverage of the educational activities developed by Dr Forest, a peer-reviewed pedagogical publication and details of the funds raised for the establishment of the ‘Invention Studio’.
  • Indirect measures of student learning: including estimates of the number of companies founded by students engaged in the entrepreneurial and innovation activities established by Dr Forest.
  • Direct measures of student learning: including an evaluation of the quality of student projects from the multi-disciplinary final-year design course established by Dr Forest, as described below.
Building on an existing capstone design experience within the engineering school – where teams of students from a single discipline were tasked with solving authentic industry problems – Dr Forest led the creation of a new multi-disciplinary capstone experience, bringing together mechanical and biomedical engineering students to work on these ‘real world’ problems. Using the scores awarded by a judging panel of industry partners, the quality of projects developed by the multi-disciplinary teams was evaluated against that of their mono-disciplinary peers. The evaluation (Hotaling et al., 2012) concluded that “the [multi-disciplinary] teams’ holistic performance in innovation, utility, analysis, proof of concept, and communications skills was superior to that of the mono-disciplinary counterparts”.
Hotaling, N., Fasse, B. B., Bost, L. F., Hermann, C. D., & Forest, C. R. (2012). A quantitative analysis of the effects of a multidisciplinary engineering capstone design course. Journal of Engineering Education, 101(4), 630-656.
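
The sketch below illustrates the general shape of such a two-group comparison: panel scores for multi-disciplinary and mono-disciplinary teams compared with a Mann-Whitney U test, a common choice for small samples of rubric scores. The scores are invented and the choice of test is an assumption for illustration; it is not the analysis reported in Hotaling et al. (2012).

```python
# Minimal sketch: compare judge-panel scores for multi-disciplinary
# versus mono-disciplinary capstone teams. All scores are invented.
from scipy import stats

multi = [8.5, 9.0, 7.8, 8.9, 9.2, 8.1]  # hypothetical panel scores
mono = [7.2, 7.9, 6.8, 8.0, 7.5, 7.1]   # hypothetical panel scores

# Mann-Whitney U makes no normality assumption, which suits small
# samples of ordinal rubric scores.
u_stat, p_value = stats.mannwhitneyu(multi, mono, alternative="greater")
print(f"U = {u_stat:.1f}, one-sided p = {p_value:.3f}")
```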