The evaluation of learning efficiency has always been one of the essential tasks for workplace training professionals. The three major approaches to assessment are based on the works of three scholars: Donald Kirkpatrick, Jack J. Phillips, and Robert Brinkerhoff. The first two concepts share similar grounds and are, in many ways, complementary, while Brinkerhoff's model is based on entirely different principles. All three approaches have their pros and cons, and choosing the most appropriate one in each particular case may prove a challenging task.
Kirkpatrick’s Four Levels of Evaluation
Donald Kirkpatrick first published his ideas on evaluating training programs in the late 1950s and then developed them in his fundamental book as well as in a series of complementary works. According to Kirkpatrick, the assessment of training programs is essential for several reasons. First, it provides grounds for improving or dropping a particular curriculum. Evaluation is also a crucial tool for measuring the performance of training offerings.
At present, Kirkpatrick's four-level model has gained full recognition in evaluating training effectiveness. As per the model, each successive stage is based on the results of the previous one, which prescribes a linear design for the assessment process (Kirkpatrick, 1994). The assessor is believed to obtain a precise picture of the training outcomes once all the levels have been worked through in sequence.
At the first level, the focus is on participants' reactions to the learning experience. The trainees are encouraged to give feedback on the relevance and efficiency of the program as well as on the teaching methods employed by the tutor. The principal evaluation tools at this stage include feedback forms, post-training surveys, and questionnaires, which can provide all relevant information. A subsequent analysis of the data aims to detect the program's drawbacks and suggest ways to improve it. Although the results obtained at the first stage are of independent significance, they also serve as the grounds for the second phase of the evaluation process.
At the second level, the increase in participants' knowledge is assessed by comparing the results of tests taken before and after the training. The participants' reactions are linked to the newly acquired competencies, skills, and attitudes. One should bear in mind that the primary goal is to reveal how the trainees have advanced owing to the new knowledge, not the experience as such (Kirkpatrick, 1994). The evaluation tools at this level are more intricate, involving various forms of testing, team assessment, and self-assessment, which are often supported by interviews and observations.
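The pre/post comparison at Level II is, at its core, simple arithmetic: subtract each trainee's pre-test score from the post-test score and average the gains. A minimal sketch, using hypothetical scores not drawn from the text:

```python
# Illustrative Level II calculation: average knowledge gain from
# pre- and post-training test scores (hypothetical data).

def average_gain(pre_scores, post_scores):
    """Mean per-participant improvement between pre- and post-tests."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical scores for four trainees on a 0-100 scale.
pre = [55, 60, 48, 70]
post = [72, 78, 65, 85]
print(average_gain(pre, post))  # 16.75
```

In practice, as the text notes, such test results would be supplemented by interviews and observations rather than read in isolation.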
At the third level, the improvements in the participants' daily performance are assessed. This stage is believed to provide the most reliable measurement of learning outcomes through observation and interviews. As Ho et al. (2016, p. 184) stated, "observation was rated the most important and the most frequently employed method for managers in evaluating training." However, both methods have serious limitations: the feedback provided by the trainees and their immediate supervisors may be arbitrary and subjective, which makes the evaluation process tricky at this phase.
At the fourth level, the evaluation focuses on training outcomes in terms of business results such as higher production levels, improved quality control, decreased costs, increased sales, lower staff turnover, reduced wastage, or higher profits. Positive changes in KPIs are the only sound reason for investing in a training program. However, training results are often impossible to link directly to financial results. Hence, there are no universally applicable evaluation methods for the final stage. As the assessment process may be challenging, especially at the last two levels, the selection of specific tools should be an integral part of training program development.
Phillips ROI Methodology
Jack J. Phillips developed Kirkpatrick's model further by modifying the stages and adding a new phase, ROI. According to Phillips (2012, p. 34), the first level measures "reaction, satisfaction, and planned action," that is, the learner's perception of the course and intention to practice the new skills. The second level evaluates the increase in participants' knowledge and competencies by means of tests and assessments.
The third level in Phillips' concept deals with the application and implementation of the newly acquired skills in the working process. Evaluation is spread over time and involves on-the-job observations, interviews, and focus groups. The fourth level is aimed at evaluating the training impact on the business. The areas affected may include output, quality, costs, customer satisfaction, employee loyalty, and others. Operating records, such as sales volumes or decreased customer complaints, may provide valid assessment data at this level. Since it is not always easy to separate the training contribution from the impact produced by other factors, some isolation techniques must be employed. The fundamental limitation is the cost of extensive data collection (Keen and Berge, 2014). Besides, some soft skills cannot be reliably measured.
The modifications Phillips proposes at the four evaluation levels are minor compared to Kirkpatrick's model. Phillips' core innovation is the introduction of the fifth level, which takes return on investment as the principal measure, comparing net program benefits to program costs. However, the calculation of ROI depends on the results obtained at the previous stages. As Ravicchio and Trentin (2015, p. 25) demonstrate, "to estimate ROI, we must first evaluate how the knowledge and skills acquired in the training course (Level II) are applied in the workplace (Level III)." There are several methods of calculating ROI, and assessors are free to choose whichever best fits their purposes and the available data.
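The fifth-level comparison of net program benefits to program costs can be sketched in a few lines. The figures below are purely hypothetical; they are not taken from the text:

```python
# Minimal sketch of the Level V ROI calculation: net program
# benefits divided by program costs, expressed as a percentage.

def roi_percent(program_benefits: float, program_costs: float) -> float:
    """ROI (%) = (benefits - costs) / costs * 100."""
    net_benefits = program_benefits - program_costs
    return net_benefits / program_costs * 100

def benefit_cost_ratio(program_benefits: float, program_costs: float) -> float:
    """A related metric: total benefits divided by total costs."""
    return program_benefits / program_costs

# Hypothetical program costing $80,000 that yields $120,000 in
# monetary benefits after isolating the effects of other factors.
print(roi_percent(120_000, 80_000))         # 50.0
print(benefit_cost_ratio(120_000, 80_000))  # 1.5
```

The arithmetic is trivial; the hard part, as the text stresses, is converting Level III and IV outcomes into credible monetary benefit figures in the first place.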
The Success Case Method of Robert Brinkerhoff
The Success Case Method developed by Robert Brinkerhoff is based on an in-depth analysis of the best and the worst results demonstrated by the trainees in a particular program. This approach is employed to assess the outcomes of training and coaching by studying stories of success and failure. The purpose is not to evaluate the average performance of the participants, but to investigate the extreme cases. The focus is placed on determining the key factors that contributed to the failure or success.
The five principal steps in Brinkerhoff's method are planning the study; defining the features of success; conducting a survey to detect the extreme cases; interviewing and documenting the relevant cases; and presenting the results with recommendations (Brinkerhoff, 2003). The method is recommended for large-scale and long-term evaluations, especially for repeated assessments of the same program.
Choosing the Right Method for Talent Development Reporting
Out of the three approaches, the Phillips ROI Methodology provides the broadest range of data collection tools for Talent Development Reporting. The principal advantage of this approach is its use of quantitative techniques, which can be adapted to each particular case. Financial indicators, KPIs, and ratios rest on sound grounds and are easily understood by decision-makers. Although a straightforward way to link training outcomes to specific business results does not always exist, the limitations can be diminished or eliminated by a careful choice of accurate data collection tools and methods.
Brinkerhoff, R. (2003) The success case method: find out quickly what’s working and what’s not. San Francisco: Berrett-Koehler Publishers.
Ho, A., et al. (2016) ‘Exploration of hotel managers’ training evaluation practices and perceptions utilizing Kirkpatrick’s and Phillips’s models’, Journal of Human Resources in Hospitality & Tourism, 15(2), pp. 184–208.
Keen, C. and Berge, Z. (2014) ‘Beyond cost justification: evaluation frameworks in corporate distance training’, Performance Improvement, 53(10), pp. 22–28.
Kirkpatrick, D. (1994) Evaluating training programs: the four levels. San Francisco: Berrett-Koehler Publishers.
Phillips, J. (2012) Return on investment in training and performance improvement programs. London, UK: Routledge.
Ravicchio, F. and Trentin, G. (2015) ‘Evaluating vocational educators’ training programs: a Kirkpatrick-inspired evaluation model’, Educational Technology, 55(3), pp. 22–28.