Short Communication

Dynamic job competency evaluation of medical graduates


Juan Lu1, Yang Shi2, Lin Zhou3*

1Vocational Education Center of the Naval Medical University, Shanghai 200433, China

2Vocational Education Center, Naval University of Engineering, Chinese PLA, Wuhan 430032, Hubei Province, China

3The 967 Hospital of the Joint Logistics Support Force, Chinese PLA, Dalian 116021, Liaoning Province, China


Address for correspondence:

Lin Zhou, E-mail: zlily72@sohu.com

For reprints contact: reprints@sppub.org

Received 01 April 2021; Accepted 15 June 2022; Available online 28 September 2022



In 2017, the General Office of the State Council in China issued the “Opinions on deepening the cooperation and further promoting the reform and development of medical education” (hereinafter, the “Opinions”). The Opinions place job competency at the core of medical education and call for guidelines for continuing medical education to be formulated using hierarchical classifications based on the career development needs of different talents. Job competency in healthcare is also crucial from a global perspective. After the first, science-based global medical education reform (1900–1960), initiated by Flexner’s report, and the second, problem-based reform (1960–2000), proposed by R.M. Harden, the International Committee on Medical Education planned a development strategy of crossing national boundaries and weakening discipline boundaries, which gave birth to the third global medical education reform of the 21st century, centered on patients and populations and taking competency training as its core. Given the importance of job competency in medical education, this paper explores how medical graduates’ competency is dynamically assessed in China and other countries to provide a basis for future evaluation. It should be noted that competency is assessed differently for medical graduates and for physicians. The competencies of medical graduates are closely tied to medical education in colleges and universities and focus on new talent training systems and new educational needs. Changes in these assessments can guide colleges and universities to implement standardized teaching and to review a range of policies and processes, such as syllabi, training specifications, standards, objectives, curriculum systems, teaching content, and methods. Assessing physician competency, by contrast, is part of postgraduate education and includes licensed practice, residency, and staged and regular specialized evaluation after entering practice.
This paper examines medical graduates and analyzes their competency assessment.


The concept of competency

Competency research is generally traced to Taylor, the father of scientific management, and his “management competency movement” of the early 20th century.[1] Using time and motion studies, Taylor identified different work requirements based on workers’ abilities, thereby developing a new method of assessing and predicting work performance by measuring IQ as well as academic and personal traits; this method was used for decades. In the early 1950s, the U.S. Department of State found that the practical performance of many seemingly intelligent employees was disappointing.[2] This led to questioning of Taylor’s talent evaluation criteria and theory. David McClelland, then a psychology professor at Harvard, was invited to study which selection methods could effectively predict the actual performance of diplomats. Starting from first-hand research, McClelland abandoned Taylor’s presuppositions about talent and employed novel theories and techniques, such as the critical incident technique and the behavioral event interview.[3] By examining diplomats’ specific behavioral characteristics and performance quality, McClelland found that traditional intelligence tests and evaluations of academic ability failed to predict individuals’ performance or career success.[4] He also found that such testing discriminated against women, people of low socioeconomic status, ethnic minorities, and other vulnerable groups. The traits and behavioral characteristics that did identify performance were factors such as “achievement orientation,” “interpersonal understanding,” and “team influence”; together, these factors were called competency. In 1973, McClelland defined competency as the personal characteristics that distinguish high- from low-performing employees within a specific job or organization.[5] McClelland also put forward six principles for effective testing.
His study also proposed that competency testing should replace traditional intelligence testing, aptitude tests, and school grades. This publication became the earliest and most influential statement of competency model theory, launching the “competency movement.” McClelland also used the iceberg model to describe competency. A small portion of an iceberg is visible above the water, but the bulk of the ice lies beneath the surface. Similarly, some components of competency, such as knowledge and skills, are visible and easy to change, while behavioral components, such as social roles, self-image, traits, and motives, are hidden beneath the surface and harder to change. These invisible components play a key role in individual performance and are the predominant force predicting long-term performance. In higher-level positions within an organization, these behavioral aspects, motives, and traits become more important than the skills and knowledge required to do the job.


Classification of competency

At present, academic fields have not reached a consensus on the definition of competency; however, it can be divided into two types. The first type treats competency as an individual’s characteristics for success (not compared with the general population). Boyatzis[6] defined it as the underlying characteristics an individual uses to produce successful performance in a certain role, such as knowledge, motivation, traits, skills, self-image, or social role. Hackney defined it as the knowledge, skills, and attitudes that individuals need to achieve organizational goals, that is, the multiple abilities needed to generate high performance. The second type refers to the abilities that make an individual stand out from the general population. For example, Spencer and Spencer[7,8] proposed that competency refers to deep-seated personal characteristics that distinguish people with superior performance from ordinary people, including motivation, self-image, attitudes or values, knowledge in a certain field, cognitive or behavioral skills, and any other personal characteristics that can be reliably measured or counted. The Hay Group defined competency as “any motive, attitude, skill, knowledge, behavior or personal characteristics that can distinguish high performers from average performers.”[9]


Characteristics of competency

Competency has five characteristics: 1. It correlates significantly with job performance and can predict future performance; 2. It is dynamic and tied to the completion of specific tasks; 3. It emphasizes behavioral characteristics and factors unrelated to intelligence, including motives, personality, attitudes, values, and skills;[10,11] 4. It can distinguish an excellent/successful person from an average/unsuccessful one;[12] and 5. It allows talent to be observed, evaluated, and developed effectively, underscoring that job competency is an important focus of skills training.


Concept of evaluation

Evaluation is the making of value judgments based on an understanding of the nature or relevant characteristics of things. In the education field, concepts and evaluation standards vary with perspective. In job competency research, however, evaluation is recognized as a process of collecting, collating, analyzing, and interpreting information to support decision making, as well as a way to measure the knowledge and skills being tested.


Classification of evaluation

Evaluation is categorized differently depending on perspective. By scope, education evaluation can be macro-, meso-, or micro-level; by criterion, it can be absolute or relative; by subject, it can be self-evaluation or evaluation by others. By its role and function at different teaching stages, evaluation can be categorized into diagnostic, formative, and summative assessment, which is the most widely used and authoritative classification. This classification comes from Evaluation to Improve Learning by the American educational psychologist B.S. Bloom.[13] As Table 1 shows, formative assessment does not judge merits and demerits and does not evaluate teaching effects; for example, the formative assessment of students’ learning is not counted toward their academic performance. Instead, formative assessment serves functions of regulation, feedback, adjustment, and control, which is why education evaluation experts generally value and endorse it. In practice, two or three assessment types are usually combined, as relying on only one is rarely sufficient.


Table 1: Comparison of three evaluation methods proposed by Bloom.
  Diagnostic assessment Formative assessment Summative assessment
Time Before an educational/academic activity During the activity At the end of the activity (e.g., the end of a semester/academic year)
Content Students’ knowledge, ability, and other basic objective conditions Learning process, situations, and existing problems The learning effect, whether the educational goal and accomplishment have been achieved
Functions Forecast, judgment, and adjustment Feedback, regulation, and control Evaluation results and achievements
Targets Understand the basic situation of the evaluation object, carry out targeted guidance, and adopt appropriate teaching methods Based on feedback, improve the quality of education activities and ensure the realization of the expected educational goals Comprehensively identify the results to provide a basis to evaluate students’ performance in a certain course


Framework of evaluation system

The evaluation of medical graduates’ competency is a systematic process. A representative framework is the three-dimensional evaluation system proposed by Norcini, an American expert in medical education.[14]


The first evaluation dimension is the ability being evaluated; here, that is job competency. For medical graduates, it is the ability to benefit patients by providing daily medical services through a skilled command of medical knowledge, accurate use of techniques, well-developed clinical thinking, appropriate emotional expression, sound value orientation, and so on.


The second evaluation dimension involves different evaluation levels, since no single method can capture a competency this complex. The cultivation of medical graduates’ job competency traces back to their education. In 1990, the American medical educator George Miller created the “Miller pyramid” to describe four levels of medical students’ clinical competency: knows (evaluation of knowledge; the base of the pyramid and the foundation for building clinical competence), knows how (knowing how to apply knowledge in the treatment and service of patients), shows how (the learner demonstrates the application of knowledge and skills), and does (knowledge and skills are integrated into routine clinical performance; the highest level of the pyramid).


The third evaluation dimension is evaluating different development stages. Medical education is a lifelong system composed of several development stages, and medical students can enter the next stage only after mastering the knowledge, skills, and attitudes of the previous one. In the 1970s, the American philosopher Hubert Dreyfus, together with his brother Stuart, offered a model of professional expertise that plots an individual’s progression through five levels of skill acquisition and mastery[15] (Table 2). Different learning stages need different evaluation methods: basic knowledge, for example, can be tested with multiple-choice questions during the novice stage, while the standardized patient (SP) examination suits the competent or proficient stage. In Dreyfus’s model, competence is the third, or middle, stage, in which learners can identify the important elements of a task and make the right choices, gaining excitement from succeeding at various learning tasks. These characteristics distinguish high achievers from low achievers, and they are cultivable and malleable.


Table 2: Five-stage development model of learning process.
Learning stage Teaching models Learning steps Learning methods Learning features
Novice Lecture-oriented Identify context-free elements and follow the rules Decompose skills into context-free discrete tasks, concepts, or rules Discrete analytical thinking framework
Advanced beginner Practice-oriented Have a deep understanding, able to rely on experience and rule recognition, learn key behaviors according to new materials, and practice repeatedly Match the experience with the actual situation, point out new learning materials, and teach the rules and axioms of action Discrete analytical thinking framework
Competence With instructor guidance Be able to identify important elements in various learning aspects and make the right choice under guidance Make plans that separate important elements from those which are unimportant, clarify the action rules and reasoning methods, and cultivate the competency Learners participate in decision-making that includes emotional factors, such as worry about whether the task can be completed, the frustration of failing, and the excitement of succeeding
Proficient With instructor monitoring Adjust measures to local conditions, rely on intuitive judgment, have emotional responses to success or failure Learners can immediately notice the outstanding characteristics of learning objectives and can apply rules, principles, and axioms to determine how to achieve goals Have a deep, specific experience of results from decision-making, then summarize how they achieved the goal through rational analysis
Expert Independent supervisor Building-up experience, able to cope with subtle changes, automatically distinguish and identify measures taken in different situations Immediately know the goal and what action should be taken, and constantly enrich the previous learning experience Experienced, able to adapt to various situations, and be passionate in the workplace


Job competency elements of medical graduates

To meet the new requirements of healthcare development in the 21st century, the Accreditation Council for Graduate Medical Education (ACGME) selected and endorsed six competencies to define the foundational skills that every medical graduate should possess. The ACGME core competencies are: Patient Care (PC), Medical Knowledge (MK), Systems-Based Practice (SBP), Practice-Based Learning (PBL), Professionalism (PROF), and Interpersonal and Communication Skills (ICS).[16] In addition, a Clinical Competency Committee (CCC) was created to conduct a milestone evaluation project for residency and specialist training that emphasizes clinical competence, professional knowledge, technology, professionalism, and clinical performance; it provides a clinical competency evaluation framework for each major and sub-major according to the ACGME core competencies, with a milestone project working group formulating an evaluation plan for each specialty. The Royal College of Physicians and Surgeons of Canada (RCPSC) updated its competency standard for medical students and published The CanMEDS 2005 Physician Competency Framework to create better standards, better physicians, and better care; it defines seven roles for medical graduates: medical expert, communicator, collaborator, manager, health advocate, scholar, and professional.[17] In 2006, the General Medical Council (GMC), which regulates medical education in the United Kingdom, stated: “Patients need good doctors. Good doctors make the care of their patients their first concern. They are competent and can keep their professional knowledge and skills up to date. Good doctors establish and maintain good partnerships with their patients and colleagues. Good doctors are honest and trustworthy, and they act with integrity.” The GMC also proposed the “Scottish Doctor” model.[18]


Common evaluation methods of job competency of medical graduates

Outside China, job competency evaluation of medical graduates is becoming increasingly specialized, and evaluations are becoming more standardized. In September 2000, the ACGME launched the Outcome Project, its resident training program assessment, which remains in use. From that point, the evaluation of educational content and procedures in medical education began to focus on the effect of teaching. With the rapid development of computer technology, MK assessment has moved from written tests to computer-based tests, 360-degree feedback, Chart-Stimulated Recall (CSR), checklist evaluation, the Objective Structured Clinical Examination (OSCE), SP examinations, and other diversified evaluation methods. Table 3 compares several common methods for evaluating medical graduates’ job competency. Some, such as the OSCE in high-stakes examinations, are widely recognized in the academic field, although teachers who are inexperienced in education evaluation and have not received corresponding training may struggle to apply these standards. There is, however, no “bad” evaluation method; each has its own scope of application, advantages, and disadvantages. The challenge is to choose the examination method that measures the cognitive ability of medical graduates effectively and reliably.


Table 3: Comparison among several common methods for evaluating job competency of medical graduates.
Evaluation method Definition Scope of application Reliability Advantages Disadvantages
360-degree feedback A series of evaluations by multiple raters through measurement tools (such as questionnaires) Provides feedback from a variety of sources 0.904 (military, education, and business) Multi-raters (supervisors, colleagues, subordinates, service objects) and comprehensive feedback Difficult to design a questionnaire suitable for all evaluators
CSR Real patients and standardized oral tests are used. Well-trained and experienced examiners ask trainees to externalize their thought processes to elicit diagnostic reasoning, decision-making, and related decisions and plans through a special worksheet Evaluates the mastery of medical knowledge and clinical decision-making ability 0.65-0.88 The ability to memorize, understand, and apply clinical decision-making and MK are evaluated Examiners need to be trained, there is low standardization, and it is time-consuming (5-10 minutes for each case, 2 hours for the overall examination)
Checklist Evaluation Examines competency steps of the specific behavior that must be or is expected to be completed. Investigates whether the listed behaviors occur and the integrity/correctness of the behaviors Suitable for evaluating medical service ability, interpersonal and communication ability, learning ability, etc., especially clinical practice ability 0.70-0.80 Can be customized for residents with different years of training according to the specific tasks and internal capacity of clinical operation Expert consensus is needed on key behaviors/operations, sequences, and standards
OSCE Candidates take turns completing the examination according to a unified schedule, including 12-20 independent standardized examination stations, spending 10-15 minutes at each station. Evaluation tools include standardized patients, clinical case analysis, etc. Widely used in most medical colleges and universities in the United States, residency training programs and Canadian medical licensing examination, gradually promoted in China 0.85-0.95 A standardized method to evaluate specific clinical skills and abilities (summarize the medical condition, detect positive signs quickly, differential diagnosis, comprehensive diagnosis and treatment ability) Controversial candidate scoring, long duration (high time cost), high requirements for sites (each site needs a separate space, special examination room or outpatient service room must be set up), not suitable for examining the situation of patients who need to visit a doctor more than once, the mock clinical operation may cause harm to humans
SP Trained healthy people who play the role of clinical patients in a standardized way or real patients who present the disease in a standardized way Evaluate a specific medical service ability High reliability of medical history collection, physical examination, and communication skills The most important evaluation tool in OSCE, and the most used examination form for summative evaluation of all clinical skills Long preparation time (8-10 hours for training a new SP for a new clinical problem, 6-8 hours for an experienced SP, double time for checklist evaluation)
Written tests (multiple-choice questions) Multiple-choice questions are used to measure the examinees’ mastery and understanding of MK in a specific field Both written and computer-based tests are available, and the scores of different groups are comparable >0.85 Can assess more than rote memorization of facts or information The overall design and questions should strictly follow psychometric standards, and experts should define the evaluated knowledge, the examination instructions, and the passing scores. A sufficient number of test questions is needed, and the repetition rate is at least 25%-30%.
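As an illustrative sketch only (not drawn from any of the cited evaluation protocols), the aggregation step of a 360-degree feedback evaluation can be modeled in a few lines of Python. The rater groups, the 1-5 rating scale, and the equal weighting of groups are hypothetical choices made for this example:

```python
from statistics import mean

# The six ACGME core competencies named in the text.
ACGME_COMPETENCIES = ["PC", "MK", "SBP", "PBL", "PROF", "ICS"]

def aggregate_feedback(ratings):
    """ratings: {rater_group: {competency: [scores on a 1-5 scale]}}.
    Returns the mean score per competency, averaged over rater groups
    so that no single group of raters dominates the result."""
    summary = {}
    for comp in ACGME_COMPETENCIES:
        group_means = [mean(scores[comp]) for scores in ratings.values()
                       if comp in scores]
        summary[comp] = round(mean(group_means), 2) if group_means else None
    return summary

# Hypothetical multi-rater data for one trainee (supervisors, peers,
# and patients, as in the multi-rater design described in Table 3).
ratings = {
    "supervisors": {"PC": [4, 5], "MK": [4, 4], "PROF": [5, 5]},
    "peers":       {"PC": [4, 4], "ICS": [5, 4], "PROF": [4, 5]},
    "patients":    {"PC": [5, 5], "ICS": [5, 5]},
}
print(aggregate_feedback(ratings))
# {'PC': 4.5, 'MK': 4.0, 'SBP': None, 'PBL': None, 'PROF': 4.75, 'ICS': 4.75}
```

Competencies with no ratings are reported as missing rather than scored, which mirrors the design difficulty noted in Table 3: a single questionnaire rarely suits all evaluators, so some rater groups cannot assess some competencies.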


The status of job competency evaluation of medical graduates: Curriculum-based

With the continuous updating and improvement of quality requirements for health professionals in the social medical system, curriculum reform has become an important part of medical education reform. Through continuous accreditation, Canada has designed competency-based education for undergraduate courses. The Educational Commission for Foreign Medical Graduates (ECFMG) requires medical graduates from other countries who train in the United States to submit SP-based clinical skills assessments. Australia implements competency-based training and takes various measures to evaluate medical graduates, residents, and practitioners. A study in the United Kingdom compared competency-based assessment with traditional subjective management assessment.[19] The results show that competency-based assessment is more scientific and objective (Table 4), with only a small proportion of residents (2%) evaluated as “competent.” Furthermore, the traditional model cannot directly measure the quality of educational outcomes as planned. The competency-oriented curriculum model does not exclude previous models; instead, it draws on their positive elements while compensating for their weaknesses in training. It also makes full use of all learning methods to realize students’ potential, including lecture-based teaching, group learning, team teaching, early contact with patients, multi-level base training, community practice, and the application of information technology, and it builds on subject-centered, integrated-curriculum, PBL, and other new curriculum models and methods. The aim is for students to gain clinical abilities, develop new professional qualities, follow flexible career development paths, and respond to global health and safety threats. Curriculum-based evaluation of medical graduates’ job competency has become the trend going forward.


Table 4: Comparison between the competency-oriented model and the traditional structure- and process-oriented model.
Project Result-oriented (competency) Process and structure-oriented (traditional)
Focus Educational results Educational contents
Curriculum function/teaching objectives Knowledge application Knowledge acquisition
Promoters Students Teachers
Level difference No Yes
Learning responsibilities Teachers and students share responsibility Teachers take full responsibility
Evaluation tools Multiple objective assessment (evaluation portfolio), simulating reality Single subjective assessment, indirect substitution
Evaluation sites “First line” (direct observation) Away from the first line
Reference standard Criterion-referenced Norm-referenced
Evaluation emphasis Formative Summative
Evaluation time Variable Fixed


Source of Funding

This study was supported by the National Natural Science Foundation of China (71603269); the 13th Five-Year Plan Military Special Project of National Education Science and National Defense Military Education Discipline (JYKYD 2018037); and the Key Research Project of Navy Education Theory (No. 9, Naval Staff Training Command [2019]).


Conflict of Interest

None declared.


  1. Sandberg J. Understanding human competence at work: An interpretative approach. Acad Manage J 2000;43:9–25. DOI: 10.2307/1556383
  2. Chen HT. [Establishment and analysis of competency model for employees in D group.] Beijing: Capital University of Economics and Trade, 2009.
  3. Shi Z, Wang JC, Li CP. Research on the Evaluation of Competency Model of Senior Managers in Enterprises. Acta Psychologica Sinica 2002;34:306–311.
  4. Chen YC, Lei Y. Review and Development Trend of Competency Research and Application. Sci Res Manag 2004;25:141–144.
  5. McClelland DC. Testing for Competency rather than for Intelligence. Am Psychol 1973;28:1–14. DOI: 10.1037/h0034092
  6. Boyatzis RE. Rendering into competence the things that are competency. Am Psychol 1994;49:64–66. DOI: 10.1037/0003-066X.49.1.64.b
  7. Huang XL, Du XL. Construction of Competency Model for Outstanding Medical Talents. China High Med Educ 2014;7:23–25.
  8. Spencer LM, Spencer SM. Competence at work: models for superior performance. New York: John Wiley & Sons, 1993:222–226.
  9. Song Q. [Competency model of university teachers and its relationship with job performance.] Guilin: Guangxi Normal University, 2008.
  10. LeBleu R, Sobkowiak R. New work force competency models: Getting the IS staff up to warp speed. Inform Syst Manage 1995;12:7–12. DOI: 10.1080/07399019508962980
  11. Mansfield R. Building competency models: Approaches for HR professionals. Hum Resour Manage 1996;35:7–18. DOI: 10.1002/(SICI)1099-050X(199621)35:1<7::AID-HRM1>3.0.CO;2-2
  12. Li F, Fang SZ. [Job Competency of Health Institution Managers.] Beijing: People’s Health Publishing House, 2007.
  13. Bloom BS, Madaus GF, Hastings JT. Evaluation to Improve Learning. Shanghai: East China Normal University Press, 1981.
  14. Sun BZ, Li JG, Wang QM. [Construction and application of competency model for Chinese clinicians.] Beijing: People’s Health Publishing House, 2015.
  15. Dreyfus HL, Dreyfus SE. The Ethical Implications of the Five-Stage Skill-Acquisition Model. Bullet Sci Technol Soc 2004;24:251–264. DOI: 10.1177/0270467604265023
  16. Core Committee, Institute for International Medical Education. Global minimum essential requirements in medical education. Med Teach 2002;24:130–135. DOI: 10.1080/01421590220120731
  17. van der Lee N, Fokkema JP, Westerman M, Driessen EW, van der Vleuten CP, Scherpbier AJ, et al. The CanMEDS framework: Relevant but not quite the whole story. Med Teach 2013;35:949–955. DOI: 10.3109/0142159X.2013.827329
  18. Simpson JG, Furnace J, Crosby J, Cumming AD, Evans PA, Friedman Ben David M, et al. The Scottish doctor-learning outcomes for the medical undergraduate in Scotland: a foundation for competent and reflective practitioners. Med Teach 2002;24:136–143. DOI: 10.1080/01421590220120713
  19. Press EM. A selected bibliography of competence-based education and training (CBET): primarily related to England and Wales, plus a selection of key publications from other countries, with brief annotations. New York: Edwin Mellen Press, 1997.