
“Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” —Samuel Beckett

Few topics have generated more impassioned discussions among educators of health professionals than evaluation of learning. In many clinical practice settings, instructors are required to apply evaluation tools that they have not designed themselves. On one hand, criticisms of standardized assessment techniques for required professional competencies and skill sets note the over-emphasis on reproducing facts by rote or implementing memorized procedures. On the other hand, teachers may find themselves filling out extensive and perhaps incomprehensible checklists of criteria intended to measure critical thinking. How can evaluation possibilities be created to advance required competencies with individuals in complex practice environments?

Expectations for learner achievements must be set out clearly before learning can be measured accurately. Within the clinical environment, the stakes are high for learners. Client safety cannot be compromised. Further, measurement considerations must not dominate the time educators might otherwise spend on creating meaningful instructional approaches. In his seminal Learning to Teach in Higher Education, Paul Ramsden (1992) establishes an important distinction between deep and surface learning. In his view, deep and meaningful learning occurs when assessment focuses on both what students need to learn and how educators can best teach them.

Understanding the complexities in evaluating students and our teaching is an ongoing process. Approaching the process collaboratively in ways that consistently involve learners as active participants, rather than passive recipients, can support their success and inspire our teaching. In this chapter we introduce the vocabulary of evaluation and discuss methods of evaluating students and evaluating teaching. We suggest creative evaluation strategies that teachers can use in a variety of different clinical practice settings.

Vocabulary of Evaluation

Educators may feel overwhelmed by measuring how learners create personal meaning and demonstrate understanding of the consensually validated knowledge they will need to practice competently in their health field. Measuring the efficacy of our own teaching in preparing learners to practice safely, ethically, and in accordance with entry-to-practice competencies is not straightforward either. However, whether we are seeking to appraise student learning or our own teaching, knowing the criteria for expected outcomes will help us understand what is being measured. Measurement, assessment, evaluation, feedback and grading are terms used in appraising student learning and our own teaching.

Measurement, Assessment and Evaluation

Measurement determines attributes of a physical object in relation to a standard instrument. For example, just as a thermometer measures temperature, standardized educational tests measure student performance. Reliable and valid measurement depends on the skilful use of appropriate and accurate instruments. In 1943, Douglas Scates was one of the first to argue against applying the principles of scientific measurement to the discipline of education.

The kind of science which seeks only the simplest generalizations may depart rather far from flesh-and-blood reality, but the kind of science which can be applied in the everyday work of teachers, administrators, and counselors must recognize the great variety of factors entering into the practical conditions under which these persons do their work. Any notion of science which stems from a background of engineering concepts in which all significant variables can be readily identified, isolated, measured, and controlled is both inadequate and misleading. Education, in both its theory and its practice, requires a new perspective in science which will enable it to deal with composite phenomena where physical science normally deals with highly specific, single factors. (Scates, 1943, p. 1)

One example of a standardized measurement tool is a required student evaluation form. Most health professions programs provide clinical instructors with evaluation forms that have been designed to measure learning outcomes in relation to course objectives. These forms provide standardization in that they are implemented with all students in a course. They often focus on competencies such as safety, making them relevant to all members of the profession (Walsh, Jairath, Paterson & Grandjean, 2010). However, clinical instructors using the forms may have little or no input into their construction and may not see clear links to their own practice setting.

Another example of a standardized measurement tool is a qualifying examination that all members of a profession must pass in order to practice. Similarly, skills competency checklists, rating scales, multiple choice tests and medication dosage calculation quizzes can provide standardized measurement. Again, clinical instructors may have limited input into design of these tools.

Assessment obtains information in relation to a complex objective, goal or outcome. While the kinds of standardized measurements noted above can all contribute to assessing student performance, additional information is necessary. Processes for assessment require inference about what individuals do in relation to what they know (Assessment, n.d.). For example, inferences can be drawn about how students are applying theory to practice from instructor observations of students implementing client care, from student self-assessments, and from peer assessments.

Evaluation makes judgments about value or worthiness in relation to an objective, goal or outcome. Evaluation requires information from a variety of sources, gathered at different times. Evaluation of learners in clinical practice settings is considered subjective rather than objective (Emerson, 2007; Gaberson, Oermann & Schellenbarger, 2015; Gardner & Suplee, 2010; O’Connor, 2015).

Formative evaluation is continuous, diagnostic and focused both on what students are doing well and on areas where they need to improve (Carnegie Mellon, n.d.). As the goal of formative evaluation is to improve future performance, a mark or grade is not usually included (Gaberson, Oermann & Schellenbarger, 2015; Marsh et al., 2005). Formative evaluation, sometimes referred to as mid-term evaluation, should precede final or summative evaluation.

Summative evaluation summarizes how students have or have not achieved the outcomes and competencies stipulated in course objectives (Carnegie Mellon, n.d.), and includes a mark or grade. Summative evaluation can be completed at mid-term or at end of term. Both formative and summative evaluation consider context. They can include measurement and assessment methods noted previously as well as staff observations, written work, presentations and a variety of other measures.

Whether the term measurement, assessment or evaluation is used, the outcome criteria or what is expected must be defined clearly and measured fairly. The process must be transparent and consistent. For all those who teach and learn in health care fields, succeeding or not succeeding has profound consequences.

Creative Strategies

The Experience of Being Judged

Clinical teachers measure (quantify), assess (infer) and evaluate (judge). Tune in to a time in your own learning or practice where your performance was measured. The experience of having others who are in positions of power over us make inferences and judgments about what we know can be both empowering and disempowering. Reflect on an occasion when you were evaluated. Did the evaluation offer a balanced view of your strengths and weaknesses? Did you find yourself focusing more on the weaknesses than on the strengths? How can our own experiences with being judged help us be better teachers?

Students also bring with them experiences of being judged. One helpful strategy is to have them share their best and worst evaluation experiences. Focus the discussion on the factors that made these experiences the best or the worst, to help learners reveal their fears. Consider asking learners to draw a picture of their experience before they reflect and discuss.

Feedback

Feedback differs from assessment and evaluation. Assessment requires instructors to make inferences and evaluation requires them to make judgments. Feedback is non-judgmental; it requires instructors to provide learners with information that facilitates improvement (Concordia University, n.d.). Feedback should focus on tasks rather than on individuals, be specific, and link directly to learners’ personal goals (Archer, 2010).

Periodic, timely constructive feedback that recognizes both strengths and areas for improvement is perceived by students as encouraging and helpful in bolstering their confidence and independence (Bradshaw & Lowenstein, 2014). The tone of verbal or written feedback should always communicate respect for the student and for any work done. The feedback should be specific enough that students know what to do, but not so specific that the work is done for them (Brookhart, 2008).

Arthur Chickering and Zelda Gamson (1987, p. 1) identified seven seminal principles of good practice in undergraduate education.

  1. Encourages contact between students and faculty.
  2. Develops reciprocity and cooperation among students.
  3. Encourages active learning.
  4. Gives prompt feedback.
  5. Emphasizes time on task.
  6. Communicates high expectations.
  7. Respects diverse talents and ways of learning.

All these principles should be considered when providing feedback to students in the clinical area. Certainly “gives prompt feedback” is particularly relevant. The more time that passes before you give feedback on a learning experience, the more difficult it becomes to remember details and to provide effective feedback (Gaberson, Oermann & Schellenbarger, 2015).

Including students’ self-assessments when providing feedback is a critical element in the process. Throughout their careers, health professionals are encouraged to reflect on their own practice. This needed reflectivity can be developed by opening any feedback session with open-ended questions that invite learners to share their reflections and self-assessment. This strategy may soften perceptions of harshness associated with corrective feedback and may bring unexpected questions and issues into the discussion (Ramani & Krackov, 2012).

All too often feedback is viewed as educator-driven, with instructors assuming primary responsibility for initiating and directing a session. A more learner-centred approach encourages students to take a central role in the process and to seek out opportunities to gather feedback from instructors and others in the practice area (Rudland et al., 2013).

Creative Strategies

Beyond just ‘Good Job’ or ‘Needs Work’

When offering feedback, try these five simple steps to go beyond just ‘good job’ or ‘needs work.’

  1. Affirm positive aspects of what a student has done well.
  2. Explore the student’s own understanding and feelings about the experience.
  3. Pick up on any area the student identifies as needing work.
  4. Identify any additional areas where the student needs to improve, including an explanation of why these are important.
  5. Provide opportunity for the student to reflect and respond (in writing if possible) to the feedback.

Grading

Grading, whether with a numerical value, letter grade or pass/fail designation, indicates the degree of accomplishment achieved by a learner. Differentiating between norm-referenced grading and criterion-referenced grading is important. Norm-referenced grading evaluates student performance in comparison to that of other students in a group or program, determining whether the performance is better than, worse than or equivalent to that of other students (Gaberson, Oermann & Schellenbarger, 2015). Criterion-referenced grading evaluates student performance in relation to predetermined criteria and does not consider the performance of other students (Gaberson, Oermann & Schellenbarger, 2015).

Criterion-referenced grading reflects only individual accomplishment. If all the participants in a learning group demonstrate strong clinical skills, they all earn top grades. In contrast, a learner’s grade in norm-referenced grading reflects accomplishment in relation to others in the group. Only a select few can earn top grades, most will receive mid-level grades, and at least some will receive failing grades. Norm-referenced grading is based on the symmetrical statistical model of a bell or normal distribution curve.
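To make the contrast concrete, the short Python sketch below grades the same set of scores both ways. It is a minimal illustration only: the scores, the letter-grade cut-offs and the z-score boundaries for the curve are all hypothetical assumptions, not standards drawn from any program.

```python
# Minimal sketch contrasting criterion-referenced and norm-referenced grading.
# All scores, cut-offs and curve boundaries below are hypothetical.
from statistics import mean, stdev

scores = [62, 71, 75, 78, 80, 84, 88, 93]  # hypothetical clinical exam scores


def criterion_grade(score):
    """Criterion-referenced: compare each student to fixed cut-offs only."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"


def norm_grade(score, group):
    """Norm-referenced: position relative to the group's mean sets the grade."""
    z = (score - mean(group)) / stdev(group)  # standard score within the group
    if z >= 1.0:
        return "A"
    elif z >= 0.0:
        return "B"
    elif z >= -1.0:
        return "C"
    return "D"


for s in scores:
    print(f"{s}: criterion={criterion_grade(s)} norm={norm_grade(s, scores)}")
```

Under the criterion-referenced function, every student who reaches a cut-off earns that grade, even if the whole group excels; under the norm-referenced function, the grade a score earns shifts whenever the group’s mean and spread shift.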

Advantages of norm-referenced grading include the opportunity to compare students in a particular location with national norms; to identify assignments that are too difficult or too easy; and to monitor grade distributions, such as too many students receiving high or inflated grades (Centre for the Study of Higher Education, 2002). The disadvantages of norm-referenced grading centre on the notion that one student’s achievements, successes and even failure can depend unfairly on the performances of others. Grade inflation, an upward trend in the grades awarded to students, has led many programs in the health disciplines to establish rigorous admission requirements and to adopt a pass/fail grading approach.

Criterion-referenced grading judges student achievement against objective criteria outlined in course objectives and expected outcomes, without consideration for what other students have or have not achieved. The process is transparent and students can link their grades to their performance on predictable and set tasks (Centre for the Study of Higher Education, 2002). In turn, this approach can consider individual student learning needs and build in opportunities for remediation when needed (Winstrom, n.d.). One disadvantage of criterion-referenced grading is the need for more instructor time for grading. Also, awarding special recognition with prizes or scholarships to excelling students may not be as clear-cut when students are not compared to peers.

Creative Strategies

Can All Students be Above Average?

Consider the advantages and disadvantages of evaluation approaches that are norm-referenced (comparing students to other students) and criterion-referenced (comparing students to set criteria). Discuss with your students when comparing their achievements with others in the group can be useful and when evaluating performance only in relation to set criteria can be useful. How can clinical teachers incorporate ideas from both approaches into practice?

Methods of Evaluating Students

Professional expectations dictate that all health care practitioners must demonstrate prescribed proficiencies. Assessing, evaluating, providing feedback and ultimately assigning a grade to students in clinical courses requires teachers to implement a variety of different evaluation methods. Going beyond measuring students’ performance on standardized tests and checklists is essential. Here we discuss methods of evaluating students that invite collaboration, tap into what students know, and identify future learning needs.

Instructor Observation, Self-Assessment, Peer Assessment, Anecdotal Notes

Instructor observation is one of the most commonly implemented methods of evaluating students. Instructor observation, also referred to as clinical performance assessment, provides important information about contextual aspects of a learning situation (O’Connor, 2015). Knowing the context of why a student acted in a particular way can provide more complete understanding of behaviour. If a task was not completed on time, knowing that the student reasoned it was more important to stop and listen to clients’ concerns can help instructors make inferences about students’ strengths and weaknesses. And yet, anxiety surrounding the experience of being observed is well known to all of us. At what point is the stress of achieving course outcomes equivalent to the stress inherent in actual practice conditions? Does performance anxiety help or hinder evaluation?

Performance anxiety can be expected during instructor-observed activities (Cheung & Au, 2011; Weeks & Horan, 2013; Welsh, 2014). Instructional strategies that decrease performance anxiety include 1) demonstrating skills with supplemental return sessions in laboratory settings before students complete skills in clinical settings and 2) arranging opportunities for peers to observe and evaluate one another. Engaging students in non-evaluated discussion time can also help reduce their anxiety (Melrose & Shapiro, 1999). Further, inviting students to complete a self-assessment of any instructor-observed activity can help make the experience collaborative.

Self-assessment opportunities can be made available and acknowledged to help students develop critical awareness and reflexivity (Dearnley & Meddings, 2007). Self-assessment is a necessary skill for lifelong learning (Boud, 1995). Practitioners in self-regulating health professions are required to self-assess. When students become familiar with the process during their education, they enter their profession with a stronger capacity for assessing and developing needed competencies (Kajander-Unkuri et al., 2013).

Self-assessment can shed light on the incidental, surprise, or unexpected learning (chapter 3) that can occur beyond the intended goals and objectives of a clinical course. Pose questions such as “What surprised you when …?” or “Talk about what happened that you didn’t expect when …” Encouraging students to identify and then discuss their incidental learning in individual ways helps build confidence.

A cautionary note: self-assessments can be flawed. The most common flaw is that people often overrate themselves, inaccurately judging their abilities to be above average (Davis et al., 2006; Dunning, Heath & Suls, 2004; Mort & Hansen, 2010; Pisklakov, Rimal & McGuirt, 2014). Students may not accurately identify areas of weakness (Regehr & Eva, 2006) and may overestimate their skills and performance (Baxter & Norman, 2011; Galbraith, Hawkins & Holmboe, 2008). Students who are least competent in their studies are also least able to self-assess accurately (Austin & Gregory, 2007; Colthart et al., 2008). Despite these flaws, inviting students to actively contribute their perceptions of what they have learned and what they still do not know is a critical aspect of evaluation.

Peer assessment, where individuals of similar status evaluate the performance of their peers and provide feedback, can also help students develop a critical attitude towards their own and others’ practice (Maas et al., 2014; Sluijsmans, Van Merriënboer, Brand-Gruwel & Bastiaens, 2003). Advantages of peer assessment include opportunities for students to think more deeply about the activity being assessed, to gain insight into how others tackle similar problems, and to give and receive constructive criticism (Rush, Firth, Burke & Marks-Maran, 2012).

Disadvantages include peers having limited knowledge of a situation, showing bias towards their friends, and hesitating to award low marks for poor work for fear of offending peers (Rush, Firth, Burke & Marks-Maran, 2012). Personalities or learning styles may not be compatible among peers, and students may feel they spend less individualized time with instructors when being reviewed by peers (Secomb, 2008). Instructors need to remain involved with any peer assessment activities in order to correct inaccurate or insufficient peer feedback (Hodgson, Chan & Liu, 2014).

Peer and self-assessments often differ from clinical teachers’ assessments, indicating that neither of these can substitute for teacher assessment (Mehrdad, Bigdeli & Ebrahimi, 2012). Even though peer assessments of students’ clinical performance cannot be expected to provide a complete picture of students’ strengths and areas needing improvement, they are a useful evaluation method and one that should be incorporated whenever possible. When students step into the role of evaluator, either for themselves or for others at a similar stage of learning, they gain a new perspective on the teaching role. This familiarity may help them feel they are actively participating in the evaluative process for themselves and others.

Anecdotal notes are the collections of information that instructors record, either by hand or electronically, to describe student performance in clinical practice (Hall, 2013). Notes are usually completed daily or weekly on all students and provide a snapshot of each student’s range of clients and skills. Instructors are expected to complete anecdotal notes after observing a student complete a client care procedure or report. Notes are also completed after incidents in which students have behaved in unusual or concerning ways, such as having difficulty completing previously learned skills, showing poor decision making, appearing unprepared, or behaving unprofessionally (Gardner & Suplee, 2010).

Each individual anecdotal note should be completed as soon as possible after observing a student’s performance or a concerning incident, and should address only that one performance or incident. Each note should include a description of the client and the required skills as well as objective observations of the student behaviours actually seen and heard by the instructor. Individual anecdotal notes are narrative accounts of an experience at one point in time and should be shared with students (Gaberson, Oermann & Schellenbarger, 2015; O’Connor, 2015). Many instructors invite students to respond or add to anecdotal notes after students review the notes and reflect on the comments.

Cumulatively, individual anecdotal notes can be reviewed over time for patterns of behaviour useful in evaluating student progress and continued learning needs. Anecdotal notes should be retained after courses end, as disputes over clinical grades may occur (Heaslip & Scammell, 2012). Anecdotal notes need not be mere descriptions of students’ behaviour; they can and should also include the specific suggestions and guidance that teachers provide to support their students towards success.
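For instructors who keep anecdotal notes electronically, a structured record can help ensure that each note captures the elements described above. The Python sketch below is one hypothetical way to model such a record; the field names and sample content are illustrative assumptions, not a required documentation format.

```python
# Hypothetical structure for an electronic anecdotal note; field names are
# illustrative, not drawn from any program's required documentation form.
from dataclasses import dataclass
from datetime import date


@dataclass
class AnecdotalNote:
    student: str
    observed_on: date
    client_and_skills: str      # description of the client and required skills
    observed_behaviour: str     # only what the instructor actually saw and heard
    guidance_given: str         # specific suggestions offered to support success
    student_response: str = ""  # completed after the student reviews the note


note = AnecdotalNote(
    student="J. Doe",
    observed_on=date(2015, 10, 14),
    client_and_skills="Post-operative client requiring a sterile dressing change",
    observed_behaviour="Verified client identity; maintained the sterile field",
    guidance_given="Review agency standards for documenting wound assessment",
)
```

Keeping one record per observed performance or incident, and sharing it with the student for response, preserves the one-note, one-incident principle while leaving a reviewable trail for spotting patterns over time.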

Records of students’ assignments should also be retained. These records can show the different opportunities students were given to demonstrate required skills, and can illustrate the kinds of situations in which students performed well or poorly. Such records have also been used to defend instructors’ decisions to fail students when those students assert that their assignments were overly difficult (O’Connor, 2015).

Creative Strategies

Balancing Instructor, Student and Peer Assessments

Imagine creating three piles of documentation for each student in a clinical course. One pile contains instructor observations and anecdotal notes. The second pile contains student self-assessments and responses to their instructors’ anecdotal notes. The third pile contains peer assessments of student work. Are the piles balanced and equal? Should they be? What, if any, additional opportunities could be built into your teaching practice to balance instructor, peer and self-assessments?

Learning Contracts

Adult educator Malcolm Knowles (1975, p. 130) explains that a learning contract is a “… means of reconciling the ‘imposed’ requirements from institutions and society with the learners’ need to be self-directing. It enables them to blend these requirements in with their own personal goals and objectives, to choose ways of achieving them and the measure of their own progress toward achieving them.” In other words, the goal of any learning contract is to promote learner self-direction, autonomy and independence. As Knowles emphasizes, learning contracts must include what is to be learned, how it will be learned, and how that learning will be evaluated.
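Knowles’s three required elements translate naturally into a simple template. The Python sketch below is a minimal illustration of one hypothetical contract record; the key names and sample entries are assumptions for demonstration only, not a format prescribed by any program.

```python
# Hypothetical learning contract as a simple record; a contract is complete
# only when all three of Knowles's elements are present.
REQUIRED_ELEMENTS = ("what_to_learn", "how_to_learn", "how_to_evaluate")

contract = {
    "what_to_learn": "Administer subcutaneous injections safely",
    "how_to_learn": "Supervised lab practice; review the agency procedure manual",
    "how_to_evaluate": "Two instructor-observed administrations against the skills checklist",
}

# Flag any of the three elements that are missing or left blank.
missing = [element for element in REQUIRED_ELEMENTS if not contract.get(element)]
if missing:
    raise ValueError(f"Incomplete learning contract; missing: {missing}")
```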

As part of continuing competence requirements, most health professionals are expected to engage in self-directed learning activities. These demonstrate to regulatory bodies that health professionals can identify what they need to know, how they will learn it, and how they will evaluate their learning. Initiating learning contracts with students can help prepare them for this practice requirement.

Traditionally, learning contracts have been used mainly with low-performing students who are struggling to meet clinical objectives and standards or whose practice is perceived as unsafe (Frank & Scharff, 2013; Gregory, Guse, Dick, Davis & Russell, 2009). In these instances, instructors must clearly identify the outcomes to be addressed and work collaboratively with students to determine the resources and assistance needed to address the issues (Atherton, 2013). A contract must be signed by both instructor and student, and both must document the progress made, or not made, after each clinical experience.

Extending the idea of learning contracts beyond struggling students is becoming more common. Learning contracts can be a teaching strategy that fosters motivation and independent learning in students in nursing (Chan & Wai-tong, 2000; Timmins, 2002), respiratory care (Rye, 2008), physiotherapy (Ramli, Joseph & Lee, 2013), and clinical psychology (Keary & Byrne, 2013). Although incorporating learning contracts for all students and not just those who struggle may initially seem time-consuming, the end result can be rewarding.

Creative Strategies

Model Self-Direction in a Learning Contract

Model the kind of self-direction that professionals need in everyday practice by creating your own learning contract. Think about one of your own learning needs. Write down what you need to learn, how you will learn it, and how you will evaluate the learning.

Keep the learning need simple, manageable and easy to understand. If your regulatory body requires you to use learning contracts or a similar process, use the language and protocols required by your profession. Share your contract with students early in the course and encourage them to support and critique your progress. If you are not comfortable with sharing your own learning contract, create one that illustrates how a member of your professional group might learn.

Create a Learning Contract Gallery

Students can learn from viewing learning contracts prepared by classmates or former students. Why not create an online gallery of effective learning contracts that students can browse through as they consider developing their own? They will see examples of elements of an effective learning contract as well as sample formats that could be adapted for their own personalized version.

Failure

In spite of clear objectives, thoughtful teaching strategies, and a supportive learning environment, some learners are simply not able to demonstrate the competencies required to pass a clinical course. The experience of failure can be devastating for all involved (Black, Curzio & Terry, 2014; Larocque & Luhanga, 2013).

The accepted norm within clinical teaching is that, at the beginning of any educational event, participants will be thoroughly informed about both the learning outcomes they are expected to achieve and the specific institutional policies that apply when those outcomes are not met. Similarly, learners must be informed promptly when an evaluator begins to notice problems with learners’ progress. Typically, learners are informed of problems through collaborative formative evaluations and feedback, long before a final failing summative evaluation.

The daily anecdotal notes or records of learner actions mentioned above are essential throughout any evaluative process, but they become particularly important when a learner is in danger of failing. Most formative or mid-term evaluation instruments are designed to provide feedback on learning progress and identify further work needed. Summative or final evaluations describe the extent to which learners have achieved course objectives. Thus, when a learner is not progressing satisfactorily, a prompt, documented learning contract or plan can be invaluable in identifying specific behaviours that the instructor and student agree to work on together. Instructors’ supervisors must be informed about any students who are struggling or unsafe and they must be kept up to date throughout the process.

In some cases, learners may choose not to collaborate on a remedial learning contract or plan. Documenting both student and instructor perceptions of this process is important as well. Providing students with information about institutional procedures for withdrawing from the learning event or appealing a final assessment is essential in demonstrating an open, fair and transparent evaluation process.

Given the emotionally charged nature of clinical failure, those involved in the process may not be able to immediately identify how the experience is one of positive growth and learning. In fact, having opportunities in place to talk and debrief may help both students and instructors. For university, college and technical institute students, counselling services are generally available through their institution. For instructors, both full-time continuing faculty and those employed on a contract or sessional basis, counselling services may be available from an employee assistance program.

Knowing that students may fail and that counselling services might help, you can distribute pamphlets outlining contact information for those counselling services to all students in the group at the beginning of the course. If the information is already at hand, referring an individual learner to the service when needed normalizes the suggestion. In some cases, without compromising confidentiality, actually accompanying an individual to a counselling appointment or walking with them into the counselling services area can begin to ease the devastation.

Failure to fail. Clinical instructors and preceptors can be reluctant to fail students. The term failure to fail (Duffy, 2003; 2004) describes a growing trend towards passing students who do not meet course objectives and outcomes. In one study, “… 37% of mentors [preceptors] passed student nurses, despite concerns about competencies or attitude, or who felt they should fail” (Gainsbury, 2010).

One key reason clinical instructors fail to fail is lack of support (Black, Curzio & Terry, 2014; Bush, Schreiber & Oliver, 2013; Duffy, 2004; Gainsbury, 2010; Larocque & Luhanga, 2013). When universities overturn failure decisions on appeal and require detailed written evidence justifying an instructor’s decision to fail, clinical instructors can feel unsupported (Gainsbury, 2010). Further, as caring health professionals, instructors can feel that failing a student is an uncaring action (Scanlan, Care & Gessler, 2001). Many also fear that a student’s failure will reflect badly on the instructor and that others will judge them as bad teachers (Gainsbury, 2010).

However, health professionals have a duty of care to protect the public from harm. When students whose practice is unsafe and who fail to meet required course outcomes are not assigned a failing grade, instructors must question whether they are neglecting their duty of care (Black, Curzio & Terry, 2014). The reputation of a professional program can also be diminished by failing to fail a student (Larocque & Luhanga, 2013). Viewing clinical failure in a positive light is difficult for both students and instructors. Learning from the experience is what counts. As Samuel Beckett (1983) wrote, “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.”

Creative Strategies

Fail Better

How can clinical instructors follow Samuel Beckett’s sage advice and “fail better”? To begin, have a clear working knowledge of course outcomes. Next, maintain detailed documentation that gives an objective and balanced picture of student behaviours and agreed-upon strategies for improving these. Connect with any available support services for students and for instructors. Finally, consider the implications of failing to fail.

Methods of Evaluating Teaching

Clinical programs in the health professions usually stipulate the specific assessment tools used to evaluate clinical teachers. Commonly, students complete anonymous questionnaires at the end of their courses and supervisors complete standardized performance appraisals. Clinical instructors employed as full-time continuing faculty members may be involved in constructing these tools, but sessional instructors or those employed on a contract basis are seldom consulted. While clinical instructors may have little control over the evaluation tools their program requires, performance appraisal documents are likely to include opportunities for self-assessment. Self-assessments can be framed as teaching portfolios. Just as accurate student evaluation requires information collected from a variety of assessment tools over a period of time, so does instructor evaluation (Billings & Halstead, 2012).

Teaching Portfolios

Teaching portfolios, also called teaching dossiers or teaching profiles, are collections of evidence gathered over time and used to highlight teaching strengths and accomplishments (Barrett, n.d.; Edgerton, Hutchings & Quinlan, 2002; Seldin, 1997; Shulman, 1998). The collection can be paper based or electronic. Teaching portfolios can usually be integrated into the self-assessment sections of performance appraisal requirements. No two teaching portfolios are alike, and the pieces of content can be arranged in creative and unique ways.

Portfolios usually begin with an explanation of the instructor’s teaching philosophy. In chapter 2, we provided suggestions for crafting a personal teaching philosophy statement. When the purpose of a portfolio is to contribute self-assessment information to performance appraisals, goals should also be explained. Reflective inquiry is a critical element in any portfolio and reflections about teaching approaches that failed, as well as those that succeeded, should be included (Lyons, 2006). Both goals that have been accomplished and specific plans for accomplishing future goals should be noted. As clinical teachers must maintain competencies in their clinical practice and their teaching practice, another segment of the portfolio could list certifications earned; workshops, conferences or other educational events attended; papers written about clinical teaching in a course; and awards received.

Teaching products could constitute another segment, such as writing a case study about a typical client in your practice setting; developing a student orientation module for your students; crafting a student learning activity such as a game or puzzle; devising an innovative strategy to support a struggling student; or demonstrating a skill on video. Mementos such as thank-you messages from students, colleagues, agency staff or clients could also be included. Distinguish between pieces of content that can be made public and those that should be kept as private records. For example, a student learning activity might be made public by publishing it in a journal article or teaching website, while mementos would be private and would likely only be shared with supervisors.

If your program does not provide students with an opportunity to offer formative evaluation to instructors about how the course is going, create this opportunity. Rather than waiting for student feedback at the end of the course, seek this feedback at mid-term. Provide a mechanism that is fully anonymous, for example an online survey, where students can comment on what is going well, what is not going well, and what advice they would like to give their instructor. In your portfolio, discuss this formative feedback, your responses to the feedback and your evaluation of the process.

As the above examples illustrate, the possibilities for demonstrating instructional achievements are limitless. Each item in your portfolio should include a brief statement explaining why this piece of content is included and how it reflects a valid and authentic assessment of teaching achievements (Barrett, n.d.). Two other pieces of content commonly included in teaching portfolios are responses from student questionnaires and peer assessments.

Responses from student questionnaires. Students’ assessments of their instructors’ teaching effectiveness are most often collected through anonymous questionnaires at the end of the course. Anonymity is important as students can fear that rating their instructor poorly could affect their grade. Completing the questionnaire is optional and instructors must not be involved in administering or collecting the questionnaires (Center for Teaching and Learning, n.d.).

Research indicates that instructors are more likely to receive higher ratings from students who are highly motivated and interested in the course content (Benton & Cashin, 2012). Although student ratings offer valuable insight into an instructor’s enthusiasm and engagement of students, students are not subject matter experts and therefore cannot evaluate the accuracy and depth of their instructor’s knowledge (Oermann, 2015). In general, college students’ ratings of their instructors tend to be more statistically reliable, valid and free from bias than any other data used for instructor evaluation. They are, however, only one source of data and should be used in combination with other sources of information (Benton & Cashin, 2012). Including samples of responses from student questionnaires is expected in most teaching portfolios.

Peer assessments. If peer assessment of instructors is not usually part of your program, consider including peer assessments in your teaching portfolio. Acquire permission for peer assessment from both program and clinical site administrators. Peer observers can be other teachers in the program or staff at the clinical agency and should be provided with an evaluation instrument. For example, Chickering & Gamson’s (1987) seven principles can guide peer observers in framing their feedback. Introduce the peer observer to students in the group and relevant agency staff members. Ensure students understand that the purpose of the observation is instructor evaluation, not student evaluation (Center for Teaching and Learning, n.d.).

Creative Strategies

What’s in YOUR Teaching Portfolio?

If you have not done so already, initiate a teaching portfolio and keep adding to it throughout your teaching career. Visualize a large black artist’s case that holds all the items an artist would use to illustrate or sell work. For example, a portrait painter’s case might contain a black and white sketch of a young girl, a full-colour family portrait, and a detailed replica of a classic piece. Each item would be individual, would have personal relevance to the artist, and would reflect the artist’s skills. Similarly, your teaching portfolio will contain items that illustrate your individual interests and expertise. What’s in YOUR teaching portfolio?

Conclusion

Evaluating our students and ourselves is a critical aspect of clinical teaching. In this chapter we discussed methods of evaluating students and methods of evaluating teaching. The process of evaluating students requires clinical teachers to make judgments about whether or not students are meeting objectives, based on information gathered and recorded throughout the course. Clinical teachers measure attributes of learning with standardized instruments and assess learning through inferences about how students are applying theory to practice, based on observation in different situations.

Meaningful evaluation goes beyond identifying students’ progress in relation to course objectives and outcomes. Deep learning occurs when teachers provide their students with specific and individualized feedback. Students’ own self-assessment of their strengths and plans for improvement should frame any feedback conversation. Ultimately, instructors must assign grades. Grades can be determined through a norm-referenced approach that compares students to other students or through a criterion-referenced approach that compares students to set criteria.

Clinical teachers can evaluate students using instructor observations, students’ self-assessments and peer assessments. Daily anecdotal notes should be kept and shared with students, recording instructors’ objective observations of each student’s clinical performance. Learning contracts can be co-created with all students, although they have traditionally been used mainly with students whose practice is unsafe or who are not meeting course objectives.

Student failure is a devastating experience. All too often clinical teachers and preceptors fail to fail students whose practice is unsafe or who have not met course outcomes. Evaluating students requires that those involved consider the duty of care for all health professionals to protect the public from harm.

We also discussed methods of evaluating our own teaching. We suggested creating a teaching portfolio as a method of self-assessment. Teaching portfolios can include a statement of personal teaching philosophy, responses from student questionnaires, and peer assessments. They can showcase a variety of different achievements and reflections.

References

Archer, J. (2010). State of the science in health profession education: Effective feedback. Medical Education, 44(1), 101–108. doi: 10.1111/j.1365-2923.2009.03546.x

Assessment. (n.d.). In The glossary of education reform. Retrieved from http://edglossary.org/

Atherton, J. (2013). Learning and teaching: Learning contracts [Fact sheet]. Retrieved from http://www.learningandteaching.info/teaching/learning_contracts.htm

Austin, Z. & Gregory, P. (2007). Evaluating the accuracy of pharmacy students’ self-assessment skills. American Journal of Pharmaceutical Education, 71(5), 1–8.

Barrett, H. (n.d.). Dr. Helen Barrett’s electronic portfolios [website]. Retrieved from http://electronicportfolios.com/

Baxter, P., & Norman, G. (2011). Self-assessment or self-deception? A negative association between self-assessment and performance. Journal of Advanced Nursing, 67(11), 2406–2413. doi:10.1111/j.1365-2648.2011.05658.x

Beckett, S. (1983). Worstward ho. London: John Calder.

Benton, S. & Cashin, W. (2012). Student ratings of teaching: A summary of research and literature. IDEA paper #50. Manhattan, KS: The IDEA Center. Retrieved from http://www.ntid.rit.edu/sites/default/files/academic_affairs/Sumry%20of%20Res%20%2350%20Benton%202012.pdf

Billings, D. & Halstead, J. (2012). Teaching in nursing: A guide for faculty (4th ed.). St Louis: Elsevier.

Black, S., Curzio, J. & Terry, L. (2014). Failing a student nurse: A new horizon of moral courage. Nursing Ethics, 21(2), 224–238.

Boud, D. (1995). Enhancing learning through self-assessment. London: Kogan Page.

Bradshaw, M. & Lowenstein, A. (2014). Innovative teaching strategies in nursing and related health professions education (6th ed.). Burlington, MA: Jones & Bartlett.

Brookhart, S. (2008). How to give effective feedback to your students. Alexandria, VA: Association for Supervision and Curriculum Development.

Bush, H., Schreiber, R. & Oliver, S. (2013). Failing to fail: Clinicians’ experience of assessing underperforming dental students. European Journal of Dental Education, 17(4), 198–207. doi: 10.1111/eje.12036

Carnegie Mellon. (n.d.). What is the difference between formative and summative assessment? [Fact sheet]. Retrieved from the Eberly Center for Teaching Excellence, Carnegie Mellon University, Pittsburgh, PA. http://www.cmu.edu/teaching/assessment/basics/formative-summative.html

Center for Teaching and Learning. (n.d.). Peer observation guidelines and recommendations [Fact sheet]. Minneapolis, MN: University of Minnesota. Available at http://www1.umn.edu/ohr/teachlearn/resources/peer/guidelines/index.html

Centre for the Study of Higher Education. (2002). A comparison of norm-referencing and criterion-referencing methods for determining student grades in higher education. Australian Universities Teaching Committee. Retrieved from http://www.cshe.unimelb.edu.au/assessinglearning/06/normvcrit6.html

Chan, S. & Wai-tong, C. (2000). Implementing contract learning in a clinical context: Report on a study. Journal of Advanced Nursing, 31(2), 298–305.

Cheung, R. & Au, T. (2011). Nursing students’ anxiety and clinical performance. Journal of Nursing Education, 50(5), 286–289. doi: 10.3928/01484834-20110131-08

Chickering, A. & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. American Association for Higher Education AAHE Bulletin, 39(7), 3–7.

Colthart, I., Bagnall, G., Evans, A., Allbutt, H., Haig, A., Illing, J. & McKinstry, B. (2008). The effectiveness of self-assessment on the identification of learning needs, learner activity, and impact on clinical practice: BEME Guide No. 10. Medical Teacher, 30, 124–145.

Concordia University. (n.d.). How to provide feedback to health professions students [Wiki]. Retrieved from http://www.wikihow.com/Provide-Feedback-to-Health-Professions-Students

Davis, D., Mazmanian, P., Fordis, M., Van Harrison, R., Thorpe, K. & Perrier, L. (2006). Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. The Journal of the American Medical Association, 296(9), 1094–1102.

Dearnley, C. & Meddings, F. (2007). Student self-assessment and its impact on learning: A pilot study. Nurse Education Today, 27(4), 333–340.

Duffy, K. (2003). Failing students: A qualitative study of factors that influence the decisions regarding assessment of students’ competence in practice. Glasgow, UK: Glasgow Caledonian Nursing and Midwifery Research Centre. Retrieved from http://www.nm.stir.ac.uk/documents/failing-students-kathleen-duffy.pdf

Duffy, K. (2004). Mentors need more support to fail incompetent students. British Journal of Nursing, 13(10), 582.

Dunning, D., Heath, C. & Suls, J. (2004). Flawed self-assessment: Implications for health education and the workplace. Psychological Science in the Public Interest, 5(3), 69–106. Retrieved from https://faculty-gsb.stanford.edu/heath/documents/PSPI%20-%20Biased%20Self%20Views.pdf

Edgerton, R., Hutchings, P. & Quinlan, K. (2002). The teaching portfolio: Capturing the scholarship of teaching. Washington, DC: American Association for Higher Education.

Emerson, R. (2007). Nursing education in the clinical setting. St Louis: Mosby.

Frank, T. & Scharff, L. (2013). Learning contracts in undergraduate courses: Impacts on student behaviors and academic performance. Journal of the Scholarship of Teaching and Learning, 13(4), 36–53.

Gaberson, K., Oermann, M. & Schellenbarger, T. (2015). Clinical teaching strategies in nursing (4th ed.). New York: Springer.

Gainsbury, S. (2010). Mentors passing students despite doubts over ability. Nursing Times, 106(16), 1.

Galbraith R., Hawkins R. & Holmboe E. (2008). Making self-assessment more effective. Journal of Continuing Education in the Health Professions, 28(1), 20–24.

Gardner, M. & Suplee, P. (2010). Handbook of clinical teaching in nursing and health sciences. Sudbury, MA: Jones & Bartlett.

Gregory, D., Guse, L., Dick, D., Davis, P. & Russell, C. (2009). What clinical learning contracts reveal about nursing education and patient safety. Canadian Nurse, 105(8), 20–25.

Hall, M. (2013). An expanded look at evaluating clinical performance: Faculty use of anecdotal notes in the US and Canada. Nurse Education in Practice, 13, 271–276.

Heaslip, V. & Scammell, J. (2012). Failing under-performing students: The role of grading in practice assessment. Nurse Education in Practice, 12(2), 95–100.

Hodgson, P., Chan, K. & Liu, J. (2014). Outcomes of synergetic peer assessment: First-year experience. Assessment and Evaluation in Higher Education, 39(2), 168–179.

Kajander-Unkuri, S., Meretoja, R., Katajisto, J., Saarikoski, M., Salminen, L., Suhonen, R. & Leino-Kilpi, H. (2013). Self-assessed level of competence of graduating nursing students and factors related to it. Nurse Education Today, 34(5), 795–801.

Keary, E. & Byrne, M. (2013). A trainee’s guide to managing clinical placements. The Irish Psychologist, 39(4), 104–110.

Knowles, M. (1975). Self-directed learning. New York: Association Press.

Larocque, S. & Luhanga, F. (2013). Exploring the issue of failure to fail in a nursing program. International Journal of Nursing Education Scholarship, 10(1), 1–8.

Lyons, N. (2006). Reflective engagement as professional development in the lives of university teachers. Teachers and Teaching: Theory and Practice, 12(2), 151–168.

Marsh, S., Cooper, K., Jordan, G., Merrett, S., Scammell, J. & Clark, V. (2005). Assessment of students in health and social care: Managing failing students in practice. Bournemouth, UK: Bournemouth University Publishing.

Maas, M., Sluijsmans, D., Van der Wees, P., Heerkens, Y., Nijhuis-van der Sanden, M. & van der Vleuten, C. (2014). Why peer assessment helps to improve clinical performance in undergraduate physical therapy education: A mixed methods design. BMC Medical Education, 14, 117. doi:10.1186/1472-6920-14-117

Mehrdad, N., Bigdeli, S. & Ebrahimi, H. (2012). A comparative study on self, peer and teacher evaluation to evaluate clinical skills of nursing students. Procedia – Social and Behavioral Sciences, 47, 1847–1852.

Melrose, S. & Shapiro, B. (1999). Students’ perceptions of their psychiatric mental health clinical nursing experience: A personal construct theory explanation. Journal of Advanced Nursing, 30(6), 1451–1458.

Mort, J. & Hansen, D. (2010). First-year pharmacy students’ self-assessment of communication skills and the impact of video review. American Journal of Pharmaceutical Education, 74(5), 1–7.

O’Connor, A. (2015). Clinical instruction and evaluation. Burlington, MA: Jones & Bartlett.

Oermann, M. (2015). Teaching in nursing and role of the educator. New York: Springer.

Pisklakov, S., Rimal, J. & McGuirt, S. (2014). Role of self-evaluation and self-assessment in medical student and resident education. British Journal of Education, Society and Behavioral Science, 4(1), 1–9.

Ramani, S. & Krackov, S. (2012). Twelve tips for giving feedback effectively in the clinical environment. Medical Teacher, 34, 787–791.

Ramli, A., Joseph, L. & Lee, S. (2013). Learning pathways during clinical placement of physiotherapy students: A Malaysian experience of using learning contracts and reflective diaries. Journal of Educational Evaluation in the Health Professions, 10, 6. doi: http://dx.doi.org/10.3352/jeehp.2013.10.6

Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.

Regehr, G. & Eva, K. (2006). Self-assessment, self-direction, and the self-regulating professional. Clinical Orthopaedics and Related Research, 449, 34–38. Retrieved from http://innovationlabs.com/r3p_public/rtr3/pre/pre-read/Self-assessment.Regher.Eva.2006.pdf

Rudland, J., Wilkinson, T., Wearn, A., Nicol, P., Tunny, T., Owen, C. & O’Keefe, M. (2013). A student-centred model for educators. The Clinical Teacher, 10(2), 92–102.

Rush, S., Firth, T., Burke, L. & Marks-Maran, D. (2012). Implementation and evaluation of peer assessment of clinical skills for first year student nurses. Nurse Education in Practice, 12(4), 219–226.

Rye, K. (2008). Perceived benefits of the use of learning contracts to guide clinical education in respiratory care students. Respiratory Care, 53(11), 1475–1481.

Scates, D. E. (1943). Differences between measurement criteria of pure scientists and of classroom teachers. Journal of Educational Research, 37(1), 1–13.

Scanlan, J., Care, D. & Gessler, S. (2001). Dealing with the unsafe student in clinical practice. Nurse Educator, 26(1), 23–27.

Secomb, J. (2008). A systematic review of peer teaching and learning in clinical education. Journal of Clinical Nursing, 17(6), 703–716. doi: 10.1111/j.1365-2702.2007.01954.x

Seldin, P. (1997). The teaching portfolio: A practical guide to improved performance and promotion/tenure decisions (2nd ed.). Bolton, MA: Anker.

Shulman, L. (1998). Teacher portfolios: A theoretical activity. In N. Lyons (Ed.), With portfolio in hand: Validating the new teacher professionalism. New York, NY: Teachers College Press.

Sluijsmans, D., Van Merriënboer, J., Brand-Gruwel, S. & Bastiaens, T. (2003). The training of peer assessment skills to promote the development of reflection skills in teacher education. Studies in Educational Evaluation, 29, 23–42.

Timmins, F. (2002). The usefulness of learning contracts in nurse education: The Irish perspective. Nurse Education in Practice, 2(3), 190–196. doi: 10.1054/nepr.2002.0069

Walsh, T., Jairath, N., Paterson, M. & Grandjean, C. (2010). Quality and safety education for nurses clinical evaluation tool. Journal of Nursing Education, 49(9), 517–522.

Weeks, B. & Horan, S. (2013). A video-based learning activity is effective for preparing physiotherapy students for practical examinations. Physiotherapy, 99, 292–297.

Welsh, P. (2014). How first year occupational therapy students rate the degree to which anxiety negatively impacts on their performance in skills assessments: A pilot study at the University of South Australia. Ergo, 3(2), 31–38. Retrieved from http://www.ojs.unisa.edu.au/index.php/ergo/article/view/927

Winstrom, E. (n.d.). Norm-referenced or criterion-referenced? You be the judge! [Fact sheet]. Retrieved from http://www.brighthubeducation.com/student-assessment-tools/72677-norm-referenced-versus-criterion-referenced-assessments/?cid=parsely_rec

License


Creative Clinical Teaching in the Health Professions (Version 2.1), Copyright © 2015 by Sherri Melrose, Caroline Park and Beth Perry, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
