Small Changes, Big Impact
Importance of learning in an educational institution with Dr. Mahan Kulasegaram
Dr. Rezmovitz: Small Changes, Big Impact: a DFCM podcast. I'm your host, Dr. Jeremy Rezmovitz. In studio today, we have Mahan Kulasegaram, an assistant professor in the Department of Family and Community Medicine. He works as a scientist at the Wilson Centre, the MD program and the Office of Education Scholarship. Today's episode focuses on the importance of learning in an educational institution. I hope you enjoy the show. So, this podcast is called Small Changes, Big Impact. Do you have any small changes you've implemented that you want to talk about today?
Dr. Kulasegaram: Most of the work that I do, speaking professionally about the impact of the research that we do, is small, at least on a paper-to-paper, project-to-project basis. We're not doing any grand experiments that are going to win Nobel prizes.
Dr. Rezmovitz: That wasn't you who did the cookies-before-the-clinic study and the evaluation?
Dr. Kulasegaram: No, no, that was definitely not me, although I must say that's a wonderful study. I'm envious of the authors, not least because it's the most downloaded paper of 2018 in medical education. The work that we do is building a big story, but we're doing it one word or one sentence at a time: collecting information and doing research in order to build a stronger edifice that rests on a deep understanding of the thing we call learning. So the research that I do has been trying to pick at the problem of how we get people to learn in a meaningful way and, most importantly in medical school, which is a big context where I work, apply what they know to future problems or use it to learn a new skill or a new procedure. What we know from the research that has been done in psychology since about the 1900s is that people don't do this intuitively. People have understood this throughout history: learning fun things is fun, but learning hard things is hard. Medicine, as I am given to understand, is quite a difficult thing, especially if you're a first-year undergraduate student trying to figure out acid-base and homeostasis and some pharmacology and some physiology. The whole point, I guess, is to get people to recognize: I don't just know this information, I can use this information. So the small change I'd like to talk about, and it's not just me, this is a collaborative effort: the bigger project was the revision of our Foundations curriculum in the MD program. Some people may be aware of this, some may not, but essentially the first two years of our curriculum got revamped to align with new evidence and findings in educational research on training people for the development of expertise. My small contribution was to think about how we make basic science information relevant, and this has been in support of other people's research as well, especially at the level of assessment. How do you ensure that our assessment makes it clear to learners that basic science information should be used and applied? It's not just an inert thing that sits in the back of your head; it's not just a fact you need to know for your exam. What we've been doing, working with our faculty, is helping them write questions that assess integration, questions that require students to use basic science information in order to come to an answer that involves, if the question has been written well, some application of that basic science knowledge to help frame or solve a clinical task or decision. Not being a clinician, I don't know how to write those questions myself; I can tell you more or less what the questions in principle should tap. But each one of those little multiple choice questions in our program has to be written, delivered and evaluated on that basis. So I work very closely with colleagues in our program to develop our faculty, our teachers, to the stage where they can do that comfortably.
It's an ongoing process, but it's a small thing, because the principle is simply that the question has to require students to activate basic science knowledge and think about the connection between that basic science information and the clinical thing you want them to connect it to, at the level of a single multiple choice question. It's very simple, very straightforward, sometimes difficult to write, but it reinforces what we want to do with our curriculum. Now you scale that one little question up to two years' worth of assessment, and hopefully what you have is a program of assessment that aligns with what we want our learners to do, which is to mobilize and apply their foundational knowledge to understand and solve clinical problems.
Dr. Rezmovitz: It obviously sounds logical, like, yeah, that sounds great, but have you had any pushback?
Dr. Kulasegaram: Yeah, for a number of different reasons, all of them valid. One is that people who are experts don't think about this intuitively. You probably have a ton of basic science knowledge sitting in the back of your head, but you don't need it most of the time. You're operating at the level of "I've seen this case of flu before." You don't need to think about the pathophysiological reasons for why flu manifests the way it does. Most of the time, with your patients, you've got enough experience to perform clinical reasoning without reference to foundational knowledge.
Dr. Rezmovitz: Although recently I went to a talk on vaccination, and they used the pathophysiology of the influenza virus to give me a tidbit of information that helped me sell flu vaccination to my patients.
Dr. Kulasegaram: Oh, neat.
Dr. Rezmovitz: And I'm not looking for my patients to buy in; I want them to own their own health. But when I use this information with them, they were like, "Really?" I said, "Yeah." The information is based on evidence, a really simple study that looked at the burden of illness in relation to people who have cardiac disease and people who don't. From a pathophysiological standpoint, the cascade of catecholamines that comes in response to the flu virus ends up burdening the heart. So if you get the flu vaccine and don't get the flu, you're going to limit the burden on your heart, which means you probably won't end up with a heart attack. It turns out people with heart disease have more heart attacks when they get the flu. Pathophysiology. Science makes sense. So you tell this to a patient. You tell them all the other stuff too, that it's important to vaccinate yourself, that we want 95 percent coverage, that it's herd immunity, but not a lot of people take a flu shot for other people. As soon as you tell them there's a risk to their heart and we can reduce it with the flu vaccine, they're like, huh, so you think I should get it? Well, yes, I think you should get it. It's interesting to see the switch. I've been very successful with that right now. So thank God for science sometimes.
Dr. Kulasegaram: Well, maybe all the time. But I think there's a point there: in circumstances where there is novelty, or a problem where you've got to go back to first principles, you can do that easily, and you understand how to do it. But most of the time you probably won't need to.
Dr. Rezmovitz: And yet I'm using the Krebs cycle again. I am. I just have to bring this in, because maybe we'll have some listeners in year one and year two who are like, "We don't need the Krebs cycle." Well, it turns out magnesium is a vehicle for hydroxylation, I believe; because of its cation status, it can bind easily. So you need magnesium. The reading I've been doing lately on magnesium is that it is so essential, and you can get it through a transdermal approach or an oral approach. And yet people who are fatigued, who have mood issues, who have restless legs syndrome, you're at a magnesium deficit, maybe because your calcium-to-magnesium ratio is out of whack, but people are not getting enough magnesium, I think. It's tied into their vitamin D as well. And all this stuff is predicated upon the Krebs cycle. Who would have ever thought.
Dr. Kulasegaram: That it was useful.
Dr. Rezmovitz: That it was useful. But it turns out it's quite useful in producing ATP. So if you don't have enough magnesium as a cofactor, to bring it in through the membrane, I believe, you're not going to get your ATP. And if you're not going to get your ATP, you're going to be at a deficit for energy. You're going to feel fatigued.
Dr. Kulasegaram: I recall very vividly: my colleague Nikki Woods does research on the instructional design side, and whenever she talks about basic science, the example that always pops up is, well, do we really need to know the Krebs cycle? Then at DFCM day, I think in 2016, a family physician out in the community, nutrition is a specialty focus of hers, stood up and said, well, actually the Krebs cycle is very important for magnesium. It was the first we'd heard of it. I'm like, "Oh, I guess that makes sense." She explained all the things you're talking about, the restless legs syndrome. I'm like, I wonder if I'm at a deficit for magnesium, and...
Dr. Rezmovitz: You probably are.
Dr. Kulasegaram: It fed into my hypochondria very well. But you're right, it's funny: these things pop up and you don't often realize the utility of the science underlying them. Now, that's in practicing physicians. For trainees it's a different story, because the utility of the basic science is actually to help them learn clinical material. It's a scaffold or a ladder on the way to clinical expertise. Often the challenge is that while we can teach in ways that support that integration, our assessments aren't always connected to that idea. They're often assessing isolated facts, or factoids, or information that doesn't reinforce the fact that you have to know the relationship between these things. For faculty this can be a little difficult, because if they don't operate at that level, writing these questions is not intuitive to begin with. The other thing is that writing good assessment questions, in a way that goes beyond just requiring people to recall information or remember the answer, is a skill like any other skill, and most people aren't taught that skill, even if they have an education background or have done faculty development around education. We don't spend a lot of time on that. So it took a long time to push, pull, coax, convince, enhance and support people to get to this point. It took time working with other faculty leaders who had a lot of experience writing multiple choice questions, professional faculty developers, and going to our teaching colleagues where they are, going to their academies or bringing them in at a course level and getting them together. The benefits to our program are a kind of knock-on. One is that you reinforce the curriculum's lofty goals using the assessment. The second is that you ensure assessment quality is very good, and as an assessment researcher primarily, I feel very strongly about that. These things we assess people on are not just hoop-jumping exercises, but things that should help the learners prepare and develop on their road to expertise, tell them what's coming and give them some important feedback. So for those reasons, while people don't have the initial skill set, once you start building that skill set up, people are oftentimes willing to come along for the ride.
Dr. Rezmovitz: So then what's the impact been? Have you seen the impact of the Foundations curriculum yet? You said the small change was adding this in, and it's part of a bigger collaborative effort, but what's the impact now?
Dr. Kulasegaram: I think the impact is, one, to ensure that our curricular goals are not lost in the haze of the assessment, so that people recognize they're being assessed for a real reason. And two, that people, and by people I'm talking about the students here, are able to see the spiral of the curriculum reflected both in the way we do instruction, what we teach, and in the way they're assessed. They get a lot more feedback on different aspects of the curriculum than they may have received before, and they get a lot of experience and practice with applying their knowledge. That's hopefully preparatory for the things they have to face down the line in clerkship, but also at licensing.
Dr. Rezmovitz: Are they getting different types of feedback, other than formative or summative?
Dr. Kulasegaram: They get much more detailed formative feedback.
Dr. Rezmovitz: Formative being like someone had to fill out a form.
Dr. Kulasegaram: No, in the sense that they have... good one. Summative meaning we totaled up their scores on the test. No, we give them more detailed formative feedback. It's broken down by competency, if they so wish. For example, on some of our assessments we ask people to tell us when they're guessing, and then we tell them: here's how you're doing on the questions you're guessing at, and here's how you're doing on the questions you're not guessing at. So they get a bit of self-monitoring, a calibration or self-assessment of whether they know this domain of knowledge or not. They get that type of information, which they may not have had in the previous curriculum.
Dr. Rezmovitz: It's interesting, because obviously the end goal here is to make better doctors.
Dr. Kulasegaram: I would say long term, yeah. We'd settle for better clerks, and then from better clerks to better residents.
Dr. Rezmovitz: Okay. So let's say the goal is better clerks in the MD program. How do you measure that? What's the metric to say that somebody's better?
Dr. Kulasegaram: I think part of it is that we sometimes have to abandon our traditional metrics, because a lot of our traditional metrics, or ways of thinking about education, are about performance, not learning. A classic example: we were talking to some of our faculty, and this was not part of the Foundations curriculum but another significant curricular change, and they were saying, well, this person doesn't perform the way I expect a clerk to perform in April. Well, it's the first time they're coming into this learning space, so of course they're not going to perform, but hopefully they will learn. So it means changing our benchmarks from attaining a criterion to asking whether there is growth. We want growth as opposed to attainment of a specific criterion. We want people to be competent, of course, but I can more or less guarantee that the majority of Canadian medical students graduating across this country are going to meet minimal competence, because we have certifying processes and multiple stages of licensure that ensure that. For all the things people like to nitpick about our culture of medical education, it's pretty good at delivering minimally competent physicians. What we want to do is go beyond that.
Dr. Rezmovitz: Okay. Thank God.
Dr. Kulasegaram: Yeah.
Dr. Rezmovitz: I thought we were going to stay at minimally competent.
Dr. Kulasegaram: No, no, it's not about minimal competence. We eventually want excellence; we want expertise. And I think that's the part where we have to start shifting the metrics. The question now is not whether the students can simply recite knowledge in the context of a [inaudible] performance, but: do they have a different attitude towards seeking feedback? Are they better able to set goals for themselves as a result of the feedback they receive? Are they better able to apply their knowledge? Even if they don't know the answer, can they apply what they knew previously to figure out what the answer could be for that problem? Are they oriented towards building a stronger conceptual understanding, and can we see that? We'll still use traditional metrics to look at that: we're still going to look at assessment data, we're still going to look at evaluation data. But we're also looking at how the students are responding to the challenges of clerkship, how much they draw on the Foundations experience to help them with that process, and whether they're elaborating on the knowledge they built in the first two years. That data is still coming in. But by and large, looking at the assessment data, the formal information, that level of evaluation, we're certainly about the same, if not better, than we were before, I think.
Dr. Rezmovitz: It leaves me, and probably our listeners, begging the question: it's great that we now have this goal of making them better and better learners. But for each individual medical student, do we ever take into... into effect? What's the word I'm looking for?
Dr. Kulasegaram: Into account?
Dr. Rezmovitz: Into account, sorry, into account. Their learning style, does it matter at all?
Dr. Kulasegaram: That's a really interesting question. Most of the learners I come across are olfactory learners; that is, they like to smell the learning. That's been a challenge, given that we often have to put people in scent-free spaces, so probably we're not meeting them where their learning styles are. But I suspect they're learning in spite of the misalignment between their olfactory learning style and the environments in which we're teaching them and what we're teaching them. And for any listeners who may be wondering: I do not believe in learning styles. Jeremy's being facetious. Don't believe in learning styles. Really bad idea.
Dr. Rezmovitz: No, I'm not being facetious. What I'm saying is we have outcomes, and we have to get to those outcomes. Do we ever account for the individual path of getting there? Because it plays into minimal competency versus outcomes. When we graduate these learners, do we want them to be minimally competent, or do we want them to attain excellence? There's this curve of competence and excellence and mastery. I don't expect mastery after four years. I don't even expect excellence. I expect curiosity; I expect motivation. But what I've seen sometimes is "I expect you to deliver it to me on a plate," and that's part of the minimal competence. They haven't had that flair for learning as much as I would like to see; they don't have that go-and-get-it motivation I would love to see. So I'm not being facetious when I truly ask: are we accounting for the individual styles of each learner going through this program, learners who may not make it to the outcomes we've decided on? Because you and I both know that learning styles, in the end, don't matter as long as you get to the goal. And if the goal is lifelong learning and they're graduating anyway, because there's also a culture of sometimes failing to fail, who are we doing a disservice to? The patients, the culture, the physicians? So I'm really, truly asking: does it matter in the end? How do we account for these individuals if we're trying to get to a certain outcome? Is the individual's pathway, as long as they're learning, the only thing that's important here? They've demonstrated that they're learning, so the line is going up, but they've only met minimal competence. Is that really what we're looking for here? And I don't know if you can answer that question.
Dr. Kulasegaram: Well, I think it's a cliche to say that we're all trying to move towards thinking about the education of a physician as a lifelong continuum, that it never stops. I think that's the attitude taken by everyone from medical regulators to the medical school in what it's trying to do. We want people to graduate with that, and of course it will be difficult. Just by the nature of the game, there will be some people who are less likely to espouse that attitude or less likely to want to do that.
Dr. Rezmovitz: Of course, I'm not saying that every medical student is like this. If you group them into thirds, you've got the top third, the middle third and the bottom third. I'm worried about the bottom third, and whether there's something individual about them that's not getting them to that minimal competency. Is that something that you take into account?
Dr. Kulasegaram: In all honesty, I don't know how much we can take that into account, for a number of reasons. One is that there's a bit of a gap in our knowledge around that. This is a very fraught area of research, for all sorts of social sensibilities. People have been using ideas like growth mindset versus fixed mindset to describe it, and I don't know how much these things are traits as opposed to states, depending on the knowledge people are being asked to learn, where they're coming from, their particular context, the time of day. Second, we're a fairly large medical school, probably not the largest, but a fairly large one, with an enormous social investment in the education process. If I'm not mistaken, medical students pay about a third of the cost of training and society subsidizes the rest. So we have a responsibility to graduate people who are going to be competent. And if we have to settle for minimal competence, if that's what it takes to meet the workforce needs, that might be the case. That's not the official position of anyone; that's just my view. This is the reality in which we work: it's expensive, and it's financially and logistically difficult.
Dr. Rezmovitz: Which brings me to my next question. As someone who studies big data: the amount of information available to the medical learner now is exponentially greater than it was 30 years ago, and yet the model hasn't changed. It's still four years of medical school. Do you think that factors into it at all? Should there be a change in the length of medical school, or even in how it's set up? I'm asking you as Mahan, not as a representative of the University of Toronto.
Dr. Kulasegaram: That's great, because my opinions don't count for much. Again, four years is a choice. If you look at the historical reasons why we do this, four years is a choice. Other places do six years, other places do shorter than that; there are three-year programs just down the street, down the QEW. And some places in the United States have been experimenting with an integrated undergrad-to-postgrad pathway that shortens that time based on competency-based models. All of those things are quite possible. The reasons why we do things are probably more contextual than evidence-based. I have not seen any evidence to suggest that three-year trained physicians are less competent than four-year trained physicians. In fact, the large studies on medical school curricula show that, by and large, in a highly accredited and highly standardized medical education community like North America, most medical school curricula are fairly similar, because the content is all the same. I think the bigger issue with the explosion of clinical data and education data is that we probably have to come back to first principles, because it's not the content or volume of knowledge that ends up mattering, but rather whether you can organize it and use it. And that is still the one-on-one basics of education and learning in psychology: how do you teach people to make meaning of the world? How do you help them connect what they know to what they're supposed to be learning? And how do you ensure you give them feedback on that process through the way you assess them? That's Dewey; you can go back to Rousseau, or Socrates and Plato. That's fundamental stuff in how we understand learning. So I've never been bothered when people say, oh, you've got all this information at your fingertips on your phone or the worldwide web. You still have to know how to ask the right questions, and that's the bigger issue. That's where the educational game always is.
Dr. Rezmovitz: So let me leave you with this last question then. Is there anything you'd like to impart to our listeners before we end today? Any final words? If only they knew... Mahan, at age 20... I assume you're like 25.
Dr. Kulasegaram: Not quite.
Dr. Rezmovitz: 24? So, any final words?
Dr. Kulasegaram: If only they knew? That's a really good one. I hope that all of your learners have the privilege of being teachers as well as learners at the same time, and that they have the opportunity to interact with our students in whatever capacity comes across them. I wish for them to keep in mind that we're in the business of learning when we're training students, not always asking them to perform, and to think back to their own struggles as a teacher, and to recognize that your knowledge, and the way you organize your knowledge, is going to far outstrip what the student has. Your job as a teacher, hopefully, is to make clear to them the process by which you perform, not the performance itself. What did it take to get to the answer? What did it take to understand this problem? Because this is the stuff that's most opaque to students, and this is why we always complain: well, they relied too much on the algorithm, or they just fight the patient, you know what I mean? We give them all sorts of tools to help them do procedures, but in reality that's not where expertise lies; it's going one step below. So even when you're assessing them at the bedside, keep in mind that what you want to assess is: can they understand what's going on? Do they have a why or a rationale? Can they think about the what-if and extrapolate their thinking? And if they can't do that, then hopefully you get a chance to push them along those lines. I didn't really start learning, I think, until my teachers and mentors started pushing me in those directions, and that was a profound shift. It was uncomfortable, but with appropriate feedback I think it can be very beneficial for both the learner and the teacher.
Dr. Rezmovitz: I agree. Thank you so much for coming in today.
Dr. Kulasegaram: It's been a real pleasure.
Dr. Rezmovitz: Okay. Have a wonderful day.
Dr. Kulasegaram: You too.
Dr. Rezmovitz: This podcast was made possible through the support of the Department of Family and Community Medicine at the University of Toronto. Special thanks to Allison Mullin, Ryan's cell phone and the whole podcast committee. Thanks for tuning in. See you next time.