More Resources Re Teaching, Learning, and Bar Passage

Thank you to the Best Practices for Legal Education Blog for having me as a blogger this week.  I hope the examples I’ve provided about the methods medical schools use to evaluate their curriculum, test the effect of new programs, and look for factors that affect success on licensing exams have been useful.  As I mentioned at the end of my last post, the most comprehensive source for research-based information about bar passage programs, as well as a source of funding for such research, is AccessLex.  There is a growing literature of articles from schools that have implemented successful bar passage programs.  Here’s an article by Louis Schulze about his work at FIU.

You might also be interested in a series of articles from back in 2009-2010, when those at the front lines of legal education (first-year faculty and legal writing and research faculty) began to see significant differences in performance between the students they were teaching and those of the past.  These articles describe how substantial changes to the K-through-college education system in the U.S. affect students’ transition to law school.  This article by Rebecca Flanagan is a good overview.  There is a piece by Prof. Patricia Grande here, a literature review of law learning strategies by Profs. Jennifer M. Cooper and Regan A.R. Gurung, and one more by Profs. Susan Stuart and Ruth Vance.

Here are the proceedings of a 2014 symposium entitled “Teaching the Academically Underprepared Law Student,” and I invite readers to take advantage of the comments section of this blog to share other publications, including the many more recent ones.  My point here is historical, not bibliographical.  And here, as a quick reminder of one of the crucial skills the bar doesn’t test (research), is a piece by Caroline L. Osborne.

Finally, something I’ve mentioned elsewhere: the new collaboration between LSAC and Khan Academy, which provides free, online, very high-quality LSAT preparation, may have something to offer law students.  The skills underlying LSAT performance, close reading and legal reasoning, are not immutable.  Students can get better at them after enrolling in law school and may find some time with these materials a helpful and interesting way to brush up.


What can Law Schools Learn about Bar Passage from Medical Schools’ Approach to Studying Students Who Struggle with Licensing Exams?

It’s not unusual for a provost or a colleague or a relative at Thanksgiving to ask a legal academic why law students have so much trouble passing the bar exam when the pass rates for medical students are usually in the high 90s.  The short answer is that the two processes are completely different, and there’s no obvious trick, technique, or intervention that could convert our bar passage rates into their licensure passage rates.  For one thing, it’s the wrong question.  “Passing” the medical licensing exams is certainly important, but unlike the “all or nothing” process of passing the bar exam, the score achieved on Step 1 affects medical students’ entire career path.  But there is a lot to learn from the methods medical schools use in studying the very few students who have trouble, as well as from how they evaluate the effect of curricular changes on licensing exam scores.

A quick recap on professional licensing: future doctors take a series of three exams over the first six years of their undergraduate medical education and the start of their residency (more links in a post I wrote earlier this year here).  The exams are almost entirely national, although the actual process of being licensed is conducted on a state-by-state basis.  Law students, by contrast, take a licensing exam in the state where they intend to practice upon graduation.  For purposes of this post, the closest analogy to the bar exam is the more academic Step 1, which students take during their second year of medical school.  Like our NCBE, the National Board of Medical Examiners, which produces the United States Medical Licensing Examination (USMLE), works with medical licensing boards and depends on their confidence.  It issues annual reports.

The focus of this post is on the methods that medical schools use to study the small number of their students who do have trouble passing the licensing exams, as well as the factors that can affect the scores students achieve.  I’ve tried to focus on articles outside of paywalls, and would certainly encourage you to conduct your own searches in the various databases to which you have access.  There are several journals devoted directly to studying medical education, although these articles can pop up anywhere.

Medical educators use a wide range of research techniques to learn more about students who struggle with licensure exams.  Like us, medical schools would prefer that students pass the first time, and many articles like this one look for characteristics of students who fail the first time but eventually pass.  Others look for characteristics of students at risk for failure (here and here) or even examine what students think of the exam.  Another area of inquiry involves the role stress plays in the score students achieve.  In partnership with social scientists at our schools or in our communities, we too could be conducting studies to help us learn more about students who face difficulty passing the bar exam.  These studies can be part of graduate student work or may even be funded by groups like AccessLex, which is making money available to study bar passage.
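The kind of at-risk study described above can be sketched with very basic tools.  Here is a minimal, hypothetical example in Python: the data are simulated, and the predictors (LSAT score, first-year GPA) are purely illustrative assumptions, not findings from any real study.

```python
# Hypothetical sketch of an "at-risk characteristics" study of the kind
# described above. All data are simulated; the predictors are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 400
lsat = rng.normal(155, 6, n)            # simulated admissions scores
one_l_gpa = rng.normal(3.0, 0.4, n)     # simulated first-year GPAs

# Simulate first-attempt failure, with risk rising as 1L GPA falls
risk = 1 / (1 + np.exp(-(-1.5 + 2.0 * (3.0 - one_l_gpa))))
failed = rng.random(n) < risk

# Compare the groups: a basic first cut at finding characteristics
# of students at risk for failure
for name, var in [("LSAT", lsat), ("1L GPA", one_l_gpa)]:
    diff = var[failed].mean() - var[~failed].mean()
    # Standardized difference (Cohen's d) as a rough effect-size estimate
    pooled_sd = np.sqrt((var[failed].var() + var[~failed].var()) / 2)
    print(f"{name}: mean difference {diff:+.2f} (d = {diff / pooled_sd:+.2f})")
```

In a real study, the simulated arrays would be replaced with actual student records, and a statistician would add significance tests and controls for confounding variables.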


The actual reason the medical school pass rates are so high, though, may not be all that helpful.

It’s not just because they are able to limit admission to students who have already demonstrated an ability to score very highly on the MCAT, a test that is much more similar to Step 1 than the LSAT is to the bar exam.  Indeed, medical schools have direct input into both the MCAT and the licensing exams, so when one changes, the other can too.  And it’s not clear that anything in the curriculum makes a difference at all: the industry offering study aids and licensure prep courses dwarfs the bar prep and study aid market, to the point where students often start studying for the licensing exams before the first day of medical school.

But if it is the curriculum, it’s important to remember the vast difference in time scale between medical and legal education.  We have students for three years post-B.A.  Medical schools in the U.S. plan their curriculum based on eight-plus years of increasingly specialized medical education.  They are therefore comfortable holding off on the direct teaching of practice skills for the first two years while they align their curriculum with the content of the Step 1 exam.

Even Step 1, though, is far more focused on practice than on the knowledge accumulation and deliberately confusing question formulations that characterize the bar exam.  Step 2, the second round of licensing exams taken before graduation from medical school, goes past paper and pencil in that it actually tests students’ ability to conduct exams and exercise medical judgment.  Another reason for the high pass rate is that most medical schools have stopped developing their own tests and instead use assessment instruments (shelf exams) provided by the same company that produces the licensing exam.  Sure, there is grumbling and criticism about the content and timing of the licensing exams, but medical schools work hard to make sure that their curriculums are aligned with the content of the exams.  Finally, medical education is extremely self-reflective; medical educators are constantly aware of the risks that come from confusing correlation and causation.  How do you know that a change in one part of the curriculum is the cause of a change in test scores?  You run Pearson correlations followed by stepwise linear regressions.  Seeing is not believing when it comes to identifying factors that affect performance on licensure exams.  Look here, here, here, and here for studies evaluating curriculum changes.  They take nothing for granted: does attendance make a difference?  Does flipping classrooms really work?  Does reducing the number of hours spent in the anatomy lab reduce USMLE scores?
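The correlation-then-regression workflow mentioned above can be illustrated with a small, hypothetical sketch.  Everything here is simulated (the anatomy-lab variable simply echoes the question in the paragraph above); the numbers are illustrative assumptions, not real USMLE data.

```python
# Hedged sketch of the evaluation method described above: checking whether
# a curriculum variable is associated with exam scores. Data are simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 200
anatomy_hours = rng.uniform(40, 120, n)       # hours in anatomy lab
# Simulate USMLE-like scores weakly related to lab hours, plus noise
scores = 220 + 0.05 * anatomy_hours + rng.normal(0, 15, n)

# Step 1: Pearson correlation between the curriculum variable and scores
r = np.corrcoef(anatomy_hours, scores)[0, 1]

# Step 2: simple linear regression (score points per extra lab hour)
slope, intercept = np.polyfit(anatomy_hours, scores, 1)

print(f"Pearson r = {r:.3f}, slope = {slope:.3f} points/hour")
```

The point of the sketch is the method, not the numbers: running a formal correlation and regression, rather than eyeballing a scatter of scores, is what tells you whether a curricular change is meaningfully associated with exam performance.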

Another standard practice in medical schools is curriculum mapping, an essential first step for any school that wants to understand what it is teaching, let alone make changes.  Like all maps, curriculum maps are DESCRIPTIVE, not PRESCRIPTIVE.  Here is Harvard’s curriculum map, but you can find examples on the home page of just about every U.S. medical school.  This is an article walking through how to map a curriculum.

So what’s helpful to us isn’t so much what medical schools are doing, but how they are evaluating themselves. 

To recap: neither I nor anyone else who has ever practiced law thinks it would be a good idea to emulate medical schools by fully aligning our curriculum with the bar exam so as to turn the three years of law school into one extended bar prep course.  Among other reasons, the material tested on the bar is quite static and doesn’t reflect the realities of today’s law practice.  It also wouldn’t make much sense for schools whose students take the bar exam in many different jurisdictions.  And the bar exam is simply not equivalent to the three rounds of USMLE exams in actually testing both the knowledge and the application of knowledge needed to be a successful lawyer.  If it were, we wouldn’t hear so many complaints about how students who have passed the bar are nevertheless not “practice ready.”

Tomorrow: where can we get the help we need to find out this information, and who is going to pay for it?  Spoiler: AccessLex has a program.

We have to talk about the bar exam

Thank you very much to the team at Best Practices for Legal Education for inviting me to blog this week.  My particular thanks to Elizabeth Murad for administering the blog, Professor Mary Lynch, Kate Stoneman Chair in Law and Democracy & Director, Center for Excellence in Law and President & Dean Alicia Ouellette of Albany Law School for hosting this blog.  It is an honor to join such a distinguished group of scholars and teachers.

We knew it was going to be a bad bar year when, on September 14, 2018, the NCBE announced that “the national average MBE Score for July 2018” had decreased about 2.2 points from the July 2017 average.  And indeed, as states announced pass rates based on their own individual formulas of MBE plus essays plus the MPT (Multistate Performance Test) plus their own individualized questions, the results were bad.  A lot of our students failed the bar exam.  Pass rates were down in New York, California, Georgia, Florida, Texas, and lots of other places.  Yet at the same time, individual schools saw significant success in raising pass rates in the face of grim news all around them.  All of this makes for glib headlines and much handwringing, but in the context of a blog on “Best Practices for Legal Education” it is more helpful to take a step back and assess the tools we, as legal educators, have available to us in addressing bar passage at our individual schools.  I do so from my Ph.D. studies in higher education as well as from my experience as a dean, associate dean, law professor, and medical school professor.

One of my main themes this week will be to argue for individualized problem solving.  If anyone comes to you with a product to solve all your bar passage problems, I hope after this week you will be able to ask some questions about the data on which they base their claims.  A productive discussion of bar exam passage really rests on two questions: (1) why aren’t the students at your law school passing the bar exam at the rate they “should”? and (2) what should you do about it?

I am going to use this week to share with you some of the resources available to law schools, to individual faculty members, and even to law students who want to increase their chances of passing the bar the first time.  Along the way, I hope to address some of the unhelpful myths that have arisen and to endorse a very old idea borrowed from the then-revolutionary 1960s-era child rearing techniques of Dr. Benjamin Spock: these are your students, and you know more than you think you do.  Trust your judgement.  Ask questions.  That doesn’t mean that you can do everything yourself; it’s fine to consult with experts.  But in the end, addressing bar exam passage issues is a school-wide effort, and everyone has relevant information to add and a valuable role to play.

To get started, it’s helpful to have an overview of the players.  As a matter of foundational constitutional law, each state retains the power to license and regulate professionals (more detail here).  As a result, every state and territory has its own process for setting criteria for bar passage.  Almost every state contracts with the National Conference of Bar Examiners, which develops the exam, grades it, and spends a lot of time explaining itself.  If you have any interest in this topic, a free subscription to The Bar Examiner will quickly bring you up to speed.

Tomorrow: how a test from the 1950s trips up today’s digital natives (or “Do we need a Tardis to match law school curriculum to the bar exam?”)

Studying Better Ways to Test Bar Applicants for Minimum Competence: Another Reason to Care about the California Bar Exam (Besides the Cut Score Debate)

In addition to her post on Law School Café about alleged correlations between bar exam scores and lawyer discipline (discussed on this blog here), Professor Deborah Merritt recently offered another bar exam-related post. This one provides intriguing historical perspective on the current need to expand the range of skill sets tested on the bar exam. Following up on points made by Professor Derek Muller, Professor Merritt discusses a 1980 study by the California Committee of Bar Examiners, cosponsored by the National Conference of Bar Examiners (NCBE), on adding a clinical component to the bar exam. Several hundred applicants who had taken the July 1980 California Bar Exam volunteered to complete an additional clinical evaluation requiring them, among other things, to interview a client and examine witnesses. Professional actors played the role of clients, akin to the standard patient role that actors perform for clinical evaluations in medicine. The applicants were scored based on an elaborate protocol.

Delving into the statistical results of the study, including comparisons between outcomes on the conventional bar exam and outcomes on the clinical evaluation, Professor Merritt illuminates how crucial it is nearly 40 years later for bar examiners to study and implement alternative assessments of skills not currently evaluated by the bar exam. She points out that, while the study’s results were by no means definitive, they at least suggest “the disturbing conclusion that a significant percentage of conventional bar passers (about two of every five) lack basic practice skills that are essential in representing clients.”

I find this discussion particularly apt in 2017, the 20th anniversary of the first administration of the Multistate Performance Test (MPT), the written skills test now a part of the bar exam in 40 states and D.C.  What started the path toward written performance testing and the MPT? A study conducted by the California Committee of Bar Examiners (cosponsored by the NCBE), possibly the same one referenced by Professor Merritt.  On the occasion of the MPT’s 10-year anniversary in 2007, the Bar Examiner, a magazine published by the NCBE, briefly described the California-based origins of the performance test and indicated that the MPT was ultimately based largely on “the California model.” (The piece, in the November 2007 edition of the Bar Examiner, is apparently not retrievable online.)

Written performance testing was the last meaningful innovation in bar exam testing. In thinking about who might lead an effort toward the next one that introduces greater clinical evaluation, including possibly of oral skills, I think not of a top-down effort from the resolutely conservative NCBE. It is focused on getting as many jurisdictions as possible to adopt its Uniform Bar Exam (26 and counting as of today). Rather, I think of a bottom-up effort by individual states—perhaps with California in the lead—serving as laboratories for testing methods that could ultimately spread to other jurisdictions, thereby persuading or forcing the NCBE to join.

The history of written performance testing is illustrative of my point. Long before the NCBE went forward with the MPT in 1997, not just California but also Alaska and Colorado devised performance tests of their own and administered them on the bar exam. Indeed, those three states were administering performance tests in the early 1980s, playing an important initial role in advancing the cause of a needed bar exam reform. Here, for example, is a follow-up study of the 1983 California Bar Exam, discussing its two performance tests.

The biggest barrier to innovation at the state level is the NCBE’s influence, which increases with each state that adopts the UBE and thereby constrains itself to offer the conventional bar exam that the NCBE requires it to. Indeed, both Alaska and Colorado, two of the original performance test states from the 1980s, have adopted the UBE, meaning neither of those states will be doing any more bar exam innovation. That leaves California (and any of the other 23 states that have yet to join the UBE, none of which matches the influential profile of California).

Why the California study or studies did not lead to some form of clinical evaluation beyond written performance tests is unclear, though two obstacles that come to mind are expense and testing reliability. Indeed, the 1980 study that Professor Merritt references summarized one of its findings as follows: “[T]he relatively low reliability, administrative difficulties, and high costs associated with most (but not necessarily all) standardized oral tasks probably precludes even considering them as possible components of a general bar examination. Written tests of clinical skills, on the other hand, are relatively easy to construct, administer, and score. Further, unlike oral tasks, the score on written tasks are moderately correlated with one another.”

It seems worthwhile to revisit those conclusions, given the passage of time and possible advances in testing methods, and given that the medical profession requires clinical evaluation of its applicants.  Today, 24 years after the MacCrate Report, 20 years after the advent of the MPT, and 10 years after the Carnegie Foundation Report, the legal profession needs a better bar exam.  I join Professor Merritt’s call for a national task force on the bar exam, sponsored by AALS, the Conference of Chief Justices, the ABA Section of Legal Education and Admissions to the Bar, and maybe even the NCBE.  As Professor Merritt writes, such a task force could “study current approaches to the bar exam, develop a more realistic definition of minimum competence, and explore best practices for measuring that competence.”

But I also come back to the states, and to California specifically. There is a vigorous debate going on about whether California should lower its bar exam cut score. That’s an important discussion to have. But I might suggest another discussion to have about the California Bar Exam: Shouldn’t California resist the UBE and instead conduct a new study of alternative methods for assessing today’s relevant lawyering skills that are not encompassed by the UBE?

Professor Merritt’s blog post on attorney discipline and the bar exam: WORTH A READ!

Our blog has often posted about issues related to licensing lawyers, experiential requirements for admission, the monopolizing power of the NCBE, and the pros and cons of the UBE.  Thus, I recommend to our readers an excellent post by our blogger friend Professor Deborah Merritt over at Law School Cafe on bar exam scores and lawyer discipline.  Professor Merritt analyzes an article by Pepperdine Professors Robert Anderson and Derek Muller entitled The High Cost of Lowering the Bar Exam.  Professors Anderson and Muller opine that “lowering the bar examination passing score will likely increase the amount of malpractice, misconduct, and discipline among California lawyers.”  Merritt objects to any causal inference, noting:

Two key facts, however, weigh strongly against drawing that type of causal inference. First, as Anderson and Muller point out, “[t]here is virtually no discipline in the first 10 years of practice.” If the bar exam measured qualities related to attorney discipline, one would expect to see disciplinary cases emerge during those 10 years. Wouldn’t attorneys with marginal competency (as measured by the current bar exam) reveal their deficiencies during their early practice years?

Second, attorney discipline almost never rests on lack of knowledge about legal doctrine, poor reasoning skills, or bad writing–the skills currently measured by the bar exam. Levin and her colleagues reported that attorneys most often received discipline for failing to communicate with clients (20.0%), lack of diligence (17.93%), and failure to safeguard client property (11.26%). Only 4.14% of disciplinary sanctions related to “competence”–and even some of those cases may have reflected incompetence in areas that are not tested by the bar exam.

My favorite comment by Professor Merritt provides another example from which we should not infer causality (however tempting it might be to some of us who have been hurt by patriarchy):

We should not exclude individuals from a profession based on qualities that merely correlate with misconduct.

To underscore that point, consider this: The strongest predictor of attorney discipline is the y chromosome. Male attorneys are much more likely than female ones to be disciplined. If we want to use correlations to reduce instances of attorney discipline, it would be much more efficient to ban men from the profession, subject them to special character exams, or require them to achieve a higher bar exam score than women. Those actions, of course, would raise special issues of gender discrimination–but they illustrate the drawbacks of predicting malfeasance based on correlations.

These questions and assumed correlations are important ones.  Many defend the decreasing bar passage statistics as an appropriate market correction to prevent “undesirables” from entering the profession, a consumer protection argument.  However, as Professor Merritt points out, there is so much more to unpack here.  For example, most misconduct challenges are brought against solo practitioners or small firms.  This raises overlapping socio-economic questions: which lawyers could be perceived as easiest to challenge, which lawyers have the best legal defense teams, and which kinds of clients have the most reason to complain.

After teaching for over 28 years and observing which graduates pass the bar on the first try and which do not, I am skeptical of the Anderson-Muller argument.  I would love to see the NCBE and other scholars engage in a socio-economic analysis of bar passage and of disciplinary misconduct.

Legislation & Regulation and the Bar Exam

Most readers of this blog will be familiar with the performance test (PT), a portion of the bar exam in 42 states and D.C. (Forty states use the Multistate Performance Test (MPT); examiners in Pennsylvania and California write and administer their own PT.) For states using the Uniform Bar Exam (UBE), the MPT counts for 20 percent of the overall exam score.

I wrote about the performance test previously here. I extolled its virtue as the only part of the exam that exclusively tests lawyering skills, requiring zero memorization of legal rules; and I bemoaned its status as the ugly step-child of the bar exam that gets next to no attention in conversations about exam reform.

Over time, bar examiners have concluded that certain substantive subjects have grown or lessened in importance to law practice, such that they have added subjects to the MBE (e.g., Federal Civil Procedure) or dropped subjects from essays (e.g., Secured Transactions, in some jurisdictions).  Why not do the same with skills on the PT?  Is it not fair to say, for example, that a greater percentage of beginning lawyers today work in fields dominated by regulations than did in 1997 when the MPT was born?  Yet the vast majority of PTs to this day test the ability to reason from cases, not from statutes or regulations without the aid of cases.

The anti-regulation bent of the current administration notwithstanding, we live in a heavily regulatory state. Lawyers in numerous specialty areas, including health care law and environmental law; lawyers working for government agencies; or lawyers serving as in-house compliance officers—among the most important skill sets for all of them are reading, interpreting and applying statutes and regulations. (Compliance, by the way, has been a growing field, and positions in compliance are J.D. preferred jobs increasingly being filled by newly licensed lawyers.) Many law schools have responded to this reality by adding a 1L course on legislation and regulation to provide law students the needed foundation for practicing law in our heavily regulatory state. (A running list, accessible from here, indicates that about 30 law schools are offering a course of this nature in the first year.)

In reviewing summaries of the last 28 MPT items (covering the last 14 exams back to February 2010), I found only one among the 28 that provided only statutes and regulations and no cases as part of its law library. Typically, PTs presenting issues of statutory application have both statutes and cases in the library, and the cases provide the statutory interpretation needed to answer the issue posed. That’s still common law reasoning—a very important skill, to be sure, but not very helpful for a lawyer when the only applicable law is a statute or a regulation.

All of the above helps to explain how pleasantly surprised I was to see a purely statutory issue on the February 2017 performance test on the Pennsylvania Bar Exam. The assigned task was to write a memorandum analyzing and supporting the client’s position on three legal issues raised by opposing counsel in a motor vehicle accident. One of the issues was whether a driver had violated the state’s law banning texting while driving. The text of the law appeared in the materials, and applicants had to dissect its language and apply it to the facts—all without the aid of cases in the materials, each of which was relevant only to other issues. This is basic stuff, but exactly the kind of basic stuff that beginning lawyers must be able to do well.

Teaching Tips to Think about Early in the New Semester- By Steven Friedland

With the beginning of a new semester upon us, these thoughts and tips are a great thing to keep in the back of everyone’s mind, whether you are a student or a professor.  This great post is by Steven Friedland.

Flexibility and Mobility in Law School Learning

As a professor who has been teaching for more than two decades, it is easy to feel like a dinosaur in classes populated by students mostly in their 20s.  But ages are not the only thing that changes; cultures change as well.  It is evident that within the born-digital generation, cultural understandings, particularly involving learning, are different from mine.

While I think cross-cultural competency is more important than ever in this global era, it also applies to us teaching dinosaurs.  I learned in law school in a linear and fixed fashion – go to class, take notes, go to the library, study and prepare for the next class.  Based on studies and my own anecdotal evidence, there is an increasing preference for mobility and flexibility in learning.  I am becoming a believer in both — using Web platforms like TWEN, Blackboard or Moodle as integral parts of a course, and allowing students to have flexibility in where and when they learn.

I am now experimenting in doctrinal courses with including several flex classes: audiotaped, with an option to take each over a 24-hour period in a self-paced fashion.  These self-paced classes are combined with deliverables, such as writing an answer to a problem based on the class material and then posting it on the Web platform, or doing some other relevant task based on the material to ensure that some form of learning has occurred.  So far, these classes have been well received; to my surprise, students like the flexibility about when they take class as much as the remote opportunity.  I am enjoying shaking it up in this way.  What is the saying?  Even an old dinosaur can learn….


Note-Taking Breaks

In a law school class, there are a variety of note-takers.  Some are the “court reporters,” taking down every word.  Some take far fewer notes, within their own organizational schemes. Many students are using computers, with note-taking programs. I also have had some “deep observers,” who appear to take no notes at all.

But all students seem to rely on the notes they take in putting a course together for deep understanding, especially in the first year of school.  Interestingly, teachers generally do not know how students are taking notes or whether those notes are even accurate.  This is why I have started using a colleague’s technique (yes, I like borrowing good ideas from others, no hiding there) of taking “note breaks” in the middle of a doctrinal class, allowing students to check their notes with other students, particularly about important rules, principles, or insights.  I usually prompt the break by asking, “What were the most important points in class so far?”  This has several effects.  Everyone perks up, and the students appear present and engaged.  Students also are more likely to ask questions about what has occurred thus far.  I get useful feedback on what I have communicated well and what I have done poorly.  So all the way around, I find it to be a helpful technique.  When students walk out of class, they should be able to rely on and have ready access to useful notes.


Retention and Retrieval

Lots of studies show that experts learn differently than novices.  In any educational process, the goal is to move up the scale from unconscious incompetence, to conscious incompetence, to conscious competence, and finally to the highest level, unconscious competence.  I know about the lowest level, having been there in law school and many other contexts (just thinking back on the longest years of my life, taking piano lessons).  The highest level of competence is epitomized by Captain Sully, the US Airways pilot who landed his commercial plane without engines in the Hudson River.

So what learning features are associated with experts?  Experts recognize patterns of information, have deep understanding of material within a domain, organize their information well for ready access, and constantly self-monitor.  We can learn from these characteristics in law school.  It is traditional for law school professors to evaluate student performance through a single final examination (although sometimes mid-terms are also offered).  The traditional summative evaluation framework promotes a particular type of studying: students study like crazy just before an exam, and then dump all of their knowledge on the test.  (This approach was a familiar one for me when I was in school.)  To help students progress from novice to expert, though, we should teach for long-term retention and retrieval.  This can occur through the use of numerous problems and opportunities throughout a course by which to practice organizing and storing material before a final exam, the use of structures or outlines by which to approach topics, and a greater emphasis on mnemonics, anchor words, and other learning devices.  Sometimes, in our desire to cover great swaths of material, we don’t drill as deeply as we could or should.
