Are the Students Failing the Bar Exam Today Canaries in the Coal Mine, Warning Us of a More General Need to Change Legal Education?

Thank you so much to Best Practices for Legal Education for inviting me to blog again, and to Elizabeth Murad for her remarkable work in keeping contributors in touch and on track.  So much is written about the very real decline in bar passage that it is easy for schools with high pass rates (or at least rates that are high relative to other schools in their state) to ignore the need to change what goes on in the classroom and to dismiss the excellent work being done on effective law teaching as a problem for “lesser schools” in “lower tiers.”

We know, as legal educators, members of the bar, and even members of the public, that bar passage rates have been falling.  We also know that many, if not most, law schools are admitting students today with lower LSAT scores than those they admitted ten years ago. So it’s easy to see a correlation between lower scores and falling rates.  After all, the bar exam is a test much like the LSAT, so why wouldn’t there be a relationship?   But even if students are failing the bar exam for the same reasons they earned low LSAT scores, we still have the opportunity to intervene in ways that we know raise pass rates.  This blog contains many resources for those who want to teach more effectively.  Why wouldn’t we want this for all our students?

Everyone at a school with a “bar passage problem” is well aware that we cannot continue to do the same things we always have when they are no longer working the way they used to.  But we hear this less at schools satisfied with their bar passage rates.  Perhaps the students who are failing are really canaries in the coal mine, a warning to all of legal education that today’s law students, regardless of LSAT score, find it more difficult to translate their legal education into the very peculiar format the bar exam requires.  Everyone who has ever studied for the bar exam remembers it as a grueling, unpleasant, and highly intensive process, but until very recently that process started after graduation and, barring personal disaster, almost always resulted in passage.  Even when it didn’t, the consequences of failure were lower.  Today, students safely employed in September find themselves fired if October brings news of failure.  We need to consider bar passage as an issue both for students who fail and for those who pass; after all, both groups spend the same three years in law school.

Anecdotal evidence (which we could easily replace with actual data by doing some surveys) suggests that bar passage anxiety spreads well beyond the students most at risk.  All students know that the stakes are high, and many believe that their chances of passing are lower than those of students in the past.  Does that affect their choices while in law school?  Could they be doing more to prepare for their future careers if we provided them with more effective instruction?

Medical students and educators are expressing the same kinds of concerns about a curriculum shaped by a test that we should be expressing about ours.  We can’t easily change the bar exam, but we can adopt more direct methods of instruction that support bar passage while also creating time for the more complex, less exam-focused thinking that we want to be going on in class.

I hope over the week to share resources that encourage everyone to consider how studying for a very old-fashioned test is negatively shaping the education of all of today’s law students. (And because it always warrants reposting, here is a recently revised article by Louis Schulze on what FIU has done to apply the “science of learning” across the curriculum in support of higher bar passage.)

 


New Rubrics Available to Help Law Schools that Have Adopted Learning Outcomes Related to Professional Identity Formation

By: Professor Benjamin V. Madison, III

 

A recent blog post by Professor Andi Curcio and Dean Alexis Martinez addressed the ways in which well-developed rubrics help law schools with program assessment. As newcomers to assessment of program learning outcomes, see Article, law schools need guidance on best practices for program assessment.

Rubrics are clearly a key part of assessing whether law students, by the time they leave law school, have attained skills, competencies, and traits embodied in a given school’s program learning outcomes. The Holloran Center for Ethical Leadership in the Professions created a database of program learning outcomes adopted by law schools. See Database. The program learning outcomes that many of us find most intriguing are those under ABA Standard 302(c) (exercise of professional and ethical responsibilities to clients and the legal system) and Standard 302(d) (professional skills needed for competent and ethical participation as a member of the legal profession). The competencies and skills in learning outcomes adopted by law schools under these categories include: Cultural Competency (46 schools), Integrity (27 schools), Professionalism (31 schools), Self-Directedness (41 schools), and Teamwork/Collaboration (52 schools).

Associated with St. Thomas School of Law, the Holloran Center brought together two leaders in the professional formation movement, Professor Neil Hamilton and Professor Jerry Organ of St. Thomas Law, with faculty and staff from other law schools that have committed to pursuing professional identity formation as part of their effort to produce complete lawyers. Like Professors Hamilton and Organ and St. Thomas, these faculty, administrators, and staff, and their law schools, have demonstrated a commitment to the professional identity formation movement, a movement inspired by the 2007 publication of the Carnegie Report and of Best Practices for Legal Education. Recently, rubrics developed over the past year by working groups assigned to specific competencies were added to the Holloran Center web site; see Holloran Competency Milestones.

The Holloran Competency Milestones offer a ready-made assessment tool to any law school that has published a program learning outcome in the competencies listed above, competencies that some educators may consider too challenging to assess. If anyone believes these competencies are impossible to assess, the Holloran Competency Milestone rubrics show otherwise. A law school must still decide in which courses, or in which contexts (possibly clinical settings), it will use the rubrics to assess attainment of a given competency, but the Milestones are a valuable tool for assessing these competencies.

The work of the Holloran Center, and of those of us on the working groups that developed these first rubrics, will continue. (The persons and schools who have participated in this project to date are identified on the site with the Milestones.) Law schools that were not previously involved in developing rubrics have recently committed to developing more. Continuing this progress will provide rubrics for program assessment of competencies for which assessment tools have not yet been developed. For instance, these schools are likely to address competencies such as Reflection/Self-Evaluation (included in 36 schools’ published learning outcomes), Active Listening (31 schools), and Judgment (18 schools).

Anyone who considers the competencies discussed here too abstract to include in a law school’s program of instruction ought to review the impressive survey by Educating Tomorrow’s Lawyers (ETL), called the Foundations of Practice Survey. There, ETL’s survey of more than 24,000 lawyers nationwide demonstrated that the very competencies discussed above (1) were among the most important factors in employers’ decisions about whether to hire law students, and (2) determined whether a student is likely to succeed in law practice. See Foundations of Practice Report (The Whole Lawyer and the Character Quotient).

In short, the law schools that adopted learning outcomes designed to produce lawyers who are not only legal technicians but whole persons are on the right track. By adopting competencies that go beyond the traditional ones (analytical skill, writing, etc.), these schools showed they believe a complete lawyer needs more than traditional skills, and the efforts described here validate that decision. The hope, of course, is that law schools now use these rubrics to do program assessment of competencies such as cultural competency, integrity, professionalism, self-directedness, and teamwork/collaboration.

May these efforts ultimately produce more lawyers who embody these competencies.

The Feedback Sandwich: A Bad Recipe for Motivating Students’ Learning

This past year, I’ve been participating in the hiring process for clinical professor positions at our law school. I’ve observed job talks and engaged with candidates about how they provide supervision. Because I believe that giving students feedback is, perhaps, the hardest part of being a clinical professor, I tend to ask lots of questions about how candidates would, ideally, provide feedback in an academic or practice setting.

I’ve been surprised by how many candidates still subscribe to the “feedback sandwich” as a model for delivering feedback, and by how many clinical professors say they use the model in their teaching. The feedback sandwich is a delivery method that follows a compliment, criticism, compliment format. It’s meant to soften the blow of critical feedback and increase the likelihood that the recipient will actually listen to the “meat” of the sandwich: the corrective measures. But the feedback sandwich has been widely criticized.

Feedback is the backbone of clinical education. One of the greatest benefits of experiential learning is the opportunity to give and receive constant feedback. Feedback helps students develop their skills and their professional identities. Well-designed feedback can lead to increased motivation and learning. But ill-designed feedback can lead to decreased motivation, low performance and disengagement.

No doubt, most feedback is well-intentioned whatever form it takes. The feedback sandwich certainly seems well-intentioned too. Professors often use it to remind students that they can and have done some things well. But danger lurks in the good intentions of comforting feedback.

Researchers have demonstrated that giving students comforting feedback significantly decreases their motivation to learn.  Comforting feedback communicates low expectations. For example, telling a student that plenty of people have difficulty with this skill but may be good at others doesn’t empower the student to improve. In fact, it may even suggest that the professor doesn’t think the student can improve.

On the other hand, controllable feedback increases students’ motivation and effort to learn. Controllable feedback gives students the specific strategies they need to improve. For example, asking a student to talk through the strategies they used to complete a task, and then developing together specific ways to improve that approach, offers a pathway to increased learning.

Don’t let your feedback get hijacked by the sandwich myth. Research shows that when we bury feedback that is critical for learning, students tend to remember the compliments and forget the critical points that lead to real struggle and learning. And, importantly, students interpret comforting feedback to mean that they may not be able to improve their performance in this particular skill. Compliments and comforting feedback may help students feel better in the short term, but they don’t help students address their deficits.

If you are uncomfortable giving critical feedback, consider the learning culture you foster. The type of feedback one gives reflects one’s mindset. Instructors with a growth mindset foster a belief that students’ intelligence or aptitude can grow with effort and good strategies. Those with a fixed mindset believe that intelligence and ability are mostly fixed and that natural abilities can’t be significantly changed. Researchers have shown that instructors with a fixed mindset give significantly more comforting feedback than instructors with a growth mindset. This makes sense: if we believe a student may not be able to greatly improve their performance despite their best efforts, we seek ways to make them feel better about themselves.

A growth-minded culture allows feedback to be taken in the spirit in which it was intended: to provide students with an honest assessment of their performance and concrete ways to improve it. It’s essential for clinical professors to provide growth-minded, controllable feedback, because students can detect instructors’ mindsets. They see through comforting feedback and come to believe they aren’t capable of significantly upping their game. Only controllable feedback provides a path for sustained improvement and growth. Law students will need to learn to receive and give this kind of feedback as they enter the legal profession, and law schools can play a role in helping them manage this process.

CLEA, SALT and others urge Council on Legal Education to increase transparency and reject proposed changes to Standard 316 at its Friday 2.22.19 meeting

FROM CLEA website:

On February 20, 2019, CLEA submitted two joint advocacy memorandums, with the Society of American Law Teachers (SALT) and others, to the Council on the ABA Section of Legal Education and Admissions to the Bar. 

In the first joint memo, CLEA and SALT urge the Council to increase transparency in its processes and engage in meaningful dialogue with all interested constituencies before making decisions that affect law schools and the legal profession.

The second advocacy memo urges the Council to once again reject the proposed changes to Standard 316 relating to bar passage.  The second memo is co-signed by SALT, the ABA Coalition on Racial and Ethnic Justice, ABA Commission on Disability Rights, ABA Commission on Hispanic Legal Rights & Responsibilities, ABA Commission on Sexual Orientation & Gender Identity, ABA Commission on Women in the Profession, ABA Council for Diversity in the Educational Pipeline, ABA Law Student Division, ABA Young Lawyers Division, HBCU Law Deans Gary Bledsoe, John C. Brittain, Elaine O’Neal, John Pierre, & LeRoy Pernell,  and the Hispanic National Bar Association (HNBA).

Assessing Institutional Learning Outcomes Using Rubrics: Lessons Learned

By: Professor Andi Curcio & Dean Alexis Martinez

Experience confirms that using rubrics to assess institutional learning outcomes is relatively easy and cost-effective. It is also an iterative process. Below we share some of the lessons we learned as we engaged in this rubric-based institutional assessment process. We also share examples of final report charts to illustrate how the process produces usable assessment data.

A Review of the Basics

Georgia State University College of Law has institutional outcomes that encompass the ABA required legal knowledge, analysis, research and writing outcomes as well as outcomes covering self-reflection, professional development, ethical and professional obligations, teamwork, ability to work effectively with courts and clients, and awareness of pro bono responsibilities.

An earlier blog and article provide an in-depth discussion about the development and use of rubrics to assess these institutional outcomes.

To briefly review the main idea: we engaged faculty in designing rubrics with measurable criteria for each institutional outcome.

For example, for our legal knowledge and analysis outcomes, our criteria included: substantive legal knowledge; issue spotting; fact usage; critical analysis; and policy analysis. For each criterion, we identified a continuum of competence.

For example, for issue spotting, the rubric looked like this:

[Image: excerpt of the issue-spotting rubric and its continuum of competence (ACC1)]

As the excerpt above illustrates, we drafted rubrics so that faculty teaching a wide range of courses could use the rubric, regardless of course content or assessment methodology.

For each outcome, we identified multiple first year and upper level courses that would provide a solid student sample and used those courses to measure the outcome. In the designated courses, faculty graded as usual and then completed a rubric for each student.

Faculty did not have to change how they taught or assessed and the only extra work was completing a rubric – a process the faculty agreed took little additional time.

All data from the completed rubrics was entered into one master database and used to create a faculty report identifying student achievement, by cohort year (1L, 2L, 3L), for each rubric criterion [see sample below].
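For readers curious about the mechanics, here is a minimal sketch in Python (using pandas) of that compilation step. The column names, example entries, and report layout are hypothetical illustrations of the idea, not our actual database or report.

```python
# A minimal sketch (hypothetical column names and data, not the actual
# master database) of turning completed-rubric entries into a report of
# student achievement by cohort year for each rubric criterion.
import pandas as pd

rubric_entries = pd.DataFrame({
    "exam_number": [101, 102, 103, 201, 202, 203],
    "cohort":      ["1L", "1L", "1L", "2L", "2L", "2L"],
    "criterion":   ["issue spotting"] * 6,
    "level":       [2, 3, 2, 3, 4, 3],
})

# For each criterion and cohort, report the percentage of students at
# each level -- roughly the shape of the faculty report described above.
report = (
    rubric_entries
    .groupby(["criterion", "cohort"])["level"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
    .rename("percent_of_cohort")
    .reset_index()
)
print(report)
```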

Lessons Learned:

1. Drafting Rubrics

We struggled to draft rubrics that could be easily adapted to a wide range of courses. If we were starting from scratch again, it would have been easier to use the rubrics drafted by the Association of American Colleges and Universities [AAC&U] as a starting point. Those rubrics have been developed and tested for reliability and validity, and they focus on big-picture skills.

Because law faculty often think in the context of how their individual courses are taught, it was sometimes challenging for them to start from scratch and draft rubrics that could be easily applied across the curriculum. Starting with the AAC&U rubrics would let faculty members review sample language and see how broad, generalized program outcomes can be assessed through many different teaching methods and in a wide range of courses.

We also learned that it works best to keep the rubrics to one page per learning outcome. Although an outcome could have many criteria, it is important to identify 4-5 key ones. Keeping the rubrics to one page forces us to home in on the critical skills and helps ensure that the process is not overly burdensome for either the faculty completing the rubric or the staff entering the rubric data. It also makes reporting the data more manageable.

We also found it useful to remind faculty that the institutional rubrics are not meant to capture all the skills taught in a given course and that we did not expect every faculty member to assess every rubric criterion, which is why we included an “N/A” [not applicable] choice for each criterion.

Finally, we found it helpful to emphasize that while we cannot change the rubrics mid-year, we welcome feedback and are open to changing future rubric iterations based upon faculty input. This keeps the faculty engaged and ensures the rubrics are as meaningful as possible.

2. Labeling Criterion Levels

Originally, we drafted rubrics and labeled each criterion level with word descriptors such as: needs significant help; developing; competent; and aspirational. Faculty found those labels more confusing than helpful. We thus changed the continuum labels to: level 1, level 2, etc. This change made it easier for faculty to focus on the descriptors along the continuum, rather than the achievement labels. It also eliminated any concerns about how the data collected could be used in the future, either internally or externally, to describe the quality of current and future graduates.

3. Data Compilation and Report Format

We chose a wide variety of 1L and upper level courses to get a robust data sample. In each course assessed, the professor completed a rubric for each student. Professors used anonymous exam numbers for the rubrics, just like for grading.

Initially, each rubric submitted was a data point. However, we realized that some students were taking multiple courses used in our data collection while others took only one course. To address the issue of “double counting” some of the same students, we changed our data entry system so that each student became a data point.

To the extent students took multiple courses where the outcome was measured and were rated differently by different professors, we averaged their scores. Thus, if a student was at Level 2 in issue spotting in Con Law II and at Level 3 in issue spotting in Administrative Law, the student was entered into the program as a 2.5 for issue spotting. That also gave us a more granular final report: instead of four levels, we had seven.
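As an illustration of that change, here is a minimal sketch in Python, again with hypothetical columns and data: it collapses multiple ratings of the same student on the same criterion into a single averaged data point.

```python
# A minimal sketch (hypothetical data) of the "one student = one data
# point" rule: when a student was rated on the same criterion in more
# than one course, the average of those levels is recorded.
import pandas as pd

ratings = pd.DataFrame({
    "exam_number": [101, 101, 102],
    "criterion":   ["issue spotting"] * 3,
    "course":      ["Con Law II", "Administrative Law", "Evidence"],
    "level":       [2, 3, 4],
})

# Student 101 was rated level 2 and level 3 in two courses, so they are
# entered once, as 2.5; student 102 remains a single data point at 4.
per_student = (
    ratings
    .groupby(["exam_number", "criterion"], as_index=False)["level"]
    .mean()
)
print(per_student)
```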

The charts below illustrate what final data compilation might look like using that data entry system.

[Charts: sample final report of student achievement by cohort and rubric criterion (ACchart, ACC3)]

After experimenting with developing a software program to compile the data, we discovered it was cheaper, and significantly simpler, to use Excel for data entry and basic data compilation. The Excel option also allows the data to be imported later into SPSS for additional correlations or analysis.

As we move forward in assessing additional outcomes this year, we are experimenting with moving from hard copy to electronic rubrics to ease the administrative burden of data entry of hard copy rubrics.

There are multiple software options, such as Qualtrics, that allow for the same questions included in hard copy rubrics to be organized electronically for reports to be run quickly and efficiently.

4. Using the Report Data to Improve Learning

After compiling the data, the assessment committee reported out the analysis in a short, factual report to the faculty using the chart format above and some additional explanatory narrative.

Throughout the reporting process and ensuing discussions about how to use the data, we reminded faculty that the point of outcome measures is to improve student learning [something we all care about].

We also were very upfront about issues with methodology that produced imperfect results, and we reminded faculty that our goal was an overview, not a publishable paper. Reminders about why we are engaging in the process and transparency about imperfections in the process went a long way toward moving the discussion forward.

We used the report as a starting point for a big picture discussion. After briefly reviewing the report with the faculty, we asked the faculty to break out into small groups and answer questions such as: given the data on 1Ls, are we satisfied with where our 1Ls are at the end of the first year? If not, what changes should we consider to help improve their learning?

By engaging the faculty in answering specific questions, we got great feedback that we turned into recommendations/action steps that led to further discussions. Eventually we adopted action steps that we have begun implementing in the hope that we can improve student learning. For example, based upon the data and the experience using the rubrics, faculty agreed to develop criterion-referenced rubrics for their own courses so that students had more information than simply a curved grade by which to assess their progress.

Conclusion

Institutional outcomes assessment is a new process for most law schools. It is also an iterative one. We learn as we go along and make changes as necessary. At GSU, we changed our data compilation methods and tweaked the rubrics. We expect to continue rubric revision as we become more familiar with the process.

What we have learned is that the rubric assessment process is fairly easy to implement, cost-effective, and can provide us useful information as we continually strive to improve our students’ learning.

What’s in a Name? Teaching Implicit Bias

Every semester I weave into my classrooms several opportunities to teach about implicit bias. I have shown videos like this and led discussions on articles like this.

Last week in my Family Law Clinic seminar, we discussed Peggy McIntosh’s Unpacking the Invisible Knapsack, which describes the author’s quest to overcome her biases stemming from white privilege. A student shared their pain and frustration over college and law professors never using their full name, and often mispronouncing the parts of their name the professor is willing to speak out loud. “It’s dehumanizing,” my student said.

Those words have haunted me all week. Names are fundamental parts of human identity. Why can we, as educators–members of an elite profession–not get this right? Why is it not a norm in higher education for professors and teaching assistants to learn to pronounce every student’s name?

Also this week, I read in a memo from a colleague a to-do item along the lines of “practice pronouncing graduates’ names.” The colleague was sharing with me tips for the job I will soon begin: associate dean for academic affairs. One privilege of this job is reading the names of all Penn State Law graduates at the annual commencement ceremony. It was profoundly touching to learn that my colleague takes the time to practice every graduate’s name–and they felt it important enough to share with me as one of a handful of their significant monthly action items.

I give all my students the opportunity to share the pronunciation of their name with me on the first day of class, on note cards I keep with me at every class. An earlier post explained more about the note card system, which I learned from fellow blogger Paula Schaefer. Pronouncing each student’s name is challenging, and I sometimes falter. Last semester I began writing the pronunciations on my seating chart, to minimize my fumbling through the note cards. This is my seventeenth year of teaching. My only regret is not starting this earlier. It enriches my classroom, and it enriches me. It bakes into my pedagogy an indirect lesson about implicit bias, a lesson I re-learn every time I call on a student and say their name, whether it is Ainslie or Zhao-Ji.

New Blog on Teaching and Learning Features Contributions from Law Faculty

Touro College has launched a Teaching and Learning Exchange Blog that all are welcome to drop in on.  Some of the recent posts discuss topics this Best Practices blog has highlighted in the past. For example, recent postings from four law faculty include: Laura Dooley, Hypo Hell: Using Short Form Questions in Class to Engage Students with Important Texts; Jack Graves, Multiple Choice Questions as an Integral Part of an Effective Assessment Regime; Dean Harry Ballan (and Dylan Wiliam), In Defense of Multiple Choice; and Meredith Miller, Day One: You Never Get a Second Chance to Make a First Impression.  Other interesting posts from faculty across the College include: Attention, Memory, and Learning: What Do We Know? So What?; “I’m Not an Actor?”; and Curiosity Feeds the Cat. Please bookmark this blog, facilitated by Dr. Rima Aranha.
