MAKING IT PERSONAL

I’ve been a devotee of Parker Palmer ever since I read The Courage to Teach.[1]  I often think of his statement: “We Teach Who We Are.”[2]  In January, David Brooks, Op-Ed columnist for the New York Times, expressed a similar sentiment.  Brooks’ piece, entitled Students Learn from People They Love,[3] told of how a class he taught at Yale softened around him after he had to cancel office hours a few years ago, having shared with his students that he “was dealing with some personal issues and a friend was coming up to help me sort through them.”  Recognizing something that many of us have long known, Brooks drew the connection between emotional relationships and learning.  Thus his Palmer-like statement: “[W]hat teachers really teach is themselves—their contagious passion for their subjects and students.”[4]

But we teachers are much more than our passion for our subjects and our students. We are human beings who bring into our teaching the accumulation of all the innate and environmental influences and experiences of our entire lives.  While these influences implicitly affect how we teach, each of us strikes a balance of how much of the personal we explicitly bring into our interactions with students, both in the classroom and outside.  My tendency is to share a great deal about myself, to share my personal stories.

I am an advocate of holistic lawyering, grounded in the essential understanding that a client comes into a lawyer’s office with a host of needs, only some of which are legal.  The lawyers I regard as most effective, the ones I most admire, are those who recognize their clients’ multiplicity of extra-legal challenges and, where appropriate, address them, if only by suggesting consultations with, or making referrals to, other professionals.

In writing this blog entry, I came to realize that I might appropriately call myself a holistic teacher. I’m not only committed to teaching my students the knowledge, skills, and values of the profession they are studying to enter.  I care about how they will bring their entire beings into their careers: all of their signature strengths as well as their challenges.  If they are struggling in any part of their lives, it will likely bleed into their performance as students and, ultimately, left unaddressed, into their careers as lawyers.

I teach a general civil externship seminar.  I have the luxury of inviting my students to focus on key aspects of successful lawyering generally taught, if at all, only in clinics and externships.  These include, above all, the people skills so essential to effective lawyering:  communication; cultural competence; emotional intelligence; self-care; and finding realistic and healthy balance among work, family, friends, and self.

A joke I tell about myself, and often share with students in distress, is that if a student comes into my office complaining of a hangnail, I recommend talk therapy, because it has been so helpful in my own life.  I have long been open about my history of episodic clinical depression, and have shared it with students and others struggling with their own mental demons.[5]  Sometimes, however, I wonder if I risk crossing the line between teacher and therapist.

Here’s a recent example.  I have been working on a pro bono case with a student now in her last semester.  We’ll call her Susan (not her real name).  Several times Susan had promised to get me a draft of a letter to a Congressperson for my review by a certain date and had not done so.  I had told her that I understood she had a lot else on her plate, and just to send me an email if she wasn’t able to get it to me by the date she had promised.  A week or so ago, she assured me that wouldn’t be necessary; she would absolutely get it to me by the end of the following weekend.  That didn’t happen.  Last week Susan and I were talking after a lunch-hour program about her upcoming interview for post-bar exam employment at a firm with which she was currently externing.  It was an encouraging and upbeat conversation.  At the end, I gently mentioned that I had received neither the draft letter nor an email explaining she was unable to get to it.  She confessed that she had begun to draft the email, but was just too anxious to finish it.  Apparently, this was not an isolated instance; she has long been plagued by anxiety.  A deeper conversation ensued about the importance of being in communication for the career she was entering.  I mentioned something I often say to students: that they wouldn’t have gotten as far as they had if they didn’t have an awful lot going for them.  But if something wasn’t working and they were unable to fix it on their own, there was no shame in seeking professional help.  Susan shared that she knew this, and also that she had stopped going to therapy several years ago when her therapist had suggested she might benefit from anti-anxiety medication.  Susan was and is adamantly opposed to psychiatric medication.  I stressed the importance of not burying her head in the sand, that the choice of whether to take medication would be her own, but that doing nothing to solve a seemingly intractable problem was not a rational choice.  I reminded her of my own mental health history.  She later wrote, thanking me for the advice.

I write all of this with admitted ambivalence.  I even question whether it belongs in the “Best Practices” Blog.  For I struggle with my tendency to so readily recommend therapy to my students.

There are times when, crafting my journal prompts, I have to remind myself to relate them to my students’ lives as externs and the lawyers they are becoming.  An example, from our exploration of Emotional Intelligence:

  1. Reflect on how well you think you know yourself, your MO (modus operandi). For example, are you aware of what your immediate response is to an upsetting or difficult situation?  Are there automatic responses you have that you would like to change?  Specifically, do any of these responses tend to create problems for you professionally, to keep you from being the law student or lawyer you most want to be? If so, what steps can you take to change them?

and another based on the chapter I contributed to Learning from Practice,[6] on Work and Well-being:

  1. Considering chapter 25 and our class discussion on Monday, reflect on what, if any, habits or practices you have developed in law school that detract from your sense of well-being. What, if any, habits or practices contribute positively to your sense of well-being?  What, if anything, might you change to improve your well-being, now and going forward?

Is it good practice to probe so personally into my students’ inner lives?  Is it sufficient that I offer them the option of writing about something less personal?  Although I may have doubts, I find that these prompts often elicit some of the most thoughtful reflections of any my students write.  Self-awareness, like awareness of others’ emotional contexts, is so critically important to being an effective lawyer.  Where in the curriculum can we explore this if not in experiential courses and, specifically, in journals?  Here’s a recent example from one of my students’ journals, responding to this prompt:

One habit developed during law school that detracts from my well-being is that I have stopped going to the gym and eating healthy. I was always very into fitness and living a healthy lifestyle….  The time constraints of law school and working fulltime have forced me to essentially eliminate this from my life. . . .  My physical health and body image definitely play an important role in my mental health.  In the future, I think it will be important for me to carve out time to keep this part of my life….

I believe in the importance of introducing my students to positive psychology[7] and mindfulness practices, both empirically demonstrated to provide a plethora of benefits.[8]  For more than ten years, I have had a regular morning meditation practice, which has been hugely beneficial in my own life and work.  For many years I have introduced my students to mindfulness meditation in the first class of the semester.  I usually show a wonderfully accessible twelve-minute video of Anderson Cooper’s introduction to meditation at a weekend silent retreat with Jon Kabat-Zinn.  For even longer than that, I have begun each of my classes with two minutes of what I have come to call “settling in,” accompanied by an introduction, and invitation, to mindful breathing.  Some students find it to be an invaluable tool for settling their minds and reducing their anxiety, in and out of class.  Many others are agreeable to practicing it in class, but not inspired to try it elsewhere.  Still others find it to be a hippy-dippy waste of time.  I know it alienates some students, but that’s a cost I deem worth it for the possibly life-long benefits it provides for others.

I consciously model vulnerability, fallibility, and taking responsibility for messing up.  I admit my MO—being a scold.  I am naturally impatient with students who haven’t lived up to their responsibilities, who haven’t exhibited the professionalism that becoming a lawyer requires.  I work hard at not acting out of my “default position.”  Too often, I fail.  Even if I say nothing, it shows on my non-poker face.  To put it mildly, this does not improve the climate in the classroom.  Here’s an example from this semester of failing and recovering:

I have the smallest seminar I have ever had: only five students.  We meet on Mondays, late afternoon.  On a Tuesday that was a “legislative” Monday[10] following Presidents’ Day weekend, only three students showed up.  The absent students hadn’t notified me. The following Monday, three students showed up.  One of the absent students had let me know that she had a stomach flu; I heard nothing from the other.

The assignment for that particular class was to prepare for partnered simulations based on the ethical dilemma hypotheticals in chapters 10 and 11 of Learning from Practice.[11]  The instructions I sent with the assignment right after the prior class directed the students, in bolded text, to do two things:  1) coordinate with their simulation partner in advance and 2) notify me if they weren’t going to be in class so that I could make alternative assignments.  Of the three students who showed up, only one had read all of the assigned pages, and none had communicated with their partners about the simulations.

I did my best not to blow up, but I was practically ready to end class then and there.  Instead, I took a deep breath, gave them a few minutes to read the hypotheticals, and left the classroom for a few minutes to cool down.  When I returned, we discussed the scenarios, rather than acting them out.  It was the best I could think to do at the time, and the discussion was sufficient to get us through the remainder of class.  It probably goes without saying that it was not a great class.

My true recovery actually occurred the following day when, having sufficient distance in time and place, I drafted, edited and emailed the class a missive I titled “IMPORTANT.”  After reciting the concerning events of the previous two classes, I added:

I appreciate that you are all juggling multiple responsibilities and substantial workloads.  This is training for careers as lawyers.  You are in the process of developing your professional identities.  I am committed to supporting you in that process.

Your professional training to become lawyers requires you to be accountable and in communication.  If you need to miss a class for which you have been assigned a particular role or task, you should inform your professor and any affected classmates in advance, or, if not possible, as soon as you can.

Law school generally, and the externship and clinical programs in particular, serve as a laboratory for developing the professionalism habits you will need for your future careers.  Towards that end, I … have attached … a Professionalism rubric[12]… .  I ask all of you, as you prepare your mid-semester self-evaluation to rate yourself on this rubric and see where you need and want to improve.  I will ask you to do the same at the end of the semester.  It’s up to you whether you want to share your rubric with me.

We are a very small group.  That has advantages and disadvantages, the latter having been evident for the past two classes.  We all need to work hard to live up to our obligations in order to maximize this learning experience for you.

I am posting the above on the Discussion forum and invite replies.  Or contact me privately and/or anonymously.

I’m committed to your success and know that you are, too.

No one took me up on the offer to post on the Discussion forum or to contact me otherwise.  Nonetheless, at the following class, all five students were present, thoroughly prepared for the simulations, and completely engaged.  It was a terrific class, thanks to the work they put into it.

* * *

I have never been a trial lawyer or practiced law in a private firm.  I don’t have much in the way of war stories relevant to my externs’ placement experiences.  But I do have stories gleaned from seven decades of lived experience.  I have wisdom gained from pursuing my three major life passions:  One, to write and speak about more healing, relational and non-adversarial methods of achieving justice, resolving conflict, and ordering legal affairs.  Two, to decrease the shame and stigma around mental illness, having suffered six episodes of major clinical depression over the past forty-four years.  And three, to help my students envision and strive for careers that will make them excited and happy to get out of bed every morning.  I have been blessed with a career that has enabled me to pursue all three.

When I look back on my more than 36 years of teaching, I see that I have lived them holistically.  My work has been almost seamlessly integrated into the rest of my life, not separate and apart.  In both I have experienced the full gamut of emotions: joy, sadness, frustration, contentment—but never boredom.

We teach who we are.

 

 

[1] Parker Palmer, The Courage to Teach: Exploring the Inner Landscape of a Teacher’s Life (1st ed., 1998).

[2] Id. at 1.

[3] David Brooks, Students Learn from People They Love, N.Y. Times, Jan. 17, 2019 (Op-Ed).

[4] Id.

[5] See, e.g., Marjorie A. Silver, Healing Classrooms, in Transforming Justice, Lawyers, and the Practice of Law 264-65 (Marjorie A. Silver, ed., 2017);  Marjorie A. Silver, A Transformational Melancholy: One Law Professor’s Journey Through Depression, https://ssrn.com/abstract=1908992 (2011).

[6] Marjorie A. Silver, Ch. 25: Work and Well-Being, in Learning from Practice 699-724 (Wortham, et al. eds., 3rd ed. 2016).

[7] Id. at 700-05.

[8] See, e.g., Shailini Jandial George, The Cure for the Distracted Mind: Why Law Schools Should Teach Mindfulness, 53 Duq. L. Rev. 215 (2015).

[9] omitted.

[10] This is Touro’s term for following a particular day’s schedule on a different day of the week.

[11] Lisa G. Lerman & Lisa V. Martin, Ch. 10: Ethical Issues in Externships: An Introduction 261-78; Alexis Anderson, Ch. 11: Ethical Issues in Confidentiality 279-93, in Learning from Practice (Wortham, et al. eds., 3rd ed. 2016).

[12] See https://www.stthomas.edu/media/hollorancenter/pdf/FINALProfessionalismRubricMarch2019.pdf.


A Pedagogical Twist for the 1L Appellate Brief and Oral Argument

For those who teach legal writing to first-year law students, it is the season for appellate oral argument. Yes, the long-standing tradition of requiring first-year students to complete an appellate oral argument in the legal writing course continues today at the large majority of American law schools–at just under 75% of them, according to recent data. At those schools, the oral argument, which is commonly the capstone exercise near the end of the spring semester, has become something of a rite of passage for the students.

In a 2011 article, Legal Research and Writing as Proxy, I argued that assigning an appellate brief and appellate oral argument in the 1L legal writing course remains a pedagogically sound practice, even though a large majority of practicing attorneys will never engage in appellate practice, let alone complete an appellate oral argument. I still retain that view but won’t rehash my arguments here. Rather, I will focus on a pedagogical opportunity afforded by the brief/oral argument sequence of assignments that I discovered more recently.

In the last few iterations of my legal writing course, the appellate brief and oral argument assignments have proven an excellent vehicle for a bit of a pedagogical twist: A few weeks before the brief is due, not after, I teach lessons on oral argument and require the students to complete a practice oral argument round in front of my 2L teaching assistants. (The formal rounds of oral argument in front of a trio of local attorneys still occur after the briefs are submitted.) For many years, I kept brief writing and oral argument entirely separate—only after the briefs were completed and submitted would I shift the students’ attention to oral argument. (After all, that mimics the realities of the “real world” of appellate practice.) But as a pedagogical matter, just as writing the brief helps in preparing an oral argument, working on an oral argument–and thereby having to talk out and defend one’s positions–can help in preparing a brief.

A few weeks before the brief is due, most students will have a scattered and underdeveloped array of arguments. Completing a practice oral argument can help them–or, in the case of those students who are spinning their wheels, force them–to organize and further develop those arguments for the purposes of the brief. In pursuit of this goal, I ask my TAs to give extensive feedback to both students after each practice round. Moreover, I require every student to attend two additional practice rounds as observers. At each round, the student representing Petitioner, the student representing Respondent, and the students attending as observers also begin to appreciate the formalities and peculiarities of oral argument, thus helping them to prepare for the formal rounds that will occur after submission of their briefs.

This semester, shortly after the practice rounds (just over a week before the briefs were due), my students graciously agreed to provide me some feedback on the experience. One of my students volunteered to solicit comments from all of her classmates, anonymize those comments and her own, and then send them to me. Twelve out of fourteen students in my small section gave a positive review. I include two of the more thoughtful evaluations here:

  1. I found doing the practice oral arguments before my brief was fully written to be helpful. Arguing my side in the courtroom and fielding questions from the TAs helped me more precisely narrow the theme of my arguments and determine how I wanted to frame my position in the brief itself. After receiving pushback from the TAs on certain points, I was able to refine my responses to common criticisms that would come from the other side. Additionally, I now feel more comfortable going into the “official” oral arguments having completed a practice round. However, I would have liked to participate in another mandatory practice round with the TAs after my brief is written; the substance of my oral argument has substantially changed since my first practice round.
  2. Practice oral arguments were a large motivator to get my arguments organized. I found it really helpful to speak out loud about the arguments. Doing so really helped me understand what my points were and whether or not they held up against scrutiny. Speaking about the arguments also helped me understand how they related to each other. The TA’s did a good job of making us feel comfortable throughout the process. I think overall the exercise is going to be beneficial as long as the practice round is kept informal. We were all stressed about how to perform the oral arguments, so maybe there could be a concession in the formality/process of the oral argument that could make us more comfortable.

Good food for thought, as I continue the tradition of appellate oral argument again next spring.

Are the Students Failing the Bar Exam Today Canaries in the Coal Mine, Warning Us of a More General Need to Change Legal Education?

Thank you so much to Best Practices for Legal Education for inviting me to blog again and to Elizabeth Murad for her remarkable work in keeping contributors in touch and on track.  So much is written about the very real decline in bar passage that it is easy for schools with high pass rates–or at least high in relation to other schools in their state–to ignore the need to change what goes on in the classroom and to dismiss the excellent work being done on effective law teaching as relevant only to “lesser schools” in “lower tiers.”

We know, as legal educators, members of the bar, and even members of the public, that bar passage rates have been falling.  And we also know that many, if not most, law schools are admitting students today with LSAT scores lower than those of the students they admitted ten years ago.  So it’s easy to see a correlation between lower scores and falling rates.  After all, the bar exam is a test much like the LSAT–why wouldn’t there be a relationship?  But even if students are failing the bar exam for the same reasons they are getting low LSAT scores, we still have the opportunity to intervene in ways that we know raise pass rates.  This blog contains so many resources for those who want to teach more effectively.  Why wouldn’t we want this for all our students?

Everyone at a school with a “bar passage problem” is well aware that we cannot continue to do the same things we always have when they are no longer working the way they used to.  But we hear this less at schools satisfied with their bar passage rates.  Perhaps the students who are failing are really canaries in the coal mine, warning all of legal education that all of today’s law students find it more difficult to translate their legal education into the very peculiar format required for bar passage, regardless of LSAT score.  Everyone who has ever studied for the bar exam remembers it as a grueling, unpleasant, and highly intensive process–but until very recently that process started after graduation and, barring personal disaster, almost always resulted in passage.  Even when it didn’t, the consequences of failure were lower.  Today, students safely employed in September find themselves fired if October brings news of failure.  We need to consider bar passage as an issue both for students who fail and for those who pass–after all, both groups spend the same three years in law school.

Anecdotal evidence (for which we could easily substitute actual data by doing some surveys) suggests that bar passage anxiety spreads well beyond those students most at risk.  All students know that the stakes are high, and many believe that their chances of passing are lower than those of students in the past.  Does that affect their choices while in law school?  Could they be doing more to prepare for their future careers if we could provide them more effective instruction?

Medical students and educators are expressing the same kinds of concerns about their curriculum being shaped by a test as we should be about ours.  We can’t easily change the bar exam–but we can adopt more direct methods of instruction that not only support bar passage but also create time for the more complex, less exam-focused thinking that we want going on in class.

I hope over the week to share resources that will encourage everyone to consider how studying for a very old-fashioned test is negatively shaping the education of all of today’s law students.  (And because it always warrants reposting, here is a recently revised article by Louis Schulze about what they have done at FIU to apply the “science of learning” across the curriculum in support of higher bar passage.)

 

New Rubrics Available to Help Law Schools that Have Adopted Learning Outcomes Related to Professional Identity Formation

By: Professor Benjamin V. Madison, III

 

A recent blog post by Andi Curcio and Dean Alexis Martinez addressed the manner in which well-developed rubrics help law schools in program assessment. As newcomers to assessment of program learning outcomes (see Article), law schools need guidance on best practices for program assessment.

Rubrics are clearly a key part of assessing whether law students, by the time they leave law school, have attained skills, competencies, and traits embodied in a given school’s program learning outcomes. The Holloran Center for Ethical Leadership in the Professions created a database of program learning outcomes adopted by law schools. See Database. The program learning outcomes that many of us find most intriguing are those under ABA Standard 302(c) (exercise of professional and ethical responsibilities to clients and the legal system) and Standard 302(d) (professional skills needed for competent and ethical participation as a member of the legal profession). The competencies and skills in learning outcomes adopted by law schools under these categories include: Cultural Competency (46 schools), Integrity (27 schools), Professionalism (31 schools), Self-Directedness (41 schools), and Teamwork/Collaboration (52 schools).

Associated with St. Thomas School of Law, the Holloran Center brought together two leaders in the professional formation movement, Professor Neil Hamilton and Professor Jerry Organ of St. Thomas Law, with faculty and staff from other law schools that have committed to pursuing professional identity formation as part of the effort to produce complete lawyers. Like Professor Hamilton, Professor Organ, and St. Thomas, these faculty, administrators, and staff–and their law schools–have demonstrated a commitment to the professional identity formation movement—a movement inspired by the 2007 publication of the Carnegie Report and of Best Practices in Legal Education. Recently, rubrics developed over the past year by working groups assigned to specific competencies were added to the Holloran Center web site; see Holloran Competency Milestones.

The Holloran Competency Milestones offer a ready-made starting point to any law school that has published a program learning outcome in the competencies listed above—competencies that some educators may consider too challenging to assess. If anyone believes these competencies are impossible to assess, however, the Holloran Competency Milestone rubrics show otherwise. A law school must still decide in what courses, or in what contexts (possibly clinical settings), it will use the rubrics to assess attainment of a given competency. However, the Milestones are a valuable tool for assessing these competencies.

The work of the Holloran Center, and of those of us on the working groups that developed these first rubrics, will continue. (The persons and schools who have participated in this project to date are identified on the site with the Milestones.) Law schools that have not previously been involved in developing rubrics have recently committed to developing additional ones. Continuing this progress will provide rubrics for program assessment of competencies for which assessment tools do not yet exist. For instance, these schools are likely to address competencies such as Reflection/Self-Evaluation (included in the published learning outcomes of 36 schools), Active Listening (31 schools), and Judgment (18 schools).

Anyone who considers the competencies discussed here to be too abstract to include in a law school’s program of instruction ought to review the impressive survey by Educating Tomorrow’s Lawyers (ETL), called the Foundations of Practice Survey. That survey of more than 24,000 lawyers nationwide demonstrated that the very competencies discussed above (1) were among the most important factors in employers’ decisions whether to hire law students, and (2) determined whether a student is likely to succeed in law practice. See Foundations of Practice Report (The Whole Lawyer and the Character Quotient).

In short, the law schools that adopted learning outcomes designed to produce lawyers who are not only legal technicians but whole persons are on the right track. By adopting competencies that go beyond the traditional ones (analytical skill, writing, etc.), these schools showed they believe a complete lawyer needs more than technical skill, and the efforts described here validate that decision. The hope, of course, is that law schools now use these rubrics to do program assessment of competencies such as cultural competency, integrity, professionalism, self-directedness, and teamwork/collaboration.

May these efforts ultimately produce more lawyers who embody these competencies.

The Feedback Sandwich: A Bad Recipe for Motivating Students’ Learning

This past year, I’ve been participating in the hiring process for clinical professor positions at our law school. I’ve observed job talks and engaged with candidates about how they provide supervision. Because I believe that giving students feedback is, perhaps, the hardest part of being a clinical professor, I tend to ask lots of questions about how candidates would, ideally, provide feedback in an academic or practice setting.

I’ve been surprised by how many candidates still subscribe to the “feedback sandwich” as a model for delivering feedback and by how many clinical professors claim they use the model in their teaching. The feedback sandwich is a feedback delivery method that uses a compliment-criticism-compliment format. It’s meant to soften the blow of critical feedback and increase the likelihood that the recipient will actually listen to the “meat” of the sandwich – the corrective measures. But the feedback sandwich has been widely criticized.

Feedback is the backbone of clinical education. One of the greatest benefits of experiential learning is the opportunity to give and receive constant feedback. Feedback helps students develop their skills and their professional identities. Well-designed feedback can lead to increased motivation and learning. But ill-designed feedback can lead to decreased motivation, low performance and disengagement.

No doubt, most feedback is well-intentioned whatever form it takes. The feedback sandwich certainly seems well-intentioned too. Professors often use it to remind students that they can and have done some things well. But danger lurks in the good intentions of comforting feedback.

Researchers have demonstrated that giving students comforting feedback significantly decreases their motivation to learn.  Comforting feedback communicates low expectations. For example, telling a student that plenty of people have difficulty with this skill but are good at others doesn’t empower the student to improve. In fact, it may even suggest that the professor doesn’t think the student can improve.

On the other hand, controllable feedback increases students’ motivation and effort to learn. Controllable feedback gives students the specific strategies they need to improve. For example, asking a student to talk through the strategies used to complete a task, and then together developing specific ways the approach can be improved, offers the student a pathway to increase their learning.

Don’t let your feedback get hijacked by the sandwich myth. Research shows that when we hide the feedback that is critical for learning, students tend to remember the compliments and forget the critical points that would lead to real struggle and learning. And, importantly, students interpret comforting feedback to mean that they may not be able to improve their performance in this particular skill. Compliments and comforting feedback may help students feel better in the short term, but they don’t help them address their deficits.

If you are uncomfortable giving critical feedback, consider the learning culture you foster. The type of feedback one gives reflects one’s mindset. Instructors with a growth mindset foster a belief that students’ intelligence or aptitude can grow with effort and good strategies. Those with a fixed mindset believe that one’s intelligence or ability is mostly fixed, and one can’t significantly change their natural abilities. Researchers have shown that instructors with a fixed mindset give significantly more comforting feedback than instructors with a growth mindset. This makes sense because if we believe a student may not be able to greatly improve their performance despite their best efforts, we seek ways to make them feel better about themselves.

A growth-minded culture allows for feedback to be taken in the spirit in which it was intended – to provide students with an honest assessment of their performance and concrete ways to improve it. It’s essential for clinical professors to provide growth-minded and controllable feedback. That’s because students can detect instructors’ mindsets. They see through the comforting feedback and come to believe they aren’t capable of significantly upping their game. Only controllable feedback provides a path for sustained improvement and growth. Law students will need to learn to receive and give this kind of feedback as they enter the legal profession, and law schools can play a role in helping them manage this process.

CLEA, SALT and others urge Council on Legal Education to increase transparency and reject proposed changes to Standard 316 at its Friday 2.22.19 meeting

FROM CLEA website:

On February 20, 2019, CLEA submitted two joint advocacy memorandums, with the Society of American Law Teachers (SALT) and others, to the Council of the ABA Section of Legal Education and Admissions to the Bar.

In the first joint memo, CLEA and SALT urge the Council to increase transparency in its processes and engage in meaningful dialogue with all interested constituencies before making decisions that affect law schools and the legal profession.

The second advocacy memo urges the Council to once again reject the proposed changes to Standard 316 relating to bar passage.  The second memo is co-signed by SALT, the ABA Coalition on Racial and Ethnic Justice, ABA Commission on Disability Rights, ABA Commission on Hispanic Legal Rights & Responsibilities, ABA Commission on Sexual Orientation & Gender Identity, ABA Commission on Women in the Profession, ABA Council for Diversity in the Educational Pipeline, ABA Law Student Division, ABA Young Lawyers Division, HBCU Law Deans Gary Bledsoe, John C. Brittain, Elaine O’Neal, John Pierre, & LeRoy Pernell,  and the Hispanic National Bar Association (HNBA).

Assessing Institutional Learning Outcomes Using Rubrics: Lessons Learned

By: Professor Andi Curcio & Dean Alexis Martinez

Experience confirms that using rubrics to assess institutional learning outcomes is relatively easy and cost-effective. It is also an iterative process. Below we share some of the lessons we learned as we engaged in this rubric-based institutional assessment process. We also share examples of final report charts to illustrate how this process results in usable assessment report data.

A Review of the Basics

Georgia State University College of Law has institutional outcomes that encompass the ABA-required legal knowledge, analysis, research, and writing outcomes, as well as outcomes covering self-reflection, professional development, ethical and professional obligations, teamwork, the ability to work effectively with courts and clients, and awareness of pro bono responsibilities.

An earlier blog and article provide an in-depth discussion about the development and use of rubrics to assess these institutional outcomes.

To briefly review the main idea: we engaged faculty in designing rubrics with measurable criteria for each institutional outcome.

For example, for our legal knowledge and analysis outcomes, our criteria included: substantive legal knowledge; issue spotting; fact usage; critical analysis; and policy analysis. For each criterion, we identified a continuum of competence.

For example, for issue spotting, the rubric looked like this:

[Image: excerpt of the issue-spotting rubric, showing the continuum of competence]

As the excerpt above illustrates, we drafted rubrics so that faculty teaching a wide range of courses could use the rubric, regardless of course content or assessment methodology.

For each outcome, we identified multiple first-year and upper-level courses that would provide a solid student sample and used those courses to measure the outcome. In the designated courses, faculty graded as usual and then completed a rubric for each student.

Faculty did not have to change how they taught or assessed and the only extra work was completing a rubric – a process the faculty agreed took little additional time.

All data from the completed rubrics were entered into one master database and used to create a faculty report identifying student achievement, by cohort year (1L, 2L, 3L), for each rubric criterion [see sample below].
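
To make that data flow concrete, here is a minimal sketch, in Python, of how completed-rubric entries might be tallied into such a report. It is only an illustration under assumed, hypothetical data (the exam numbers and level assignments are invented for the example); it is not GSU’s actual database or report format.

    from collections import Counter, defaultdict

    # Each completed rubric contributes one row per criterion:
    # (anonymous exam number, cohort year, criterion, level assigned)
    rubric_entries = [
        ("1234", "1L", "issue spotting", 2),
        ("5678", "1L", "issue spotting", 3),
        ("9012", "2L", "issue spotting", 3),
        ("3456", "3L", "critical analysis", 4),
    ]

    # Tally how many students in each cohort reached each level of each criterion.
    report = defaultdict(Counter)
    for exam_no, cohort, criterion, level in rubric_entries:
        report[criterion][(cohort, level)] += 1

    for criterion, counts in sorted(report.items()):
        print(criterion)
        for (cohort, level), n in sorted(counts.items()):
            print(f"  {cohort} at Level {level}: {n} student(s)")

The same kind of tally, grouped by cohort year and criterion, is what the faculty report charts summarize.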

Lessons Learned:

1. Drafting Rubrics

We struggled to draft rubrics that could be easily adapted to a wide range of courses. If we were starting from scratch, it might have been easier to use the rubrics drafted by the Association of American Colleges and Universities [AAC&U] as a starting point. Those rubrics have been developed and tested for reliability and validity. They also look at big-picture skills.

Because law faculty often think in the context of how individual courses are taught, it was sometimes challenging for faculty to start from scratch and draft rubrics that could be easily applied across the curriculum. Starting with the AAC&U rubrics allows faculty members to review examples of language and to see how larger, generalized program outcomes can be assessed through many different teaching methods and in a wide range of courses.

We also learned that it works best if we keep the rubrics to one page per learning outcome. Although an outcome could have many criteria, it is important to identify 4-5 key ones. Keeping the rubrics to one page forces us to home in on the critical skills and helps ensure that the process is not overly burdensome for either faculty completing the rubric or staff entering the rubric data. It also makes reporting the data more manageable.

We also found it useful to remind faculty that the institutional rubrics are not meant to capture all skills taught in a given course and that we did not expect all faculty to assess every rubric criterion, which is why we included an “N/A” [not applicable] choice for each criterion.

Finally, we found it helpful to emphasize that while we cannot change the rubrics mid-year, we welcome feedback and are open to changing future rubric iterations based upon faculty input. This keeps the faculty engaged and ensures the rubrics are as meaningful as possible.

2. Labeling Criterion Levels

Originally, we drafted rubrics and labeled each criterion level with word descriptors such as: needs significant help; developing; competent; and aspirational. Faculty found those labels more confusing than helpful. We thus changed the continuum labels to: level 1, level 2, etc. This change made it easier for faculty to focus on the descriptors along the continuum, rather than the achievement labels. It also eliminated any concerns about how the data collected could be used in the future, either internally or externally, to describe the quality of current and future graduates.

3. Data Compilation and Report Format

We chose a wide variety of 1L and upper level courses to get a robust data sample. In each course assessed, the professor completed a rubric for each student. Professors used anonymous exam numbers for the rubrics, just like for grading.

Initially, each rubric submitted was a data point. However, we realized that some students were taking multiple courses used in our data collection while others took only one course. To address the issue of “double counting” some of the same students, we changed our data entry system so that each student became a data point.

To the extent students took multiple courses where the outcome was measured, and they were rated differently by different professors, we averaged their scores. Thus, if a student was at a Level 2 in issue spotting in Con Law II and a Level 3 in issue spotting in Administrative Law, the student was entered into the program as a 2.5 for issue spotting. That also gave us a more granular final report: instead of having four levels, we had seven.
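
For readers who want to see the arithmetic, here is a minimal sketch of that averaging step in Python. The exam numbers and levels are hypothetical, and this is only an assumed illustration of the approach described above, not the actual data entry system.

    from collections import defaultdict

    # Issue-spotting levels recorded for each student (anonymous exam number),
    # one entry per course in which the outcome was measured.
    levels_by_student = defaultdict(list)
    levels_by_student["1234"] += [2, 3]   # e.g., Level 2 in Con Law II, Level 3 in Admin Law
    levels_by_student["5678"] += [4]      # measured in only one course

    # Each student becomes a single data point; averaging across courses yields
    # half-step values (2.5, 3.5, ...), so a four-level rubric reports on seven levels.
    issue_spotting = {
        exam_no: sum(levels) / len(levels)
        for exam_no, levels in levels_by_student.items()
    }
    print(issue_spotting)   # {'1234': 2.5, '5678': 4.0}

Averaging in this way both removes the double counting and produces the seven-level granularity described above.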

The charts below illustrate what final data compilation might look like using that data entry system.

[Images: sample charts of the compiled rubric data, by criterion level and cohort year]

After experimenting with developing a software program to compile the data, we discovered it was cheaper, and significantly simpler, to use Excel for data entry and basic data compilation. The Excel option also allows the data to be imported into SPSS later for additional correlations or analysis.

As we move forward in assessing additional outcomes this year, we are experimenting with moving from hard copy to electronic rubrics to ease the administrative burden of data entry of hard copy rubrics.

There are multiple software options, such as Qualtrics, that allow the same questions included in hard copy rubrics to be administered electronically, so that reports can be run quickly and efficiently.

4. Using the Report Data to Improve Learning

After compiling the data, the assessment committee reported out the analysis in a short, factual report to the faculty using the chart format above and some additional explanatory narrative.

Throughout the reporting process and ensuing discussions about how to use the data, we reminded faculty that the point of outcome measures is to improve student learning [something we all care about].

We also were very upfront about issues with methodology that produced imperfect results, and we reminded faculty that our goal was an overview, not a publishable paper. Reminders about why we are engaging in the process and transparency about imperfections in the process went a long way toward moving the discussion forward.

We used the report as a starting point for a big picture discussion. After briefly reviewing the report with the faculty, we asked the faculty to break out into small groups and answer questions such as: given the data on 1Ls, are we satisfied with where our 1Ls are at the end of the first year? If not, what changes should we consider to help improve their learning?

By engaging the faculty in answering specific questions, we got great feedback that we turned into recommendations/action steps that led to further discussions. Eventually we adopted action steps that we have begun implementing in the hope that we can improve student learning. For example, based upon the data and the experience using the rubrics, faculty agreed to develop criterion-referenced rubrics for their own courses so that students had more information than simply a curved grade by which to assess their progress.

Conclusion

Institutional outcomes assessment is a new process for most law schools. It is also an iterative one. We learn as we go along and make changes as necessary. At GSU, we changed our data compilation methods and tweaked the rubrics. We expect to continue rubric revision as we become more familiar with the process.

What we have learned is that the rubric assessment process is fairly easy to implement, cost-effective, and can provide us useful information as we continually strive to improve our students’ learning.
