New Rubrics Available to Help Law Schools that Have Adopted Learning Outcomes Related to Professional Identity Formation

By: Professor Benjamin V. Madison, III

 

A recent blog post by Professor Andi Curcio and Dean Alexis Martinez addressed how well-developed rubrics help law schools with program assessment. As newcomers to assessment of program learning outcomes (see Article), law schools need guidance on best practices for program assessment.

Rubrics are clearly a key part of assessing whether law students, by the time they leave law school, have attained the skills, competencies, and traits embodied in a given school's program learning outcomes. The Holloran Center for Ethical Leadership in the Professions created a database of program learning outcomes adopted by law schools. See Database. The program learning outcomes that many of us find most intriguing are those under ABA Standard 302(c) (exercise of professional and ethical responsibilities to clients and the legal system) and Standard 302(d) (professional skills needed for competent and ethical participation as a member of the legal profession). The competencies and skills in learning outcomes adopted by law schools under these categories include: Cultural Competency (46 schools), Integrity (27 schools), Professionalism (31 schools), Self-Directedness (41 schools), and Teamwork/Collaboration (52 schools).

The Holloran Center, housed at St. Thomas School of Law, brought together two leaders in the professional formation movement, Professor Neil Hamilton and Professor Jerry Organ of St. Thomas Law, with faculty and staff from other law schools that have committed to pursuing professional identity formation as part of their effort to produce complete lawyers. Like Professor Hamilton, Professor Organ, and St. Thomas, these faculty, administrators, and staff, and their law schools, have demonstrated a commitment to the professional identity formation movement, a movement inspired by the 2007 publication of the Carnegie Report and of Best Practices in Legal Education. Recently, rubrics developed over the past year by working groups assigned to specific competencies were added to the Holloran Center website; see Holloran Competency Milestones.

The Holloran Competency Milestones give any law school that has published a program learning outcome in the competencies listed above a ready-made tool for assessing them, even though some educators may consider these competencies too challenging to assess. If anyone believes these competencies are impossible to assess, the Holloran Competency Milestone rubrics show otherwise. Each law school must still decide in which courses or contexts (for example, clinical settings) it will use the rubrics to assess attainment of a given competency, but the Milestones are a valuable tool for that assessment.

The work of the Holloran Center, and of those of us on the working groups that developed these first rubrics, will continue. (The persons and schools who have participated in this project to date are identified on the site with the Milestones.) Law schools that have not previously been involved in developing rubrics have recently committed to developing further ones. Continuing this progress will produce rubrics for program assessment of competencies for which assessment tools have not yet been developed. For instance, these schools are likely to address competencies such as Reflection/Self-Evaluation (included in 36 schools' published learning outcomes), Active Listening (31 schools), and Judgment (18 schools).

Anyone who considers the competencies discussed here too abstract to include in a law school's program of instruction ought to review the impressive survey by Educating Tomorrow's Lawyers (ETL), the Foundations of Practice Survey. ETL's survey of more than 24,000 lawyers nationwide demonstrated that the very competencies discussed above (1) were among the most important factors in employers' decisions whether to hire law students and (2) largely determine whether a new lawyer is likely to succeed in law practice. See Foundations of Practice Report (The Whole Lawyer and the Character Quotient).

In short, the law schools that adopted learning outcomes designed to produce lawyers who are not only legal technicians but whole persons are on the right track. By adopting competencies that go beyond the traditional ones (analytical skill, writing, and the like), these schools showed they believe a lawyer needs more than technical skills to be complete. The efforts described here validate that decision. The hope, of course, is that law schools will now use these rubrics to conduct program assessment of competencies such as cultural competency, integrity, professionalism, self-directedness, and teamwork/collaboration.

May these efforts ultimately produce more lawyers who embody these competencies.


The Feedback Sandwich: A Bad Recipe for Motivating Students’ Learning

This past year, I’ve been participating in the hiring process for clinical professor positions at our law school. I’ve observed job talks and engaged with candidates about how they provide supervision. Because I believe that giving students feedback is, perhaps, the hardest part of being a clinical professor, I tend to ask lots of questions about how candidates would, ideally, provide feedback in an academic or practice setting.

I've been surprised by how many candidates still subscribe to the "feedback sandwich" as a model for delivering feedback and by how many clinical professors claim they use the model in their teaching. The feedback sandwich is a feedback delivery method that uses a compliment, criticism, compliment format. It's meant to soften the blow of critical feedback and increase the likelihood that the recipient will actually listen to the "meat" of the sandwich – the corrective measures. But the feedback sandwich has been widely criticized.

Feedback is the backbone of clinical education. One of the greatest benefits of experiential learning is the opportunity to give and receive constant feedback. Feedback helps students develop their skills and their professional identities. Well-designed feedback can lead to increased motivation and learning. But ill-designed feedback can lead to decreased motivation, low performance and disengagement.

No doubt, most feedback is well-intentioned whatever form it takes. The feedback sandwich certainly seems well-intentioned too. Professors often use it to remind students that they can and have done some things well. But danger lurks in the good intentions of comforting feedback.

Researchers have demonstrated that giving students comforting feedback significantly decreases their motivation to learn. Comforting feedback communicates low expectations. For example, telling a student that plenty of people have difficulty with a particular skill but may be good at others doesn't empower that student to improve. In fact, it may even suggest that the professor doesn't think the student can improve.

On the other hand, controllable feedback increases students' motivation and effort to learn. Controllable feedback gives students the specific strategies they need to improve. For example, asking a student to talk through the strategies they used to complete a task, and then developing together specific ways to improve that approach, offers the student a pathway to increase their learning.

Don't let your feedback get hijacked by the sandwich myth. Research shows that when we hide the feedback that is critical for learning, students tend to remember the compliments and forget the critique that would lead to real struggle and learning. And, importantly, students interpret comforting feedback to mean that they may not be able to improve their performance in that particular skill. Compliments and comforting feedback may help students feel better in the short term, but they don't help students address their deficits.

If you are uncomfortable giving critical feedback, consider the learning culture you foster. The type of feedback one gives reflects one's mindset. Instructors with a growth mindset foster a belief that students' intelligence or aptitude can grow with effort and good strategies. Those with a fixed mindset believe that intelligence or ability is mostly fixed and that natural abilities cannot be changed significantly. Researchers have shown that instructors with a fixed mindset give significantly more comforting feedback than instructors with a growth mindset. This makes sense: if we believe a student may not be able to greatly improve their performance despite their best efforts, we seek ways to make them feel better about themselves.

A growth-minded culture allows feedback to be taken in the spirit in which it was intended – to provide students with an honest assessment of their performance and concrete ways to improve it. It's essential for clinical professors to provide growth-minded and controllable feedback, because students can detect instructors' mindsets. They see through comforting feedback and come to believe they aren't capable of significantly upping their game. Only controllable feedback provides a path for sustained improvement and growth. Law students will need to learn to receive and give this kind of feedback as they enter the legal profession, and law schools can play a role in helping them manage this process.

CLEA, SALT and others urge Council on Legal Education to increase transparency and reject proposed changes to Standard 316 at its Friday 2.22.19 meeting

FROM CLEA website:

On February 20, 2019, CLEA submitted two joint advocacy memoranda, with the Society of American Law Teachers (SALT) and others, to the Council of the ABA Section of Legal Education and Admissions to the Bar.

In the first joint memo, CLEA and SALT urge the Council to increase transparency in its processes and engage in meaningful dialogue with all interested constituencies before making decisions that affect law schools and the legal profession.

The second advocacy memo urges the Council to once again reject the proposed changes to Standard 316 relating to bar passage. It is co-signed by SALT, the ABA Coalition on Racial and Ethnic Justice, ABA Commission on Disability Rights, ABA Commission on Hispanic Legal Rights & Responsibilities, ABA Commission on Sexual Orientation & Gender Identity, ABA Commission on Women in the Profession, ABA Council for Diversity in the Educational Pipeline, ABA Law Student Division, ABA Young Lawyers Division, HBCU Law Deans Gary Bledsoe, John C. Brittain, Elaine O'Neal, John Pierre, & LeRoy Pernell, and the Hispanic National Bar Association (HNBA).

Assessing Institutional Learning Outcomes Using Rubrics: Lessons Learned

By: Professor Andi Curcio & Dean Alexis Martinez

Experience confirms that using rubrics to assess institutional learning outcomes is relatively easy and cost-effective. It is also an iterative process. Below we share some of the lessons we learned as we engaged in this rubric-based institutional assessment process. We also share examples of final report charts to illustrate how the process results in usable assessment data.

A Review of the Basics

Georgia State University College of Law has institutional outcomes that encompass the ABA required legal knowledge, analysis, research and writing outcomes as well as outcomes covering self-reflection, professional development, ethical and professional obligations, teamwork, ability to work effectively with courts and clients, and awareness of pro bono responsibilities.

An earlier blog and article provide an in-depth discussion about the development and use of rubrics to assess these institutional outcomes.

To briefly review the main idea: we engaged faculty in designing rubrics with measurable criteria for each institutional outcome.

For example, for our legal knowledge and analysis outcomes, our criteria included: substantive legal knowledge; issue spotting; fact usage; critical analysis; and policy analysis. For each criterion, we identified a continuum of competence.

For example, for issue spotting, the rubric looked like this:

[Image: excerpt of the issue-spotting rubric showing the continuum of competence levels]

As the excerpt above illustrates, we drafted rubrics so that faculty teaching a wide range of courses could use the rubric, regardless of course content or assessment methodology.

For each outcome, we identified multiple first year and upper level courses that would provide a solid student sample and used those courses to measure the outcome. In the designated courses, faculty graded as usual and then completed a rubric for each student.

Faculty did not have to change how they taught or assessed and the only extra work was completing a rubric – a process the faculty agreed took little additional time.

All data from the completed rubrics was entered into one master database and used to create a faculty report identifying student achievement, by cohort year (1L, 2L, 3L), for each rubric criterion [see sample below].

Lessons Learned:

1. Drafting Rubrics

We struggled to draft rubrics that could be easily adapted to a wide range of courses. In hindsight, it might have been easier to use the rubrics drafted by the Association of American Colleges and Universities [AAC&U] as a starting point. Those rubrics have been developed and tested for reliability and validity. They also focus on big-picture skills.

Because law faculty often think in the context of how individual courses are taught, it was sometimes challenging for faculty to start from scratch and draft rubrics that could be easily applied across the curriculum. Starting with the AAC&U rubrics allows faculty members to review sample language and see how larger, generalized program outcomes can be assessed through many different teaching methods and in a wide range of courses.

We also learned that it works best to keep the rubrics to one page per learning outcome. Although an outcome could have many criteria, it is important to identify 4-5 key ones. Keeping the rubrics to one page forces us to home in on the critical skills and helps ensure that the process is not overly burdensome for either faculty completing the rubric or staff entering the rubric data. It also makes reporting the data more manageable.

We also found it useful to remind faculty that the institutional rubrics are not meant to capture all skills taught in a given course and that we did not expect all faculty to assess every rubric criterion, which is why we included an "N/A" [not applicable] choice for each criterion.

Finally, we found it helpful to emphasize that while we cannot change the rubrics mid-year, we welcome feedback and are open to changing future rubric iterations based upon faculty input. This keeps the faculty engaged and ensures the rubrics are as meaningful as possible.

2. Labeling Criterion Levels

Originally, we drafted rubrics and labeled each criterion level with word descriptors such as: needs significant help; developing; competent; and aspirational. Faculty found those labels more confusing than helpful. We thus changed the continuum labels to: level 1, level 2, etc. This change made it easier for faculty to focus on the descriptors along the continuum, rather than the achievement labels. It also eliminated any concerns about how the data collected could be used in the future, either internally or externally, to describe the quality of current and future graduates.

3. Data Compilation and Report Format

We chose a wide variety of 1L and upper level courses to get a robust data sample. In each course assessed, the professor completed a rubric for each student. Professors used anonymous exam numbers for the rubrics, just like for grading.

Initially, each rubric submitted was a data point. However, we realized that some students were taking multiple courses used in our data collection while others took only one course. To address the issue of “double counting” some of the same students, we changed our data entry system so that each student became a data point.

To the extent students took multiple courses in which the outcome was measured and were rated differently by different professors, we averaged their scores. Thus, if a student was at a Level 2 in issue spotting in Con Law II and a Level 3 in issue spotting in Administrative Law, the student was recorded as a 2.5 for issue spotting. That also allowed us to produce a more granular final report: instead of four levels, we had seven.
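Purely as an illustrative sketch, and not the authors' actual workflow (which, as described below, used Excel rather than code), the per-student averaging step might look like the following, with hypothetical column names:

```python
# Hypothetical sketch of the per-student averaging step (the actual
# compilation was done in Excel, not in code).
import pandas as pd

# One row per completed rubric entry: anonymous exam number, course,
# rubric criterion, and the level (1-4) assigned by the professor.
entries = pd.DataFrame([
    {"exam_no": "1234", "course": "Con Law II",         "criterion": "issue_spotting", "level": 2},
    {"exam_no": "1234", "course": "Administrative Law", "criterion": "issue_spotting", "level": 3},
    {"exam_no": "5678", "course": "Con Law II",         "criterion": "issue_spotting", "level": 4},
])

# Average across courses so each student becomes a single data point per
# criterion. Averaging adjacent whole levels yields half-steps (e.g., 2.5),
# which is why a four-level rubric produces seven possible reported values.
per_student = entries.groupby(["exam_no", "criterion"], as_index=False)["level"].mean()
print(per_student)  # exam 1234 -> 2.5 for issue spotting
```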

The charts below illustrate what final data compilation might look like using that data entry system.

[Images: sample report charts showing student achievement by level and cohort year for each rubric criterion]
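For illustration only (the school's reports were compiled in a master database and, as noted below, in Excel rather than in code), here is a minimal sketch of how per-student scores might be tabulated into a by-cohort chart like the samples above, again using hypothetical column names:

```python
# Hypothetical sketch of tabulating averaged rubric scores by cohort year.
import pandas as pd

# One row per student per criterion, after averaging across courses.
scores = pd.DataFrame([
    {"exam_no": "1001", "cohort": "1L", "criterion": "issue_spotting", "avg_level": 2.0},
    {"exam_no": "1002", "cohort": "1L", "criterion": "issue_spotting", "avg_level": 2.5},
    {"exam_no": "2001", "cohort": "3L", "criterion": "issue_spotting", "avg_level": 3.5},
    {"exam_no": "2002", "cohort": "3L", "criterion": "issue_spotting", "avg_level": 4.0},
])

# Percentage of students at each (possibly half-step) level, by cohort,
# for each rubric criterion -- the shape of the final report charts.
report = pd.crosstab(index=[scores["criterion"], scores["cohort"]],
                     columns=scores["avg_level"],
                     normalize="index") * 100
print(report.round(1))
```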

After experimenting with developing a software program to compile the data, we discovered it was cheaper, and significantly simpler, to use Excel for data entry and basic data compilation. The Excel option also allows the data to be imported into SPSS later for additional correlations or analysis.

As we move forward in assessing additional outcomes this year, we are experimenting with moving from hard copy to electronic rubrics to ease the administrative burden of data entry of hard copy rubrics.

There are multiple software options, such as Qualtrics, that allow for the same questions included in hard copy rubrics to be organized electronically for reports to be run quickly and efficiently.

4. Using the Report Data to Improve Learning

After compiling the data, the assessment committee reported out the analysis in a short, factual report to the faculty using the chart format above and some additional explanatory narrative.

Throughout the reporting process and ensuing discussions about how to use the data, we reminded faculty that the point of outcome measures is to improve student learning [something we all care about].

We also were very upfront about issues with methodology that produced imperfect results, and we reminded faculty that our goal was an overview, not a publishable paper. Reminders about why we are engaging in the process and transparency about imperfections in the process went a long way toward moving the discussion forward.

We used the report as a starting point for a big picture discussion. After briefly reviewing the report with the faculty, we asked the faculty to break out into small groups and answer questions such as: given the data on 1Ls, are we satisfied with where our 1Ls are at the end of the first year? If not, what changes should we consider to help improve their learning?

By engaging the faculty in answering specific questions, we got great feedback that we turned into recommendations/action steps that led to further discussions. Eventually we adopted action steps that we have begun implementing in the hope that we can improve student learning. For example, based upon the data and the experience using the rubrics, faculty agreed to develop criterion-referenced rubrics for their own courses so that students had more information than simply a curved grade by which to assess their progress.

Conclusion

Institutional outcomes assessment is a new process for most law schools. It is also an iterative one. We learn as we go along and make changes as necessary. At GSU, we changed our data compilation methods and tweaked the rubrics. We expect to continue rubric revision as we become more familiar with the process.

What we have learned is that the rubric assessment process is fairly easy to implement, cost-effective, and can provide us useful information as we continually strive to improve our students’ learning.

What’s in a Name? Teaching Implicit Bias

Every semester I weave into my classrooms several opportunities to teach about implicit bias. I have shown videos like this and led discussions on articles like this.

Last week in my Family Law Clinic seminar, we discussed Peggy McIntosh’s Unpacking the Invisible Knapsack, which describes the author’s quest to overcome her biases stemming from white privilege. A student shared their pain and frustration over college and law professors never using their full name, and often mispronouncing the parts of their name the professor is willing to speak out loud. “It’s dehumanizing,” my student said.

Those words have haunted me all week. Names are fundamental parts of human identity. Why can we, as educators–members of an elite profession–not get this right? Why is it not a norm in higher education for professors and teaching assistants to learn to pronounce every student’s name?

Also this week, I read in a memo from a colleague a to-do item along the lines of "practice pronouncing graduates' names." The colleague was sharing with me tips for the job I will soon begin: associate dean for academic affairs. One privilege of this job is reading the names of all Penn State Law graduates at the annual commencement ceremony. It was profoundly touching to learn that my colleague takes the time to practice every graduate's name, and that they felt it important enough to share with me as one of a handful of their significant monthly action items.

I give all my students the opportunity to share the pronunciation of their name with me on the first day of class, on note cards I keep with me at every class. An earlier post explained more about the note card system, which I learned from fellow blogger Paula Schaefer. Pronouncing each student’s name is challenging, and I sometimes falter. Last semester I began writing the pronunciations on my seating chart, to minimize my fumbling through the note cards. This is my seventeenth year of teaching. My only regret is not starting this earlier. It enriches my classroom, and it enriches me. It bakes into my pedagogy an indirect lesson about implicit bias, a lesson I re-learn every time I call on a student and say their name, whether it is Ainslie or Zhao-Ji.

Leadership Courses: Paving the Path for Future Attorneys

Written by: Dean Rosemary Queenan, Albany Law School; and Dean Mary Walsh Fitzpatrick, Esq.

 

There is a call to action to provide students with the opportunity to build leadership skills. This call originates, in part, from the changing legal services environment and the recognition that lawyers need to know more than the law: they need to master many disciplines that are commonly and collectively referred to as "leadership" skills. Broken down into its separate parts, leadership may include communication, team building, organization, presentation, and active listening skills, along with a range of emotional intelligence competencies.

To answer the call, Albany Law School has developed and added to its course offerings a new Lawyers as Leaders course, taught collaboratively by Mary Walsh Fitzpatrick, Assistant Dean for the Career and Professional Development Center, and Rosemary Queenan, Associate Dean for Student Affairs. The course will use skills-building exercises and constructive feedback to allow students to practice leadership skills. Students will create their own organizations and will take on leadership roles in performing certain tasks, including identifying a vision for their organization, managing and working with teams, making difficult decisions, navigating difficult conversations, presenting and communicating effectively, and problem solving.

Our first class focused on the work of Carol Dweck, Ph.D., Peter Senge, Ph.D., and Daniel Goleman, Ph.D., on mindset, emotional intelligence, and leadership styles, in the context of our broader discussion of what makes a great leader. With this introduction, students were asked to assess and identify their own leadership styles and emotional intelligence attributes. Each organization was also asked to research a leader in business or law and present on that leader's failures and successes in leadership.

We are looking forward to offering this first-of-its-kind course at Albany Law School and are confident that every student will benefit in some way from the experience. Stay tuned, as we will provide updates on our progress and outcomes along the way!

Letters raise concerns about changes to the bar pass accreditation standard

Early next week, the ABA House of Delegates will again vote on whether to approve a revised bar passage accreditation standard [Standard 316]. The Society of American Law Teachers and the ABA Diversity Entities both have written to the ABA House of Delegates setting forth significant concerns about the proposed standard change.  Both letters are worth a full read.

Amongst the issues the letters raise about the proposed change are the following:

1.  There is incomplete data about how it will affect HBCUs and other law schools with significant enrollment of people of color;

2.  It fails to account for state bar exam cut score differences and differences in state bar exam pass rates;

3.  It may result in even greater reliance on LSAT scores in the admissions process despite studies showing the scores’ limited predictive value for academic or bar exam success at many schools and despite warnings from the LSAC about how to use the scores properly in the admissions process;

4.  It may negatively impact schools willing to take a chance on students who are poor standardized test takers but who will be excellent lawyers and leaders if given the opportunity to attend law school and the coaching necessary to pass the bar exam;

5.  It does not consider the effect of transfer students on bar pass rates for schools that admit students who otherwise would not be admitted to law school, who perform well, and who then  transfer to other institutions;

6.  It eliminates some important aspects of the current Standard that take into account varying state pass rates, a school’s mission, the transfer issue, and the fact that improving bar passage is a complex and nuanced issue that requires study and experimentation [something currently underway at many schools];

7.  Now is not the right time for change given current studies about the validity of the bar exam as a licensing method and work being done to explore law licensing assessments that better measure who will be a competent attorney.

Proponents of the proposed change to Standard 316 believe it is necessary to protect consumers from law schools that admit students without devoting the necessary resources to ensure bar passage or that admit and retain students who have no chance of obtaining a law license.  The letters cited acknowledge the importance of the consumer protection issue but argue that issue can, and should, be addressed in other ways.

If you have concerns about the proposed change to Standard 316, contact your state ABA delegate.  The delegate information starts on page 13 of the ABA 2018-2019 Leadership Directory.
