Gathering Institutional Learning Outcomes Data

As law schools engage in outcomes assessment, a key question is how to collect institutional data on student achievement. In A Simple Low Cost Institutional Learning Outcomes Assessment Process, I suggest one way to collect that data that requires relatively little additional faculty time, relatively minimal expense, and no changes to how we teach or assess in our own courses.

Law school institutional learning outcomes require measuring nuanced skills that develop over time. Rather than looking at achievement only in our own courses, institutional outcomes assessment requires collective faculty engagement and critical thinking about our students' overall acquisition of the skills, knowledge, and qualities that ensure they graduate with the competencies necessary to begin life as professionals. Even for those who believe outcomes assessment is a positive move in legal education, in an era of limited budgets and already over-burdened faculty, the data collection necessary for the newly mandated outcomes assessment process raises cost and workload concerns.

To address those concerns, the article describes a process being used by Georgia State University College of Law (GSU COL) to collect institutional learning outcomes data. GSU COL has developed a rubric-based method to assess a wide array of learning outcomes.

We modeled our process on work being done by both the Association of American Colleges and Universities (AAC&U) VALUE Rubrics project and medical educators' Milestones Project. Those educators use rubrics to assess a wide range of nuanced skills such as critical thinking, written and oral communication, problem-solving, intercultural competence, teamwork, and foundations and skills for lifelong learning.

Below, I briefly describe GSU COL's process for collecting institutional learning outcomes data.

The Institutional Data Collection Process

After identifying our institutional learning outcomes, we developed a five-step institutional outcomes assessment process to collect data from GSU COL faculty. The faculty data focus on law student performance in various courses.

  1. Draft rubrics

First, we engaged our assessment committee and, in some cases, ad hoc faculty committees, in drafting rubrics. The rubrics had to be general enough to be usable across a wide range of courses and adaptable to various types of course assessments. To draft the rubrics, we drew on our own experience and on sample rubrics developed by the AAC&U, medical educators, and legal educators. The article's appendix contains GSU COL's draft rubrics for our eight learning outcomes.

  2. Pilot test rubrics

Second, we identified courses that would use each rubric – courses where the skills being measured were already being assessed. For example, for the basic legal knowledge and analysis outcomes, we chose first-year and upper-level doctrinal courses. For the self-reflection and client interaction outcomes, we chose clinics.

We are pilot testing each rubric with the faculty who will use it and refining it based on their feedback. Because our assessment process is cyclical, each year we pilot two rubrics and use two rubrics for actual data collection. Thus, our rubric development process remains a work in progress, and it engages a significant number of faculty members. This helps ensure validity, engages faculty outside the assessment committee, and, hopefully, builds faculty buy-in.

  3. Use the rubrics

Third, every year we ask faculty in designated courses to assess and grade as they usually do, adding only one more step – completing a short rubric for each student. Most faculty members have said this process adds very little additional time to grading.

Given our different outcomes and the cyclical nature of our assessment process, different faculty will use the rubrics each year. For example, one year, doctrinal faculty will complete legal knowledge and analysis rubrics. The next year, legal research and writing faculty, along with seminar faculty who assign papers, will complete rubrics focused on legal research and writing. Thus, we spread the workload and engage as many faculty members as possible in the institutional outcomes assessment process.

  4. Enter the data

Fourth, we enter the rubric data from each course into a computer. Data entry can be as simple as an Excel spreadsheet or an SPSS file, or it can be more sophisticated. For example, we worked with graduate research assistants (GRAs) from GSU's computer programming graduate programs to develop software, compatible with the university computer system, that allows us to manipulate the data in numerous ways. We are currently developing the software so that other institutions can use it.
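
For illustration, here is a minimal sketch of how rubric data like ours might be entered and summarized with a short script before investing in custom software. The CSV layout, column names, and 1–4 score scale are hypothetical assumptions for this sketch, not GSU COL's actual format:

    # Hypothetical sketch: summarizing rubric scores collected across courses.
    # Assumes a CSV with columns "student_id", "course", "outcome", "score",
    # where "score" is on a 1-4 scale; all names here are illustrative.
    import pandas as pd

    def summarize_rubric_scores(csv_path):
        """Mean score and share of students at or above a benchmark
        (here, 3 of 4) for each institutional learning outcome."""
        df = pd.read_csv(csv_path)
        return df.groupby("outcome")["score"].agg(
            mean_score="mean",
            n_students="count",
            pct_at_benchmark=lambda s: (s >= 3).mean() * 100,
        )

    if __name__ == "__main__":
        print(summarize_rubric_scores("rubric_scores.csv"))

Even a sketch this small can answer the basic institutional question – how many students are meeting each outcome – while a fuller program is being built.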

  5. Use the data to analyze student learning and make changes if necessary

Finally, we are using the data to prepare reports about institutional-level student outcome achievement. To increase the validity of our findings, our reports contain information collected from multiple sources. For example, for each institutional outcome we have data from the rubrics faculty complete and from externship site supervisor evaluations. Additionally, LSSSE survey data contains information relevant to many of our outcomes. The results from all of that data are included in the faculty learning outcomes assessment report.
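
As a companion to the sketch above, pooling those sources into an outcome-level report can also stay simple. Again, the file names, column names, and common score scale here are illustrative assumptions, not our actual system:

    # Hypothetical sketch: triangulating outcomes across several sources.
    # Each CSV is assumed to share "outcome" and "score" columns on a
    # common scale; all file and column names are illustrative.
    import pandas as pd

    sources = {
        "faculty_rubrics": "rubric_scores.csv",
        "externship_evals": "site_supervisor_evals.csv",
        "lssse_items": "lssse_items.csv",
    }

    frames = []
    for label, path in sources.items():
        df = pd.read_csv(path)[["outcome", "score"]]
        df["source"] = label  # tag each row with where it came from
        frames.append(df)

    combined = pd.concat(frames, ignore_index=True)
    # One row per outcome/source pair: count, mean, spread, quartiles.
    print(combined.groupby(["outcome", "source"])["score"].describe())

Summarizing the same outcome source by source makes it easy to see where the evidence agrees and where it diverges.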

This Fall, our faculty will discuss our findings on the first two outcomes we measured – legal knowledge and legal reasoning and analysis. While I noted at the outset that the data collection process does not require faculty to change how we teach or assess, our discussions of the data we have gathered may lead to collective decisions under which some of us adjust our teaching and assessment in an effort to improve student learning. That is the entire point and purpose of the learning outcomes measurement process. Before we could begin that work, however, we had to figure out how to gather the information that allows us to have informed discussions. The steps summarized above, and described in more detail in the article, are one way to do that.

Other resources

The data collection method above can be used to measure both institutional and course-level learning outcomes. However, multiple ways to collect data exist. Other good resources with concrete data collection methodologies include Andrea Funk's excellent book, The Art of Assessment, and Lori Shaw and Victoria VanZandt's seminal work, Student Learning Outcomes and Law School Assessment.

 


How You and Your Students Can Benefit From Stone Soup Next Year

The University of Missouri Law School started the Stone Soup Project about a year ago to incorporate more knowledge about actual practice in legal education.

Stone Soup contributes to a more balanced educational diet, adding the context of disputes and a greater focus on the parties. Readings on legal doctrine generally are extremely acontextual. Of course, students get value from reading excerpts of appellate case reports to learn about legal doctrine and analysis. Similarly, students get value from reading about practice theory.

But I think that most law students get too little education about how cases actually look to lawyers.  In real life, cases are full of facts, evidence, uncertainty, risk analysis, interests, relationships, and emotions, which provide context that is systematically stripped out of most of our teaching materials.

And parties – central characters in lawyers' work – typically are portrayed as cardboard figures included merely to demonstrate our teachings, not as the principals whom lawyers serve.

Readers of this blog know this.  People – maybe including you – have been saying this for a long, long time.  Indeed, this has been a major motivation for clinical and other practice-oriented instruction.

Stone Soup is another systematic effort to provide a more balanced educational diet for students by including more of these perspectives in our teaching.

How Stone Soup Works

Since we started the Project about a year ago, we have engaged almost 1000 students in 40 classes covering 12 subjects, taught by 32 faculty from 25 schools in 3 countries.

Faculty generally have assigned students to conduct interviews about actual cases and/or practitioners’ backgrounds, philosophies, and practices.  Some faculty assigned students to observe court proceedings or mediations.  You can tailor an assignment to fit your educational objectives.

Most assignments were in traditional ADR courses, but faculty also used Stone Soup assignments in other courses including Access to Justice, Evidence, Relational Lawyering, Resolving Community Civil Rights Disputes, and Trusts and Estates.  Faculty could use them in almost any course, such as Labor Law, Employment Discrimination, Professional Responsibility, Civil Procedure, and Criminal Law, among many others.

Stone Soup faculty assessed their courses, identifying what worked well, what students learned that they would not have learned without the assignment, and what faculty would do differently in the future.  Here’s a collection of their assessments.

Faculty consistently reported outstanding results that far exceeded our expectations.  Stone Soup has provided many benefits including:

  • increasing students’ exposure to the real world of practice
  • helping students develop critically important interviewing and analysis skills
  • identifying how theory does and doesn’t map well onto actual practice
  • supplementing faculty’s knowledge, especially for faculty who haven’t practiced in the subjects they are teaching – or haven’t practiced at all
  • increasing students’ and faculty’s enjoyment of the courses

Faculty who used Stone Soup assignments in their courses this year generally plan to use Stone Soup again with little or no change.

How You Can Use Stone Soup

The first year’s experiences yield some general suggestions for using Stone Soup.  In particular, faculty should require students to complete interviews or observations as soon as appropriate in a course, and should schedule time in class to discuss what students learned.  Discussing insights from these assignments early in a semester provides a base of experience that everyone can refer to during the rest of the course.

Here’s a table identifying characteristics of Stone Soup courses and including links to faculty assessments of the courses.  The table demonstrates the incredible creativity of faculty in tailoring assignments to fit their instructional goals and circumstances.  For each course, it shows:

  • Class size
  • Description of the Stone Soup assignment
  • Whether the assignment was required, one option of an assignment, or extra credit
  • Assigned paper length, if any
  • Due date
  • Percentage of grade, if any
  • Whether the results of the assignment were discussed in class

Some faculty like the Stone Soup idea generally but wonder whether it would work in their courses or feel hesitant for other reasons. This post identifies some colleagues' concerns and responses to those concerns. In particular, the assignments need not add much, if any, workload; students generally can find interview subjects without faculty assistance; and Stone Soup can work well in almost any law school course.

If you would like more information, you can read this report on the Project’s first year and/or get in touch with me.

If you would like to join the roster of colleagues using a Stone Soup assignment next year, please let me know the course(s) and semester(s) in which you would use it.

Deadline Extended! SALT Conference at Penn State Law

Do you have thoughts on how legal education can respond to a changing society? Are you using innovative teaching methods you care to share? Have you ever wondered what amazing intellectual and social justice work goes on at Penn State, even during football season? Do you love crisp autumn days with stunning foliage, in a unique college town smack in the middle of the Northeast Corridor? Then this year’s SALT conference is for you!

Join us at Penn State Law as we host the Society of American Law Teachers' (SALT) 2018 Teaching Conference. Registration is available here and the CFP is here. "We are" looking forward to welcoming you to Happy Valley!

When you finish grading your students, grade your own performance.

At this time of year, many law professors feel a certain sense of despair. After pulling themselves out from under a mountain of final exams and papers, they must eventually turn their attention to their students' evaluations of the course. This year, consider your students' performance and feedback to decide whether you're making the most of the time and effort you've put into designing your course and grading your students. Engage in practices that will lead to better outcomes next fall.

I know that once we finish grading, we are Ready to Be Done Already! But take some additional time, while the students' work is still fresh in your mind, to assess their performance in the aggregate. What learning outcomes did your students struggle with the most? Did most students fail to organize their answers? Where were most students missing the point? What separated the strongest performances from the average ones? What was the correlation between the length of a student's answer and the number of points the student scored? By asking and answering these types of questions, you are embarking on assessment of the class itself, a necessary component of improving your own performance and your future students' outcomes.
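
That last question is easy to check empirically. Here is a minimal sketch, assuming you export word counts and scores to a spreadsheet; the file and column names are illustrative, not from any particular gradebook:

    # Hypothetical sketch: does answer length track points scored?
    # Assumes a CSV with columns "word_count" and "points"; both names
    # are illustrative only.
    import pandas as pd

    df = pd.read_csv("exam_results.csv")
    r = df["word_count"].corr(df["points"])  # Pearson correlation by default
    print(f"Correlation between answer length and score: {r:.2f}")

A strong positive correlation may suggest you rewarded volume; a near-zero one suggests organization and analysis, not length, drove the scores.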

When you’ve sufficiently investigated the students’ performance in the class, recall the formative assessments you used in the course. Did they work? Were students performing better in areas where they received feedback? Were you aware of areas where your students struggled on the final? If not, how can you design formative assessment to address the students’ areas of weakness?

Next, read your students’ evaluations of the class. Are there common themes? Do they point out any weaknesses that may account for their struggles? Is there a disconnect between what they critique and what you were trying to accomplish? For example, students used to complain that I provided different feedback on the various drafts of their papers. Of course, that was my intent – to start with global and structural changes and then move into more specific areas of improvement. Too much change all at once is hard for anyone to handle, especially students who are new to legal reasoning, research and writing. But my students were confused. They weren’t aware of my plan because I never shared it with them.

Systematic evaluation of your own performance allows you to develop better approaches. For example, because students complained about my differing feedback on different paper drafts, I began class the next semester by explaining the science behind how we learn to write. I discussed different stages of the writing process, and I explained why I would be requiring different drafts and assessing different areas of the writing process in each draft. My up-front explanation gave students a schema and a sense of predictability and control over the outcome. Where prior students saw my comments as contradictory, new students saw them as different developmental stages in the process of learning something new.

Finally, are there areas of weakness that students fail to address year after year? For example, do students fail to prepare for and pay attention in class? And is that lack of preparedness resulting in weak performances? If so, design ways to engage students and hold them accountable. For example, try asking them to set the expectations for the class on the first day. You may start by asking them what conditions create the best learning environments, what the professor's role is, and what the students must do. You might be surprised by how readily they provide good answers. Next, ask them what methods are best for holding them accountable. Use their own answers to draft rules of the class and the methods by which students will be held accountable. Again, by discussing the process first, students know the expectations, they know that these are the common expectations, and they have strategies for holding themselves and others accountable. For example, if the class has agreed to limit laptop use to taking notes, a student who asks another student to stop surfing the web during class does so to uphold the rules the class designed to keep classroom time targeted at learning the material and becoming better students.

Give these techniques a try. If they work to make teaching more enjoyable, that’s great. If not, they at least provide you with a concrete starting point the next year when students come to you seeking insight on why they failed to perform to their expectations.

Inspirational Office Art

written by Melissa Breger, Albany Law School

It was late April in the Spring semester of 2017. The course was Gender and the Law. It was that time at the tail end of the semester when the seriousness of the course material weighed heavily on the shoulders of the law students. Students were asking: Could the legal system effectuate real change in society? Can we ever truly achieve gender equality?

That May in 2017, I decided it was time to have the law students engage in a motivational exercise and put theory into action.

Each law student was provided with two index cards.  One was entitled ACTION ITEM and the other was entitled TAKE-AWAY. The students were asked to complete the cards in their own handwriting and in their own words. I explained to the students that the cards would be placed onto a poster that would hang in my office. The Action Item was to describe a concrete step forward in the area of Gender Equality that the students hoped we could achieve. I had them tie this action item to their specific research and final paper in the class. If the goal had been achieved by the next time they saw the poster, they could remove the card from the board.  (Cards were taped loosely with decorative metallic tape). The Take-Away item was to describe what each student would take away from the course and hopefully pass forward.

Once the cards were completed, I had the students bring them to our last class. For this class, I reserved a free conference room in the back of a nearby coffee and bagel shop. My (mostly, but not entirely, female) students apparently had named this our "Empowerment Brunch."

I had each law student "present" their cards and tape them onto a black poster board. The end result was an inspirational poster board that the students can revisit whenever they return to their alma mater.

Ahead of class, I had explained to the students in an email:  “During this class, we will engage in a BRAINSTORMING SESSION about how to CHANGE THE WORLD. To that end, please bring with you your two INDEX CARDS filled out in advance. Remember the TAKE-AWAY card is what will you take away from this course (perhaps from the readings, the presentations, the classes, other).  What will you take with you for years to come (and perhaps pass forward)? Remember the ACTION ITEM card is based upon the research you conducted this semester – what do you hope we can accomplish specifically?  What is the one action item that could solve or ameliorate your legal dilemma/question?

"I will make up a poster board with our cards and other graphics and keep it on display in my office. In future years, when you come visit me—perhaps we will see real progress on some of these action items. After a semester of heavy coursework, let's stay positive and push this ball forward. We are all relying on YOUR GENERATION to change how the law treats gender going forward."

Some of the students' Action Items could likely be realized in the near future, such as "Get three people a year to watch a women's sports event." Others were loftier, but so important to articulate: "I want to dedicate my legal career to public service to help women, transgender, and non-gender-conforming individuals gain full equality under the law."

In terms of the Take-Aways, the cards were varied and proved quite moving as well, such as: “The law touches nearly every aspect of women’s lives,” and “Discussion about equality promotes equality.”

It was a terrific final class full of motivating conversation and plenty of dreaming. This poster proudly hangs in my office and still inspires me today.

Using Practical Simulations in Large Required Courses (Evidence)

written by Christian Sundquist, Albany Law School

I have taught the required course in Evidence at Albany Law School for some twelve years, since first starting out in legal education. Over the years, the size of my class has fluctuated from 40 to over a hundred students (someone went on sabbatical!). During that time I have introduced a number of different teaching methodologies as my experience and confidence as a teacher grew, and as novel pedagogies emerged.

I have always provided opportunities for my students to engage in any number of practical exercises and simulations in my courses – whether the course be Advanced Evidence, Federal Courts, or Technology, Privacy and the Law. The exercises often utilize professional actors to perform witness roles, and are geared towards cultivating core advocacy abilities, such as trial skills, witness examination, opening and closing arguments, the presentation of evidence, client interviews and ethical considerations, appellate arguments, negotiation, and legislative drafting. For example, students in my Immigration course have the opportunity to (a) interview a mock client seeking asylum (played by a professional actor) in the United States, (b) perform attorney roles during a two-day mock removal (a.k.a. deportation) hearing involving professional actors, and (c) work in teams to draft a legislative immigration reform initiative. Similarly, when I taught Federal Courts, students were expected to (a) represent a mock client at the pre-trial stage in a simulated federal action (including perfecting service of their drafted motions for summary judgment), (b) engage in appellate arguments on matters of justiciability and federalism, and (c) examine witnesses (performed by professional actors) during a simulated Section 1983 civil rights case.

I have found that integrating experiential learning opportunities in class has led to significantly improved learning outcomes for students. Students are simply more engaged, more energized, and more passionate about learning when they are asked to apply the legal doctrine and theory they have learned in class to practical scenarios. The difficulty, of course, has been providing opportunities for students in LARGE classes to practically apply the material in a meaningful way, without detracting from the breadth of the lesson plan.

Back to the basic Evidence course: while I have always employed a variety of teaching methods in class (including, inter alia, class discussion and questioning, case analysis, problem-solving using hypotheticals, writing exercises involving draft motions in limine, and demonstration exercises using a few students to illustrate basic trial concepts), I have usually shied away from integrating full-scale practical exercises due to the sheer size of the class.

Last Fall was different. While my class size was still fairly large (around 45 students), I decided to provide ALL students with the opportunity to apply the law and policy they learned in class through a series of "practicums" focusing on high-profile cases. For example, one such practicum focused on the complex character evidence issues stemming from the Bill Cosby prosecution (now, of course, he has been convicted!), another focused on the interesting relevance issues presented during the Conrad Murray (physician of Michael Jackson) prosecution, and two other exercises focused on hearsay and forfeiture doctrine issues presented by other famous recent cases. In terms of methodology, I created case files for each "practicum" (containing, for example, instructions for the practicum, copies of pleadings, deposition transcripts, etc.) and also assigned relevant casebook readings, secondary materials, and Rules from the Federal Rules of Evidence where appropriate. During each practicum, approximately 3 to 5 students were assigned to one trial team (e.g., Prosecution, Defense), such that between 6 and 10 students had active attorney roles to perform during each exercise. Another 2 to 4 students would serve as mock judges during each exercise (some of the exercises took the form of a pre-trial motion in limine hearing, others took the form of appellate arguments, and so forth). Each simulated practicum would last approximately one hour, with the remaining 40 minutes of class devoted to a group discussion of the legal issues (students who did not serve as "attorneys" or "judges" during the exercise were required to help lead discussion during this segment). Notably, class participation (which included participation in the practicums) was graded in the course (with the syllabus including a rubric for assessing performance).

The results of my "experiment" were generally quite positive. As in my other courses, I felt that a greater percentage of the class had a stronger understanding of core legal concepts following the practicums (such as hearsay, character evidence, and relevance). I believe my students were excited to take part in the practicums and appreciated a novel way of learning the material (beyond the normal class discussion, student questioning, and working through problem sets). A number of my students commented during and after class that they were more practical learners, and that the exercises helped them finally understand evidentiary concepts that had hitherto been eluding them!

That said, employing practical exercises in such a large class had its limitations as well. First, I learned after the fact that some of the "non-active" students (who were not performing attorney or judge roles) were less engaged in classes devoted to "practicums," despite the course requirement that they help lead discussion during the second part of such exercises. The next time I use "practicums" in the Evidence class, I will likely require each non-active student (or small groups of students) to submit a reaction memo to the exercise as well (but I welcome other suggestions!). Second, the use of practicums had a small negative impact on the breadth of subjects we were able to reach during the semester. I would always assign not only the "Practicum Case Materials" as homework in advance of an exercise, but also additional case readings on concepts we hadn't reached yet in class. Discussion during the second part of the practicum would then be devoted not only to analyzing what occurred during the exercise, but also to addressing how the new case readings applied (or didn't apply) to the case file at issue. The next time I integrate such full-scale practical exercises in Evidence, however, I plan on "flipping the classroom" in advance of each practicum to ensure broader coverage of evidentiary subjects. That is, I plan on assigning, in advance of a scheduled practicum, a pre-recorded video lecture that summarizes any new concepts or rules raised by the readings. My hope is that students will then be more comfortable with the material, and that the class discussion following the exercise will be much improved.

All in all, I believe that integrating a series of involved practical exercises in a large, required course was a success.  My experience perhaps wasn’t perfect the first time around, but I look forward to making some tweaks to my teaching methodology and trying again next Fall! The success of using “practicums” in the basic Evidence course, however, may have had one unintended consequence: my Fall 2018 Evidence course is oversubscribed with nearly 80 students and a long waiting list!  Back to the drawing board….

Assessment of Program Learning Outcomes: Law Schools Move into the Twenty-First Century

In a relatively brief period, law school pedagogy has changed a great deal. Christopher Columbus Langdell's nineteenth-century approach of case-based analysis featuring Socratic dialogue in class still forms the basis for many law professors' approaches. The MacCrate Report in the early 1990s, however, led to far more clinical and practical opportunities for law students. More recent publications, the Carnegie Foundation's Educating Lawyers and the Clinical Legal Education Association's Best Practices for Legal Education, have led to greater focus on balancing the Langdellian model with even more experiential education and development of students' professional identity.

The most recent reforms in legal education are designed to ensure that a law school's program of learning is clear about the goals the school has for students and can verify, through assessment methods, whether graduates have achieved those goals upon leaving school. These reforms flow from new ABA standards that now require law schools to adopt and publish Program Learning Outcomes (ABA Standard 302). The ABA standard specifies certain outcomes that all law schools must include, but wisely leaves open to law schools the ability to adopt additional outcomes consistent with the school's mission. In addition, the new standards require schools to evaluate on an ongoing basis the law school's program, the degree to which students are meeting the law school's learning outcomes, and the law school's methods of assessment (ABA Standard 315). Another standard requires law schools to incorporate formative assessment into their programs of instruction (ABA Standard 314). Again, most schools in other disciplines have used such proven educational methods for many years. Instead of the classic "do or die" final exam, formative assessment requires some form of feedback that allows students to determine whether they are learning the concepts as they proceed. Most teachers would also tell you that it allows the teacher to see whether students are grasping concepts—or not.

For most graduate schools, program learning outcomes, ongoing program assessment, and formative assessment are nothing new; accrediting bodies for other areas of graduate education have required outcomes, program assessment, and formative assessment for decades.

Assessment gurus will tell you to try to develop a culture of assessment at your school. At first, that seemed like a daunting task. When presented with the rationale for learning outcomes and the assessment of them, however, any fair-minded person would have a hard time arguing that they are not sound ways to ensure a school is delivering on its promises (the "learning outcomes"). Explaining the rationale helps build faculty buy-in and support.

As someone who started from scratch in dealing with program outcomes and program assessment, I can attest that anyone can implement these pedagogical practices.  I encourage anyone presented with the task of developing learning outcomes and program assessment at one’s school to follow a few suggestions:

  1. Find an expert in assessment. If your law school is associated with a university, you will find that the university has a director of assessment. Take her or him to lunch.  That person will be your best friend in the process.
  2. Read The Rubric Meets the Road in Law School Program Assessment of Student Learning Outcomes as a Fundamental Way for Law Schools to Improve and Fulfill Their Respective Missions. Learn the evolution of best practices in assessment and adoption of those into law school program assessment. Download at http://bit.ly/therubricmeetstheroad
  3. Read Student Learning Outcomes and Law School Assessment (Carolina Academic Press 2015). The article suggested in the download above refers not only to Lori Shaw and Vicki VanZandt’s book but also to work by Trudy Banta and others in general educational assessment. However, Shaw and VanZandt’s book is essential for anyone responsible for assessment in a law school. If you want to go more deeply into assessment theory and practices you can read Trudy Banta’s work. Shaw and VanZandt, however, not only instruct on assessment methodology and best practices; they also do it in the context of a law school program of learning.
  4. Involve your faculty in the process of assessment. Regent Law involved our entire faculty in our first assessment of program learning outcomes, which related to research and writing. Because every professor supervises independent studies or academic legal writing for law review students, the rubrics we developed to assess students' competency in research and writing covered not only upper-level writing courses such as appellate advocacy and advanced legal research and writing, but also independent studies and law review writing.
  5. Do not be afraid to engage in self-assessment of your law school's program of learning and its learning outcomes. Professionals who seek feedback on ways to improve, and who are willing to incorporate changes, tend to be the most successful in any field. The same should apply to institutions. Accrediting bodies like the ABA appear to be looking at whether the institution is evaluating its program with sound methods, not whether the program is perfect. As I have come to understand the process, a program that does not undergo some modification as the assessment process continues from year to year is likely a stagnant program.

Learning outcomes are vital for everyone from prospective students to alumni. Prospective students should be able to look at your learning outcomes and know that attending your school means graduating with competency in those outcomes. Graduates should be evaluated by the school to determine if they have achieved competency in the outcomes adopted by the school.

Creating learning outcomes and developing a program of assessment is vital to improving legal education at any law school.
