Fall is here, and in addition to the start of the academic semester, the NYSBA Committee on Legal Education and Admission to the Bar is in high gear. So much has been written about changes in legal education in such a short period of time that it can be difficult to keep track of the books, articles, columns, posts, and more. Thanks to Touro Law librarian Laura Ross, a working bibliography on legal education reform has now been posted to SSRN for all to access: http://ssrn.com/abstract=2500987 This is an ongoing work in progress, and Laura welcomes emails with suggestions for additions to the list. Entries in the bibliography provide great starting points for discussion among faculty and law school constituents about the present and, more importantly, the future of individual law schools. We hope you will use it to inform your teaching, scholarship, and service to the school and community moving forward. Those of us fortunate to be part of the Academy have a wonderful opportunity at this moment to respond to a rapidly changing legal profession by making deliberate and informed reforms in the way we educate the next generation of lawyers.
The recently adopted ABA accreditation Standard 302 requires schools to report student learning outcomes. A learning outcome has been defined as something a student can do now that she could not do before [or that she can do better than she did before].
One classic way to measure learning is to give pre-tests. When the class begins, students are tested on key aspects of learning the professor hopes they will achieve during the semester. Pre-test results can be compared to end-of-course results to see whether, in fact, students’ learning improved. Professors can also use the results to identify students’ strengths and weaknesses at the outset and adjust their teaching accordingly. UNM Dean David Herring’s work on measuring cross-case reasoning is an excellent example of how professors can use pre/post tests to measure learning and improve teaching. papers.ssrn.com/sol3/papers.cfm?abstract_id=2387855
While pre-tests may provide learning outcome information, the more intriguing aspect of pre-tests is that they may, themselves, be a learning tool. A recent NY Times article reports studies indicating that pre-tests actually improve final exam performance. http://www.nytimes.com/2014/09/07/magazine/why-flunking-exams-is-actually-a-good-thing.html?emc=eta1
The studies’ authors have multiple theories about why pre-tests improve learning. First, they hypothesize that pre-tests help students identify how they will have to think about and synthesize the material. Students begin the course with that information in hand and it shapes their studying.
Another theory is that we suffer from a “fluency illusion” – we believe that we truly grasp the material because we have read and highlighted it. A pre-test exposes weaknesses in both knowledge and application.
Additionally, there are biological explanations for why pre-tests improve student learning. The brain works by developing networks of associations. Pre-testing primes the brain to develop associations for the material in the pre-test, so that when it is later covered in class, the brain can more easily link the new information to existing information.
In the studies presented in the NY Times article, the pre-tests were particularly helpful with multiple-choice test performance, and a key to improved performance was providing students with the correct information shortly after they had taken the pre-tests.
The value of pre-tests may depend upon the type of course and the skills and knowledge tested. Yet the idea has intriguing possibilities. Would a pre-test before we covered hearsay improve student learning of that difficult topic? Would a course pre-test on reading and interpreting statutes result in better student performance on this skill at the end of the semester? Would providing 1Ls with a mock exam and an annotated model answer shortly after they began law school improve overall first-year exam performance?
Data from other disciplines suggest that pre-testing primes students to learn the material and provides teachers with data they can use to see whether the learning occurred. The value of pre-tests in legal education is an idea that certainly merits further study.
As the newly revised ABA accreditation Standards 301 and 302 now require law schools to clearly articulate and publish their learning outcomes for their students, so individual faculty members must do likewise. Yet it is not uncommon to see learning outcomes statements that read like the table of contents of the textbook used to teach the course. To be truly effective in driving learning and teaching, learning outcomes must be targeted, concrete, measurable, and active (not “learning about” but “learning how to”).
How do we most effectively choose and articulate these learning outcomes? In MAKING LEARNING WHOLE: HOW SEVEN PRINCIPLES OF TEACHING CAN TRANSFORM EDUCATION 83-89 (2010), educational specialist David Perkins emphasizes that learning is most effective if learners “work on the hard parts.” Similarly, the UNDERSTANDING BY DESIGN framework, originally developed by Grant Wiggins and Jay McTighe, emphasizes beginning the search for course goals by looking for the “Big Idea” in the course. These are the ideas or themes that can be used throughout a legal career and that require a lot of work to master.
One of the most effective ways to uncover these “big ideas” or “hard parts” is to focus first on unlearning outcomes – that is, preventing and addressing predictable misunderstandings in the course. Thus, for example, much of the first year of law school is devoted to “unlearning” the positivist philosophy of students who believe the law is resolutely determinate. These fundamental misunderstandings are persistent, difficult to overcome, and block learning of new ideas. Students construct knowledge by building on prior understandings. If those prior understandings are incomplete or incorrect, new learning will be flawed as well. As summarized by NATIONAL RESEARCH COUNCIL, COMMITTEE ON DEVELOPMENTS IN THE SCIENCE OF LEARNING, HOW PEOPLE LEARN: BRAIN, MIND, EXPERIENCE, AND SCHOOL: EXPANDED EDITION 11 (2000), “teachers need to pay attention to the incomplete understandings, the false beliefs, and the naive renditions of concepts that learners bring with them to a given subject.”
In her new book, Building a Better Teacher: How Teaching Works (and How to Teach It to Everyone) (2014), Elizabeth Green reviews the research concluding that effective teachers (as measured by student learning gains) are those who are able to identify the reasons that students misunderstand and help them to unlearn those misunderstandings.
Some of the most fundamental misconceptions that students bring to a subject from their own experience (or from bad course outlines passed around from prior semesters) must be discovered in the classroom. Brief classroom assessment devices such as “minute papers” or statements for the students to complete can easily generate a range of incorrect or incomplete understandings for any given topic. The mission to discover student errors leads faculty to many of the best practices in teaching: regular interaction with students, frequent and meaningful feedback, and active learning strategies.
The power of an “unlearning” perspective on assessment not only improves student learning, but also quickly leads faculty to a deeper understanding of what assessment of student learning outcomes means. Assessment is not an end point, a box to be checked, reported, and forgotten, but an iterative process of discovery and experiment that drives student and faculty learning alike. Assessment tools (such as quizzes, Socratic dialogue, essays, simulations, and reflections) might be used to unearth student misconceptions. These misconceptions then become the basis for the learning outcomes around which one can build a course, and assessments can then be used to determine the extent to which one is successfully dislodging misunderstandings and misconceptions and replacing them with a solid framework of mastery.
Many professors use simulation exercises in their teaching; far fewer have ever taught a simulation course. What does it mean? What is required?
To meet Standard 303’s criteria under the new, six-credit experiential requirement, a simulation course must:
- be primarily experiential in nature
- integrate doctrine, theory, skills, and legal ethics
- engage students in performance of one or more of the professional skills identified in Standard 302
- develop the concepts underlying the professional skills being taught
- provide multiple opportunities for performance, and
- provide opportunities for self-evaluation.
Additionally, under Standard 304, “[a] simulation course provides substantial experience not involving an actual client, that
(1) is reasonably similar to the experience of a lawyer advising or representing a client or engaging in other lawyering tasks in a set of facts and circumstances devised or adopted by a faculty member, and
(2) includes the following:
(i) direct supervision of the student’s performance by the faculty member;
(ii) opportunities for performance, feedback from a faculty member, and self- evaluation; and
(iii) a classroom instructional component.”
These two standards provide a relatively detailed list of requirements, but the very first item seems the least well defined. What does it mean for a course to be “primarily experiential in nature”? If a two-credit seminar course is enhanced with an additional hour of simulation activities, meeting all of the other listed requirements, is the resulting three-credit course “primarily experiential in nature”? All three credits?
Maybe the answer depends upon the degree to which the simulation is integrated into the teaching of doctrine. A course can ask students to think about the implications of doctrine from the perspective of the role they are assigned to play. If woven throughout the course, references to the simulation can enrich students’ understanding of the content, which they will then apply in the performance aspect of the course. Still, assuming the original two credits of content are being taught, is this course “primarily experiential in nature”? Or does this requirement mean simulation courses must be advanced-level options for students who have already completed a course introducing the content, so that the primarily experiential application of doctrine can take place? I don’t think that’s what it should mean.
Approaching simulation courses from design principles instead, several authors ask us to think carefully about the goals of our simulation courses and the ways in which we assess student performance. See, e.g., Roy Stuckey, Teaching with Purpose: Defining and Achieving Desired Outcomes in Clinical Law Courses, 13 Clinical L. Rev. 807 (2007); Paul S. Ferber, Adult Learning Theory and Simulations – Designing Simulations to Educate Lawyers, 9 Clinical L. Rev. 417 (2002); Jay M. Feinman, Simulations: An Introduction, 45 J. Legal Educ. 469 (1995). The Carnegie Report says, “Doctrinal teaching goes on informally as students engage the simulated cases, so that assignments used to teach practical lawyering skills also reinforce their learning of legal analysis.” Stuckey, supra at 823, citing Carnegie at 226-27. But surely doctrinal teaching can also take place more formally in a simulation course, provided it is integrated with the simulated role that makes the course primarily experiential.