These papers have been floating around for a while now, but they're worth reposting (pdf alert):
- “Competing Approaches to Predicting Supreme Court Decision Making”
- “The Supreme Court Forecasting Project”
Needless to say to anyone familiar with the research, the number crunching indicates two simple things. A) judges render decisions in a predictable way over time based on a not overly complex set of factors, and B) quantitative approaches tend to systematically beat legal "experts" at predicting which way a case will go (i.e. experience does not necessarily win out over calculation).
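A minimal sketch of what point A looks like in practice: predict a judge's vote from that judge's historical rates on a few coarse case factors. The judges, factors, and votes below are all invented for illustration (the real studies used factors like issue area and the ideological direction of the lower court ruling), and real models are more sophisticated than a frequency lookup.

```python
# Hypothetical sketch: predict a judge's vote from per-judge historical
# rates over a couple of coarse case factors. All data here is made up.
from collections import defaultdict

# (judge, issue_area, lower_court_direction, vote) -- invented history
history = [
    ("Judge A", "speech", "liberal", "reverse"),
    ("Judge A", "speech", "liberal", "reverse"),
    ("Judge A", "tax", "conservative", "affirm"),
    ("Judge B", "speech", "liberal", "affirm"),
    ("Judge B", "speech", "liberal", "reverse"),
    ("Judge B", "speech", "liberal", "affirm"),
]

def train(rows):
    # Count votes per (judge, issue, direction) combination.
    counts = defaultdict(lambda: defaultdict(int))
    for judge, issue, direction, vote in rows:
        counts[(judge, issue, direction)][vote] += 1
    return counts

def predict(model, judge, issue, direction, default="affirm"):
    # Predict the judge's most frequent past vote in this situation;
    # fall back to a default when there's no history.
    votes = model.get((judge, issue, direction))
    if not votes:
        return default
    return max(votes, key=votes.get)

model = train(history)
print(predict(model, "Judge A", "speech", "liberal"))  # reverse
print(predict(model, "Judge B", "speech", "liberal"))  # affirm
```

Even a lookup this crude makes the point: given stable behavior and a small factor set, prediction becomes a counting exercise.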
Most of the work has been done with the Supreme Court, which is data rich but unfortunately pretty difficult to apply to 99.9% of the work being done by lawyers. But what's exciting about judicial prediction is that you might eventually be able to turn entire parts of the legal business into a numbers game, particularly as case processing becomes more automated.
Example: say you wanted to aggregate a sufficient body of caselaw to challenge a particular doctrine. You could identify and assemble a portfolio of cases to test against judges who, statistically, tend to give a thumbs down on a particular argument/doctrine/issue. So long as this targeting happens faster than random cases enter the system, you could engineer an appellate body of cases that supports an eventual argument one way or the other. That'd be pretty neat (though expensive). Even if the strategy weren't activist in nature, at the very least a model like this could predict the level of support for one argument or another over time, given the average case inputs into the system.
We're thinking through the infrastructure to set something like this up, so RRH is currently looking for datasets that might be relevant to this kind of processing. Stay tuned!