
Article: How do judges use large language models? Evidence from Shenzhen

Title: How do judges use large language models? Evidence from Shenzhen
Authors: Liu, John Zhuang; Li, Xueyao
Issue Date: 2-Jan-2025
Publisher: Oxford University Press
Citation: Journal of Legal Analysis, 2024, v. 16, n. 1, p. 235-262
Abstract

This article reports on the systematic use of a large language model by a court in China to generate judicial opinions, arguably the first instance of this in the world. Based on this case study, we outline the interaction pattern between judges and generative artificial intelligence (AI) in real-world scenarios, namely: 1) judges make initial decisions; 2) the large language model generates reasoning based on the judges' decisions; and 3) judges revise the reasoning generated by AI to make the final judgment. We contend that this pattern is typical and will remain stable irrespective of advances in AI technologies, given that judicial accountability ultimately rests with judges rather than machines. Drawing on extensive research in behavioral sciences, we propose that this interaction process between judges and AI may amplify errors and biases in judicial decision-making by reinforcing judges' prior beliefs. An experiment with real judges provides mixed evidence.


Persistent Identifier: http://hdl.handle.net/10722/353494
ISSN: 2161-7201
2023 Impact Factor: 3.0
2023 SCImago Journal Rankings: 0.546
ISI Accession Number ID: WOS:001388137600001


DC Field: Value
dc.contributor.author: Liu, John Zhuang
dc.contributor.author: Li, Xueyao
dc.date.accessioned: 2025-01-18T00:35:26Z
dc.date.available: 2025-01-18T00:35:26Z
dc.date.issued: 2025-01-02
dc.identifier.citation: Journal of Legal Analysis, 2024, v. 16, n. 1, p. 235-262
dc.identifier.issn: 2161-7201
dc.identifier.uri: http://hdl.handle.net/10722/353494
dc.description.abstract: This article reports on the systematic use of a large language model by a court in China to generate judicial opinions, arguably the first instance of this in the world. Based on this case study, we outline the interaction pattern between judges and generative artificial intelligence (AI) in real-world scenarios, namely: 1) judges make initial decisions; 2) the large language model generates reasoning based on the judges' decisions; and 3) judges revise the reasoning generated by AI to make the final judgment. We contend that this pattern is typical and will remain stable irrespective of advances in AI technologies, given that judicial accountability ultimately rests with judges rather than machines. Drawing on extensive research in behavioral sciences, we propose that this interaction process between judges and AI may amplify errors and biases in judicial decision-making by reinforcing judges' prior beliefs. An experiment with real judges provides mixed evidence.
dc.language: eng
dc.publisher: Oxford University Press
dc.relation.ispartof: Journal of Legal Analysis
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.title: How do judges use large language models? Evidence from Shenzhen
dc.type: Article
dc.identifier.doi: 10.1093/jla/laae009
dc.identifier.scopus: eid_2-s2.0-85214653662
dc.identifier.volume: 16
dc.identifier.issue: 1
dc.identifier.spage: 235
dc.identifier.epage: 262
dc.identifier.eissn: 1946-5319
dc.identifier.isi: WOS:001388137600001
dc.identifier.issnl: 1946-5319
