
"The Application of Artificial Intelligence in Law" by Gellert B. Pal

  • Writer: ILSA HHS LAW JOURNAL
  • Mar 13
  • 8 min read

1.     Introduction

Artificial intelligence (hereinafter: “AI”) has advanced rapidly in recent decades, influencing various aspects of life, including the legal field. This paper explores the evolving connection between AI and law. The discussion will analyse the advantages and disadvantages of using AI in legal practice. Among the benefits, focus will be placed on the relationship between the judiciary and AI and the role AI plays in providing legal advice. Conversely, the disadvantages section will examine ethical dilemmas arising from using AI in legal settings. Finally, the paper will conclude with a summary of the main points.

2.     Defining Artificial Intelligence

AI is a technology that has been integrated into our lives, with applications ranging from smartphones to self-driving cars, from smart machines in agriculture to industrial robots, and from operating theatres to courtrooms. As AI’s role in our lives continues to expand, it is of paramount importance to understand its meaning, which I aim to clarify through the following definitions:

According to Article 3 of the AI Act, an ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.[1] Furthermore, as the Oxford English Dictionary states, AI entails the capacity of computers or other machines to exhibit or simulate intelligent behaviour.[2] From a regulatory perspective, AI can be classified among technologies with unpredictable effects.[3] AI falls into this group because the human brain, with its cognitive limitations, cannot process the vast amount of information that AI can. This allows AI to consider possibilities that its creators cannot foresee. This phenomenon, known as the “Black Box Effect”, refers to AI’s tendency to produce results that are unexpected or cannot be fully explained.[4] The characteristics of AI also include autonomy and adaptivity.[5] Autonomy refers to the system’s ability to perform tasks in complex environments without the user’s continuous control, while adaptivity reflects AI’s capacity to enhance its performance by learning from experience.[6]

3.     AI as a judicial assistant

Numerous examples highlight the significant advantages of incorporating AI into legal practice, such as faster and more accurate decision-making and improved access to justice.[7] One of the most notable advancements, however, is AI's ability to predict legal outcomes, which raises an important question.[8] If AI can effectively handle legal disputes, why do we still need human judges?[9] The answer is that human judges remain necessary, because in difficult, more complex cases practice shows that AI can only provide a list of possible solutions.[10] A case can be classified as complex when it involves numerous pre-trial motions, multiple witnesses, a vast amount of evidence, and connections to other proceedings, and requires extensive court oversight.[11] In contrast, a simple case is usually uncontested, routine in nature, and may not even require the involvement of a lawyer.[12]

Another important factor to consider before relying on AI for decision-making is the level of conflict between the parties. When there is significant animosity, rivalry, or competition, and the stakes are high, as in financial matters, parties are often unwilling to cooperate and may resort to aggressive or obstructive tactics.[13] In such situations, a judge's role extends beyond making rulings; the judge is also responsible for managing and resolving these conflicts. Parties are also more likely to respect the reasoning of a human judge than that of an algorithm.

AI is best suited to handling low-conflict, simple cases, which it can resolve effectively while automating processes to the benefit of both the parties and judges.[14] In complex cases, by contrast, the data available to AI may be insufficient to generate accurate predictions.[15] Such cases therefore require the expertise of a human judge.

While AI may not yet be capable of taking the judge's seat in difficult cases, it can play a valuable role as an assistant. AI can help by reviewing documents, comparing similar cases, drafting decisions, and providing essential support in complex and lengthy cases.[16] Ultimately, however, the responsibility for making the final judgment rests with the human judge, not the algorithm.

4.     AI in legal advising

By integrating AI into their workflows, legal professionals gain access to intelligent tools that enhance accuracy, reduce costs, and streamline day-to-day tasks. These tools can process and analyse vast amounts of data in a short period, enabling lawyers to make quicker decisions and deliver higher-quality advice to their clients while reducing the risk of human error.

Notable examples of AI in legal advising are Lexis+ AI and Lex Machina, both developed by LexisNexis. On the one hand, Lexis+ AI assists lawyers in drafting documents, summarising cases, and conducting more efficient legal research.[17] On the other hand, Lex Machina offers lawyers data insights on case outcomes, judges' rulings, and litigation trends.[18]

AI also offers innovative solutions that could eliminate the need for legal professionals entirely, such as DoNotPay, the service developed by Mr Joshua Browder.[19] This application listens to courtroom proceedings, analyses the information in real time, and advises users on how to respond.[20] Mr Browder claimed that his app had already assisted in over two million cases.[21] His service could have marked a significant shift in access to justice, as it allows individuals to receive legal advice without paying the high costs of hiring a lawyer. However, DoNotPay has faced legal action for making claims about its legal abilities without evidence to back them up.[22]

While the accuracy, reliability and technical aspects of such applications still require refinement, Lexis+ AI and Lex Machina exemplify how legal services can be revolutionised.

5.     AI and ethical dilemmas

The previous sections highlight the benefits AI can offer. In the interest of balance, however, it must also be considered that the use of AI systems can cause serious harm. Even without being consciously programmed to do so by their designers, AI systems have the potential to discriminate on racial grounds, violate privacy and confidentiality rules, and contribute to human rights violations.

For instance, judicial decision prediction relies on analysing vast amounts of court rulings, with AI systems sorting and classifying data to forecast the outcomes of similar cases. While the accuracy of such predictions improves with the volume of case data available, the process inherently involves handling sensitive personal information, which introduces significant risks. There have been instances where AI-based legal services have malfunctioned, leading to the leakage of confidential client information.[23] These breaches have resulted in substantial fines and have drawn attention to the ethical challenges posed by such systems. It is therefore important that AI-based algorithms are designed to respect the privacy and security of personal data and not to violate clients' rights.
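To make the mechanics concrete, the following is a minimal sketch of how such outcome prediction typically works, written in Python with the scikit-learn library. Every case summary, label, and parameter here is invented for illustration; real systems train on far larger corpora of actual rulings, which is precisely where the privacy risk described above enters.

```python
# Minimal sketch of judicial outcome prediction as text classification.
# All case summaries and outcome labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical past rulings: short summaries paired with who prevailed.
cases = [
    "tenant withheld rent citing uninhabitable living conditions",
    "landlord sued for unpaid rent with a signed lease and payment records",
    "employee dismissed without notice despite a fixed-term contract",
    "employer documented repeated misconduct before the termination",
]
outcomes = ["claimant", "defendant", "claimant", "defendant"]

# TF-IDF turns each summary into a weighted word-frequency vector;
# logistic regression then learns which terms correlate with each outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, outcomes)

# Forecast the outcome of a new, unseen dispute.
new_case = ["tenant stopped paying rent after the heating failed"]
print(model.predict(new_case))        # predicted winner
print(model.predict_proba(new_case))  # confidence for each outcome
```

Note that the training corpus consists of the rulings themselves, so any sensitive personal detail contained in those texts flows directly into the model, which is why the design requirements above matter.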

Moreover, challenges such as opacity and unpredictability raise significant concerns. When a judge relies on AI to assist in deciding a case, maintaining transparency is crucial. This involves documenting and explaining how AI was used, from the initial stages of the process to the resolution of the case.

6.     The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm

The use of algorithmic risk assessors to estimate the likelihood of an individual committing a crime in the future is increasingly common among prosecutors, police, and judges.[24] These algorithms are favoured for their efficiency and their perceived accuracy, which is often thought to surpass human judgment, as well as for their promise of eliminating the irrational biases that can influence human decision-making. However, it is increasingly recognised that AI is not immune to discrimination and can perpetuate or even amplify existing biases.

In 2016, the COMPAS algorithm, designed to predict the likelihood of an individual re-offending based on past data, faced significant criticism for racial bias.[25] Studies revealed that the algorithm was more likely to predict re-offending for Black defendants compared to White defendants, highlighting racial disparities in its predictions.[26] In response to such issues, critics have proposed three potential solutions: first, excluding input factors closely related to race; second, redesigning algorithms to equalise predictions across racial groups; third, rejecting the use of algorithmic methods entirely.[27] The root of the problem lies in the reliance on biased data to make future predictions, which risks amplifying inequalities.

For instance, consider two individuals, Person A and Person B, of different ethnic backgrounds. If data shows that, over the past decade, Person A's ethnicity accounted for 37% of crimes while Person B's accounted for only 7%, the algorithm is likely to predict that Person A has a higher probability of reoffending, regardless of how law-abiding their life has been. This example illustrates how such systems rely heavily on biased data, often disregarding individual circumstances.
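A minimal sketch of this base-rate effect, again in Python with scikit-learn and entirely invented numbers, shows how a model trained on a group feature with skewed historical labels scores two individuals with identical records differently:

```python
# Sketch of how skewed group base rates dominate risk predictions.
# All data is invented; groups 0 and 1 stand in for the two ethnic groups.
from sklearn.linear_model import LogisticRegression

# Features: [group, prior_convictions]. The labels mirror a biased
# historical record, not individual conduct.
X = [
    [0, 0], [0, 1], [0, 0], [0, 2], [0, 1],  # group 0 defendants
    [1, 0], [1, 1], [1, 0], [1, 2], [1, 1],  # group 1 defendants
]
y = [1, 1, 1, 1, 0,  # group 0: 4 of 5 recorded as re-offending
     0, 0, 0, 0, 1]  # group 1: 1 of 5 recorded as re-offending

model = LogisticRegression().fit(X, y)

# Person A and Person B have identical records: no prior convictions.
person_a = [[0, 0]]  # Person A belongs to group 0
person_b = [[1, 0]]  # Person B belongs to group 1
print(model.predict_proba(person_a)[0][1])  # high predicted risk
print(model.predict_proba(person_b)[0][1])  # low predicted risk
```

Even dropping the group column does not fully resolve the problem: features such as postcode or arrest history can act as proxies for group membership, which is one reason the first remedy listed above, excluding race-related inputs, remains contested.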

The 2016 COMPAS incident highlighted how probability estimation systems can reinforce inequalities and discrimination.[28] If these algorithms are trained on biased data, they inevitably produce biased and unfair outcomes. To prevent this, such systems must be designed with fairness, transparency, and accountability at their core, ensuring that discrimination is reduced rather than amplified.

7.     Conclusion

Artificial intelligence is rapidly and continuously transforming the legal sphere. As demonstrated throughout this paper, AI offers numerous benefits: it enhances access to justice, minimises human error, and increases the speed and efficiency of legal proceedings. However, despite its substantial advantages, AI also presents serious challenges. Issues such as racial discrimination, breaches of data protection rules, and violations of human rights highlight the need for improvements in the design and regulation of these systems. While AI may efficiently handle simpler, repetitive tasks, its flaws and the stringent regulatory standards it must meet mean that it is far from replacing the nuanced and complex roles of human legal professionals.


[1] Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689, art 3.

[2] ‘Artificial Intelligence’ (Oxford English Dictionary, 2025) <https://www.oed.com/dictionary/artificial-intelligence_n?tab=meaning_and_use#38531565> accessed 21 December 2024.

[3] András Tóth, 'The Paradox of Regulating Artificial Intelligence and Fundamental Questions of Its Legal Implications' (2019) 2 Infocommunications and Law 4 (original in Hungarian).

[4] ibid.

[5] Erlis Themeli and Stefan Philipsen, 'AI as the Court: Assessing AI Deployment in Civil Cases' (2021) SSRN 4 <https://ssrn.com/abstract=3791553> accessed 27 December 2024.

[6] ibid.

[7] Antony Lawrence and Amelia Antony (eds), AI for a Smarter Future: Transforming Industries, Society & Governance (Paul Shikshan Sansthas 2024) 41, accessed 25 December 2024.

[8] Aastha Budhiraja and Kamlesh Sharma, 'Machine Learning Infused Approach for Advancing Legal Predictive Analytics' (2024) 31(8s) Communications on Applied Nonlinear Analysis 3 <https://doi.org/10.52783/cana.v31.1506> accessed 22 December 2024.

[9] ibid. 27.

[10] Erlis Themeli and Stefan Philipsen, 'AI as the Court: Assessing AI Deployment in Civil Cases' (2021) SSRN 2 <https://ssrn.com/abstract=3791553> accessed 26 December 2024.

[11] ibid. 4.

[12] ibid. 17.

[13] ibid. 10.

[14] ibid. 11.

[15] ibid.

[16] ibid.

[18] ‘About’ (LexisNexis, 2024) <https://lexmachina.com/about> accessed 27 December 2024.

[19] ‘About Us’ (DoNotPay, 2024) <https://donotpay.com/about/> accessed 28 December 2024.

[20] CBS News, 'Robot Lawyer Won’t Argue in Court After Jail Threats, DoNotPay Says' (CBS News, 26 January 2023) <https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/> accessed 28 December 2024.

[21] Joshua Browder, Speaker Profile (MMA Global) <https://www.mmaglobal.com/speakers/joshua-browder> accessed 28 December 2024.

[22] 'DoNotPay' (Federal Trade Commission, 2024) <https://www.ftc.gov/legal-library/browse/cases-proceedings/donotpay> accessed 21 February 2025.

[23] Reuters, 'Lawyers Using AI Must Heed Ethics Rules, ABA Says in First Formal Guidance' (Reuters, 2024) <https://www.reuters.com/legal/legalindustry/lawyers-using-ai-must-heed-ethics-rules-aba-says-first-formal-guidance-2024-07-29/> accessed 27 December 2024.

[24] Sandra G. Mayson, 'Bias In, Bias Out' (2019) 128 Yale Law Journal 2218, 5 <https://ssrn.com/abstract=3257004> accessed 28 December 2024.

[25] Julia Angwin and others, 'Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks' (ProPublica, 23 May 2016) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> accessed 26 December 2024.

[26] Shira Schneider, Algorithmic Bias: A New Age of Racism (S Daniel Abraham Honors Program, Stern College for Women, Yeshiva University, 30 April 2021) 18 <https://repository.yu.edu/server/api/core/bitstreams/a9059591-d8f9-4336-b558-159749759389/content> accessed 27 December 2024.

[27] Sandra G. Mayson, 'Bias In, Bias Out' (2019) 128 Yale Law Journal 2218, 8 <https://ssrn.com/abstract=3257004> accessed 28 December 2024.

[28] Julia Dressel and Hany Farid, ‘The Dangers of Risk Prediction in the Criminal Justice System’ (MIT Case Studies in Social and Ethical Responsibilities of Computing, 2021) 3 <https://farid.berkeley.edu/downloads/publications/scienceadvances17> accessed 28 December 2024.
