
Threat Of AI In Legal Research

A law firm has admitted that AI chatbots such as ChatGPT were used to research and develop cases within the practice.

The judge presiding over the firm's hearing stated that the court faced 'unprecedented circumstances' after a filing was found to reference legal cases that did not exist.

The lawyer called before the court stated that he had not been aware that research from the AI tool could contain false information.

ChatGPT, the popular AI chatbot, generates original text at the request of the user. However, it carries a warning that it can produce inaccurate information.

The original case involved an individual suing an airline over a personal injury claim. The plaintiff's legal team submitted a brief that cited a number of previous cases.

The citations were intended to establish precedent and move the case forward. However, the attorneys representing the airline stated that the cases presented could not be found anywhere.

The judge presiding over the case determined that six of the cases referenced in the brief appeared to be bogus, with quotes and internal citations that also appeared to be fabricated.

This led to questioning of the legal team responsible for preparing the case.

On further investigation, it emerged that the research in question had not been prepared by Peter LoDuca, the attorney for the plaintiff, but by Steven A. Schwartz, a colleague of Mr. LoDuca at the firm.

Schwartz, who has practiced law for more than 30 years, admitted to using AI tools like ChatGPT for legal research. In a written statement, he said that LoDuca had played no part in the research before the brief was presented to the court.

Use of ChatGPT has risen sharply since its launch in November 2022. The tool can mimic human writing styles, was trained on Internet data up to 2021, and can generate information on nearly any request from the user.

At the same time, ChatGPT itself warns of the risk of spreading false information and bias. Both are serious threats in the legal realm, where inaccurate or biased material can hinder the delivery of justice.

It is therefore crucial for attorneys across the United States to carefully verify any information provided by ChatGPT.

Verifying the facts behind legal research is vital to presenting accurate and truthful information before the court, and ultimately to delivering appropriate justice.
