YLD sets record with inaugural AI Seminar
4-hour CLE addressed cybersecurity, data drift, case law and statutes, civil/criminal application, and more
Bar President Rosalyn Sia Baker-Barnes and Young Lawyers Division President Arti Hirani hosted 4,200 registrants for the YLD’s first-ever, four-hour AI Seminar on April 16, far exceeding organizers’ goal of 1,500 attendees.
The virtual program brought together industry and legal experts to examine how artificial intelligence is reshaping the practice of law — from cybersecurity and data privacy risks to evidentiary challenges and practical applications — while warning that lawyers remain professionally responsible for AI-assisted work.
Cybersecurity and AI
Brent Riley, VP of digital forensics and incident response in North America at CyXcel, kicked off the first of four presentations by addressing cybersecurity and AI.
Given that lawyers are currently using AI for legal research, document review, drafting, and summarization, Riley emphasized that users need to know where data is stored and who has access to it. He explained the distinctions between closed/internal AI systems (such as some legal AI tools) and open, internet‑trained large language models (like ChatGPT).
Riley advises users to treat AI as a third party or a separate company, cautioning that “data brokers behind the veil are collecting data and using it in new and creative ways.” He says this is especially alarming as the industry is starting to see AI cybersecurity breaches, and concerns are emerging around data retention and the dangers of recording or summarizing sensitive meetings.
Hallucinated case law, data security, confidentiality, and deceptive AI behavior persist as risks when using AI, partly because it is designed to frame output around a user’s preferences and search history, Riley said. It identifies users and builds a road map of their data to tailor output to that individual’s interests.
Dishonesty, Data Drift, and Hallucinations
Ken Suh, of Jackson Lewis in Chicago and an adjunct professor at the University of Chicago Booth School of Business, reminded attendees that AI recognizes patterns to make predictions, and it uses this perspective to shape information and decisions. Attorneys also shape information to help clients make decisions, and delegating tasks to AI can achieve valuable efficiencies, he says. But users should remember that AI doesn’t seek and is not bound by truth, evidence, or administrative rules; it is designed to find patterns. This makes it susceptible to:
- Dishonesty – it will knowingly go against instructions it has been given and use information that it isn’t supposed to use; and the more advanced the AI model is, the greater the likelihood that it will use unauthorized data and lie about it
- Data drift – the data a model sees no longer matches the real world it was trained on, making outputs unpredictable
- Hallucinations – assuming there is a pattern that doesn’t exist
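The data-drift risk above can be made concrete with a small sketch. The idea is to compare the data a model was built on against the data it sees today, and flag when the two no longer match. This example is illustrative only, not from the seminar; the synthetic data, the two-sample Kolmogorov-Smirnov check, and the 0.2 threshold are all assumptions chosen for demonstration.

```python
# Illustrative data-drift check (hypothetical example, not from the seminar):
# compare a "training" sample against "live" data and flag divergence.
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical cumulative distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)
    max_gap = 0.0
    for x in a + b:
        cdf_a = sum(v <= x for v in a) / len(a)
        cdf_b = sum(v <= x for v in b) / len(b)
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

def drifted(train_sample, live_sample, threshold=0.2):
    # threshold is an assumed cutoff for this illustration
    return ks_statistic(train_sample, live_sample) > threshold

random.seed(0)
train = [random.gauss(0, 1) for _ in range(500)]    # world at training time
same = [random.gauss(0, 1) for _ in range(500)]     # world unchanged
shifted = [random.gauss(1.5, 1) for _ in range(500)]  # world has moved

print(drifted(train, same))     # expected False: distributions match
print(drifted(train, shifted))  # expected True: drift detected
```

In practice the monitored quantity might be document lengths, case-type mixes, or model confidence scores rather than a synthetic Gaussian, but the pattern is the same: outputs become unreliable when the check starts firing.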
“The tech is designed to make you think you are right. It’s really good at making you think it is right,” says Suh. But since “it is neither artificial nor intelligent,” he describes it as having “beginner knowledge with a high degree of confidence,” which is why it is so important to fact-check and verify its outputs.
Suh, who teaches courses on AI, entrepreneurship, leadership, data privacy/cybersecurity, and intellectual property, said AI is spreading faster than most other technologies in the modern era, and that rapid adoption brings greater legal exposure as users and companies struggle to keep pace with legal challenges and compliance. However, “one project can change the way you do business across a company,” he said, and the rewards are so great that it makes sense to be “bullish” on AI tech while regulators “figure it out.”
While AI capabilities can be impressive, most AI projects fail, with some estimates placing failure rates as high as 95%, says Suh.
As an example, he cites California fast-food restaurants pivoting to use AI at drive-thru windows a couple of years ago. Pilot results were “terrible failures” with customers becoming frustrated over AI order errors, and stores had to scramble to re-hire staff while coping with customer dissatisfaction.
Suh says the AI Policy Atlas, which examines enacted policy regarding AI in all 50 states, can help organizations navigate AI guidelines and policy.
Case Law and Statutes
Maria Pecoraro-McCorkle is a Florida appellate attorney and expert on AI statutory and regulatory considerations, and the only lawyer in a family of IT scientists. She talked about litigation, evidence, and ethics related to AI.
In her presentation, “AI Has Entered the Chat: A 4-Part AI Adventure of Case Law and Statutes,” Pecoraro-McCorkle broaches questions ranging from healthcare data to interstate regulation to implications for child abuse material and First Amendment rights.
Pecoraro-McCorkle provided references to case law and statutes on:
- AI’s intersection with criminal and civil litigation
- Authentication and admissibility of AI‑generated or AI‑enhanced evidence
- Application of Daubert/Frye standards, hearsay rules, and the Confrontation Clause
- Deepfakes, synthetic media, and Florida’s statutory framework addressing AI‑generated content
- Brady, discovery, preservation, and AI disclosure issues
Civil and Criminal Application
Craig Linton, head of underwriting management for cyber risks at Beazley, says the next frontier for AI is privacy. He advises attorneys to use and stay informed about AI, and recommends following AI thought leaders on X.
Linton’s presentation delved into practical AI use:
- How large language models are built and trained.
- Practical guidance on choosing AI tools and writing better prompts.
- Use of “skills” or reusable prompt files to improve consistency and reduce risk.
- How to spot AI‑generated text, images, or expert reports.
- Emerging risks around AI, privacy, surveillance, and biometric data.
For lawyers who want a proficiency-building exercise that also brings value to their organization, Linton suggests creating an AI coding agent to generate client alerts for their firm, starting by using AI to write the first draft of a skill.md document, a reusable instruction file for AI.
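As a sketch of what such a file might look like, a skill.md is simply a reusable set of instructions an AI tool reads before drafting. The headings, placeholders, and wording below are hypothetical, invented for illustration, and are not taken from Linton’s presentation:

```markdown
# Skill: Draft Client Alert (hypothetical example)

## Role
You draft client alerts for a law firm. Your output is a first draft
only; a lawyer reviews and verifies everything before it is sent.

## Inputs
- Topic: {{topic}}
- Audience: {{audience}}

## Rules
- Cite only sources the user supplies; never invent a case or statute.
- Keep the alert under 600 words, in plain English, with no legal advice.
- End with: "This alert is for information only and is not legal advice."
```

Because the same rules apply to every draft, a file like this improves consistency across alerts and reduces the risk of hallucinated citations slipping through unreviewed.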
A common thread among all the experts was emphasis that lawyers remain professionally responsible for AI‑assisted work.
“The Florida Bar YLD AI Summit,” course number 9744, qualifies for 4 hours of Technology and General CLE credit. The YLD AI Summit presentation and additional materials will be made available on the Young Lawyers Division’s website.
Alabama Bar members: The Florida Bar YLD AI Summit also qualifies for 4 hours of general CLE credit for the Alabama Bar. Attendees can email their name and bar number to YLD Out of State board member Tyler Thull at [email protected] for assistance in reporting attendance to the Alabama Bar.
(Editor’s Note: Lawyers and law firms should conduct their own analysis and consider all relevant facts, professional obligations, and applicable rules before adopting any new technology. Florida Ethics Opinion 24-1 addresses many of the ethics issues related to using AI.)
