Wealthy Clients Use AI, Lawyers Warn of Risks

CNBC reports that high-net-worth clients are increasingly using AI chatbots such as ChatGPT and Claude to research estate planning, prenups, and tax strategies. Lawyers quoted by CNBC, including Tasha Dickinson of Day Pitney and Robert Strauss of Weinstock Manion, say they regularly field client questions prompted by AI, and that clients sometimes upload trust documents to AI systems and return with suggested edits. CNBC also reports that a recent court ruling found that asking Claude or ChatGPT for legal advice can be used against a client in court. Lawyers told CNBC that AI can produce plausible but inappropriate recommendations for specific client circumstances, and that the trend is generating additional client counseling and defensive work.
What happened
CNBC reports that wealthy clients are increasingly turning to AI chatbots to research estate plans, prenups, and tax strategies. CNBC quotes lawyer Tasha Dickinson saying she gets calls every week from clients referencing advice from ChatGPT, Claude, or other chatbots, and recounts a case in which a Florida client proposed a community property trust after consulting AI, even though the client's wife had died. CNBC also quotes Robert Strauss saying clients have uploaded trust documents to AI systems and returned with suggested edits, forcing lawyers to explain why those AI recommendations are inappropriate for the client. CNBC reports that a recent court ruling found that seeking legal advice from Claude or ChatGPT can be used as evidence in court.
Editorial analysis - technical context
Industry-pattern observations: Large language models produce fluent, confident outputs but can hallucinate or omit material legal constraints. Systems that accept document uploads create an evidence trail that may be discoverable, and their outputs are typically not protected by attorney-client privilege. These are broad characteristics of current consumer-facing LLM services rather than assertions about any single firm.
Context and significance
Editorial analysis: For high-net-worth estates, small factual mismatches or legally inapplicable strategies can create material consequences. Public reporting frames this story at the intersection of model reliability, data privacy, and evidentiary risk; those themes already shape enterprise adoption of LLMs. The CNBC article highlights how consumer use of LLMs is shifting part of the workload toward verification and client education rather than replacing specialist counsel.
What to watch
Editorial analysis: Observers should track:
- further court decisions that clarify whether and how AI interactions are admissible or subject to privilege waiver
- vendor policies on document uploads and retention
- law-firm disclosures or new client-intake practices addressing AI-sourced advice

Industry commentators and regulators will likely use such rulings to refine guidance on privilege, discoverability, and ethical use of AI in legal workflows.
Scoring Rationale
The story highlights practical legal and evidentiary risks from consumer LLM use that affect data handling and model-output reliability. It is notable for practitioners but not a frontier-model or regulatory shock.