Law firm brings the gavel down on AI usage after widespread staff adoption

Generative AI tools like ChatGPT and DeepSeek have seen rapid adoption in professional settings, raising concerns about data security and proper usage protocols. Hill Dickinson, a major international law firm with over 1,000 UK employees, has recently implemented restrictions on AI tool access after detecting extensive usage among its staff.

Key developments: Hill Dickinson’s internal monitoring revealed substantial AI tool usage, with more than 32,000 hits to ChatGPT and 3,000 hits to DeepSeek within a seven-day period in early 2025.

  • The firm detected more than 50,000 hits to Grammarly, a writing assistance tool
  • Much of the detected usage was found to be non-compliant with the firm’s AI policy
  • Access to AI tools will now require explicit approval through a request process

Regulatory perspective: The UK’s Information Commissioner’s Office advocates for responsible AI adoption rather than outright restrictions.

  • The ICO warns against driving staff to use AI “under the radar” through blanket prohibitions
  • Organizations are encouraged to provide AI tools that align with their policies and data protection requirements
  • The Solicitors Regulation Authority has highlighted concerns about digital skills gaps that could pose risks for firms and consumers

Industry context: The legal sector is actively incorporating AI technologies while grappling with implementation challenges.

  • A September survey by Clio found that 62% of UK solicitors expected their AI usage to increase over the following year
  • Law firms commonly use AI for document drafting, contract analysis, and legal research
  • Hill Dickinson states it aims to “positively embrace” AI tools while ensuring safe and proper use

Policy implementation: Hill Dickinson has established clear guidelines for AI tool usage within the firm.

  • The policy prohibits uploading client information to AI platforms
  • Staff must verify the accuracy of AI-generated responses
  • Some access requests have already been approved under the new system

Looking ahead: While the UK government views AI as a transformative technology that can free workers from repetitive tasks, the legal sector’s experience highlights the need for careful balance between innovation and risk management. The implementation of structured AI policies by firms like Hill Dickinson may serve as a template for other professional services organizations navigating similar challenges.
