Lawyers

  • Manish Modak

    Partner

    B.A. LL.B.

    manishmodak@astrealegal.com

    Expertise: IT, Retail, Due Diligence, Licence and Registration, Transactions, Asset Management, FDI, Risk Assessment, Election Laws, Corruption and Bribery Laws, Adoption, Legal Strategy

  • Somnath De

    Sr. Associate

    B.A. LL.B (Hons.), DCL, C.C.I., C.F.I.

    somnath@astrealegal.com

    Practices: Cyber Laws, Internet Transactions, E-Commerce, Software and Computer Rights, Domain Disputes, Identity Theft

Legal Framework and Case Law Analysis on AI-Related Disputes

Introduction

As Artificial Intelligence (AI) advances, it brings a host of legal challenges across various domains, including intellectual property, data privacy, product liability, and criminal justice. Courts and regulatory bodies worldwide are grappling with how to apply existing legal frameworks to AI-related disputes.

This article examines key legal concerns, landmark case law, and global regulatory efforts shaping AI governance.

I. Intellectual Property Rights and AI

Case: Thaler v. United States Patent and Trademark Office (USPTO)

Facts: Dr. Stephen Thaler, the creator of the AI system “DABUS,” sought to designate the AI as an inventor in patent applications. The USPTO rejected the application, arguing that only natural persons qualify as inventors.

Legal Issue: Can an AI system be recognized as an inventor under U.S. patent law?

Court Ruling: The U.S. District Court and, on appeal, the U.S. Court of Appeals for the Federal Circuit upheld the USPTO’s decision, affirming that only natural persons can be named as inventors.

Implications:

  • Highlights the lack of legal recognition for AI-generated inventions.
  • Raises the need for legislative reforms to address AI’s growing role in intellectual property.

II. Data Protection and Privacy Concerns

Case: Google DeepMind and the UK’s Information Commissioner’s Office (ICO)

Facts: Google DeepMind partnered with the UK’s National Health Service (NHS) to develop an AI-powered app for early kidney disease detection. However, the ICO found that patient data was shared without proper consent, violating UK data protection laws.

Legal Issue: Did the data-sharing arrangement between Google DeepMind and the NHS breach the UK Data Protection Act 1998? (The ICO’s 2017 finding predated the GDPR, which took effect in May 2018.)

Regulatory Findings:

  • The ICO ruled in 2017 that the Royal Free NHS Foundation Trust failed to comply with the Data Protection Act when it shared patient records with DeepMind without adequately informing patients, setting a precedent for stricter compliance in AI-driven healthcare solutions.

Global Impact:

  • Influenced stricter GDPR enforcement and inspired similar data protection measures in other jurisdictions, most notably the California Consumer Privacy Act (CCPA).

III. Product Liability and Autonomous Systems

Case: Uber Self-Driving Car Fatality (2018)

Facts: In 2018, an autonomous Uber vehicle struck and killed a pedestrian in Arizona during a test drive, raising critical legal questions about liability in AI-driven accidents.

Legal Issue: Who bears liability—the manufacturer, software developers, or human operators?

Court & Regulatory Response:

  • Uber was not criminally charged, but the safety driver faced prosecution.
  • The case led to stricter regulations on autonomous vehicle testing.

Global Regulatory Trends:

  • The European Union (EU) and other jurisdictions are drafting comprehensive liability laws for self-driving technology.

IV. AI in Criminal Justice: Predictive Policing and Sentencing

Case: State v. Loomis (2016)

Facts: The COMPAS algorithm, an AI-powered risk assessment tool, was used in sentencing decisions in the United States. Defendant Eric Loomis challenged its use, alleging lack of transparency and algorithmic bias.

Legal Issue: Can AI-based predictive models be used in sentencing without violating due process rights?

Court Ruling:

  • The Wisconsin Supreme Court upheld COMPAS but acknowledged concerns about its potential for bias and lack of explainability.

Broader Concerns:

  • AI-driven risk assessments have been criticized for bias, disproportionately affecting marginalized communities.
  • The EU AI Act is now addressing AI’s role in law enforcement and judicial decisions.

V. Ethical and Regulatory Frameworks for AI Governance

Global regulators are responding to AI’s rapid growth with new policies to ensure safety, transparency, and accountability.

1. EU AI Act

Establishes a risk-based regulatory framework categorizing AI applications by potential harm.
Imposes strict rules on high-risk sectors such as healthcare and law enforcement.

2. OECD AI Principles

Provides global guidelines for responsible AI development.
Emphasizes transparency, accountability, and human-centric design.

3. UNESCO AI Ethics Recommendations

Establishes global ethical standards to ensure AI respects human rights and fundamental freedoms.

Conclusion

As AI continues to transform industries, it presents new legal challenges in intellectual property, data privacy, liability, and criminal justice. The lack of a unified global framework has led to varying legal interpretations, requiring governments to update regulations to balance innovation with accountability.

Going forward, collaboration among governments, industry stakeholders, and legal experts will be crucial in developing AI laws that ensure fairness, safety, and compliance with fundamental legal principles.

References

European Parliament, ‘EU AI Act: First Regulation on Artificial Intelligence’
OECD, ‘OECD AI Principles’
UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’

Note: This publication is provided for general information and does not constitute any legal opinion. This publication is protected by copyright. © 2024, Astrea Legal Associates LLP