Lawyers

  • Manish Modak

    Partner

B.A., LL.B.

    manishmodak@astrealegal.com

Expertise: IT, Retail, Due Diligence, Licence and Registration, Transaction, Asset Management, FDI, Risk Assessment, Election Laws, Corruption and Bribery Laws, Adoption, Legal Strategy

  • Urwi Keche

    Partner

B.A. in Law, LL.B., LL.M. (Administrative and Constitutional Law)

    urwikeche@astrealegal.com

Practices: Property Due Diligence, Trade Mark, Copyright, Legal Drafting, Medico-Legal Matters, Arbitration


ICMR Issues Landmark Ethical Guidelines to Govern AI Use in India’s Healthcare and Biomedical Research

The Indian Council of Medical Research (ICMR) serves as India’s primary authority for governing AI use in healthcare and biomedical research. In March 2023, ICMR released “Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare,” establishing a framework that ensures AI tools in medicine are safe, fair, transparent, and patient-centric. This overview covers how ICMR’s governance will impact India’s healthcare sector.

The 10 Core Ethical Principles

ICMR established 10 foundational principles that guide all AI development and deployment in healthcare:

  • Accountability and Liability – Clear responsibility when things go wrong
  • Autonomy – Patients and doctors retain decision-making authority; AI assists but doesn’t replace
  • Data Privacy – Patient health information is protected and anonymized
  • Collaboration – Interdisciplinary cooperation among stakeholders
  • Safety and Risk Minimization – All AI tools must be proven safe before clinical use
  • Accessibility and Equity – AI benefits must reach everyone, not just wealthy urban populations
  • Data Optimization – High-quality, unbiased, representative datasets
  • Non-Discrimination and Fairness – AI must work fairly across all populations, genders, and regions
  • Trustworthiness – AI solutions must be transparent, reliable, and valid
  • Clinical Validation – Rigorous testing and approval before doctors use AI on patients

Key Changes for Healthcare Stakeholders

For Patients:

  • Must receive informed consent before AI is used in their care
  • Have the right to know when AI influences their diagnosis or treatment
  • Their health data is protected through anonymization and separation from personal identifiers

For Hospitals and Doctors:

  • Cannot use new AI diagnostic tools without first proving they work through clinical validation
  • Must obtain ethics committee approval before deploying AI systems
  • Doctors maintain final decision-making authority; AI is a tool, not a replacement

For AI Developers:

  • Must train AI systems on diverse, representative Indian datasets to prevent bias
  • Must ensure AI explainability, so that doctors and patients can understand how decisions are made
  • Subject to government medical device regulations if developing diagnostic tools

For Data Protection:

  • Patient health data used in AI must be anonymized and delinked from personal information
  • Anonymization protects against data breaches and misuse as healthcare digitalization increases
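To make the anonymization and delinking requirement concrete, here is a purely illustrative sketch (not drawn from the ICMR guidelines themselves, and all field names are hypothetical): direct identifiers are stripped from a patient record and replaced with an opaque token, while the token-to-identity mapping is held in a separate, access-controlled link table.

```python
import hashlib
import secrets

def anonymize_record(record: dict, link_table: dict) -> dict:
    """Illustrative only: remove direct identifiers from a record and
    replace them with an opaque token. The token-to-identity mapping is
    stored separately in link_table (the 'delinking')."""
    identifiers = {"patient_id", "name", "phone", "address"}  # hypothetical fields
    # A random salt prevents re-identification by hashing guessed IDs.
    salt = secrets.token_hex(16)
    token = hashlib.sha256((salt + str(record.get("patient_id"))).encode()).hexdigest()[:12]
    # The link table lives apart from the research dataset, under access control.
    link_table[token] = {k: record[k] for k in identifiers if k in record}
    cleaned = {k: v for k, v in record.items() if k not in identifiers}
    cleaned["token"] = token
    return cleaned

link_table = {}
raw = {"patient_id": "P001", "name": "A. Sharma", "age": 54, "diagnosis": "T2DM"}
anon = anonymize_record(raw, link_table)
# 'anon' carries only clinical fields plus an opaque token;
# re-identification requires the separately stored link_table.
```

This is only one possible pattern; real deployments would also address quasi-identifiers, audit logging, and the key-management regime for the link table.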

How ICMR’s Governance Works in Practice

  • Ethics Committee Review: Before testing any AI tool on patients, developers must present it to an institutional ethics committee. The committee assesses scientific rigor, patient safety, and ethical compliance.
  • Medical Device Regulation: AI tools classified as medical devices (diagnostic software, treatment recommendations, etc.) require approval from the Central Drugs Standard Control Organization (CDSCO) before commercial use.
  • Global Collaboration: India joined the HealthAI Global Regulatory Network in September 2025, enabling collaboration with countries like the UK and Singapore on AI healthcare standards.
  • Living Guidelines: ICMR’s framework is designed to evolve as technology and understanding improve.

Any AI system used in biomedical research or clinical settings must safeguard patient privacy, avoid discriminatory outcomes, and remain fully accountable to regulatory and ethical requirements. Unauthorized or irresponsible use of patient data, deployment of unvalidated AI tools, or failure to provide explainability and oversight may constitute violations of ethical and legal obligations. These guidelines caution developers, researchers, and healthcare institutions that AI must support clinical judgment, not replace it, and that non-compliance may lead to professional, ethical, or regulatory consequences.

____________________________________________________________

Astrea Legal Associates LLP

Contributed by Urwi Keche, Partner, and Naisergi Desai, Trainee

www.astrealegal.com

Note: This publication is provided for general information and does not constitute a legal opinion. This publication is protected by copyright.

© 2025, Astrea Legal Associates LLP