The AI Liability Problem
Artificial intelligence is making consequential decisions every day — approving loans, diagnosing patients, recommending content, determining insurance premiums, and operating vehicles. When these decisions go wrong and someone is harmed, a fundamental legal question arises: who is liable?
Indian law does not yet have a definitive answer. But courts and regulators will apply existing legal frameworks, and AI businesses need to understand how liability could land on them.
The Current Legal Framework for AI Liability in India
Tort Law and Negligence
India's law of torts, developed from common law principles, holds that a party is liable for harm caused by its negligence. An AI company could be found negligent if:
- The AI system was deployed without adequate testing
- Known flaws were not corrected
- The system operated outside its designed scope
- Reasonable safety measures were not implemented
The challenge with AI is proving causation — establishing that the AI system's decision, rather than some other factor, caused the specific harm.
Product Liability
AI software deployed as a product may attract liability under the Consumer Protection Act, 2019, which created a statutory product liability action against manufacturers, sellers, and product service providers for harm caused by defective products. Indian courts may characterise AI systems with dangerous flaws as defective products — particularly in healthcare, automotive, and financial services contexts.
Strict and Absolute Liability
Where an AI system forms part of a hazardous or inherently dangerous activity, the operator may be liable regardless of negligence. In M.C. Mehta v. Union of India (the Oleum Gas Leak case, 1987), the Supreme Court went beyond common law strict liability and imposed absolute liability — with no exceptions — on enterprises engaged in hazardous activities. This principle could extend to autonomous systems, industrial robots, or AI in high-stakes medical environments.
Who Bears Liability in an AI Supply Chain?
Modern AI systems often involve multiple parties — foundation model providers, API platforms, application developers, and end users. Liability could attach to:
- Foundation model provider: If the underlying model has inherent defects or dangerous capabilities
- Application developer: If the AI was integrated carelessly, without adequate guardrails or testing
- Deploying business: If the AI was used in a context it was not designed for, or without appropriate human oversight
- User: In limited cases, if the user misused the AI system
In practice, contractual arrangements — API terms of service, liability caps, indemnification clauses — will significantly shape how liability is allocated in the AI supply chain.
AI in Healthcare: Elevated Liability Risk
Medical AI presents some of the highest liability risk in India. AI systems assisting in diagnosis, radiology, or treatment recommendations could attract medical negligence liability under the Consumer Protection Act if the AI contributes to patient harm. Healthcare providers deploying AI must ensure that AI recommendations are subject to clinical review and that patients are informed about AI involvement in their care.
AI in Financial Services: Regulatory Liability
The Reserve Bank of India has issued guidance on algorithmic lending and model risk management. Financial entities using AI in credit decisioning without adequate model governance risk both civil liability and regulatory action. Fair lending obligations mean that AI models producing discriminatory outcomes could violate existing banking regulations.
Practical Steps to Manage AI Liability
- Document your AI systems — model cards, training data provenance, testing results
- Implement human oversight — particularly for high-stakes decisions
- Draft clear terms of service — addressing AI limitations, liability caps, and indemnification
- Obtain appropriate insurance — technology liability and errors & omissions coverage
- Monitor deployed systems — for drift, unexpected behaviour, and harm signals
- Build a governance framework — with clear accountability for AI decisions within your organisation
The Road Ahead
As India develops AI-specific regulation, liability frameworks will become clearer. But businesses deploying AI today cannot wait for legal certainty. Proactive governance and risk management are both ethically and commercially prudent.
Clawrity's AI law team advises companies on building legally defensible AI systems and governance frameworks. Reach out to discuss your AI liability exposure.