Criminal Liability of Artificial Intelligence Machines in India

Artificial intelligence (AI) is transforming sectors across the globe, and its adoption is rising across industries in India as well. While AI offers numerous benefits, it also poses unique legal challenges. A key issue is determining criminal liability when an AI system causes harm. This paper examines that issue under Indian law.

Evolution of Artificial Intelligence and Its Growing Adoption

The concept of intelligent machines dates back to Greek mythology, but modern AI emerged in the 1950s [1]. AI development accelerated from the 1990s with increased computing power and big data [2]. Today, AI is deployed in diverse applications from virtual assistants like Alexa to self-driving cars and medical diagnosis [3]. In India, AI adoption is rising in banking, agriculture, healthcare, and other sectors [4]. The Indian government is also promoting AI development under the National Strategy for AI [5]. However, the increasing deployment of autonomous AI systems that can make decisions without human intervention raises concerns about apportioning responsibility when things go wrong.

Key Features of AI Relevant to Criminal Law

Certain key attributes of AI systems are relevant when viewed through the lens of imposing criminal liability [6]:

  1. Autonomous functionality allowing independent decision-making based on algorithms and training data. This makes it difficult to attribute intent.
  2. Ability to continuously learn and improve performance based on new data, so future actions may not always align with the original programming objectives.
  3. Opacity owing to the complexity of algorithms, making it hard to trace decisions back to code.
  4. Data-dependence, as output relies heavily on the quality of training data, which may embed societal biases.

These characteristics pose challenges in applying traditional criminal law principles to AI.

Overview of Criminal Liability under Indian Law

Criminal liability requires proving actus reus (guilty act) and mens rea (guilty mind) [7]. For actus reus, the Indian Penal Code (IPC) provides that a person can be held liable for an act as well as an illegal omission (sec. 32) [8]. Mens rea refers to the mental state and intention behind the crime [9].

General defences under the IPC, such as unsoundness of mind and intoxication, may apply to humans, but it is unclear whether they can exempt AI systems [10]. There are also limitations in sentencing, since punishments like imprisonment or the death penalty are inapplicable to machines [11].

While civil liability principles for AI are evolving, criminal liability remains complex and contested worldwide [12].

Challenges in Imposing Criminal Liability on AI Systems

Criminal liability rests on the ability to attribute intentionality, which is difficult for autonomous machines [13]. Key challenges include:

  1. Mens rea and establishing intent: Mens rea requirements like malice, knowledge, or negligence presume human consciousness and decision-making [14]. But AI systems lack genuine consciousness, even if they can mimic it convincingly through data patterns [15]. Proving mens rea would require looking into inherently opaque algorithms and training data, which is technically challenging [16].
  2. Causal links and foreseeability: Establishing a causal link between an AI system’s recommendation or action and the resultant harm requires assessing foreseeability [17]. However, autonomous systems continuously evolve, so foreseeability at the time of original development may not apply.
  3. Allocation of liability: With multiple parties such as developers, trainers, and users involved, it becomes difficult to pinpoint responsibility [18]. Manufacturers may blame inadequate training, while trainers can fault the algorithm design [19]. Such diffusion of responsibility must be addressed.
  4. Corporate personhood: AI systems currently lack legal personhood, so any liability would be indirect, routed through associated legal entities like corporations [20]. This fails to acknowledge the autonomous role played by AI and leads to an accountability vacuum [21].

Overall, the lack of direct criminal culpability for AI creates an enforcement gap. Some alternative models are emerging to tackle this gap.

Emerging Models for Criminal Liability of AI Systems

In the absence of settled principles, evolving models that hold promise for allocating criminal liability include [22]:

  1. Personhood models that grant AI legal or electronic personhood so that it can bear direct liability in the way corporations do. This would recognise the autonomous capabilities of certain AI systems [23].
  2. User liability to hold end-users directly responsible for the actions of AI systems under their control based on principles of vicarious liability [24]. This follows the accountability approach seen in domains like environmental law.
  3. Strict liability for inherently dangerous AI applications, such as autonomous weapons, which can cause irreversible harm [25]. This follows the ‘absolute liability’ approach applicable to certain offences in India.
  4. Distributed responsibility frameworks that allow balancing liability across multiple parties like developers, procurers, trainers, and users [26]. Such collaboration and shared accountability models are gaining support.
  5. Composite approaches combining suitable elements of the above models based on the context and risk level of specific AI systems [27]. For instance, narrow, non-anthropomorphic AI could follow user liability, while more autonomous AI may warrant personhood.

The Way Forward for India

In India, the only recourse at present is to attribute criminal liability to the human creators, procurers, and users of an AI system, since AI systems lack legal personality. However, this fails to acknowledge the reality of autonomous decision-making systems. It is recommended that India:

  1. Assess proposals to grant sophisticated AI legal personhood through suitable amendments to prevailing laws. This would enable direct redressal when advanced systems cause harm [28].
  2. Develop sectoral frameworks for major application areas, such as autonomous transportation and AI in policing, to allocate criminal liability appropriately [29].
  3. Formulate regulatory sandbox mechanisms to evaluate emerging AI and calibrate accountability mechanisms before widespread deployment [30].
  4. Promote transparency in algorithm design and use to facilitate better causation analysis when needed [31].
  5. Invest in explainable AI to make high-stakes deep learning systems more interpretable [32].

Conclusion

Imposing criminal liability on AI systems raises multiple technological, ethical, and legal dilemmas with no easy answers. India needs proactive engagement and deliberation to formulate prudent accountability frameworks calibrated to the risks posed by AI in sensitive domains. Legal personhood, distributed responsibility, and transparency measures all need consideration. A collaborative approach can balance innovation and public-interest concerns. But the status quo is untenable, as unaccountable AI risks eroding constitutional safeguards for Indian citizens.

References

[1] Nilsson, N.J. (2009). The quest for artificial intelligence. Cambridge University Press.

[2] Jain, N.K., Jalota, R. & Kumar, A. (2018). The history and progress of artificial intelligence. 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT).

[3] Raval, N. (2020). Artificial Intelligence and Its Wide Range Application in India. International journal of innovative technology and exploring engineering, 9(2).

[4] NITI Aayog (2021). Towards Responsible AI: India’s Approach in Three Phases. NITI Aayog.

[5] Dutton, T. (2018). An overview of national AI strategies. Politics + AI.

[6] Zou, J., Schiebinger, L., Miller, T. & Koussa, M. (2018). AI can be sexist and racist — it’s time to make it fair. Nature, 559(7714), 324-326.

[7] Kadish, S.H., Schulhofer, S.J. & Steiker, C.S. (2007). Criminal law and its processes: Cases and materials. Aspen Publishers.

[8] The Indian Penal Code, 1860, §32.

[9] Williams, G. (1961). Criminal Law: The General Part. Stevens & Sons.

[10] Hallevy, G. (2015). The criminal liability of artificial intelligence entities-from science fiction to legal social control. Akron Intell. Prop. J., 4, 171.

[11] Scherer, M.U. (2016). Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2).

[12] European Commission (2020). On Artificial Intelligence – A European approach to excellence and trust. White Paper.

[13] Yampolskiy, R. V. (2013). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In Philosophy and theory of artificial intelligence (pp. 389-396). Springer, Berlin, Heidelberg.

[14] Gabriel, I. (2020). Artificial intelligence, values, and alignment. Minds and Machines, 30(3), 411-437.

[15] Buiten, M.C. (2019). Towards Intelligent Regulation of Artificial Intelligence. European Journal of Risk Regulation, 10(1), 41-59.

[16] Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).

[17] Chopra, S. & White, L. (2011). Artificial agents-personhood in law and philosophy. Proceedings of the 16th International Conference on Artificial Intelligence and Law, ICAIL.

[18] Murata, S. (2020). Diffusion of criminal liability in autonomy. The Palgrave Handbook of Artificial Intelligence in Law, 225-244.

[19] Taddeo, M. & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751-752.

[20] Calverley, D.J. (2008). Imagining a non-biological machine as a legal person. AI & Society, 22(4), 523-537.

[21] Sganga, C. (2018). Fundamental rights and the ethics of artificial intelligence in the EU: emergentist ethics. AI, Robots, and Swarms Issues Brief, 4.

[22] Scherer, M.U. (2016). Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology, 29(2).

[23] Amidei, C., Mazzotta, D.G. & Piñera, J. (2021). Personhood and Artificial Intelligence. Palgrave Macmillan.

[24] Casey, P. & Lemley, M. (2019). You Might Be a Robot. S. Cal. L. Rev., 90, 1297.

[25] Dannemann, G. & Kennedy, L. (2018). Strict liability for AI: An ethical analysis.

[26] Chander, A. (2017). The racist algorithm. Mich. L. Rev., 115, 1023.

[27] Bryson, J.J. (2010, August). Robots should be slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, 63-74.

[28] Chopra, S. & White, L.F. (2011). Artificial agents-personhood in law and philosophy. In ICAIL.

[29] Garry, A. (2019, February). AI’s dark side: Firms need to take the bad with the good. In Proc. AAAI 2019 Spring Symposium on Responsible AI (pp. 36-43).

[30] Floridi, L. (2018). AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and machines, 28(4), 689-707.

[31] Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2).

[32] Gunning, D. & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.