A study from IIT Madras advocates for a “Participatory Approach” to AI governance both domestically and internationally

An inclusive approach that involves a variety of stakeholders in AI development creates transparency that increases public trust and facilitates broad acceptance, and it ensures that the resulting systems serve everyone, especially those who have historically been under-represented

INN/Chennai, @Infodeaofficial

Participatory approaches are needed in the development and governance of artificial intelligence, both in India and abroad, according to a pair of papers published by researchers at the Indian Institute of Technology Madras (IIT Madras) and the Vidhi Centre for Legal Policy, Delhi.

The papers outline the main reasons why a participatory approach to AI development can make the process fairer and improve the outcomes of the resulting systems. Through interdisciplinary collaboration, the project aims to demonstrate the necessity and significance of a participatory approach to AI governance while grounding it firmly in real-world use cases.

As AI progressively automates operations across many domains, the numerous choices and judgements that go into a system’s design and deployment can become opaque and obscure accountability. The participatory paradigm emphasises how crucial it is to include the relevant parties in the development, deployment, and oversight of AI systems.
The study was carried out in two parts by researchers from the Centre for Responsible AI (CeRAI), part of the Wadhwani School of Data Science and AI at IIT Madras, and the Vidhi Centre for Legal Policy, a prominent legal and tech policy think tank that brings together technologists, lawyers, and policy scholars.

Their findings were published as pre-print papers on arXiv, an open-access archive of nearly 2.4 million scholarly articles in fields including physics, mathematics, and computer science. The papers can be viewed at https://arxiv.org/abs/2407.13100 and https://arxiv.org/abs/2407.13103

Prof. B. Ravindran, Head of the Wadhwani School of Data Science and Artificial Intelligence (WSAI), IIT Madras, emphasised the importance of these studies, stating: “The extensive use of AI technologies in both the public and private sectors has led to their profound impact on people’s lives in novel and unexpected ways. It becomes crucial in this situation to find out how they are designed, developed, and implemented. According to this study, those who would be affected by the deployment of these systems have little to no influence on their design. This research fills that significant gap by advancing the idea that a participatory approach benefits the development and use of more responsible, secure, and human-centric AI systems.”

“The recommendations from this study are crucial for addressing several pressing issues in AI development,” added Prof. B. Ravindran, who is also Head of the Centre for Responsible AI (CeRAI) at IIT Madras. “Including diverse populations in AI development can help us build systems that benefit everyone, including historically under-represented groups. Building public confidence through greater accountability and transparency in AI systems facilitates the broader use of these technologies. Furthermore, incorporating a wide range of stakeholders can lower risks such as prejudice, privacy violations, and lack of explainability, making AI systems more dependable and safe.”

The importance of participatory approaches in AI development and governance is becoming more widely acknowledged, according to Shehnaz Ahmed, Lead, Law and Technology, Vidhi Centre for Legal Policy. However, their implementation is constrained by the absence of a defined framework for putting these concepts into practice. The paper tackles this gap by offering a sector-neutral approach that covers key issues, including identifying stakeholders, involving them at every stage of the AI lifecycle, and incorporating their input effectively. The results show how participatory approaches can improve AI systems, especially in fields such as healthcare and facial recognition technology. Only by adopting a participatory approach, she noted, can the IndiaAI mission’s primary goal of making AI genuinely human-centric be achieved.

The recommendations for implementing participatory AI include:

•  Adopt a Participatory Approach to AI Governance: Engage stakeholders throughout the entire AI lifecycle, from design to deployment and beyond, to ensure that AI systems are both high-quality and fair.

•  Establish Clear Mechanisms for Stakeholder Identification: Develop robust processes for identifying relevant stakeholders, guided by criteria such as power, legitimacy, urgency, and potential for harm. The “decision sieve” model is a valuable tool in this process (see the first sketch after this list).

•  Develop Effective Methods for Collating and Translating Stakeholder Input: Create clear procedures for collecting, analysing, and turning stakeholder feedback into actionable steps. Techniques such as voting and consensus-building can be used, but it is important to be aware of their limitations and potential biases (see the second sketch after this list).

•  Address Ethical Considerations Throughout the AI Lifecycle: Involve ethicists and social scientists from the beginning of AI development so that fairness, bias mitigation, and accountability are prioritised at every stage.

•  Prioritise Human Oversight and Control: Even as AI systems become more advanced, it is essential to keep humans in control, especially in sensitive areas such as law enforcement and healthcare.
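The papers describe the “decision sieve” only at a conceptual level. As a minimal sketch of how such a sieve might work in practice, the Python snippet below scores hypothetical stakeholder groups against the four criteria named above and keeps those that clear a threshold; the group names, numeric scale, and threshold are all illustrative assumptions, not the authors’ method.

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: int       # influence over the system's design (0-3, assumed scale)
    legitimacy: int  # validity of the group's claim on the system (0-3)
    urgency: int     # how time-critical the group's concerns are (0-3)
    harm: int        # potential for the system to harm the group (0-3)

def decision_sieve(candidates, threshold=5):
    """Keep every candidate whose combined score on the four
    criteria clears the threshold; all values are illustrative."""
    return [s for s in candidates
            if s.power + s.legitimacy + s.urgency + s.harm >= threshold]

# Hypothetical stakeholder groups for an FRT deployment in policing.
candidates = [
    Stakeholder("law enforcement agency", power=3, legitimacy=2, urgency=1, harm=0),
    Stakeholder("undertrials",            power=0, legitimacy=3, urgency=3, harm=3),
    Stakeholder("vendor marketing team",  power=1, legitimacy=0, urgency=0, harm=0),
]

for s in decision_sieve(candidates):
    print("engage:", s.name)
```

With these assumed scores, the sieve keeps the law enforcement agency (6) and the undertrials (9) while filtering out the marketing team (1); in a real exercise the four criteria would be weighed qualitatively rather than simply summed.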
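The papers likewise do not prescribe a particular rule for collating stakeholder input. As one hedged illustration of a voting technique and its known biases, the sketch below aggregates ranked preferences with a simple Borda count; the design options and rankings are invented for the example.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate ranked preferences: with n options, a first-place
    vote earns n-1 points, second place n-2, and so on."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical consent-flow options ranked by three stakeholder groups.
rankings = [
    ["opt-in consent", "opt-out consent", "no consent step"],  # patients
    ["opt-in consent", "no consent step", "opt-out consent"],  # clinicians
    ["no consent step", "opt-out consent", "opt-in consent"],  # developers
]

for option, score in borda_count(rankings):
    print(option, score)
```

Borda counts reward broadly acceptable compromises but are sensitive to strategic ranking and to how the options are framed, which is exactly the kind of limitation the recommendation above asks practitioners to keep in view.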

In the first paper, the authors investigate issues that have arisen in AI governance in the recent past and explore viable solutions. By analysing how beneficial a participatory approach has been in other domains, they propose a framework that integrates these lessons.

The second paper analyses two use cases of AI solutions and their governance: one a widely deployed, widely discussed, and well-documented technology, facial recognition, and the other a possible future application of a relatively newer class of AI systems in a critical domain.

CASE STUDIES

Facial Recognition Technology (FRT) in Law Enforcement: FRT systems have the potential to perpetuate societal biases, especially against marginalized groups, if not developed with care. The lack of transparency in how these technologies are deployed raises serious privacy concerns and risks of misuse by law enforcement. Engaging stakeholders like civil society groups, undertrials, and legal experts can help ensure that FRT systems are deployed in ways that are fair, transparent, and respectful of individual rights.

Large Language Models (LLMs) in Healthcare: In healthcare, the stakes are even higher. LLMs can sometimes generate inaccurate or fabricated information, posing significant risks when used in medical decision-making.

Furthermore, if LLMs are trained on biased data, they could exacerbate healthcare disparities. The opacity of these models’ decision-making processes further complicates matters, making it difficult to trust their outputs. Involving doctors, patients, legal teams, and developers in the development and deployment of LLMs can lead to systems that are not only more accurate but also more equitable and transparent.
