Research Work

My area of focus is minimizing the downside impacts of the risks that AI and autonomous systems pose to humans. Doing so requires building strong risk management and ethical foundations for the organization through compliance, controls, and culture. My work focuses on examining ethics, governance, risk management, audit and evaluations, and fairness in establishing a strong Responsible AI foundation for corporations. Read more about my individual research below.

1. Ethics

Mind control - Privacy, age appropriateness and deceptive patterns in apps used by adolescents v2.pdf

Ethics of nudges and dark patterns in mobile apps used by adolescents. This paper examines dark patterns in 30+ apps used by children across the education, games, communication, social media, and dating categories. This paper is based upon work supported in whole or in part by the Notre Dame-IBM Tech Ethics Lab.

Automated Misinformation- NEURIPS 2022 Conference Poster.pdf

Automated Misinformation: This poster presents research examining instances of misinformation caused by mistranslations from English to Tamil in the Facebook news feed. The results revealed that 20% of generic and ambiguous headlines, and 30% of sarcastic and domain-specific headlines, constituted misinformation caused by mistranslation.

This article deals with ethical issues in the use of behavioral biometrics. Among other topics, it covers how to deal with the associated risks, exposing the key threats and possible solutions.

2. Governance

Built a Responsible AI framework for guiding organizations in implementing governance, principles, and performance-related measures within the organization.

Contributed a draft chapter to a book project of Prof. Schmidpeter and Prof. Altenburger on Responsible Leadership and Artificial Intelligence, titled ‘Responsible AI business model for better social and business ecosystem’.

Why AI ethics Requires a Culture-Driven Approach _ by Sundar Narayanan _ Towards Data Science.pdf

Culture-driven approach to AI ethics. This position article reflects on the need for belief alignment, enabling perception, and embedding trust in developing a deep culture of AI ethics and responsibility.

Using synthetic data alternatives and a pay-for-data model as a sustainable approach to model development.

Key components of building a responsible business model (here) and sustainability principles for data ethics (here).

This work draws learnings from deforestation and applies them to AI ethics.

Responsible AI is about being accountable for the decisions and actions triggered by AI, and for their consequent influence on humans. To bring parity across society with the adoption of Responsible AI, commercial organizations should commit to putting principles into practice.

Here is the book chapter.

3. Risk Management

Built a risk management framework for AI, algorithmic, and autonomous systems. It is the first such framework with operational guidance on implementing a risk-based approach to AI governance and Responsible AI. The framework ties risks to established AI principles (e.g., the OECD AI Principles), links to the risk management process (aligned with ISO 31000), integrates AI risk into Enterprise Risk Management, and establishes governance structures for the review and reporting of risks.
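As an illustration only (not the framework's actual schema), a risk register entry that ties an AI risk to an established principle and carries a likelihood-impact rating into Enterprise Risk Management might be sketched as:

```python
from dataclasses import dataclass

# Hypothetical sketch: one register entry linking an AI risk to an
# established principle (e.g., an OECD AI Principle) and carrying the
# rating used to prioritise review and reporting. Field names and the
# scoring scale are assumptions for illustration.
@dataclass
class AIRiskEntry:
    risk: str                      # description of the AI risk
    principle: str                 # AI principle the risk maps to
    likelihood: int                # 1 (rare) .. 5 (almost certain)
    impact: int                    # 1 (minor) .. 5 (severe)
    treatment: str = "untreated"   # mitigation decided at the treatment stage
    owner: str = "ERM committee"   # escalation path into Enterprise Risk Management

    @property
    def rating(self) -> int:
        # Simple likelihood x impact score used to prioritise governance review
        return self.likelihood * self.impact

entry = AIRiskEntry(
    risk="Automated decision denies loans to a protected group",
    principle="OECD: Human-centred values and fairness",
    likelihood=3,
    impact=5,
    treatment="Add fairness evaluation gate before release",
)
print(entry.rating)  # 15 -> high priority, report to governance board
```

A real register would also record the risk management process stage and review history; this sketch only shows the principle-to-risk linkage.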

A 20-hour course focused on understanding key elements of Risk Management, including the framework, COSO integration, Systemic Societal Impact Analysis, Diverse Inputs and Multi-Stakeholder Feedback, and Residual Risk management. Register for the course on Moodle. 

4. Audit and Evaluations

Holistic Validation Framework -conference-template.pdf

Holistic validation framework for model monitoring that covers accuracy, bias, explainability, adversarial, and causal validation. Paper selected for ‘The 9th Swedish Workshop on Data Science’.
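A minimal sketch of the monitoring idea (illustrative metrics only, not the paper's implementation): each monitoring batch is scored on accuracy alongside a simple bias metric, here a demographic parity difference between two groups.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions matching the labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, groups):
    # Difference in positive-prediction rate between groups "A" and "B"
    def rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate("A") - rate("B"))

# Toy monitoring batch: labels, predictions, and a protected attribute
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]

print(accuracy(y_true, y_pred))                 # 0.666...
print(demographic_parity_diff(y_pred, groups))  # positive-rate gap of 1/3
```

The other dimensions the framework covers (explainability, adversarial, and causal validation) would plug into the same per-batch loop with their own metrics.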

Framework for evaluating the effectiveness of human-in-the-loop or human-on-the-loop (HTL) oversight. The paper examines the factors that make HTL ineffective and proposes adequacy criteria for the capability assessment of individuals deployed in HTL roles for high-risk systems.


The research highlights the principle and specific challenges associated with false propaganda, hate speech, and abuse on the Twitter platform in the Indian context. The paper discusses two kinds of challenges: (1) principle challenges and (2) specific challenges.

Accuracy, validity, Reliability, Robustness and Resilience (AVR3) - Body of Knowledge.pdf

Co-developed the audit criteria, approach, and concept note associated with the audit of AVR3 (accuracy, validity, reliability, robustness, and resilience). Integrated approaches to validity, post-market monitoring, and adverse event tracking adopted by industries including healthcare, cybersecurity, and software testing, to enable a comprehensive approach to the audit of AVR3.

A cautionary tale_ multi-stakeholder feedback in AI Ethics _ by Sundar Narayanan _ Towards Data Science.pdf

Multi-stakeholder feedback has inherent flaws; hence, it is best treated as ‘one of the means’ rather than ‘an end’ to establishing and maintaining ethics in artificial intelligence. It is one of the most inclusive approaches to restoring faith in AI systems, but it is important to recognize that it is flawed.

5. Fairness

GitHub and Pre-Trained Models_ A Keyhole View _ by Sundar Narayanan _ Towards Data Science.pdf

Bias amplification by pre-trained models on GitHub: a keyhole view. Also presented Responsibility Cards (model cards + bias metrics + historical inputs and updates) for pre-trained models. (here)

Framework to understand bias contributors in an AI system. This framework allows users to examine pre-existing, technical, and emergent bias across data, model, interface, pipeline, deployment environment, outcomes, and human-in-the-loop or on-the-loop roles.

Would the choice of activation, loss, or optimizer amplify bias in a neural network? The paper establishes the bias influence of the activation function and proposes evaluating challenger models with varied hyperparameters to make an ethical choice of a model that has (a) minimal bias influence and (b) optimal efficiency given the fairness objective.
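The selection step above can be sketched as follows. The challenger results here are made-up numbers for illustration, and the selection rule (minimal bias among models within an accuracy tolerance of the best) is one plausible reading of criteria (a) and (b), not the paper's exact procedure.

```python
# Hypothetical challenger-model results: (activation, accuracy, bias_score),
# where bias_score stands in for any fairness metric (lower is better).
challengers = [
    ("relu",    0.91, 0.12),
    ("tanh",    0.89, 0.05),
    ("sigmoid", 0.86, 0.04),
]

# Ethical choice: among challengers whose accuracy is within a tolerance
# of the best, pick the one with minimal bias influence.
best_acc = max(acc for _, acc, _ in challengers)
tolerance = 0.03
candidates = [c for c in challengers if best_acc - c[1] <= tolerance]
chosen = min(candidates, key=lambda c: c[2])
print(chosen[0])  # "tanh": near-best accuracy, lower bias score than relu
```

Sigmoid has the lowest bias score but falls outside the accuracy tolerance, so the rule trades a small accuracy loss for a large bias reduction rather than sacrificing efficiency entirely.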

Game as a model for skill enhancement for Bias Mitigation.pdf

Position paper on using a game-based training approach (e.g., a chess variant for understanding underprivileged groups) to develop a better perception of bias among technology developers and data scientists.

6. AI Liability

AI Liability - A Risk Based Approach

Artificial intelligence (AI) can cause inconvenience, harm, or other unintended consequences in various ways, including through defects or malfunctions in the AI system itself or through its use or misuse. Responsibility for AI harms or unintended consequences must be addressed to hold accountable the people who caused such harms and to ensure that victims are made whole for any damages or losses they may have sustained. Historical instances of harm caused by AI have led the EU to establish an AI Liability Directive. A provider's future ability to contest a product liability claim will rely on the good practices adopted in designing, developing, deploying, and maintaining AI systems in the market.

Understanding AI Liability

This study attempts to bring a unified approach to the liability of AI systems. It explores the approaches required to identify the different relationships, the nature of the product or service, and the level of criticality or relevance of the outcome that results in an adverse incident. This study is proposed to be done in collaboration with ForHumanity.