Reckless Strokes

About Reckless Strokes

Acrylic pour painting is a satisfying art form that celebrates the natural patterns formed as the paint flows. The results are abstract, powerful, and create innumerable perspectives for the observer. For me, it is an expression of the diversity in the world.

Would you disturb a well-made pour-painting canvas with recklessness? I did. The recklessness was obvious, with strokes that were unstructured, non-cohesive, and unaligned. But it was purposeful. How better can you express bias seeping into artificial intelligence (specifically, artificial neural networks) and disturbing a beautiful, diverse world? It is recklessness!

This art project expresses a non-exhaustive set of activities through which bias can seep in when the processes of designing, developing, and deploying an AI system recklessly fail to prevent it.

Follow the images and the associated reflections to immerse yourself in the expressions of "Reckless Strokes".

Credits: Michael McCarthy, with whom I researched the technical bias contributed to neural networks by model hyperparameter choices. The learnings from our research are integrated into this art project.

Problem Framing

Framing the problem is the first step at which bias can be allowed into an artificial intelligence system.

Data Sourcing

Sourcing data from multiple channels and examining it for completeness is critical. Pre-existing bias invariably exists in every dataset.
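
As a minimal sketch of what "examining for completeness" can look like, assuming hypothetical records pulled from two channels (the column and channel names are invented), a simple missingness check per channel already shows where the gaps concentrate:

    import pandas as pd

    # Hypothetical records pulled from two channels; columns and channel names are invented.
    web = pd.DataFrame({"age": [34, None, 51], "income": [72000, 58000, None], "channel": "web"})
    branch = pd.DataFrame({"age": [29, 44], "income": [None, 61000], "channel": "branch"})
    combined = pd.concat([web, branch], ignore_index=True)

    # Share of missing values per channel: gaps concentrated in one source are a warning sign.
    missing = combined[["age", "income"]].isna().groupby(combined["channel"]).mean()
    print(missing)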

Data Labelling

Labelling is one clear contributor to bias in datasets. Not examining the extent of that bias through tests would be naive.
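
One of the simplest tests this reflection points to is comparing label rates across groups. The sketch below assumes a hypothetical labelled table with a protected attribute; a gap in positive-label rates is a signal to investigate, not proof of bias:

    import pandas as pd

    # Hypothetical labelled dataset; "group" is a protected attribute, "label" the assigned class.
    df = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "label": [1, 1, 1, 0, 1, 0, 0, 0],
    })

    # One simple test: compare positive-label rates per group.
    rates = df.groupby("group")["label"].mean()
    print(rates)
    print("positive-label rate gap:", abs(rates["A"] - rates["B"]))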

Normalization

As much as normalization steps (outlier removal, deduplication, etc.) contribute to a useful dataset for building the model, they may also contribute to bias.
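
A minimal sketch, assuming a hypothetical feature whose distribution differs between two groups, of how a routine outlier rule can quietly remove far more of one group than the other:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    # Hypothetical feature whose distribution differs between two groups.
    df = pd.DataFrame({
        "group": ["A"] * 500 + ["B"] * 500,
        "income": np.concatenate([rng.normal(50, 5, 500), rng.normal(70, 20, 500)]),
    })

    # A routine outlier rule: drop rows more than 2 standard deviations from the overall mean.
    z = (df["income"] - df["income"].mean()) / df["income"].std()
    kept = df[z.abs() <= 2]

    # Check how much of each group survives the "neutral" cleaning step.
    print(kept["group"].value_counts() / df["group"].value_counts())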

Sampling

Sampling is an ethical choice. A choice that can harm one or more groups of people if representativeness is not adequately considered.
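
As a sketch of what "adequately considering representativeness" can mean in practice, assuming a hypothetical population in which group B is a small minority, a stratified draw (here via scikit-learn's train_test_split) guarantees the minority's share where a naive random draw only matches it on average:

    import numpy as np
    from sklearn.model_selection import train_test_split

    # Hypothetical population in which group B is a small minority (50 of 1000 people).
    groups = np.array(["A"] * 950 + ["B"] * 50)

    # A naive random draw only matches the minority share on average; stratifying guarantees it.
    for rs in range(3):
        _, sample = train_test_split(groups, test_size=100, random_state=rs)
        print("naive draw", rs, "-> B count:", (sample == "B").sum())

    _, stratified = train_test_split(groups, test_size=100, stratify=groups, random_state=0)
    print("stratified draw -> B count:", (stratified == "B").sum())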

Causality

Models rely on a number of inferences, proxies, and assumed causations. If these inferences are not validated against ground truth, they will misrepresent correlation as causation.
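
A small sketch, on invented data, of how a proxy variable (a postcode standing in for group membership) can correlate strongly with an outcome it does not cause; without validation against ground truth, that correlation is easily read as causation:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    # Hypothetical: postcode is assigned in a way that tracks group membership (a proxy).
    group = rng.integers(0, 2, 2000)
    postcode = np.where(rng.random(2000) < 0.9, group, 1 - group)
    outcome = group * 0.7 + rng.random(2000) * 0.3  # the outcome is actually driven by group

    df = pd.DataFrame({"group": group, "postcode": postcode, "outcome": outcome})
    # Postcode correlates strongly with the outcome, but it is not the cause.
    print(df.corr())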

Seed

The choice of random seed while building a neural network can produce different outcomes, and such choices may bias one group against another.
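
A minimal sketch, using synthetic data and scikit-learn's MLPClassifier as a stand-in for a small neural network, of how the same model trained with different random seeds can land on different accuracies for each group:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    # Hypothetical data: two groups with slightly different feature/label relationships.
    n = 2000
    group = rng.integers(0, 2, n)
    X = np.column_stack([rng.normal(size=n), rng.normal(size=n), group])
    y = ((X[:, 0] + 0.5 * group * X[:, 1] + rng.normal(0, 0.5, n)) > 0).astype(int)

    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

    # Same architecture, same data; only the seed changes.
    for seed in [0, 1, 2]:
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=seed).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        acc_a = (pred[g_te == 0] == y_te[g_te == 0]).mean()
        acc_b = (pred[g_te == 1] == y_te[g_te == 1]).mean()
        print(f"seed={seed}  group A acc={acc_a:.3f}  group B acc={acc_b:.3f}")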

Learning Rate

The learning rate is a configurable hyperparameter. A choice of learning rate, along with a choice of optimizer, may produce different outcomes for different groups.
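
In the same spirit, a sketch on synthetic data of sweeping learning rate and optimizer pairs and recording the accuracy gap between two groups; the specific values are illustrative, not recommendations:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset; the "group" here is just a split on one feature.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    group = (X[:, 0] > 0).astype(int)
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

    # Same network, different learning rate / optimizer pairs.
    for solver in ["sgd", "adam"]:
        for lr in [0.0001, 0.01]:
            clf = MLPClassifier(hidden_layer_sizes=(16,), solver=solver,
                                learning_rate_init=lr, max_iter=300,
                                random_state=0).fit(X_tr, y_tr)
            pred = clf.predict(X_te)
            gap = abs((pred[g_te == 0] == y_te[g_te == 0]).mean()
                      - (pred[g_te == 1] == y_te[g_te == 1]).mean())
            print(f"{solver:4s} lr={lr:<7} accuracy gap between groups={gap:.3f}")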

Batch Size

The choice of batch size may have an impact on model performance and may also affect the parity between the individuals considered by a model.
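
A sketch, again on synthetic data with an invented group split, of varying only the batch size and watching the parity of positive predictions between groups:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset and group split, as a sketch only.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
    group = (X[:, 1] > 0).astype(int)
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)

    # Vary only the batch size and compare the rate of positive predictions per group.
    for batch in [16, 128, 1024]:
        clf = MLPClassifier(hidden_layer_sizes=(16,), batch_size=batch,
                            max_iter=300, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        parity_gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
        print(f"batch_size={batch:<5} demographic parity gap={parity_gap:.3f}")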

Epochs

Finding the optimal number of epochs for a model is a process. So is examining whether the choice of epochs amplifies pre-existing data bias.
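
A sketch of treating that examination as a process: assuming scikit-learn's MLPClassifier, where each partial_fit call is roughly one pass over the training data, the group accuracy gap can be tracked across epochs rather than read once at the end:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset and group split, as a sketch only.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=2)
    group = (X[:, 0] > 0).astype(int)
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=2)

    clf = MLPClassifier(hidden_layer_sizes=(16,), random_state=0)
    clf.partial_fit(X_tr, y_tr, classes=np.unique(y))  # first pass (epoch 1)
    for epoch in range(2, 31):
        clf.partial_fit(X_tr, y_tr)                    # one more pass per call
        if epoch % 10 == 0:
            pred = clf.predict(X_te)
            gap = abs((pred[g_te == 0] == y_te[g_te == 0]).mean()
                      - (pred[g_te == 1] == y_te[g_te == 1]).mean())
            print(f"epoch={epoch:<3} group accuracy gap={gap:.3f}")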

Activation & Loss Function

Activation and loss functions are mathematical contributors to how a model determines its outcomes. The choice of such functions may impact model fairness.
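
A sketch of the activation half of this reflection on synthetic data; scikit-learn's MLPClassifier fixes the loss, so an analogous sweep over loss functions would need a framework that exposes that choice:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset and group split, as a sketch only.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=3)
    group = (X[:, 0] > 0).astype(int)
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=3)

    # Same data and architecture; only the activation function changes.
    for act in ["relu", "tanh", "logistic"]:
        clf = MLPClassifier(hidden_layer_sizes=(16,), activation=act,
                            max_iter=300, random_state=0).fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        gap = abs((pred[g_te == 0] == y_te[g_te == 0]).mean()
                  - (pred[g_te == 1] == y_te[g_te == 1]).mean())
        print(f"activation={act:<9} group accuracy gap={gap:.3f}")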

Other Hyperparameters

Hyperparameters are one of the key sources of technical bias. The choice of architecture, layers, dropout, and so on may also contribute to bias.
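
A sketch of a small grid over architecture and regularisation strength on synthetic data, recording the accuracy gap between groups for each choice; dropout would play an analogous role in frameworks that support it:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset and group split, as a sketch only.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=4)
    group = (X[:, 0] > 0).astype(int)
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=4)

    # A small grid over layer sizes and L2 regularisation strength.
    for layers in [(8,), (32, 32)]:
        for alpha in [0.0001, 0.1]:
            clf = MLPClassifier(hidden_layer_sizes=layers, alpha=alpha,
                                max_iter=300, random_state=0).fit(X_tr, y_tr)
            pred = clf.predict(X_te)
            gap = abs((pred[g_te == 0] == y_te[g_te == 0]).mean()
                      - (pred[g_te == 1] == y_te[g_te == 1]).mean())
            print(f"layers={layers} alpha={alpha:<7} group accuracy gap={gap:.3f}")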

Outcome Bias

Optimization choices, and the perspective from which the hypothesis and its outcome are examined, are also factors that affect the bias contributed by a model.

Metric & Scale

The choice of metrics used to review model outcomes, and the scale at which a metric is examined, can be another contributor to bias.
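
A small worked example, with invented predictions, of how the overall scale and a single metric can look acceptable while the group scale and a different metric do not:

    import numpy as np

    # Hypothetical predictions for 10 people from group A and 10 from group B.
    y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,  1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
    y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,  1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
    group = np.array(["A"] * 10 + ["B"] * 10)

    # At the overall scale, on accuracy, the model looks acceptable...
    print("overall accuracy:", (y_pred == y_true).mean())

    # ...but at the group scale, on a different metric (false negative rate), it is not.
    for g in ["A", "B"]:
        mask = (group == g) & (y_true == 1)
        fnr = (y_pred[mask] == 0).mean()
        print(f"group {g}: false negative rate={fnr:.2f}")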

Group Fairness

Group fairness is not fair in circumstances where more opportunities are intended for under-privileged groups. Group fairness is not always the solution.
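
A small sketch with invented selection decisions: a programme that deliberately extends more opportunities to an under-privileged group fails a strict equal-selection-rate check, which is exactly the circumstance this reflection describes:

    import numpy as np

    # Hypothetical selection decisions where a programme deliberately extends
    # more opportunities to the under-privileged group B.
    group = np.array(["A"] * 10 + ["B"] * 10)
    selected = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0,  1, 1, 1, 1, 1, 1, 0, 0, 0, 0])

    rate_a = selected[group == "A"].mean()
    rate_b = selected[group == "B"].mean()

    # A strict group-fairness check (equal selection rates) would flag this policy as "unfair",
    # even though the disparity is the intended remedy.
    print(f"selection rate A={rate_a:.2f}, B={rate_b:.2f}, parity gap={abs(rate_a - rate_b):.2f}")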

Emergent Bias

The way outcomes interact with users, and the way they evolve, will also contribute to bias. If adverse-event monitoring is not in place to detect or prevent such effects, bias can creep in during the post-market phase.
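
A minimal sketch of post-market adverse-event monitoring, on invented weekly rates; the baseline gap and tolerance threshold are assumed policy values, not recommendations:

    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical post-deployment log: weekly positive-outcome rates per group.
    weeks = 12
    rate_a = 0.50 + rng.normal(0, 0.01, weeks)    # stable for group A
    rate_b = 0.50 - np.linspace(0, 0.15, weeks)   # slowly drifting down for group B
    baseline_gap = 0.02                           # gap observed at launch
    threshold = 0.05                              # assumed tolerance before a review is triggered

    for week in range(weeks):
        gap = abs(rate_a[week] - rate_b[week])
        flag = "REVIEW" if gap - baseline_gap > threshold else "ok"
        print(f"week {week:2d}: parity gap={gap:.2f} {flag}")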