AI and Bias
IT systems are ubiquitous and indispensable in our society, and it is widely agreed that they make our lives easier. The rise of machine learning, artificial intelligence (AI), and other complex algorithmic systems is driving rapid improvements. In general, these systems are desirable when they are designed ethically, as they can promote just and efficient decision-making. Poorly designed systems, however, can automate illegal discrimination. Manual discrimination is appalling, but an intelligent and efficient discriminating system is catastrophic for marginalized communities: automated decision-making processes are capable of creating and sustaining systematic oppression at scale.

Such scenarios can arise when existing societal bias is programmed into computers. For example, it could be a human resources policy not to recruit people of a certain origin; such an illegal policy could be discreetly programmed into a computer system. This is a plausible application scenario: systems built for machine learning can be adapted to stealthily hide unscrupulous motives. These programs train algorithms on historical data and predict outcomes for previously unseen data, so if the training data carries a history of unjust practices, that history will most likely be transferred to the new predictions. In addition, inappropriate formulation of the problem statement and poor choice of model features are known causes of bias.

These days, many such outcomes are labeled as unconscious bias, yet a savvy and malicious actor can consciously design a biased system. Take, for example, an AI system that is marketed as facial recognition technology for predictive policing but in reality disproportionately misclassifies people with non-Western facial features as criminals. Such intentions can be inconspicuously built into a machine learning algorithm. Other programs can disproportionately mislabel subpopulations as low-skilled, unintelligent, non-creditworthy, incompetent, high-risk, and so on.

My lab approaches audits from a socio-technical perspective. It performs procedural audits that holistically review the principles behind an AI system.
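To make the transfer of historical bias concrete, here is a minimal sketch of the mechanism and of one common audit check. It uses entirely synthetic data; the scenario, the column names (`skill`, `group`, `hired`), and the use of scikit-learn are illustrative assumptions, not a description of any real system or of the lab's actual audit tooling.

```python
# Minimal sketch (synthetic, hypothetical data): biased historic labels
# transfer to a trained model, and a simple disparate-impact audit surfaces it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature (skill) and one protected attribute (group).
# Skill is identically distributed in both groups by construction.
group = rng.integers(0, 2, n)   # 0 = majority, 1 = marginalized (assumed labels)
skill = rng.normal(0.0, 1.0, n)

# Biased historic decisions: past recruiters systematically penalized group 1,
# so the labels encode discrimination rather than ability alone.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# Train on the biased history. Although skill is identical across groups,
# the model learns the penalty because the protected attribute is a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Disparate-impact audit: compare selection rates between groups. The
# "four-fifths rule" commonly flags ratios below 0.8 as adverse impact.
def selection_rate(g):
    return pred[group == g].mean()

print(f"selection rate, group 0: {selection_rate(0):.2f}")
print(f"selection rate, group 1: {selection_rate(1):.2f}")
print(f"disparate impact ratio:  {selection_rate(1) / selection_rate(0):.2f}")
```

Note that simply dropping the protected attribute from the feature matrix would not guarantee fairness, since other features can act as proxies for it; that is one reason a procedural audit reviews the problem formulation and feature choices behind a system, not just its code.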