Olutayo Adegoke

Decolonise AI


VISION

Decolonise AI is a social change and innovation lab that improves algorithmic and machine learning (AI) systems, eliminating the inequities and harms that affect vulnerable sub-populations.



 


OLUTAYO ADEGOKE, PhD

I am passionate about new IT and computerised decision-making systems. I started Decolonise AI to address some of the societal problems associated with implementing these innovative systems.




Risk of Algorithms

Poorly designed algorithms, especially machine learning AI systems, can harm already marginalized communities. Stakeholder oversight is therefore required to prevent such harm.





Benefit of Algorithms

Responsibly designed algorithmic systems tend to produce growth and equity that benefit everyone.




I have developed expertise at the intersection of engineering, AI and human rights. As a user advocate, my main responsibility is to identify and correct systems that are likely to cause differential harms to vulnerable consumers. My interests include:

 


Applied research

I investigate socio-technical aspects that can be readily implemented in practice. I plan to publish my forthcoming articles in technology ethics journals.



Audits and Reviews

I keep abreast of the developments in standards and literature necessary to review, audit and improve system performance, and I develop innovative and efficient audit procedures. Audits are crucial early in system design to understand the problem statements, raise issues and address risks, and the current regulatory environment also makes conformity assessments imperative. I support independent reviews from a marginalized user perspective. Evaluating model performance in terms of bias and fairness is an area of specialization.
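A bias/fairness evaluation of the kind described above can be sketched, for instance, as a demographic parity check: comparing positive-outcome rates across groups. The function name, data and groups below are purely illustrative, not a description of any specific audit I perform:

```python
# A minimal sketch of one group-fairness metric, assuming binary
# predictions and a binary protected attribute (toy data).

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in (0, 1):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

# Toy example: group 1 receives favourable outcomes far more often.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]

print(round(demographic_parity_difference(y_pred, group), 2))  # → 0.6
```

A value near zero suggests parity of outcomes between the groups; a large gap, as here, would prompt further investigation in an audit.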

 

Advisory

Much attention is now on big tech and the associated large language models (LLMs), while other applications commonly deployed across society are ignored. I attend to these less prominent but potentially dangerous applications, advising on the risks and opportunities peculiar to implementing AI in specific domains such as healthcare, transportation, finance, insurance, security, justice, welfare, recruitment and enrolment. I perform risk analysis and advise on methodologies for quality assurance and failure mode and effects analysis. I help identify and fix computations that carry a high risk of inflicting harm, and I advocate against pseudo-scientific systems. I help formulate unambiguous model problem statements that produce equitable outcomes, and I support algorithmic and socio-technical feature selection, including identifying problematic proxies. I assess data quality in terms of representation; detecting labeling bias is an area in which my lab is proficient.
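One way to flag a potentially problematic proxy is to check how strongly each candidate feature correlates with a protected attribute that has been excluded from the model. The sketch below is illustrative only; the feature names, data and threshold are hypothetical, not drawn from any real audit:

```python
# A minimal sketch of proxy screening: features that track a protected
# attribute closely may reintroduce it into the model by the back door.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

protected = [0, 0, 0, 0, 1, 1, 1, 1]  # e.g. membership in a marginalized group
features = {
    "postcode_score":   [12, 14, 11, 13, 31, 29, 33, 30],  # tracks the group closely
    "years_experience": [3, 9, 5, 7, 4, 8, 6, 2],          # roughly independent
}

THRESHOLD = 0.7  # illustrative cut-off for audit follow-up
for name, values in features.items():
    r = pearson_r(values, protected)
    if abs(r) > THRESHOLD:
        print(f"flag {name} as a possible proxy: |r| = {abs(r):.2f}")
```

In practice correlation is only a first-pass screen; a proxy can also be non-linear or emerge from combinations of features, so flagged features warrant closer socio-technical review rather than automatic removal.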

Implementing Systems Engineering

The human rights and legal problems of AI are numerous. The complexity of finding solutions to issues such as discrimination, safety, privacy, cybersecurity and transparency necessitates a systems engineering perspective. I define requirements and apply quality tools to ensure that AI delivers its intended outcomes while minimizing negative societal impacts.

Interpreting regulations and ethical principles

Ethical principles can be complex and vague. I use my expertise to formulate clear principles that are practical to implement, and I follow regulatory developments, breaking them down into simple socio-technical descriptions.