At Ajala Project, we aspire to be a game-changer in the world of Artificial Intelligence by tackling the urgent issue of inherent biases in large language models (LLMs). The challenges presented by biased AI range from shaping public opinion to affecting decisions in finance, healthcare, and the criminal justice system. Our project takes a targeted approach to addressing these shortcomings, offering a groundbreaking solution: the development of fine-tuned AI language models that can both self-correct and teach existing AI systems to mitigate their biases.
Our journey began with a comprehensive research phase in late 2022, during which we identified key vulnerabilities and opportunities in current AI models. This research has informed the architecture of our preliminary fine-tuned models, including the unique dataset parameters designed to ensure equitable results. Our next steps involve a strategic collaboration with commercial open-source LLM companies. Through these partnerships, we'll deepen our research, refine our models, and conduct real-world tests to gauge their efficacy and robustness.
But the real innovation lies in our dissemination strategy. Once our models are honed to perfection, we plan to release a detailed white paper that elucidates the step-by-step methodology for developing these improved AI models. The aim is to democratise this technology, empowering developers, particularly those from marginalised communities, to contribute their unique perspectives to AI development.
Furthermore, we have set specific milestones for the public release of prototype models, user feedback loops, and subsequent iterations. We've structured our project timeline to include quarterly reviews, during which we'll assess our models against newly emerging biases and societal shifts, ensuring that they remain as current and effective as possible.
In essence, the Ajala Project is not merely an attempt to patch up existing technology; it is an ambitious endeavour to rewrite the rulebook on how AI should be developed and who gets to contribute. We're not just making AI better; we're making it smarter, fairer, and more inclusive. This isn't just another tech project; it's a societal imperative for the digital age, driven by equality, diversity, and inclusion (EDI). Our work promises to be a watershed moment in AI development, marking a shift towards a more equitable, inclusive digital landscape that respects and reflects the rich diversity of human experience.
Funding requested: 49,056
Project duration: 2023-06-01 to 2023-11-30
Funding type: Grant for R&D
The significant expansion of the field of Artificial Intelligence (AI) has been propelled by advances in machine learning algorithms, the availability of large-scale datasets, and the increasing processing power of computers. In particular, recent months have seen remarkable growth in the application of AI, driven by the emergence of AI chatbots such as ChatGPT, which are built on vast quantities of diverse, high-quality data. This proliferation of AI technologies presents an enormous opportunity to address critical challenges across industries and domains, including the detection and mitigation of racial bias in language, which is the focus of our proposed project.
AI models are built on top of human language, which is inherently shaped by social, cultural, and historical factors, including racial biases and stereotypes. As a result, even the most advanced and sophisticated AI models have been shown to perpetuate these biases, and many existing approaches to addressing racial bias in language are reactive rather than proactive.
Addressing racial language bias in AI models is crucial for ensuring fairness and equity in decision-making processes that impact people's lives. This requires careful attention to data selection, model design, and evaluation, as well as ongoing monitoring and updating to ensure that the model remains free of biases.
Given these challenges, there is a need for new and innovative approaches to addressing racial bias in AI models and to providing AI assurance. Our proposed project aims to fill this gap by developing an AI assurance tool built as a transformer model, specifically designed to detect and mitigate racial bias in language. In doing so, we hope to contribute to a more equitable and just society by reducing the harmful impact of racial bias in AI-based decision-making processes.
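To illustrate the intended usage, below is a minimal sketch of how such a detector could be called once trained. The checkpoint name ajala/bias-detector is a hypothetical placeholder (no such model has been published), and the Hugging Face transformers library and output labels are assumed here purely for illustration.

```python
# Minimal sketch of invoking a bias detector, assuming a hypothetical
# fine-tuned checkpoint "ajala/bias-detector" (illustrative name only).
from transformers import pipeline

# Load a sequence-classification model fine-tuned to label text as
# BIASED or UNBIASED (the label set is an assumption for this sketch).
detector = pipeline("text-classification", model="ajala/bias-detector")

result = detector("Sentence to screen for racially biased language.")
print(result)  # e.g. [{'label': 'BIASED', 'score': 0.97}]
```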
Building this tool as a transformer model that can be integrated with any neural network is cost-effective thanks to its reusability, scalability, and flexibility, and to the availability of open-source libraries. It has the potential to reduce the cost and complexity of building and deploying AI models, while also increasing their accuracy and performance across a wide range of applications.
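As a sketch of what such integration might look like in practice, the following wrapper screens the output of any text-generation function with the detector from the previous sketch before returning it. The function names, labels, and threshold are illustrative assumptions, not part of a published API.

```python
# Illustrative sketch: wrap any text-generation callable with a
# bias-screening pass. All names and the threshold are assumptions.
from typing import Callable

def assured_generate(generate: Callable[[str], str],
                     detector: Callable[[str], list],
                     threshold: float = 0.9) -> Callable[[str], dict]:
    """Return a generation function whose outputs are screened for bias."""
    def wrapped(prompt: str) -> dict:
        text = generate(prompt)
        verdict = detector(text)[0]  # e.g. {'label': 'BIASED', 'score': ...}
        flagged = verdict["label"] == "BIASED" and verdict["score"] >= threshold
        return {"text": text, "flagged": flagged, "verdict": verdict}
    return wrapped

# Usage (with the hypothetical detector from the previous sketch):
# safe_generate = assured_generate(my_llm_generate, detector)
# print(safe_generate("Describe a typical software engineer."))
```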