
Public Funding for Mind Foundry Limited

Registration Number 09882159

Individualised insurance for older people: preserving independence and fairness while improving safety through data.

278,881
2023-04-01 to 2024-03-31
Collaborative R&D
Reaction time, vision and other faculties vital for the safe control of road vehicles decline with age, leading to an increase in risk and, in turn, in motor insurance premiums, somewhat analogous to the other end of the scale, where 17-25 year olds incur the highest premiums due to their inexperience. While inexperience is near-universal among young drivers, the older end of the scale is less homogeneous: some drivers represent a significantly increased risk from 65-70 years of age, while others remain equally competent well into their 90s. This results in undercharging of some policyholders and overcharging of others but, more importantly, means that some individuals who are unsafe both to themselves and to wider society are able to keep driving without the assistance and guidance they need to avoid harm. The key objectives of the project are to identify behaviours that indicate at-risk drivers, and to operationalise these in a functioning system that quantifies older-driver risk scores from telematics data, enabling response and handling by underwriters.
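At its simplest, a risk score of the kind described could be a weighted combination of telematics-derived driving features squashed to a 0-1 range. Everything in the sketch below (feature names, weights, the bias term) is an illustrative assumption, not the project's actual model:

```python
import math

def risk_score(harsh_brakes_per_100km: float,
               night_miles_fraction: float,
               speed_variance: float) -> float:
    """Combine hypothetical telematics features into a 0-1 risk score
    via a weighted sum passed through a logistic function. Weights and
    bias are invented for illustration only."""
    z = (0.8 * harsh_brakes_per_100km
         + 1.5 * night_miles_fraction
         + 0.05 * speed_variance
         - 2.0)  # bias chosen so a typical driver scores below 0.5
    return 1.0 / (1.0 + math.exp(-z))

# A smooth, mostly-daytime driver vs. an erratic, often-nocturnal one
low = risk_score(0.5, 0.1, 4.0)
high = risk_score(4.0, 0.4, 30.0)
```

In a real system the weights would be learned from claims and telematics history rather than hand-set, and the score would feed an underwriter workflow rather than a hard decision.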

Autonomous quantum technologies (AutoQT)

697,924
2022-03-01 to 2025-02-28
Collaborative R&D
Quantum computers are a new type of powerful computer. They are based on building blocks called qubits. For quantum computers to work, we need to be able to control qubits in a predictable way. Controlling just one or two qubits is often the culmination of several years' work in a laboratory and can only be performed by highly trained researchers. Qubits are extremely fragile and require constant delicate attention, like the continuous tweaks of a circus performer keeping a plate spinning. With each new plate, the effort needed to keep them all spinning increases. Eventually, with so many plates in the air at the same time, existing control methods quickly become overwhelmed. For quantum computing to become commercially useful, we need to be able to control hundreds or even thousands of qubits at the same time. This is the biggest bottleneck in quantum computing.

We will solve this challenge by building a system that can control hundreds of qubits and that can be used across different types of quantum computers. We will also use a type of artificial intelligence called machine learning to automate the tuning of qubits and maximise the time they are 'spinning in the air'.

This project brings together the UK's leading quantum software company (Riverlane), quantum hardware companies (SeeQC UK, Oxford Ionics) and research organisations (NPL, University of Oxford). They develop different types of qubits on which we can test our control system. Mind Foundry, a University of Oxford spin-out, will develop the artificial intelligence framework that can automatically keep the qubits "spinning". The University of Edinburgh will detect the state of the quantum computer and guarantee optimum performance after intervention. We will work together to combine quantum software and artificial intelligence to build a control system for quantum computers that is powerful and intelligent.
Our project brings together UK-based academic and industrial organisations to strengthen the UK quantum industry and help produce quantum computers that will transform the way several industries, such as finance, drug discovery and materials development, work.
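The automated-tuning idea can be illustrated with a toy: treat a qubit as a black box whose measured fidelity peaks at an unknown optimal control setting, then search for that setting from noisy measurements. The simulated fidelity function and the naive random search below are invented stand-ins for illustration; the project itself targets far more sample-efficient machine learning on real hardware:

```python
import math
import random

# Toy stand-in for a qubit readout: fidelity peaks at an unknown
# optimal control voltage and decays away from it (all values invented).
OPTIMAL_V = 0.37

def measured_fidelity(v: float) -> float:
    """Noisy measurement of fidelity at control voltage v."""
    noise = random.gauss(0.0, 0.005)
    return max(0.0, min(1.0, math.exp(-((v - OPTIMAL_V) ** 2) / 0.02) + noise))

def tune(n_probes: int = 200, seed: int = 0) -> float:
    """Derivative-free search: probe random voltages, keep the best.
    A real system would use sample-efficient Bayesian optimisation,
    since every probe costs scarce experiment time."""
    random.seed(seed)
    best_v, best_f = 0.0, -1.0
    for _ in range(n_probes):
        v = random.uniform(0.0, 1.0)
        f = measured_fidelity(v)
        if f > best_f:
            best_v, best_f = v, f
    return best_v

found = tune()
```

The point of the sketch is the framing, not the search method: once tuning is posed as black-box optimisation of a measured objective, machine learning can take over the "plate spinning" from human researchers.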

Continuous Metalearning for AI lifecycle governance

349,784
2021-10-01 to 2022-09-30
Feasibility Studies
Mind Foundry is applying for an Innovate UK SMART Grant to fund a critical area of AI innovation: continuous metalearning. Continuous learning is an umbrella term for AI models that continue to learn and adapt after the point of deployment, not only about their task but also about their own learning process, using data that they receive periodically (through a continual process) or continuously. Mind Foundry asserts that measuring, managing, explaining and governing a continuous learning AI system requires a metalearning approach. This will likely require creating passports and containers for both the data and the models involved in a particular AI system, capturing the necessary data and provenance history on the model's usage, performance and calibration over time.

An evolving system will require proactive safeguards and specifications to ensure it complies with upcoming regulatory requirements, such as the proposed EU AI regulation. To that end, full disclosure and transparency about model and system evolution throughout the lifecycle, as well as accurate monitoring and management of changes in the model's behaviour over time, will become a prerequisite for AI system procurement in both the public and private sectors.

The initial goals of Mind Foundry's approach are to continuously detect the key warning signals of an AI system: purpose drift (a deployed model, and thus its data, being used in a way that was not intended), model overfitting (through overuse of a single training data set), data governance (the right to deletion), interpretability (the right to an explanation), and security (the risk of model-inversion attacks). Ultimately, the approach can be extended to AI systems that action appropriate mitigations, raise alerts, and collaborate directly with human users.
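A common building block for this kind of continuous monitoring is a drift statistic comparing live data against a reference sample captured at deployment. The Population Stability Index sketch below is a generic illustration of the idea, not Mind Foundry's method; the 0.2 threshold is a widely used rule of thumb:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a live
    sample: sum over bins of (a - e) * ln(a / e), where a and e are the
    per-bin proportions. A rule of thumb treats PSI > 0.2 as drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty bins (proportions then
        # no longer sum exactly to 1, acceptable for a sketch)
        return [max(c / len(xs), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ai, ei in zip(a, e))

reference = [i / 100 for i in range(100)]      # uniform on [0, 1)
shifted = [0.5 + i / 200 for i in range(100)]  # concentrated on [0.5, 1)
stable_psi = psi(reference, reference)
drift_psi = psi(reference, shifted)
```

In a metalearning setting, statistics like this would be computed continuously over model inputs, outputs and calibration, and recorded in the model's passport as part of its provenance history.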

Geospatial solution for EV Chargepoint Infrastructure

445,540
2021-09-01 to 2022-05-31
Collaborative R&D
The UK is the first major world economy to pledge net zero carbon emissions by 2050. The achievement of this goal will require transformations in a number of sectors. To this end, sectors such as transport and urban planning have embarked upon a nimble, focused and data-driven approach to addressing their challenges. The energy sector can take a similar approach, combining data from the energy sector with data from additional sectors, such as transport, to deliver targeted solutions at the local level. We intend to use geospatial data, which combines location information with attribute and often temporal information, to contribute to the evolution of the distributed energy paradigm. The ability to analyse sparse or incomplete data with respect to, for example, smart meters, electric vehicle (EV) uptake, EV charging type, locations and user profiles has a range of implications that are difficult to model with conventional approaches. We are building an energy-focused geospatial system that will enable the user to visualise overlays of multivariate spatially and temporally varying data; model and predict trends and correlations; infer across areas of sparse data collection; and model the effects of changes to the system, such as varying supply, demand or infrastructure. It will further allow for the simulation and testing of different strategies, for example alternative charging point placement. We will address these challenges using our world-class expertise in Bayesian optimisation and scalable probabilistic models. Using a probabilistic approach will significantly improve the accuracy of AI models over existing AI-based tools. The successful application of this approach will enable the local energy sector to be more quantitative and targeted in its planning and prioritising of resources.
The successful transition to net zero local energy systems requires not only cross-sectoral data and advanced geospatial and machine learning models and techniques, but also, crucially, extensive collaboration with a broad set of stakeholders to properly understand their needs. We have already taken some initial steps in this process. For example, with respect to EVs, we are in dialogue with local authorities and commercial organisations who are currently looking for immediate support with the analysis of EV charger types, the optimised roll-out of EV charging points and the implications for the capacity of local energy networks.

AI enabled EV charge point location optimiser

227,366
2021-08-01 to 2022-03-31
Collaborative R&D
The UK is the first major economy to pledge net zero carbon emissions by 2050. The achievement of this goal will require transformation in the energy sector. The application of cross-sectoral geospatial data, which combines location information with attribute and temporal information, can contribute to the evolution of the distributed energy paradigm. The ability to analyse sparse or incomplete data with respect to, for example, smart meters, electric vehicle (EV) uptake, EV charging type and locations (rural and urban), and user profiles has a range of implications that are difficult to model with conventional approaches. We propose the creation of an energy-focused geospatial system that will enable the user to visualise overlays of multivariate spatially and temporally varying data; model and predict trends and correlations; infer across areas of sparse data collection; and model the effects of changes to the system, such as varying supply, demand or infrastructure. It will further allow for the simulation and testing of different strategies, for example alternative charge point placement. We will address these challenges using our world-class expertise in Bayesian optimisation and Gaussian Process (GP) models. GPs can handle low-data regimes and provide values even in the presence of missing and partial information. Key outcomes of this project will be to address data usability in support of geospatial modelling, which in turn will support decision-making across a range of stakeholders. Our proposal to combine disparate, exogenous and unstructured data sources with geospatial data is not unique. However, we are the first to use GPs with geospatial data, giving superior adaptability, accuracy, interpretability and explainability. Using GPs will significantly improve the accuracy of our tool versus state-of-the-art competitors. The successful application of this approach will enable more quantitative and targeted local planning and prioritising of resources.
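The claim that GPs handle low-data regimes can be made concrete with a minimal sketch: condition a zero-mean GP with an RBF kernel on a handful of one-dimensional "observations" (invented numbers standing in for, say, EV uptake at a few locations) and read off a posterior mean plus a variance that grows away from the data:

```python
import numpy as np

def gp_posterior(X, y, X_star, length_scale=1.0, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP with an RBF
    kernel, conditioned on noisy observations (X, y)."""
    def rbf(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length_scale ** 2)

    K = rbf(X, X) + noise * np.eye(len(X))   # train covariance + noise
    K_s = rbf(X, X_star)                     # train/test covariance
    K_ss = rbf(X_star, X_star)               # test covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

# Four sparse observations (values invented for illustration)
X = np.array([0.0, 1.0, 3.0, 6.0])
y = np.array([0.2, 0.5, 0.9, 0.4])
# Query at a training point, a gap, and far outside the data
mean, var = gp_posterior(X, y, np.array([0.0, 2.0, 10.0]))
```

The useful property for sparse geospatial data is visible in `var`: uncertainty is small near observations and large far from them, so downstream planning decisions can be weighted by how well-supported each prediction is.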
The successful transition to net zero also requires extensive collaboration with a broad set of stakeholders. A key feature of our proposed product is that it will enhance human-AI collaboration by providing an interpretable decision-making tool for a range of different users, including Local Authorities (LAs), Charge Point Operators (CPOs) and Distribution Network Operators (DNOs). Bringing all these stakeholders together is the first step towards the development of a strong business case that can drive investment in the sustainable energy sector. We have already taken initial steps in this process by engaging with Oxfordshire County Council and Zeta, who support our application.

Next generation of Privacy and Security for AI Systems

343,048
2021-07-01 to 2022-12-31
Collaborative R&D
The application of Artificial Intelligence (AI) models across the economy is expanding exponentially. The technology and infrastructure to gather, process and make inferences from vast quantities of data are enabling governments, companies and individuals to augment their existing decision-making processes with machine learning models, opening the door to increasingly automated pipelines and solutions. However, this exponential growth has also highlighted the need for regulation, with many governments and controlling bodies scrambling to assess the new risks that AI models introduce to the security and privacy of individual, corporate and governmental data.

At Mind Foundry, we provide solutions where humans and AI work together to solve the world's most important problems. We work on high-stakes applications of AI, where the correct and ethical use of data, algorithms and models has a direct impact on human outcomes and there is the potential for decisions with population-level impact. In doing so, we are uniquely invested in ensuring that we have a robust, transparent and ethical framework in which to build, test and deploy AI models, confident that at every stage of the AI lifecycle, security and privacy are deeply embedded, not an afterthought.

This project seeks to build a new standard for AI security: self-monitoring AI. This standard must be capable of proactively taking or suggesting actions relevant to data security and privacy at each stage of the AI model lifecycle, from data gathering to productionised deployment. It must expose weaknesses in a particular pipeline, protect deployed models from vulnerabilities and attacks, and correctly identify any behaviours that may infringe on the privacy or security of the model in the future. This functionality does not currently exist.
Some existing solutions in this space take a narrow view of different parts of the AI lifecycle, largely focused on data security and/or model privacy, but this is not sufficient to safeguard the application of AI and ensure compliance. Nor do they take into account the future degradation of a model as it is queried. We believe that the only sustainable solution is a set of technologies that not only assures the compliance of data and model pipelines at the point of deployment, but also gives a model the ability to self-monitor, self-correct and report risks as they are identified.

Energy-focused geospatial system using multi-sectoral data to deliver net zero

145,211
2021-04-01 to 2021-06-30
Small Business Research Initiative
The UK is the first major world economy to pledge net zero carbon emissions by 2050. The achievement of this goal will require transformations in a number of sectors. To this end, sectors such as transport and urban planning have embarked upon a nimble, focused and data-driven approach to addressing their challenges. The energy sector can take a similar approach, combining data from the energy sector with data from additional sectors, such as transport, to deliver targeted solutions at the local level. We intend to use geospatial data, which combines location information with attribute and often temporal information, to contribute to the evolution of the distributed energy paradigm. The ability to analyse sparse or incomplete data with respect to, for example, smart meters, electric vehicle (EV) uptake, EV charging type, locations and user profiles has a range of implications that are difficult to model with conventional approaches. We propose the creation of an energy-focused geospatial system that will enable the user to visualise overlays of multivariate spatially and temporally varying data; model and predict trends and correlations; infer across areas of sparse data collection; and model the effects of changes to the system, such as varying supply, demand or infrastructure. It will further allow for the simulation and testing of different strategies, for example alternative charging point placement. We will address these challenges using our world-class expertise in Bayesian optimisation and scalable Gaussian Process (GP) models. Using GPs will significantly improve the accuracy of AI models over existing AI-based tools. The successful application of this approach will enable the local energy sector to be more quantitative and targeted in its planning and prioritising of resources.
The successful transition to net zero local energy systems requires not only cross-sectoral data and advanced geospatial and machine learning models and techniques, but also, crucially, extensive collaboration with a broad set of stakeholders to properly understand their needs. We have already taken some initial steps in this process. For example, with respect to EVs, we are in dialogue with a local council who are currently looking for immediate support with the analysis of EV charger types, the optimised roll-out of EV charging points and the implications for the capacity of local energy networks.

Development and commercialisation of Green AI Auditor to enable sustainable and resource-efficient AI uptake for the financial services market

99,620
2020-11-01 to 2021-04-30
Collaborative R&D
COVID-19 has accelerated digital transformations across our economy. Many companies successfully navigating the pandemic owe their resilience to unconstrained digitisation programmes. Artificial intelligence (AI) lies at the heart of many of these transformations, and so the pandemic is growing, and will continue to grow, the number of AI models being developed and deployed by industry. However, AI models currently come at a cost: they are computationally expensive to create and deploy, which creates significant barriers to entry for smaller businesses and individuals as well as a large AI carbon footprint. The consequences of COVID-19 have brought this problem from the future to the present and made it even more acute. However, it has been clear for some time that AI's carbon footprint will eventually come to tarnish the heralded fourth industrial revolution. Since 2012, the computation required for deep learning research has increased approximately 300,000-fold [[Allen Institute][0]], training a single deep neural network can now emit ~284 tonnes of CO2, equivalent to five times the lifetime emissions of an average car [[Strubell et al][1], [NewScientist][2]], and building superhuman Go-playing AI systems has cost tens of millions of dollars' worth of compute [[Allen Institute][0], [Yuzeh][3]]. Mind Foundry is developing the Green AI Auditor (GAIA), a technology that will enable business users to predict, monitor and mitigate the carbon footprint generated by the computational requirements of their AI technology projects. Mind Foundry is on a mission to democratise AI, and has already built the world's most advanced "human-in-the-loop" AI platform. GAIA will bring Mind Foundry's clients the ability to make their AI digital transformations sustainable.
[0]: https://arxiv.org/pdf/1907.10597.pdf
[1]: https://arxiv.org/pdf/1906.02243.pdf
[2]: https://www.newscientist.com/article/2205779-creating-an-ai-can-be-five-times-worse-for-the-planet-than-a-car/
[3]: https://www.yuzeh.com/data/agz-cost.html
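The kind of footprint prediction described above starts from simple arithmetic: energy consumed, scaled by data-centre overhead, times the grid's carbon intensity. The sketch below uses invented figures for power draw, duration, PUE and grid intensity purely for illustration:

```python
def training_co2_kg(power_kw: float, hours: float,
                    grid_kg_co2_per_kwh: float, pue: float = 1.5) -> float:
    """Back-of-envelope CO2 estimate for a training run: energy (kWh)
    scaled by data-centre overhead (PUE) and the grid's carbon
    intensity (kg CO2e per kWh). All default figures are illustrative."""
    return power_kw * hours * pue * grid_kg_co2_per_kwh

# e.g. 8 accelerators at 0.3 kW each, running for two weeks
estimate = training_co2_kg(power_kw=8 * 0.3, hours=14 * 24,
                           grid_kg_co2_per_kwh=0.2)
```

A practical auditor would replace each invented constant with measured power draw, actual wall-clock time and the live carbon intensity of the hosting region, and would monitor the total across a whole project rather than a single run.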

Development and Commercialisation of AI certification platform for the banking sector

174,910
2020-10-01 to 2021-06-30
Collaborative R&D
In the wake of COVID-19, businesses of all sizes will need loans to survive, rebuild and grow their way back to economic success. Banks will be needed to release liquidity into the economy, making the right lending decisions without bias towards, or discrimination against, any section of the economy or our society. At the same time, these same banks are experiencing severe and unexpected loan impairment due to COVID-19 and, as a result, are taking a cautious approach to credit. Modern banks already make use of machine learning (ML) systems to quickly and efficiently evaluate and act on loan applications across their lending operations. However, banks currently rely on existing compliance and governance structures to manage these new decision-making systems, structures which lack the knowledge and expertise to anticipate the possible failings of ML systems. Banks themselves project that they will become increasingly reliant on the automation and scalability that their ML models can and do provide. This poses a significant risk to banks' compliance with industry standards and regulations, such as non-discrimination or data privacy. Through this nine-month industrial research project, Mind Foundry Ltd will deliver a tool addressing the compliance of ML models for approving loan applications. The solution will consist of a certification system comprising a definition of compliance for ML models in the field of loan approval, methods for exposing when trained ML models are not compliant, and a mechanism for applying these methods to trained models.
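One possible ingredient of such a certification system is a quantitative non-discrimination check over a model's decisions. The demographic parity sketch below is a generic illustration with invented data and an arbitrary threshold, not the project's actual definition of compliance:

```python
def approval_rate(decisions, groups, group):
    """Fraction of applicants in `group` whose loan was approved."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Invented decisions (1 = approved) for two hypothetical groups
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups)
flagged = gap > 0.1  # illustrative threshold for flagging a model
```

Real non-discrimination standards involve more than one metric (and sometimes conflicting ones), which is why the project frames compliance as a certification system with an explicit definition, rather than a single statistic.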
