
Public Funding for Unitary Ltd

Registration Number 12044127

Making cities safer and cleaner using micro-mobility and machine learning

168,372
2023-01-01 to 2024-03-31
Collaborative R&D
The use of micro-mobility (e-bikes and scooters) has been growing significantly, with a **projected market size of $500bn by 2030** (McKinsey 2021). The use of e-bikes and scooters, both shared and private, is already beginning to form a significant part of the urban transport network, with journeys doubling each consecutive year from 2019 to 2021. **Micro-mobility will be key to reducing traffic and achieving Net Zero by 2030**.

This growth in micro-mobility has not been without its challenges, in particular **safety and parking compliance**. Several initial trials led to streets being cluttered and thoroughfares obstructed, which in turn led to **clamp-downs on use and parking bay requirements being mandated.** Residents and businesses are unhappy. Visually impaired and blind people in particular have suffered, with **charities calling for micro-mobility schemes to be terminated** (The Guardian, 27/04/2022). **Suppliers and local authorities still struggle to incentivise and police parking.**

We know that we can do better. Captur and Unitary want to make micro-mobility work for everyone by providing suppliers, customers and local authorities with **AI-enabled technology to de-clutter the streets.** By building **safety verification algorithms** to automatically check the context in which vehicles are parked, based on images taken by users post-ride, we will help to improve parking compliance, **make streets safer and reduce friction.** Ultimately, this will contribute considerably to micro-mobility uptake and eventually **reduce car use and emissions in urban areas.**
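The post-ride verification flow described above could be sketched as follows. This is a minimal illustration only, assuming a hypothetical set of detection labels and a `verify_parking` decision step; none of these names reflect Captur's or Unitary's actual models or APIs, and a real system would run a vision model over the image upstream.

```python
from dataclasses import dataclass

# Hypothetical scene labels a compliance model might emit per image;
# illustrative only, not an actual product vocabulary.
OBSTRUCTING_CLASSES = {"doorway", "crossing", "tactile_paving", "narrow_pavement"}

@dataclass
class Detection:
    label: str         # scene element found near the parked vehicle
    confidence: float  # model confidence in [0, 1]

def verify_parking(detections: list[Detection], threshold: float = 0.5) -> str:
    """Classify a post-ride photo as compliant or non-compliant.

    In a real pipeline the detections would come from a vision model run
    on the user's photo; here they are supplied directly for illustration.
    """
    for d in detections:
        if d.label in OBSTRUCTING_CLASSES and d.confidence >= threshold:
            return "non_compliant"  # vehicle blocks a thoroughfare
    return "compliant"

# A photo showing the scooter parked on tactile paving vs. by a bike rack:
print(verify_parking([Detection("tactile_paving", 0.91)]))  # non_compliant
print(verify_parking([Detection("bike_rack", 0.88)]))       # compliant
```

The point of the sketch is the decision structure: compliance is judged from the parked vehicle's surroundings, not the vehicle alone.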

Spinor: building novel algorithms to identify abuse in online text

50,000
2021-03-01 to 2022-03-31
Collaborative R&D
I am the CEO and co-founder of Unitary, a startup working to make the internet safer by building technology to identify and remove harmful content online. To date, we have been focusing on image and video moderation, but we plan to capitalise on a growing demand for text-based moderation by developing novel algorithms to interpret abuse in online text.

Shear: The next generation of video understanding technology to automate content moderation across the internet.

265,840
2020-11-01 to 2022-04-30
Study
In this project, Unitary Ltd and Oxford University will develop novel algorithms to address the core challenges of video moderation. This technology will form Unitary's new product, _Shear_, to automatically detect harmful video content online. Automated moderation is desperately needed to ensure both speed and accuracy, and to protect moderators' mental health.

Current solutions treat each video as a series of frames and apply image analysis. The audio is analysed separately to detect keywords. But any understanding of time (the order of frames), or awareness of context, is lost. **Videos carry fundamentally more information than images, and consequently there is an enormous volume of harmful videos for which this approach completely fails.** Below are some types and examples of videos which are currently impossible to detect with automated means:

1. Videos in which understanding **interactions** is essential. E.g., an individual frame containing a gun would not necessarily give away whether this involves a real-life massacre, a computer game or a movie scene.
2. Videos which require an understanding of **motion** or awareness of time. Videos depicting animal cruelty are unfortunately common. In one example, a dog is seen next to a man holding a baseball bat. The bat swings, the screen goes dark and a horrible crunch is heard. This is an extremely disturbing video, but no individual frame can raise alarm.
3. Videos in which **multiple signals** must be interpreted **together**. Videos designed to influence and harm children often include popular cartoons which have been manipulated so that the characters ask the audience (i.e. children) to do dangerous things, such as "Turn the oven on", or to play with electric wires/sockets. The images alone show nothing but familiar cartoons, and the audio itself is not cause for concern -- there is no profanity, and in fact it might be mistaken for an adult's DIY video! But the combination of this audio inside a cartoon is what makes it unacceptable.
4. Videos in which **context** is key. Visually similar content can be harmful or benign depending on other factors: e.g. a nude portrait could be posted alongside a feminist message or narration by a sexist troll.

This project will result in breakthrough technology that can interpret a variety of signals to enhance understanding of time and context, enabling improved detection of videos such as those described above. We aim to disrupt the moderation industry, one which is currently extremely manual and ripe for innovation.
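The failure mode described above, where each signal looks benign in isolation but their combination is harmful, can be illustrated with a toy comparison. The scores below are made-up numbers standing in for per-frame and per-timestep model outputs; the additive fusion is a deliberately simple stand-in, not Shear's actual method.

```python
# Frame-wise moderation vs. a joint multimodal score (illustrative only).

def frame_wise(frame_scores):
    """Current approach: flag the video if any single frame looks harmful."""
    return max(frame_scores) > 0.8

def joint(frame_scores, audio_scores):
    """Toy joint score: a visually benign frame paired with alarming audio
    at the same timestep can still trip the detector."""
    combined = [f + a for f, a in zip(frame_scores, audio_scores)]
    return max(combined) > 0.8

# Manipulated cartoon: familiar images, but harmful spoken instructions
# arriving mid-way through the clip.
frames = [0.1, 0.2, 0.1, 0.2]  # per-frame visual harm scores (all benign)
audio  = [0.0, 0.1, 0.7, 0.7]  # per-timestep audio harm scores

print(frame_wise(frames))   # False - no single frame raises alarm
print(joint(frames, audio)) # True  - the combination does
```

In practice the fusion would be learned rather than additive, but the sketch shows why signals must be interpreted together rather than stream by stream.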
