
Funding amount: 49,340
Project period: 2023-06-01 to 2023-11-30
Funding type: Grant for R&D
466 million people worldwide have hearing loss. Our proposed solution helps break down communication barriers by providing real-time translation of British Sign Language (BSL). Our project is a mobile application that uses vision-based machine learning (ML) to translate BSL. Our innovation lies in the use of this technology and in creatively training the robust ML model that underpins it, to improve accessibility for people with hearing impairments.

To date, the limited reliability of ML systems has stunted the development of accessible solutions for sign language. BSL is a complex, three-dimensional language, so the robustness of ML applications is lacking: hand gestures, body posture, facial expression and context all influence the choice of signs, and individual bias (accent) and regional dialect similarly apply. This is further compounded by an insufficient corpus of data with which to train the ML for a general-purpose BSL translator. The lack of accurate and reliable ML systems also leads to a lack of trust and compliance from users.

As human-centred designers, we feel this frustration and appreciate that the intuitive and accessible user experiences we value are not available to marginalised groups. The lack of technical development in this area means current solution providers resort to human interpreters, in person or as an on-demand service over video conferencing. This is very costly and restricts the freedom of hearing-impaired individuals and, ultimately, their quality of life.

Our application has the potential to serve a wide range of use cases, but to stage-gate and constrain the technical challenge we focus first on ordering in coffee shops. Beyond that, it could be used in restaurants, retail, education, and any video conferencing software to help people communicate with and understand those with hearing impairments.
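As a purely illustrative sketch, not our production pipeline, vision-based sign recognition of this kind typically reduces each video frame to a set of body and hand landmark coordinates and then classifies the resulting sequence. All landmark values, sign labels and helper names below are hypothetical; a real system would extract landmarks from video with a vision model and use a trained classifier rather than nearest-template matching:

```python
# Illustrative only: classify a "sign" given as a sequence of per-frame
# (x, y) landmark points, by nearest-template cosine similarity.
import math

def flatten(seq):
    """Flatten a sequence of landmark frames into one feature vector."""
    return [coord for frame in seq for point in frame for coord in point]

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def classify(sample, templates):
    """Return the template label most similar to the sample sequence."""
    feats = flatten(sample)
    return max(templates, key=lambda label: cosine(feats, flatten(templates[label])))

# Toy templates: two 2-frame "signs", each frame holding two landmarks.
templates = {
    "HELLO":  [[(0.1, 0.9), (0.2, 0.8)], [(0.3, 0.7), (0.4, 0.6)]],
    "THANKS": [[(0.9, 0.1), (0.8, 0.2)], [(0.7, 0.3), (0.6, 0.4)]],
}
sample = [[(0.12, 0.88), (0.21, 0.79)], [(0.31, 0.69), (0.41, 0.59)]]
print(classify(sample, templates))  # → HELLO
```

Even this toy shows why robustness is hard: the classifier sees only hand positions, whereas real BSL meaning also depends on posture, facial expression and context, which is why a richer model and a larger corpus are needed.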
We are fully ready to begin before 1 June 2023 and, over six months, plan to deliver an accessible, human-centric ML application for one use case, together with a plan to use training data, both our own and open-source corpora, to deliver a desirable experience. This validation and exposure will enable us to extend the technology to other use cases as companies adopt it, and to grow the business more generally. In short, the current challenge is that the tools available today cannot reliably translate sign language; in this project we will build the tools to effectively train an ML model.