Public Funding for Habitat Learn Limited

Registration Number 11727701

DeepLlama

£264,389
2024-03-01 to 2025-02-28
Collaborative R&D
Habitat Learn Limited (HLL) provides personalised learning software, Messenger Pigeon, using Automated Speech Recognition ("ASR") and generative AI to improve student learning outcomes. HLL has 50,000+ student-users and 400 HEI customers (B2B2C with the student as the end user) in the UK and North America. DeepLlama builds upon the automatic population of Knowledge Graphs developed in IUK project DeepClover (#10007097). HLL will partner with the University of Southampton (UoS). DeepLlama will develop a trustworthy, responsive digital learning assistant using generative AI and an education-based Large Language Model (LLM) to promote adaptive learning for individual students, improving teaching and learning accessibility, especially for students with disabilities.

**Need and significance**

Post-pandemic, most HEIs adopted a blend of in-class and online learning [QS Quacquarelli Symonds, 2021]. However, remote learning platforms can create barriers to learning, particularly for students with learning disabilities or language differences [NUS survey, 2020]. Generative AI can analyse large amounts of digital learning content and adaptively help students with different learning requirements. However, designing trustworthy solutions for every learner means overcoming the hallucination and plagiarism concerns of using generative AI, and few adaptive learning solutions demonstrate improved learning outcomes [Eurasian Journal of Educational Research, 2022].

DeepMyna Phase 2

£731,457
2024-02-01 to 2025-01-31
Collaborative R&D
Habitat Learn Limited (HLL) provides personalised learning software, Messenger Pigeon, using its own Automated Speech Recognition ("ASR") and generative AI to improve learning outcomes for students. HLL has 50,000+ student-users and 400 HEI customers (B2B2C with the student as the end user) in the UK and North America. DeepMyna builds upon IUK project DeepSpark, an AI toolkit optimising ASR models for higher education, and the DeepMyna Phase 1 Feasibility Study.

ASR systems are widely used but can exhibit bias in many ways - one size does not fit all, leading to inequities, with certain types of speakers being transcribed incorrectly by models that otherwise perform well. Large Language Models ("LLMs") can help speech recognition better predict words by rescoring the word output from spectrum data analytics, but can introduce domain-specific bias, causing key information to be mispredicted or ignored.

DeepMyna builds on a feasibility study completed in May 2023 which evaluated:

* bottlenecks which limit the adoption of trustworthy AI;
* how DeepMyna's proposed evaluation and customisation model might accelerate the adoption of trustworthy and responsible Artificial Intelligence (AI) and Machine Learning (ML) solutions; and
* SME use cases for accessing new markets and supporting their own product and market development.

As part of the feasibility study, Habitat Learn assembled a consortium comprising the University of Southampton, Avanade Limited, Microlink PC (UK) Limited and Affiniti AI Limited.

DeepMyna is proposed as a world-first bias-driven trustworthy automatic speech understanding (TASU) model evaluation and customisation platform with automated reporting and fine-tuning. Combining ASR and LLMs to maximise the understanding of speech content, this regulatory platform will reinforce user engagement, transparency and privacy through bias detection and user feedback. DeepMyna will address the identified bottlenecks:

* lack of transparency in the diversity and representation of training data;
* ASR models are not explainable;
* lack of user input for fine-tuning; and
* lack of privacy protection.

DeepMyna will be based on HLL's datasets compiled from 350,000 hours of lecture recordings with manually curated transcripts and summary notes. HLL will extract provenance data for ASR training, such as language, background noise, gender, age and accent, which will be applied to the evaluation matrix to address the bias issues.
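
To make the ASR-plus-LLM rescoring idea above concrete, here is a minimal, illustrative sketch in Python. It is not HLL's implementation: the `hypotheses` n-best list, the toy corpus and the interpolation weight are all hypothetical, and an add-one-smoothed bigram model stands in for the education-domain LLM; the combined score simply interpolates the decoder's acoustic score with the language-model score.

```python
import math
from collections import Counter

# Toy in-domain corpus standing in for an education-domain LLM (assumption:
# a real system would use a large pretrained model, not a bigram count model).
corpus = [
    "the mitochondria is the powerhouse of the cell",
    "the cell membrane controls what enters the cell",
]

def train_bigram(sentences):
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def lm_logprob(sentence, unigrams, bigrams):
    """Add-one smoothed bigram log-probability of a sentence."""
    vocab = len(unigrams)
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    score = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        score += math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab))
    return score

def rescore(hypotheses, unigrams, bigrams, lm_weight=0.5):
    """Re-rank an n-best list by acoustic score + weighted LM score."""
    return max(
        hypotheses,
        key=lambda h: h[1] + lm_weight * lm_logprob(h[0], unigrams, bigrams),
    )

# Hypothetical 2-best list from an ASR decoder: (text, acoustic log-prob).
hypotheses = [
    ("the might of Congress is the powerhouse of the cell", -11.2),
    ("the mitochondria is the powerhouse of the cell", -11.9),
]

unigrams, bigrams = train_bigram(corpus)
print(rescore(hypotheses, unigrams, bigrams)[0])
```

Because the in-domain model scores the subject-specific wording higher, the interpolated score recovers "mitochondria" even though the acoustic score alone preferred the mishearing; the same mechanism is what lets a poorly matched language model introduce domain bias.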

DeepMyna

£48,912
2023-05-01 to 2023-07-31
Collaborative R&D
Habitat Learn Limited (HLL) provides a personalised learning toolkit (HabitatLearn) to improve learning outcomes for students. HabitatLearn is an ecosystem of live closed-captioning, note-taking and mindmaps, enabling students to reach individual learning objectives. It has 20,000+ UK student-users and 300 HEI partners in the UK and North America. DeepMyna builds upon IUK project DeepSpark, an AI toolkit optimising automated speech recognition (ASR) models for higher education.

The European Accessibility Act (2019) requires accurate video-captioning by public sector bodies to make lectures accessible for students with specific learning disabilities. Despite advances in ASR, human intervention is still required for accuracy, making costs prohibitive. The use of AI to convert speech into text is well documented. However, there are significant issues with current solutions:

* accuracy for technical, complex language is poor, as most ASR models are built for general conversation;
* privacy and security concerns arise when confidential material is shared on the cloud;
* gender and race bias renders it unusable for certain individuals; and
* costs for human-curated alternatives are too high.

DeepMyna proposes a solution to this problem by developing a framework to include provenance data in ASR training and evaluation, so that any bias can be detected and tracked throughout the ASR training process. Current ASR evaluation focuses too heavily on general accuracy in a given language. Our belief is that everyone should benefit from ASR and that ASR should be tailored for everyone, so in fine-tuning ASR services we need to be clear about what data has been fed into the ASR and evaluate how it will affect the fine-tuned model.

The DeepMyna framework will be designed around HLL's datasets compiled from 300,000 hours of lecture recordings with manually curated transcripts and summary notes, from which we will extract provenance data for ASR training, such as language, background noise, gender, age and accent. The same provenance data will also be applied to the evaluation matrix, i.e. measuring the bias in each domain defined in the provenance data.

HLL has good working relationships with many ASR and transcription companies; we will first select partners to form the consortium and together work on use-case descriptions and proposed solutions. HLL will target financial services and Independent Financial Advisers (IFAs), medical, legal and commercial markets where accuracy and security of information is often critical. DeepMyna will broaden HLL's customer base outside of its core education market.
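
As an illustration of the provenance-driven evaluation matrix described above (a sketch, not the project's actual tooling), the snippet below assumes a hypothetical evaluation set in which each utterance carries the ASR hypothesis, the human-curated reference and provenance tags, and reports word error rate (WER) per provenance group so that disparities between groups become measurable.

```python
from collections import defaultdict

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between reference and hypothesis."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

def wer_by_group(utterances, attribute):
    """WER per value of one provenance attribute (e.g. 'accent')."""
    errors, words = defaultdict(int), defaultdict(int)
    for u in utterances:
        ref, hyp = u["reference"].split(), u["hypothesis"].split()
        group = u["provenance"][attribute]
        errors[group] += edit_distance(ref, hyp)
        words[group] += len(ref)
    return {g: errors[g] / words[g] for g in errors}

# Hypothetical evaluation set with provenance metadata.
utterances = [
    {"reference": "the krebs cycle produces ATP", "hypothesis": "the krebs cycle produces ATP",
     "provenance": {"accent": "US", "gender": "F"}},
    {"reference": "the krebs cycle produces ATP", "hypothesis": "the crab cycle produces a tea",
     "provenance": {"accent": "Nigerian", "gender": "M"}},
]

print(wer_by_group(utterances, "accent"))   # e.g. {'US': 0.0, 'Nigerian': 0.6}
```

Repeating the grouping over each provenance attribute (accent, gender, age, background noise, and so on) yields the kind of matrix in which a model that "otherwise performs well" can still be shown to under-serve particular groups.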

DeepScribe

£349,262
2023-02-01 to 2024-01-31
Investment Accelerator
Habitat Learn Limited (HLL) provides a personalised learning toolkit (HabitatLearn) to improve learning outcomes for students. HabitatLearn is an ecosystem of live closed-captioning, note-taking and mindmaps, enabling students to reach individual learning objectives. It has 20,000+ UK student-users and 300 HEI partners in the UK and North America. DeepScribe builds upon IUK project DeepSpark, an AI toolkit optimising automated speech recognition (ASR) models for higher education.

98% of UK/North American universities were forced online in 2021 and online/blended learning will remain a permanent feature. Curriculum digitalisation means increasing numbers of international students will attend courses online. Not all students find the transition to online learning easy, especially those with specific learning challenges. The European Accessibility Act (2019) requires accurate video-captioning by public sector bodies to make lectures accessible for students with specific learning disabilities. Despite advances in ASR, human intervention is still required for accuracy, making costs prohibitive. Workplace and consumer interactions with healthcare advisers and doctors have also been digitalised, and individuals with hearing loss suffer the same issues. We will explore this market, piloting DeepScribe with trusted partners.

DeepScribe will develop end-to-end, continuous Personalised Automatic Speech Understanding (PASU) services for private networks and end-user devices using HLL's growing 250,000+ hour repository of human-curated lecture recordings and transcripts across disciplines. DeepScribe will initially target educational lecture contexts, and subsequently wider non-education markets such as healthcare and professional services, where domain knowledge and training data are covered by our existing medical and health science lectures. DeepScribe improves the current state of the art through:

* design and implementation of a multi-dimensional, context-aware personalised ASR training process for individual users;
* optimising PASU fine-tuning algorithms to run on low-specification computing instead of high-end GPUs and TPUs;
* using edge computing architecture to maximise security and privacy for audio data collection and PASU training, limiting personal data-sharing on the cloud; and
* accurate transcription to enable mass affordability for different use cases.

DeepScribe will provide personalised automation of speech recognition applied to education, the workplace and everyday interactions with healthcare advisers/doctors to improve accuracy, disciplinary specificity and speed of captioning. DeepScribe will use incremental AI-based fine-tuning and edge computing to deliver PASU with speed of response and affordability, resulting in a commercially viable proposition. DeepScribe will support mass adoption of HLL's technology in education, transforming it from a limited accessibility tool to a generic learning toolkit. DeepScribe will broaden HLL's customer base outside of education. HLL will create 15 highly skilled UK jobs over 5 years.
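
As one low-compute illustration of personalisation on an end-user device (a sketch, not DeepScribe's PASU algorithms), the snippet below harvests a personal vocabulary from a hypothetical student's own notes, held locally, and biases the choice among ASR n-best hypotheses towards those terms - a shallow-fusion-style alternative to full GPU fine-tuning. All names, scores and the bonus weight are assumptions for illustration.

```python
import re
from collections import Counter

def personal_vocabulary(user_notes, top_n=50):
    """Harvest a per-user term list from the student's own notes (runs locally)."""
    words = re.findall(r"[a-zA-Z]{4,}", user_notes.lower())
    return {w for w, _ in Counter(words).most_common(top_n)}

def biased_best(hypotheses, vocab, bonus=0.5):
    """Pick the n-best hypothesis after adding a bonus per personal-vocabulary hit."""
    def score(item):
        text, base = item
        hits = sum(1 for w in text.lower().split() if w in vocab)
        return base + bonus * hits
    return max(hypotheses, key=score)

# Hypothetical per-user data, processed on the student's device.
notes = "Photosynthesis converts light energy; chlorophyll absorbs photons in the chloroplast."
vocab = personal_vocabulary(notes)

hypotheses = [                       # (text, decoder log-score)
    ("the chlorine fill absorbs photons", -4.0),
    ("the chlorophyll absorbs photons", -4.4),
]
print(biased_best(hypotheses, vocab)[0])    # prefers the domain term from the notes
```

Because nothing here requires gradient updates or leaves the device, this style of personalisation fits the low-specification, privacy-preserving edge setting the project describes, even if the real system uses different mechanisms.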

DeepSpark

£244,614
2022-09-01 to 2023-08-31
Collaborative R&D
Habitat Learn Limited (HLL) (formerly Note Taking Express Limited) provides a personalised learning toolkit (HabitatLearn) to improve learning outcomes for students. HabitatLearn is an ecosystem of live closed-captioning, note-taking and mind maps to enable students to reach individual learning objectives more effectively. It has 20,000+ UK student-users and 300 HEI partners in the UK and North America. DeepSpark builds upon IUK project ANGEL, an AI tool for the automated generation of summary notes and lecture-transcript mind maps.

Need and significance

At least 98.3% of UK/North American universities were forced online in 2021 and online/blended learning will remain a permanent feature of education. Curriculum digitalisation means that increasing numbers of international students will attend courses online. Not all students find the transition to online learning easy, especially in the presence of specific learning challenges. The 2019 European Accessibility Act requires accurate video-captioning by all public sector bodies to make lectures accessible for students with specific learning disabilities. Despite advances in automated speech recognition (ASR), human intervention continues to be required for the desired accuracy, making the costs prohibitive.

Innovation Objectives

HLL proposes DeepSpark to develop end-to-end, continuous automatic speech understanding (ASU), drawing upon a 250,000+ hour repository (growing at 50% pa) of human-curated lecture recordings and transcripts across a range of disciplines. DeepSpark will support the development of HLL's proprietary algorithms to create affordable, accurate, offline ASU services for low-energy-consumption end-user devices in educational lecture contexts. DeepSpark will improve the current state of the art through:

* a "design once, use everywhere" training/optimisation methodology;
* personalisation for individual students;
* reducing the cost of accurate transcription to mass-affordability levels;
* improving speed using mobile and offline technologies; and
* creating secure storage on individuals' devices.

HLL is partnering with the University of Southampton, whose expertise in AI, accessibility and personalised learning will support these developments.

Project Outcomes

DeepSpark will provide transformational automation of speech recognition applied to education to substantially improve the accuracy, disciplinary specificity and speed of transcription, and to generate smart notes for students as a learning support tool. DeepSpark will use AI, knowledge engineering and on-device methodology to deliver speed of response and affordability, resulting in a commercially viable proposition. DeepSpark will support the mass adoption of HLL's knowledge engineering technology across the entire student population, transforming it from a limited accessibility tool to a generic EdTech learning toolkit meeting Universal Design for Learning methodology for teaching and learning. HLL will create 8 highly skilled UK jobs over 5 years, increasing FTEs to 25.
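
The "smart notes" outcome can be illustrated with a minimal extractive sketch (not DeepSpark's proprietary ASU or knowledge-engineering pipeline): it scores each transcript sentence by the frequency of its content words and keeps the highest-scoring sentences as note candidates. The stopword list, the example lecture and the sentence budget are all assumptions.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "that", "this", "it", "we"}

def summarise(transcript, max_sentences=2):
    """Very small extractive summariser: rank sentences by content-word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    words = [w for w in re.findall(r"[a-z']+", transcript.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Preserve the original lecture order in the note.
    return [s for s in sentences if s in ranked]

lecture = ("Today we cover supervised learning. Supervised learning maps inputs to labels. "
           "My office hours moved to Tuesday. A loss function measures how wrong the model is.")
for note in summarise(lecture):
    print("-", note)
```

On the toy input the administrative aside about office hours is dropped while the two content-bearing sentences survive, which is the basic behaviour a note-generation tool needs before any heavier modelling is applied.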

Odyssey

£20,844
2022-01-01 to 2022-03-31
Collaborative R&D
Note Taking Express (NTE) provides an online personalised learning solution (NTE Learning) to improve learning outcomes for students. It has 20,000+ student users in the UK and 180 Higher Education Institution (HEI) customers in the UK and North America. NTE has collected more than 200K hours of lecture recordings with human-curated summary notes and transcripts. The uploaders of those lecture recordings are students from different universities and majors, with their own learning styles and career goals. With AI algorithms, we can help those universities automatically monetise their courses for customers in India, whose education has been affected by the pandemic. NTE wants to use its AI-based adaptive learning system, Cougar, to monetise course content from different providers and resources according to students' own learning goals and learning styles. This project will enable NTE to expand its business to students (end users) in India. Key objectives are to work with our partners to clarify the market and the requirements of customers and end users, develop a proof of concept of the AI-based adaptive learning system, and set up trials of Cougar.

Innovation

NTE has unique, rich and well-cleaned training data for adaptive learning and course recommendations. NTE's Cougar system utilises Natural Language Processing, AI and Knowledge Graphs to create a recommendation engine for suitable user learning paths. NTE will construct these paths to match students' learning goals and background knowledge automatically, based on case studies and user research.
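
A minimal sketch of the kind of goal-matching recommendation Cougar is described as performing (illustrative only, not the Cougar system): it compares a student's stated learning goal against a hypothetical course catalogue using bag-of-words cosine similarity and returns the closest matches. Course names and descriptions are invented for the example.

```python
import math
import re
from collections import Counter

def vectorise(text):
    """Bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(goal, courses, top_k=2):
    """Rank course descriptions by similarity to the student's learning goal."""
    goal_vec = vectorise(goal)
    scored = [(cosine(goal_vec, vectorise(desc)), name) for name, desc in courses.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

# Hypothetical catalogue and learning goal.
courses = {
    "ML101": "introduction to machine learning, regression and classification",
    "DB201": "relational databases, SQL and transaction processing",
    "NLP301": "natural language processing with neural networks and machine learning",
}
print(recommend("I want a career in machine learning and natural language processing", courses))
```

A production system would replace the raw word counts with learned representations and fold in the student's background knowledge, but the ranking step itself looks much like this.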

DeepPrism

£219,484
2022-01-01 to 2022-12-31
Collaborative R&D
Note Taking Express (NTE) provides an online personalised learning solution (NTE Learning) which delivers curated content, live captioning and summary notes to students at 180 higher education institutions (HEIs) in the UK and North America. NTE is seeking to develop a commercial private wiki for individual students as an adaptive tool to support online learning.

* **_Interlinking of learning materials through knowledge-base population._** By analysing unstructured text, DeepPrism will create links between summary notes and generate wiki-style documents from the learning materials.
* **_Personalised adaptive learning knowledge graph._** Building a knowledge graph to describe the student's learning journey, such as the coverage of topics, the sequence of learning activities and their learning styles (preferring text, images or video tutorials, etc.). This will then be used to tailor private DeepPrism pages for individual students while also indicating broad student preferences to lecturers.
* **_Semantic document indexing and search._** Analysing each DeepPrism page, establishing similarities and patterns in the relationships between documents, and organising them by topic and predefined relationships.
* **_Document recommendation._** Based on the knowledge graph, designing algorithms to suggest internal and external links between documents according to individual learning preferences.
* **_Question-answer model._** Instead of searching keywords, NTE will design QA models to provide formative answers to students' questions based on the private DeepPrism repository. This is a completely novel way of utilising its original note-taking and content repository.

NTE is partnering with the University of Southampton (UoS), whose expertise in AI and personalised learning will support the development. DeepPrism is a productivity and adaptive online learning tool which will support students with different learning preferences and abilities, leading to an improvement in their learning outcomes. NTE will work with UoS, other HEIs and their student groups during development to ensure the user interface is practical and easy to use. NTE will create DeepPrism as a web-based application, transforming the way students curate and use data and media in education and transforming the learning journey. DeepPrism will create 8 highly skilled UK jobs over 5 years with a total FTE of 25.
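
As an illustration of knowledge-graph-driven document recommendation (a sketch over assumed data, not DeepPrism's actual schema), the snippet below stores a toy graph as subject-predicate-object triples and suggests documents whose topic's prerequisites the student has already covered. The topics, document names and predicates are invented.

```python
# Tiny knowledge graph as (subject, predicate, object) triples - an assumption,
# not DeepPrism's actual schema.
triples = [
    ("calculus", "prerequisite_of", "gradient descent"),
    ("linear algebra", "prerequisite_of", "gradient descent"),
    ("gradient descent", "prerequisite_of", "neural networks"),
    ("doc:intro-gd.md", "covers", "gradient descent"),
    ("doc:nn-basics.md", "covers", "neural networks"),
]

def prerequisites(topic):
    return {s for s, p, o in triples if p == "prerequisite_of" and o == topic}

def recommend_documents(covered_topics):
    """Suggest documents whose topic's prerequisites the student has already covered."""
    suggestions = []
    for s, p, o in triples:
        if p == "covers" and o not in covered_topics:
            if prerequisites(o) <= covered_topics:
                suggestions.append((s, o))
    return suggestions

# Hypothetical student who has covered the maths foundations but not yet gradient descent.
student_covered = {"calculus", "linear algebra"}
print(recommend_documents(student_covered))   # [('doc:intro-gd.md', 'gradient descent')]
```

The same triple structure can record learning-style preferences and activity sequences, which is what lets a single graph drive both the private wiki pages and the aggregate feedback to lecturers.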

DeepClover

£400,000
2021-11-17 to 2022-11-17
Collaborative R&D
Note Taking Express (NTE) provides an online personalised learning solution (NTE Learning) delivering curated content, live captioning and summary notes to students at 180 higher education institutions (HEIs) in the UK and North America. NTE is seeking to solve a commercial, educational and attainment problem by developing DeepClover through:

* **AI-based multimedia segmentation and metadata extraction:** using speech recognition, Optical Character Recognition (OCR) and NLP to quickly segment multimedia resources and extract metadata from each segment for indexing.
* **Contextual, semantic interlinking and search for multimedia fragments:** automatically interlinking personal notes to segmented, modular education resources created by lecturers or external publishers, so that users can quickly locate the relevant place in a long lecture and those resources can be repurposed for adaptive learning at fine granularity.
* **Question-answer model:** instead of searching keywords, NTE will design QA models to provide formative answers to students' questions based on the private DeepClover repository. This is a completely novel way of utilising its original note-taking and content repository.
* **Simplified user experience:** curation of a personal learning repository by collecting personal or public multimedia segments into one place for rapid searching and referencing.

Development of DeepClover has the following goals:

* To create a web application to complement NTE Learning which provides users with a personalised learning platform to easily discover and consume segments in lecture recordings, improving learning outcomes through individual empowerment.
* To transform the way in which users store and manage their notes, media and data portfolios, enabling them to access relevant curated information and enhance their learning repositories using an AI-driven recommendation engine.
* To create 20 highly skilled jobs in the UK over the next 9 years.
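
A minimal sketch of transcript segmentation by lexical cohesion (illustrative only; DeepClover's segmentation also draws on speech recognition and OCR): it starts a new segment wherever two adjacent sentences share almost no content vocabulary. The sentence splitter, word filter and threshold are arbitrary choices for the example.

```python
import re

def sentences(text):
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def content_words(sentence):
    return set(re.findall(r"[a-z]{4,}", sentence.lower()))

def segment(transcript, threshold=0.1):
    """Start a new segment where adjacent sentences share almost no vocabulary."""
    sents = sentences(transcript)
    if not sents:
        return []
    segments, current = [], [sents[0]]
    for prev, cur in zip(sents, sents[1:]):
        a, b = content_words(prev), content_words(cur)
        overlap = len(a & b) / max(len(a | b), 1)   # Jaccard similarity
        if overlap < threshold:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

lecture = ("Photosynthesis converts light into chemical energy. Chlorophyll absorbs that light energy. "
           "Now for the assessment: the coursework deadline is Friday. Submit coursework online.")
for i, seg in enumerate(segment(lecture), 1):
    print(f"segment {i}:", " ".join(seg))
```

Each resulting segment is the unit that metadata can then be attached to and that personal notes can be interlinked with, which is what makes a long recording searchable at fine granularity.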

NLive

£215,675
2021-07-01 to 2022-06-30
Collaborative R&D
Note Taking Express (NTE) provides online personalised learning solutions and applications for the digital classroom (NTE Learning), delivering curated content, live captioning and summary notes to students of all abilities at over 150 educational institutions in the UK and North America. NLive is a secure, accessible and affordable lecture capture solution which significantly enhances NTE Learning, providing tools to enable blended learning (online and in-class) in an inclusive environment for students. NLive is an innovative advancement of the recently developed NRemote lecture capture platform which will achieve the following goals:

* **_Improved lecture IP management._** Using Multi-DRM (or Universal-DRM) and forensic watermarking to help HEIs manage copyright and IP issues across HEI Learning Management Systems (LMS). This will protect academic content from being broadcast illegally across social media by students and thereby losing its value.
* **_Improved audio quality._** Key to good captioning is the quality of the audio. NLive will improve audio quality by filtering out background noise using AI and Machine Learning on its 60,000-hour dataset of audio/video recordings. This will facilitate cost-effective captioning and accurate translation into other languages.
* **_Automated quality control._** Using Natural Language Processing (NLP), NLive will automatically assess the quality of the captioning, providing quality feedback to those creating the captions.
* **_Live transcript and translation of English closed captions._** We provide high-quality closed captioning and translation into international students' native languages, lowering the barriers to learning.
* **_Integration with major LMS platforms._** Integration of the improved live captioning systems into major LMS platforms, such as Blackboard, to broaden NLive's commercialisation potential.
* **_Enhanced accessibility design._** Implementing online learning accessibility requirements to make NLive a truly accessible and responsive platform.

NTE is partnering with the University of Southampton's (UoS) AI and accessibility experts to support the development. UoS will use NLP and AI to design a remote lecture conferencing platform which is secure, accessible and will protect HEIs' Intellectual Property (IP). Development of NLive has the following goals:

* To create a web application to complement NTE Learning which will transform remote learning for all students by providing accessibility that caters for those with learning disabilities, such as dyslexia, which affects 10% of the population, and for international students who may struggle with the English language.
* To create 15 highly skilled jobs in the UK.
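
The automated quality control goal could, for example, start from simple per-segment heuristics like the sketch below (an assumption for illustration, not NLive's NLP pipeline): it flags caption segments whose speaking rate is implausible or whose tokens repeat heavily, the kind of signal that can be fed back to the people creating the captions. The field names and thresholds are hypothetical.

```python
def caption_quality(segments, min_wps=1.0, max_wps=4.5, max_repeat_ratio=0.3):
    """Flag caption segments with an implausible speaking rate or heavy repetition.

    segments: list of dicts with 'text', 'start' and 'end' times in seconds.
    Thresholds are illustrative, not values from the NLive project.
    """
    report = []
    for seg in segments:
        words = seg["text"].lower().split()
        duration = max(seg["end"] - seg["start"], 0.1)
        wps = len(words) / duration
        repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
        repeat_ratio = repeats / max(len(words) - 1, 1)
        flags = []
        if not min_wps <= wps <= max_wps:
            flags.append(f"speaking rate {wps:.1f} words/s out of range")
        if repeat_ratio > max_repeat_ratio:
            flags.append("repeated tokens suggest a transcription glitch")
        report.append({"text": seg["text"], "flags": flags})
    return report

# Hypothetical caption segments from a live lecture.
segments = [
    {"text": "welcome to the lecture on thermodynamics", "start": 0.0, "end": 3.0},
    {"text": "the the the entropy entropy", "start": 3.0, "end": 3.5},
]
for row in caption_quality(segments):
    print(row["text"], "->", row["flags"] or "ok")
```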

ANGEL - Ai Notes GEneration for Lectures

£99,648
2020-11-01 to 2021-04-30
Collaborative R&D
ANGEL will use Artificial Intelligence (AI), especially Natural Language Processing (NLP), Computer Vision (CV) and Knowledge Graphs (KG), to create automated summary notes and curated relevant content to support student studies, complementing Note Taking Express's current digital classroom solution. In an online teaching environment, students often lack the concentration or attention to detail required to abstract a clear set of notes for subsequent study. Notes and content will be generated during live lectures, enabling students to have accessible and focused content, mind maps and notes to promote their studies. NTE has 50,000 hours of data and almost 50 million words of existing content which we will use to train the algorithms for this project.

Current solutions, such as Zoom and Microsoft Teams, are difficult to use in a blended (online/offline) teaching environment and are designed mainly for meetings and conferences. Blackboard Collaborate has been widely used for online teaching; however, the cost of the software and the complexity of both the setup and the UI have been a cause of frustration for many users. ANGEL will be compatible with each of these platforms as well as being integrated into NTE's own digital classroom solution, and will provide a third dimension for students in an otherwise two-dimensional learning space.

It is likely that the implementation of social distancing and new technology will increase overheads significantly for HEIs as they seek to put in place COVID-19 mitigation measures. In order to maintain and grow student numbers, a more engaging and supportive technology platform needs to be deployed for students to maximise their learning experience. ANGEL provides the support that many students will require in the new online learning dimension post-COVID-19.
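
As a rough stand-in for the NLP/KG side of automated mind-map generation (illustrative only, not ANGEL's method), the sketch below builds a keyword co-occurrence graph from a transcript: frequent terms become hubs, linked to the terms they are mentioned alongside. The stopword list and hub count are arbitrary choices for the example.

```python
import re
from collections import Counter, defaultdict

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "that", "this", "we", "it", "from"}

def keywords(sentence, min_len=4):
    return {w for w in re.findall(r"[a-z]+", sentence.lower())
            if w not in STOPWORDS and len(w) >= min_len}

def mindmap(transcript, top_terms=5):
    """Build a keyword co-occurrence graph: central terms linked to co-mentioned terms."""
    sents = [s for s in re.split(r"(?<=[.!?])\s+", transcript.strip()) if s]
    freq = Counter(w for s in sents for w in keywords(s))
    central = {w for w, _ in freq.most_common(top_terms)}
    edges = defaultdict(set)
    for s in sents:
        terms = keywords(s)
        for hub in terms & central:
            edges[hub] |= terms - {hub}
    return edges

lecture = ("Neural networks learn weights from data. Weights are updated by gradient descent. "
           "Gradient descent minimises a loss function.")
for hub, leaves in mindmap(lecture).items():
    print(hub, "->", sorted(leaves))
```

The adjacency structure printed here is exactly what a front end can render as a mind map, with hub terms as central nodes and co-mentioned terms as branches.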

NFeedback - improving online learning outcomes

£174,921
2020-10-01 to 2021-06-30
Collaborative R&D
NFeedback will use AI, especially Natural Language Processing (NLP), Computer Vision (CV) and Knowledge Graphs (KG), to create both live and offline feedback for Note Taking Express's current digital classroom solution. In an online teaching environment, many interactions are missed compared with traditional classroom teaching, such as the number of attendees, students' facial emotions, body language, etc. These AI services will provide valuable information on non-verbal emotions and signals. Then, based on each user's profile, automatic feedback can be generated for teachers and students, together with suggested improvements and support.

A KG (sometimes also referred to as an ontology) is a technology which structurally describes concepts and the relations between them, so that AI systems can spot patterns in the data and make connections within it. This reasoning process offers feedback to HEI administrators, teachers and students. Accessibility is an important component of the feedback, taking into account the diversity of learning styles. In an online lecture, for example, a deaf student may require Closed Captioning to be turned on, whereas a visually impaired student may need text-to-speech to read out on-screen text or descriptions of images. At a more granular level, we will structure a number of work packages to collect more information about the lecture to feed into the KG, so that our AI solution can decide what needs to be fed back to school admins, lecturers and students.

We will be collaborating with the University of Southampton (UoS) to obtain data from open datasets (the Southampton Open Data Service, for example) and from other clients' current course management systems, such as Blackboard or Moodle. Following this, we will use lecture audio/video content recorded from our existing digital classroom solution and generate a semi-structured data store for further analysis. From this we can further enhance the NLP and CV modules to prepare the data needed for automatic feedback.

Current solutions, such as Zoom and Microsoft Teams, are difficult to use in a blended (online/offline) teaching environment and are designed mainly for meetings and conferences. Blackboard Collaborate has been widely used for online teaching; however, the cost of the software and the complexity of both the setup and the UI have been a cause of frustration for many users. Fundamentally, these solutions lack accessibility for neurodiverse learners and any form of personalised feedback. NFeedback will therefore be designed as a simple-to-use, interactive, cloud-based software service which captures lectures securely for all students, including feedback content for students and accessibility tools.

It is likely that the implementation of social distancing and new technology will increase overheads significantly for HEIs as they seek to put in place COVID-19 mitigation measures. NFeedback seeks to solve this problem directly by providing AI-supported assessment and feedback systems for students with different backgrounds and learning preferences, and to recreate, at least in part, the engagement and experiential component of learning which in-class and on-campus courses deliver, at an affordable price point.
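
The captioning and text-to-speech example above is essentially a rule applied to a student profile and the features a lecture actually offers. The sketch below shows that reasoning step in miniature, with a hand-written rule table standing in for the knowledge graph; the need categories and feature names are hypothetical.

```python
# Toy rule-based stand-in for the knowledge-graph reasoning described above:
# match student accessibility needs against the features a lecture provides.
RULES = {
    "deaf_or_hard_of_hearing": ["closed_captions"],
    "visually_impaired": ["text_to_speech", "image_descriptions"],
    "dyslexic": ["summary_notes", "dyslexia_friendly_font"],
}

def accessibility_feedback(student_needs, lecture_features):
    """Return suggested actions for features a student needs but the lecture lacks."""
    suggestions = []
    for need in student_needs:
        for feature in RULES.get(need, []):
            if feature not in lecture_features:
                suggestions.append(f"enable '{feature}' for students with {need.replace('_', ' ')}")
    return suggestions

# Hypothetical profile and lecture metadata.
student_needs = ["deaf_or_hard_of_hearing", "dyslexic"]
lecture_features = {"closed_captions", "slides_pdf"}
for s in accessibility_feedback(student_needs, lecture_features):
    print("-", s)
```

In the full system the rule table would be replaced by relations in the knowledge graph, so that new needs and features can be added as data rather than code.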

NRemote: a personalised remote learning solution

£74,601
2020-06-01 to 2021-03-31
Feasibility Studies
Note Taking Express is a software and service company which provides note-writing software and remote note-taking services through NHub, a web-based software program. We provide our solution to the UK's Disabled Students' Allowance scheme and over 300 Higher Education Institutions (HEIs) worldwide. Universities and colleges have had to temporarily close and adapt to remote learning. Their students are using online solutions which are limited or inappropriate for the learning environment. Students with learning differences are particularly penalised, as much of the content is not formatted in an accessible way. Covid-19 has accelerated the need to educate differently, and the resulting unprecedented challenges are creating a significant period of economic uncertainty.

We are introducing **NRemote** as remote lecture capture software for lecturers to engage with students remotely and provide continuing education which enables engagement and participative learning. Lectures can be delivered from the classroom or the lecturer's home to students in remote locations. **NRemote** can be combined with our existing note-writing software and note-taking service portal, **NHub**, to create a personalised learning solution which is inclusive and provides lifelong learning for students of all abilities. The video content from **NRemote** can be replayed in **NHub**, where students can add content, write notes and access supporting tools to enhance their learning journey, including our personally tailored knowledge-bank to further enrich their learning experience.

**NRemote** will positively transform the delivery of accessible learning for education and is straightforward and easy to use, without the security or privacy concerns of other online platforms such as Zoom, which has been removed from use by certain schools as not being sufficiently secure for, nor designed for, education.

**NRemote** and **NHub** will be available to universities and colleges for all students by way of an annual subscription, and/or students will have the option to subscribe to **NHub** for the additional services offered. Disability support may also be available for students with learning differences to cover this subscription. Extension for Impact funding will enable us to implement and trial an automated live captioning accessibility function for NRemote, which our customer base has requested for their students. Existing accessibility features on standard conference platforms do not provide sufficient quality for higher education, so we see this as a must-have option for our customers.
