The DRIFT Eureka Consortium presents an innovative project to help communities prepare for, respond to, and recover from disasters. The project aims to enhance structural assessments using AI-driven 2D and 3D imaging technology for real-time analysis of structural vulnerability and damage risk.
**Key areas of focus include:**
* **Digital Technologies:** Developing a framework for real-time data collection on building damage with rapid processing of 2D images during emergency phases.
* **Resilient Construction:** Creating methods for assessing seismic vulnerability and establishing a harmonised database of seismic risk from diverse data sources.
* **Post-Disaster Waste Management:** Offering solutions to convert demolition waste into usable materials for reconstruction, aiding environmental protection and recovery efforts.
Partnering organisations from South Korea, Türkiye, and the UK bring a wealth of expertise, with each country leading a technical work package. Aralia coordinates the project, with collaborative networking to ensure strong social impact and commercial success. The final product will be a QGIS-based framework with Postgres integration, interfacing with public and private GIS resources.
The scope of the project is to incorporate recent developments in AI applied to 3D imaging within an existing, low-cost, high-performance 3D device.
The addition of AI processing will enable users within the creative sector to capture 3D data quickly and at low cost. The solution addresses similar needs in many other sectors including telemedicine and non-destructive testing.
The product uses IPR created by Aralia that provides higher-quality 3D data at substantially lower cost than existing alternatives.
The project builds on existing Aralia IPR developed for applications in infrastructure, urban, and transportation systems that benefit from the automated analysis of video images.
The project combines multispectral imaging and high-resolution 3D reconstruction of surfaces, CNN-based object detectors, and scene-context generation, supported by a semantic database that uses a lexicon appropriate to the rail industry problem space.
The objective of the project is to provide a low-cost, portable scene analysis system that addresses use-cases in surveillance, safety, construction, and asset management.
The project aims to improve the speed and quality of asset management in the rail industry using AI and condition monitoring sensors attached to a smartphone.
Our innovation is the development of an augmented reality tool for the analysis of rail infrastructure. The AI component of the system recognises the subassembly of the asset and recalls the maintenance procedure from a cloud service. The augmented reality solution then leads the user through the maintenance process, including quantitative visual inspection using sensors attached to the smartphone.
The proposed method reduces manual inspection of assets for condition and provides a cost-effective solution to manage and maintain assets across the rail network.
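The recognise-then-recall flow described above can be sketched as a simple lookup: the label produced by the AI component for a recognised subassembly keys into a procedure store. This is a minimal illustrative sketch; all subassembly names and maintenance steps below are invented, and the real system retrieves procedures from a cloud service rather than a local dictionary.

```python
# Stand-in for the cloud-hosted procedure store (contents are hypothetical).
PROCEDURES = {
    "point_machine": [
        "Isolate and lock out the point machine.",
        "Inspect drive and detection rods for wear.",
        "Capture QVI images of contact surfaces.",
    ],
    "rail_clamp": [
        "Check clamp torque against specification.",
        "Photograph the fastening for the condition record.",
    ],
}

def procedure_for(subassembly_label: str) -> list:
    """Return the maintenance steps for a subassembly recognised by the AI."""
    steps = PROCEDURES.get(subassembly_label)
    if steps is None:
        raise KeyError(f"No procedure registered for {subassembly_label!r}")
    return steps
```

In the deployed system each returned step would drive an augmented-reality overlay guiding the user through the inspection.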
The project adds 3D and multi-spectral imaging to smartphones so that they can be used for a wide range of test and monitoring purposes, including quantitative visual inspection (QVI). The products delivered by the project are low-cost and can be used by anyone familiar with mobile phones.
The project supports sustainability and the circular economy during the period of COVID-19 restrictions by facilitating inspection and test in a wide range of industries, including construction, manufacturing and infrastructure.
The project provides a low-cost photometric stereo illumination device and associated software that enable a smartphone to capture and process images revealing a high-resolution 3D image of a surface. The images can be transmitted to a remote location for expert analysis. The method of surface reconstruction is protected by patents held by the University of Strathclyde Institute of Photonics.
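For orientation, classic photometric stereo recovers a per-pixel surface normal and albedo from three or more images taken under known, distant light directions by solving a small least-squares system at each pixel. The sketch below shows the textbook technique only; it is not the patented Strathclyde method used in the product.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Textbook photometric stereo (a simplified sketch, not the
    patented method): recover per-pixel surface normals and albedo.

    images: (k, h, w) array of grayscale intensities, k >= 3
    lights: (k, 3) array of unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, h*w)
    # Solve lights @ G = I for G = albedo * normal at every pixel.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)      # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                  # (h*w,)
    normals = np.divide(G, albedo, out=np.zeros_like(G), where=albedo > 0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

Integrating the recovered normal field then yields the high-resolution 3D surface referred to above.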
The photometric stereo images aid diagnosis of dermatological conditions by means of tele-medicine. The same 3D technique is also used in non-destructive testing, security and surveying applications.
The simplicity and low cost of the device ensure that it can be readily distributed whenever necessary.
The project also includes facilities for archiving, displaying and processing images through the application of Machine Learning.
The illumination device and associated software will be available to the public by Q2 2021.
ADDITIONAL INFORMATION:
The extension scope includes the process of securing IPR generated during the project, in the form of registered designs and patent applications.
Following user and market review, the initial design will be revised, and the documentation required for tooling and manufacture will be prepared.
Formalised performance testing, safety assessment and field trials will be used to create all information needed for CE certification and for preparation to register the product as a Class 1 medical device.
Small Business Research Initiative
Edge-based trespass detection software for detecting trespass, fare evasion and criminal damage such as graffiti. Deterrence is achieved through responses tailored to the individual and the nature of the trespass. The solution reduces false positives through the use of convolutional neural networks for object classification and scene context to determine behaviour. The solution is low cost, can accept a wide range of sensors, and is capable of operating at sites without mains power or Wi-Fi.
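The false-positive reduction described above can be sketched as a fusion rule: an alert fires only when the object classifier is confident the detection is a person and the scene context marks that location as restricted. The field names and threshold below are illustrative assumptions, not the product's actual interface.

```python
def is_trespass(detection, zone, confidence_threshold=0.8):
    """Hypothetical fusion of CNN classification and scene context:
    raise an alert only when both signals agree, which suppresses
    false positives from animals, shadows, or permitted areas.

    detection: {"person_confidence": float, "cell": grid location}
    zone: {"restricted_cells": set of grid locations}
    """
    confident_person = detection["person_confidence"] > confidence_threshold
    in_restricted_area = detection["cell"] in zone["restricted_cells"]
    return confident_person and in_restricted_area
```

In practice the scene-context term would also encode time of day and observed behaviour, not just location.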
GrassVision will use imaging and precision agriculture techniques to develop a novel spray apparatus for precision application of herbicides to broad-leaf weeds in grass crops. The GrassVision consortium consists of imaging experts (Center for Machine Vision, UWE), data analysis experts (Aralia Ltd.) and precision agriculture experts (SoilEssentials Ltd.). Sustainable production requires weed control methods that reduce herbicide use to comply with current and future EU legislation. The primary focus will be to detect weeds using novel 3D machine vision techniques. Initially the project will use off-the-shelf machinery to spray a targeted area around each weed, with an estimated decrease in herbicide use of around 75%. The project will then look to determine the limits of precision by refining the boom itself. Using this approach, we hope to achieve an ideal target of a 5x5cm spray area per weed, providing potential reductions in herbicide use in excess of 90%.
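The reduction figures above depend on the spray footprint per weed and the weed density. Under a simple model with non-overlapping spray patches and a uniform application rate, the fraction of blanket-rate herbicide used is just density times footprint; the densities below are hypothetical and chosen only to illustrate the calculation.

```python
def herbicide_fraction(weeds_per_m2, spray_area_m2):
    """Fraction of blanket-spray herbicide used when spot-spraying a
    fixed footprint around each weed. Assumes non-overlapping patches
    and a uniform application rate (an illustrative model only)."""
    return min(1.0, weeds_per_m2 * spray_area_m2)

# Hypothetical density of 2 weeds per square metre:
coarse = herbicide_fraction(2, 0.1)          # off-the-shelf boom, ~0.1 m^2 per weed
fine = herbicide_fraction(2, 0.05 * 0.05)    # refined 5cm x 5cm patch per weed
```

Shrinking the footprint from ~0.1 m^2 to 25 cm^2 is what moves the saving from the initial ~75% estimate towards the >90% target.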
This project will enable Aralia Systems to develop 3D imaging technology for moving objects at very long range (hundreds of metres) outdoors in daylight, which is currently not possible. The team has previously demonstrated how, by controlling a network of existing near-IR illuminators and cameras, photometric stereo (PS), a shape-from-shading technique, can be used to capture high-resolution moving 3D images in an indoor environment. The project is timely as it will explore the feasibility of exploiting an exciting recent development in camera sensor technology, known as black silicon. Black silicon imaging allows the capture of artificially illuminated images in a depressed region of the solar spectrum (at 940nm) by using controlled LEDs and eye-safe lasers. This allows the use of PS to create information-rich 3D hotspots that can be placed anywhere within a monitored environment. This development could allow a step change in the use of moving 3D imaging outdoors, and will enable Aralia to develop enhanced visualisation and threat detection software for surveillance applications.
This project will enable Aralia Systems to develop a Physical Security Information Management (PSIM) threat response system, which will tie security sensors together to form one unified network of sensors. This development will allow security sensor devices to cooperatively process information that can be accessed by connected sensors, and to autonomously respond to a possible threat with minimal human intervention. For example, if an access control point is breached, mobile sensors will be able to respond by automatic reconfiguration of the network so that some sensors can navigate to that point. This response development programme will apply to both static and mobile devices, and will handle the communication of multiple sensors processing at any one time.
Directed or targeted advertising is commonplace in the online shopping environment but is less known in the out-of-home (OOH) shopping experience, where the technical challenges are more complex. This application presents an exciting opportunity to use 2D and 3D image recognition technology to capture information from passing individuals (or groups of individuals), providing analysis and metrics about consumer demographics and behaviour for use in automated targeted marketing campaigns. Our technology will make no attempt to recognise individuals, and all image data will be treated as metadata that is used for analysis but not stored. Application of 2D and 3D analysis in the outdoor advertising sector is novel and beyond the current state of the art. Addressing this area is also timely, as this is a developing market where technological advances are rapidly taking place.
We will develop and test a basic prototype using state-of-the-art 2D and 3D imaging approaches to establish the feasibility of gathering accurate human demographic (gender, age) and behavioural (head movements, eye-gaze transitions) information to assess engagement with and interest in digital adverts. We will also demonstrate a capability for the automatically captured information to be used to intelligently adapt the advertising content in real time in order to maximise consumer engagement. We will work closely with billboard and advertising companies as well as consumers to test our product.
Automatic recognition of demographic and gaze data in real-world environments presents many challenges. Aralia has already made steps towards developing systems in this area in relation to surveillance and also has clear presence in the application market to ensure access routes to marketing and advertising sectors.
Knowledge Transfer Partnership
To apply advanced machine vision techniques and algorithms to enhance the system's scene analysis capabilities for assessing viewer behaviour in cinemas.