Video action recognition in the urban environment, powered by AI and computer vision analytics
"New advances in computer vision are offering opportunities for disruptive innovation, which could have widespread benefits across multiple use cases.
The 12-month project is a partnership between Cortexica Vision Systems Ltd (artificial intelligence and computer vision experts) and Hammerson plc (owner, manager and developer of shopping centres). Cortexica have already built state-of-the-art single-image visual recognition solutions for the retail industry. This new research moves into live analysis of video.
This grant application is to develop an automated action recognition system, which uses CCTV to automatically identify "actions", e.g. recognising a bag or object that has been left behind, or identifying people who slip or fall. Adopting such technology in a proactive urban safety environment, without using biometric data, could create a novel citizen-centred approach to public health and safety.
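For illustration only, the sketch below shows how a clip-level action recogniser might be wired around a CCTV stream: frames are buffered into fixed-length clips and each clip is classified against a small set of action labels. The classifier, labels and frame source are hypothetical stand-ins, not Cortexica's actual system.

    # Hypothetical sketch of clip-level action recognition over a CCTV stream.
    from collections import deque
    import numpy as np

    ACTIONS = ["normal", "bag_left_behind", "person_fallen"]  # illustrative labels
    CLIP_LEN = 16  # frames per classified clip

    def classify_clip(clip: np.ndarray) -> str:
        """Stand-in for a trained video model; clip has shape (CLIP_LEN, H, W, 3)."""
        scores = np.random.dirichlet(np.ones(len(ACTIONS)))  # dummy class scores
        return ACTIONS[int(np.argmax(scores))]

    def monitor(frame_source):
        """Slide a fixed-length window over incoming frames and raise alerts."""
        window = deque(maxlen=CLIP_LEN)
        for frame in frame_source:
            window.append(frame)
            if len(window) == CLIP_LEN:
                action = classify_clip(np.stack(window))
                if action != "normal":
                    print(f"ALERT: detected '{action}'")

    # Synthetic frames standing in for a CCTV feed:
    monitor(np.zeros((120, 160, 3), dtype=np.uint8) for _ in range(64))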
AI-SAFE - Autonomous Intelligent System for Assuring Safe Working Environments
Worker and workplace safety are critical programmes with high-priority status in regulated sectors. Non-compliance creates incident costs and losses, lost productivity and business disruption, and sometimes regulatory fines. There can also be a high human cost. The current state of the art is for a person to manually check that each worker is wearing the correct equipment, a process that is prone to human error and costly to manage and monitor. AI-SAFE aims to design and build an autonomous system that will detect and monitor workers against the correct equipment inventory. This would remove human error, improving worker safety and compliance and increasing productivity. An automated system would also reduce personnel costs and costly contamination: during idea generation, GSK confirmed that improving their systems could save millions of pounds per year by preventing lab contamination caused by people entering in the wrong equipment. As a base technology it will apply to all industries where Personal Protective Equipment is required, creating a significant opportunity.
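As a hedged illustration of the compliance step, the sketch below checks a mocked set of detected PPE items against a required inventory for a given zone. The zone names, item labels and detector are assumptions for the example, not AI-SAFE's or GSK's actual design.

    # Illustrative check of detected PPE against a required inventory per zone.
    REQUIRED_PPE = {
        "sterile_lab": {"gown", "gloves", "face_mask", "hair_net"},
        "workshop": {"hard_hat", "safety_glasses", "gloves"},
    }

    def detect_ppe(image) -> set:
        """Stand-in for an object detector listing PPE items seen on a worker."""
        return {"gown", "gloves", "hair_net"}  # mocked detection result

    def check_compliance(zone: str, image) -> set:
        """Return the set of missing items; an empty set means compliant."""
        return REQUIRED_PPE[zone] - detect_ppe(image)

    missing = check_compliance("sterile_lab", image=None)
    if missing:
        print(f"Entry denied; missing: {sorted(missing)}")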
Fashion Recommendation System prototype
GRD Development of Prototype
The Fashion Recommendation System (FRS) will use bio-inspired visual search technology to
model the relationship between outfit items worn by people in the real world. To do this, we
will ingest large volumes of data from social media, retailers' own databases, and the
knowledge embedded in retailers' merchandising decisions. The system will
be intelligent enough to cope with a picture uploaded to a social media platform or a
fashion blog. Our existing clients and partners are repeatedly highlighting how brands want to
leverage user-generated images from social media e.g. Instagram and Twitter pics to drive
user engagement and sales. Our research suggests that there is no product or technology in the
world addressing this problem in this way.
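To make the relationship-modelling idea concrete, here is a minimal sketch of ranking items by outfit compatibility: each garment gets an embedding vector and candidates are ranked by cosine similarity to the query. The catalogue names and random vectors are illustrative stand-ins for embeddings learned from real outfit co-occurrence data.

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in embeddings; a real system would learn these from outfit data.
    catalogue = {name: rng.normal(size=64) for name in
                 ["denim_jacket", "white_tee", "black_boots", "floral_dress"]}

    def recommend(query: np.ndarray, k: int = 2) -> list:
        """Rank catalogue items by cosine similarity to the query embedding."""
        def cos(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        scores = {name: cos(query, vec) for name, vec in catalogue.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend(catalogue["denim_jacket"]))  # the jacket itself ranks first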
"street style" visual recognition prototype
GRD Development of Prototype
“Visual search is finally here,” so said New York-based Liz Bacelar during a panel discussion
about the future of shopping at the annual SXSWi festival recently in Austin, TX. Bacelar is
founder of Decoded Fashion, a group that aims to foster creative partnerships between startups,
fashion designers and retailers. Cortexica are a retail fashion image-recognition company;
we presented on the same panel and were hailed last week in a Financial Times
article as a “visual search pioneer”.
The simple idea is that anyone can snap an image of a jacket on a mobile phone and
Cortexica’s software can isolate the pattern on that jacket and then search for other kinds of
garments with an exact or similar pattern. It’s an intuitive enhancement of an already existing
shopping behaviour (product search). Instead of hunting out your dream item, the most
suitable products can find you – based on a single image. The technology originated in
Imperial College London, and uses software developed during a seven year long research
project by bioengineers exploring how the human brain processes images. Since 2009,
Cortexica have developed the technology into a number of products whilst continuing to
improve the core aspects of the technology.
As we gain more traction, it is becoming evident that our end-users are pushing the
capability of the present technology, which is centred around product images. When users
take a picture in a normal setting, against a shop window or city background, or of items
worn by friends and acquaintances, the current system cannot distinguish the product
amongst all the other complex visual elements in the image. We therefore need to enhance
the system to work on “street style” fashion images. This will allow our product to cross
over into a new area of need, now demanded by end-users and retailers alike. As far as we
are aware, there is no product or technology addressing this need.
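A hedged sketch of the extra step "street style" images require: first isolate the garment region with a detector, then compute the search descriptor on the crop rather than the whole cluttered scene. The detector and descriptor below are placeholder stand-ins for Cortexica's bio-inspired pipeline.

    import numpy as np

    def detect_garment(image: np.ndarray) -> tuple:
        """Stand-in detector: returns (x, y, w, h) of the most likely garment."""
        h, w = image.shape[:2]
        return (w // 4, h // 4, w // 2, h // 2)  # mocked central box

    def describe(crop: np.ndarray) -> np.ndarray:
        """Stand-in descriptor: a normalised colour histogram of the crop."""
        hist, _ = np.histogram(crop, bins=32, range=(0, 255))
        return hist / max(hist.sum(), 1)

    street_photo = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    x, y, w, h = detect_garment(street_photo)
    query = describe(street_photo[y:y + h, x:x + w])  # search on the crop only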
Mobile Visual Search for Fashion
GRD Development of Prototype
Cortexica, a spin-out company from Imperial College, has developed software that will help
consumers make better fashion choices by replicating the way the eye and the brain
have worked together to recognise patterns over millions of years of evolution. The Cortexica
“Find Similar” software mimics the way the brain processes images and finds similarities. An
image of a dress, a blouse or a shirt, for example, is captured by a consumer using a
smartphone; it is then analysed by the software, and a series of images of available items
with similar characteristics, such as colour, shape and design, are returned to the
consumer’s phone. Several leading UK fashion retailers are testing the software, which will
be integrated into websites and mobile phone-based apps, ahead of a full launch in autumn
2014. However, the technology needs to be developed further to optimise its performance and
maximise the benefit to the retailer and the end user.
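As a rough illustration of the "Find Similar" matching step, the sketch below ranks catalogue images against a query by colour-histogram intersection. The real descriptors also capture shape and design, so this is a deliberately simplified stand-in with mocked images.

    import numpy as np

    def colour_hist(img: np.ndarray) -> np.ndarray:
        hist, _ = np.histogram(img, bins=64, range=(0, 255))
        return hist / hist.sum()

    def intersection(h1: np.ndarray, h2: np.ndarray) -> float:
        """Histogram intersection: higher means more similar colours."""
        return float(np.minimum(h1, h2).sum())

    rng = np.random.default_rng(1)
    catalogue = {f"item_{i}": rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
                 for i in range(5)}
    snap = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # shopper's photo

    qh = colour_hist(snap)
    ranked = sorted(catalogue,
                    key=lambda k: intersection(qh, colour_hist(catalogue[k])),
                    reverse=True)
    print(ranked[:3])  # top matches returned to the shopper's phone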
This Prototype Proposal sets out to extend Cortexica’s existing mobile visual search capability
and exploit new mobile devices such as Google Glass. This grant application will provide the
funds to do the BETTER and FASTER R&D necessary to make progress, provide the tools to
measure sales uplift, and give us the wherewithal to seek MORE clients. There are a
number of challenges, technical and UI-related, on the road to making this vision of “Find
Similar” a reality; however, we believe there is a significant business opportunity which
this grant will help us exploit.
Bio-inspired visual recognition algorithms on a mobile device
Cortexica Vision Systems Ltd, a spin-out company from Imperial College, develops patented computerised image recognition technology.
This feasibility study seeks to make a step change by trying to run our bio-inspired algorithms on a mobile phone rather than in a large datacentre. We aim to produce a working mobile prototype which will carry out the search of a target product against a product database without the need for significant communication with a datacentre. If successful, it should enhance the potential applications of our technology and ultimately increase the growth prospects of the company.
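A minimal sketch of what the on-device search could look like, under the assumption that descriptors are quantised to compact 8-bit codes so the whole index fits in phone memory. The index size and distance metric are illustrative assumptions, not the project's actual design.

    import numpy as np

    rng = np.random.default_rng(2)
    # A 10,000-item index of 32-byte codes: ~320 KB, easily held on the phone.
    index = rng.integers(0, 256, size=(10_000, 32), dtype=np.uint8)

    def search_local(query_code: np.ndarray, k: int = 5) -> np.ndarray:
        """Brute-force L1 search over the on-device index; no network round trip."""
        dists = np.abs(index.astype(np.int16) - query_code.astype(np.int16)).sum(axis=1)
        return np.argsort(dists)[:k]

    query = rng.integers(0, 256, size=32, dtype=np.uint8)
    print(search_local(query))  # indices of the nearest catalogue items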
Cortexica "find me similar" visual search prototype
GRD Development of Prototype
Many online retailers are looking at ways to develop a more interactive and engaging user experience to help customers discover products and recommendations. An ideal scenario for a user is to be able to use their mobile phone to capture images of interesting retail products, such as ladies' clothing, from wherever they happen to be, and from that image find similar or exact items for sale on the web.
By creating a visual search capability and specifically designing an app that has “find me
similar” functionality, a user could quite simply “snap and match” items of clothing and then be guided on a journey of discovery and recommendation that both inspires and encourages a purchase.
This Prototype Proposal sets out to prove that a mobile visual search function can work
successfully by allowing users of the mobile app to select an image they already have, or capture one from the “wild”, to initiate a search for an item. We wish to investigate the accuracy and efficiency of such a system, and the integration points needed to handle the current volume of images and user enquiries, so that we can design a solution that fits seamlessly into an ecommerce platform and delivers the desired end-user experience.
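One concrete way to quantify the accuracy we wish to investigate is precision@k over labelled query/result pairs, sketched below with mocked data purely for illustration.

    def precision_at_k(results: list, relevant: set, k: int = 5) -> float:
        """Fraction of the top-k returned items that are truly relevant."""
        return sum(item in relevant for item in results[:k]) / k

    results = ["dress_a", "dress_b", "coat_c", "dress_d", "shoe_e"]  # mocked ranking
    relevant = {"dress_a", "dress_d", "dress_f"}                     # mocked labels
    print(precision_at_k(results, relevant))  # 0.4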
Cortexica Vision Systems have spent the last seven years developing patented algorithms, which have been released in a number of early-adopter applications. We are now ready to develop a larger prototype. Our biggest client to date is a large US online retailer who is innovating with our technology but wants to see it working at prototype scale before being fully convinced.
This grant application will provide the funds to do the BETTER and FASTER R&D necessary to make progress with our existing customer, and give us the wherewithal to seek MORE customers. There are a number of challenges, technical and UI-related, on the road to making this vision of “find me similar” a reality; however, we believe there is a significant business opportunity which this grant will help us exploit.
Cortexica VisuRec: visual search of movies or music videos
Google have built a multi-billion-dollar empire on the supply of text-based search. We
believe that, over time, a similar opportunity exists for image-based search. This means
there is no need to type a query or description into a search engine: simply submit an
image or a sample of video, and it will be recognised by a machine vision system residing
in the cloud, which returns information on whatever you are looking at. Our vision for this
proof-of-concept project is for content on multiple media types (broadcast, movie, outdoor,
print) to be discoverable via a "handheld screen device". Introducing a variety of
technologies will help users gain instant access to content and rich media and enable new
types of interaction.
So imagine you could capture a sample of a live-playing TV programme or film from a TV or
video screen using your mobile phone and instantly receive back relevant content and extras,
so that you could interact with the recognised video content in a variety of ways:
• Receive back relevant content and extras
• If it’s a new movie trailer, add it to your Lovefilm wishlist.
• If it’s a music video, instantly resume playback on your mobile phone and take it
away with you.
• Take a video recording of another mobile phone playing a music video and “Thump”
it from mobile screen to mobile screen, enabling instant viral distribution of playing media.
Using the Cortexica VISUALSEARCH™ platform, this proof of concept will set out to create
the vision we have just described.
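Purely as an illustration of the matching problem, the sketch below fingerprints a captured frame with a simple average hash and matches it against indexed content by Hamming distance. This is a generic stand-in technique, not the VISUALSEARCH platform's actual method, and the indexed titles are mocked.

    import numpy as np

    def average_hash(frame: np.ndarray) -> int:
        """Downsample to 8x8 greyscale, threshold at the mean, pack into 64 bits."""
        grey = frame.mean(axis=2)
        small = grey[::grey.shape[0] // 8, ::grey.shape[1] // 8][:8, :8]
        bits = (small > small.mean()).flatten()
        return int("".join("1" if b else "0" for b in bits), 2)

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    rng = np.random.default_rng(3)
    indexed = {f"trailer_{i}": average_hash(rng.integers(0, 256, (72, 128, 3)))
               for i in range(3)}
    capture = rng.integers(0, 256, (72, 128, 3))  # phone shot of a playing screen
    best = min(indexed, key=lambda k: hamming(indexed[k], average_hash(capture)))
    print("matched:", best)  # now return extras / wishlist actions for this content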
We believe there is a business opportunity to provide content owners and broadcasters with a
competitive edge by offering their customers a more engaging “third screen” experience and
new means of distributing media. Ultimately the content owners will be prepared to share
some of their increased revenue to have this competitive advantage.
Visual search technology development for retail products
Awaiting Public Summary