Machine learning and AI are showing that efficiency gains and cost savings can be achieved when these technologies are applied appropriately.
One area where there is still a great deal of manual effort is the post-production, or finishing, of films, commercials and television programmes. Here operators must keep track, largely by hand, of what they have done and what still needs doing. There is considerable drudgery in loading the material required for a session, and in checking afterwards that a deliverable package contains the 'right' versions of all the files and formats.
By working with Cinegrell PostProduction GmbH and the FHNW research institute, both in Switzerland, we will demonstrate and quantify the time and cost savings made.
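To make the checking step concrete, the minimal Python sketch below compares a deliverable folder against a manifest of expected files. The file names, the manifest itself and the check_package helper are illustrative assumptions for this example, not part of any existing product or of this project's design.

```python
from pathlib import Path

# Hypothetical manifest of what one delivery is expected to contain.
# The episode name, version numbers and directory layout are illustrative only.
EXPECTED_FILES = {
    "ep01_master_v03.mov",    # graded master
    "ep01_textless_v03.mov",  # textless master
    "ep01_stems_v02.wav",     # audio stems
    "ep01_subs_en_v01.srt",   # subtitles
}

def check_package(package_dir: str) -> list[str]:
    """Compare the files on disk against the manifest and report problems."""
    package = Path(package_dir)
    if not package.is_dir():
        return [f"package directory not found: {package_dir}"]
    on_disk = {p.name for p in package.iterdir() if p.is_file()}
    missing = [f"missing: {name}" for name in sorted(EXPECTED_FILES - on_disk)]
    unexpected = [f"unexpected: {name}" for name in sorted(on_disk - EXPECTED_FILES)]
    return missing + unexpected

if __name__ == "__main__":
    for issue in check_package("deliverables/ep01"):
        print("WARNING:", issue)
```

In practice such a manifest would be generated from the production's own tracking data rather than written by hand.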
GRD Development of Prototype
The eighteen-month project will develop ways for creative teams making movies or TV shows
to collaborate interactively to review and agree the ‘look’ of a sequence across all the stages
of video processing.
Video for effects-rich movies and TV drama is processed by multiple teams, both on-set and
in postproduction, in a series of steps involving video capture, shot management, look
management, transcoding, editing, VFX, compositing and grading. Each of these steps can
change the look and the dramatic impact of a sequence, but it is extremely difficult for the
director to ensure that a look is communicated, applied and maintained along the chain as
sequences are repeatedly edited and effects, CGI and grades are added. Work is often
distributed between companies in centres such as London, California, Vancouver, Wellington,
Beijing and Mumbai. Creative professionals and teams view digital sequences in different
conditions and applications: there is no guarantee that the ‘red’ seen in a facility house in
London matches the colour the director on location in Rome wants to change. Mistakes occur
and the ‘look’ drifts away from the director’s vision, leading to frustration, delay, and
expensive re-working.
‘Look in the Cloud’ will develop a metadata system and architecture that will allow all the
creative teams to see, discuss and modify the look, with a Cloud-based workspace,
metadatabase and application plug-ins that enable everyone (director, cameraman, DIT, editor,
VFX artist, colourist et al) to see – and know they are seeing – the same thing, irrespective of
the software application or equipment they are using.
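As a purely illustrative sketch, a shared look record of the kind such plug-ins might exchange could look like the following. The Look class, the ASC CDL-style slope/offset/power fields and the viewing-condition string are assumptions made for this example, not the project's actual metadata schema.

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class Look:
    """One shared 'look' record for a shot, as a plug-in might read or write it.

    The CDL-style slope/offset/power triples and the viewing-condition field
    are illustrative; the real schema would be defined by the project.
    """
    shot_id: str
    slope: tuple[float, float, float] = (1.0, 1.0, 1.0)
    offset: tuple[float, float, float] = (0.0, 0.0, 0.0)
    power: tuple[float, float, float] = (1.0, 1.0, 1.0)
    saturation: float = 1.0
    viewing_condition: str = "Rec.709 / 100 nit"
    modified_by: str = ""
    modified_at: float = field(default_factory=time.time)

    def to_json(self) -> str:
        """Serialise the look so any application can fetch the same values."""
        return json.dumps(asdict(self), indent=2)

# Example: the colourist adjusts a look and publishes it to the shared store.
look = Look(shot_id="sc042_tk03", slope=(1.05, 1.0, 0.97), modified_by="colourist")
print(look.to_json())
```

Keeping the look as data of this kind, rather than as baked-in pixel changes, is what would let every application display the same creative intent.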
FilmLight Ltd will market the resulting system and services to facility houses, film and TV
studios and production companies around the world from its sales and marketing offices in
London and the USA, and through its international network of resellers.
ASAP is a two-year project to research, develop and demonstrate a new pipeline, tools and processes for rendering and reviewing CGI and video-based media, at appropriate levels of quality for real-time, interaction-time and near-line use cases. The results will facilitate the cross-media creation and use of assets for film, TV and games and compress the present stages of preproduction, production and postproduction, eliminating the need to re-create assets from scratch for different uses.
The project will advance the state of the art in asset preparation, scalable dynamic rendering and shaders, production visualisation, lighting interaction, real-time combination of 2D and 3D assets, and the unification of media processing pipelines. It will deliver a range of new tools, applications, software and production services.
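One way to picture "appropriate levels of quality" is a table of render settings keyed by use case, as in the hedged sketch below; the tiers, sample counts and resolution scales are invented for illustration and do not describe the project's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class RenderSettings:
    samples_per_pixel: int
    resolution_scale: float   # fraction of full delivery resolution
    motion_blur: bool

# Illustrative quality tiers for the three use cases named above; the actual
# parameters and tier boundaries would be decided by the pipeline itself.
QUALITY_TIERS = {
    "real-time":        RenderSettings(samples_per_pixel=1,   resolution_scale=0.5, motion_blur=False),
    "interaction-time": RenderSettings(samples_per_pixel=16,  resolution_scale=1.0, motion_blur=False),
    "near-line":        RenderSettings(samples_per_pixel=256, resolution_scale=1.0, motion_blur=True),
}

def settings_for(use_case: str) -> RenderSettings:
    """Pick render settings for a use case, falling back to the cheapest tier."""
    return QUALITY_TIERS.get(use_case, QUALITY_TIERS["real-time"])

print(settings_for("interaction-time"))
```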
The project is a large-scale cross-industry initiative by a consortium comprising the UK’s largest VFX company (DNeg), three world-leading SMEs specialising in media technology creation (FilmLight, Ncam and The Foundry), and the UK’s leading independent games developer (Rebellion).
GRD Development of Prototype
The 18-month FilmLight FLUX Indexer project will research and develop new ways of
finding and accessing media in big data stores by means of an Indexer, metadatabase and file
server that can run over any very large data store or Storage Area Network. The aim is to
increase dramatically the performance of media storage systems by a radical extension of the
FLUX metadatabase concept for media processing and management, developed by FilmLight
with the assistance of recent UK and EU collaborative R&D projects.
It is becoming increasingly difficult to find and access particular items in the sea of media
data. File and indexing systems were designed for data volumes a thousand times smaller than
they are now. People do not think in file numbers: they think of ‘the shot we did late
yesterday afternoon’ or ‘the test shots we did at 4K resolution’ and computer file systems are
bad at answering these questions. FilmLight’s FLUX metadatabase technology relates the
kind of metadata that humans understand to the numerical file systems that computers use; it
relates user-defined searches to file-system-level metadata, which is collected and stored in
real time during any file I/O operation. Today, FLUX works only with FilmLight's proprietary
storage system. FLUX Indexer will develop a prototype system that will run over any of the
main third-party big data stores used in the media industry. It addresses a core target market of
1,000-2,000 customers (currently with an exabyte of storage in aggregate) in movie
production, postproduction and Visual Effects, and high-end broadcasting. FilmLight will
market the results by direct sales to users, and license the technology to providers of managed
storage systems, with potential spillover to many other sectors with massive video data.
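A toy illustration of the kind of query FLUX Indexer is intended to answer is sketched below: per-file metadata recorded at write time is searched with human-friendly criteria such as "the test shots we did at 4K". The SQLite schema, field names and sample rows are assumptions made for this example and bear no relation to FLUX's real implementation.

```python
import sqlite3

# Illustrative index: one row of searchable metadata per media file.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE media (
        path TEXT PRIMARY KEY,
        captured_at TEXT,        -- ISO 8601 timestamp recorded at write time
        width INTEGER,
        height INTEGER,
        label TEXT               -- free-text note, e.g. 'test shot'
    )
""")

# In a real system these rows would be collected automatically during file
# I/O; here a few are inserted by hand.
db.executemany(
    "INSERT INTO media VALUES (?, ?, ?, ?, ?)",
    [
        ("/san/proj/a/000123.dpx", "2024-05-01T16:45:00", 4096, 2160, "test shot"),
        ("/san/proj/a/000124.dpx", "2024-05-01T16:46:10", 4096, 2160, "test shot"),
        ("/san/proj/b/000975.dpx", "2024-05-01T09:12:00", 1920, 1080, "rehearsal"),
    ],
)

# 'The test shots we did at 4K resolution' becomes a metadata query rather
# than a trawl through numerically named files.
rows = db.execute(
    "SELECT path FROM media WHERE label = ? AND width >= ?",
    ("test shot", 4096),
).fetchall()
for (path,) in rows:
    print(path)
```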
FilmLight is a world-leading UK technology developer. In 2012 its employees won four
‘Technical Oscars’, and it has won Queen’s Awards for Export and Innovation (twice).
Workflow and tools for stereoscopic TV programme production
The project will develop and test technologies for robust, efficient and increasingly automated methods for the shooting, postproduction, checking, verification and correction of stereoscopic TV. The outcomes will be a pipeline and processes for S3DTV, tools and extensions to the MXF standard.
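As a minimal, purely illustrative sketch of the checking and verification the project targets, the function below flags basic left-eye/right-eye mismatches in a stereoscopic frame sequence. The naming convention and the checks themselves are assumptions; real S3DTV verification covers much more, such as geometry, colour matching and disparity limits.

```python
def check_stereo_pair(left_frames: list[str], right_frames: list[str]) -> list[str]:
    """Report basic eye-sync problems between left and right frame sequences."""
    problems = []
    if len(left_frames) != len(right_frames):
        problems.append(
            f"frame count mismatch: left has {len(left_frames)}, right has {len(right_frames)}"
        )
    for i, (left, right) in enumerate(zip(left_frames, right_frames)):
        # Assume frame names end in a frame number, e.g. 'shot_L_0001.dpx'.
        if left.rsplit("_", 1)[-1] != right.rsplit("_", 1)[-1]:
            problems.append(f"frame {i}: left {left!r} does not pair with right {right!r}")
    return problems

left = ["shot_L_0001.dpx", "shot_L_0002.dpx", "shot_L_0003.dpx"]
right = ["shot_R_0001.dpx", "shot_R_0002.dpx"]
print(check_stereo_pair(left, right))
```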
Movie making is changing from a two-dimensional process (in which scenes are shot on a camera and ‘composited’ as 2D layers) to one that combines digital video, computer-generated models, animations and effects in a three-dimensional world. This increases the director’s creative freedom, and supports the production of both 2D and 3D stereo versions, but is very technically demanding.
SyMMM is developing ways to capture and process many kinds of metadata from video streams, photographs, laser scans and other measurements to support 3D approaches to movie making. The project will advance the state of the art in 3D video and automatic metadata extraction. It will lead to new methods and tools for blockbuster movie production, using multimodal metadata to control the way that scenes are put together and to help the creative team to visualise what is happening.
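To illustrate what multimodal metadata might mean in practice, the sketch below gathers measurements from different sources under a single shot identifier; the modalities, field names and record helper are assumptions for this example, not SyMMM's data model.

```python
from collections import defaultdict

# Illustrative per-shot store: every metadata item is tagged with its
# modality (video, photo, laser scan, ...) so downstream tools can pick
# whichever measurements they need when assembling a scene.
shot_metadata: dict[str, list[dict]] = defaultdict(list)

def record(shot_id: str, modality: str, **fields) -> None:
    shot_metadata[shot_id].append({"modality": modality, **fields})

record("sc012", "video", camera="A", lens_mm=35, frames=480)
record("sc012", "photo", camera="stills", count=24, purpose="texture reference")
record("sc012", "lidar", points=12_000_000, registration="site survey v2")

# A compositing tool could then ask what spatial measurements exist for a shot.
spatial = [m for m in shot_metadata["sc012"] if m["modality"] in {"lidar", "photo"}]
print(spatial)
```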
The project leader is SME technology developer FilmLight, which won four ‘technical Oscars’ in 2010 and the Queen’s Award for Innovation in 2012. The project partners are Double Negative, Europe’s largest visual effects company (winner of the 2011 Oscar, as well as the 2011 and 2012 BAFTAs, for best VFX), and the University of Surrey.