Vicon Advances MoCap at SIGGRAPH 2016

SIGGRAPH, July 2016 – In an interview with Jeffrey Ovadya, Director of Sales and Marketing for Vicon Motion Systems, we got a chance to hear about the company's latest mocap camera and about Project Katana, both of which were being demonstrated on the expo floor at SIGGRAPH 2016.

Motion capture, or mocap, has moved into the mainstream of VFX production for movies, TV, advertising, and most other content creation.  It has even expanded to drive automated PTZ camera movement for live TV production using virtual studios.  Vicon's primary product is high-speed cameras for motion capture, mounted to scaffolding or framing that defines the 3D space in which movement is captured.  The new cameras and software are designed to be smarter and easier to use as mocap moves from the professional space to the prosumer market.

To simplify the use of mocap, Jeff said that Vicon has started a program called “Project Katana”.  The idea behind it is a system that creates a mocap skeleton model in real time and brings the data into systems like Final Cut.  The goal is to have final-quality, fully rigged and articulated skeletons from the mocap session at the end of each shot.  This will provide full skeletons of all characters in the 3D capture space at the end of each day, along with the production dailies.  To perform this analysis, the studio setup and network have to be self-healing.  The data environment for motion and the rigging connections in the project are built on a MATLAB mathematical modeling core.

The booth at the SIGGRAPH expo was showing the system; however, there is no release date or product name yet for Project Katana.


To address this shift towards simplicity, Vicon has introduced the Vero camera.  It is available in two models: the v1.3, a 1.3MP camera, and the v2.2, a 2.2MP camera.  Designed specifically for mocap applications, both are 850nm IR greyscale cameras, operating at 250FPS for the v1.3 and 330FPS for the v2.2.  These high frame rates allow real-time, full-range live motion capture.

The cameras have a 6mm–12mm variable-focus lens for use in low-angle and high-angle applications.  A major design simplification for the user is the single-cable connection: a standard RJ45/Cat5e Ethernet connector serves as the interface for camera control and data, and powers the unit using the PoE specification.  To make this single-cable system possible, the cameras have been designed to require only 12W to operate.  Like the Vantage, the larger prior-generation camera, the Vero has on-board sensors that monitor camera position and temperature to ensure optimal performance.
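The article does not say which PoE standard the Vero uses, but the stated 12W draw is notable because it fits inside even the baseline IEEE 802.3af budget of 12.95W available at the powered device.  A minimal back-of-the-envelope sketch (the function name and numbers other than the 12W figure are illustrative, not from Vicon):

```python
# Illustrative check, not from Vicon documentation: does the camera's
# stated power draw fit within a standard PoE power budget?

POE_802_3AF_PD_BUDGET_W = 12.95   # IEEE 802.3af power guaranteed at the powered device
VERO_DRAW_W = 12.0                # power requirement stated for the Vero

def fits_poe_budget(draw_w, budget_w=POE_802_3AF_PD_BUDGET_W):
    """Return True if a device's draw fits within the PoE power budget."""
    return draw_w <= budget_w

print(fits_poe_budget(VERO_DRAW_W))  # True: single-cable power is feasible
```

This is presumably why the 12W figure matters: staying under the 802.3af ceiling means the camera can run on ordinary PoE switches rather than requiring higher-power PoE+ gear.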

Production details of Captain America: Civil War revealed at SIGGRAPH

SIGGRAPH, July 2016 – Given the complexity of scheduling a large cast of actors and crew, it is difficult to move a production to locations around the world, and thanks to tax rebates and incentives many movies are shot in just a few locations. What is the solution for a production that is supposed to span the globe? That was the situation facing the production of Captain America: Civil War, which was shot primarily in Atlanta, Georgia.

At SIGGRAPH this year, the lead production session was dedicated to “The Making of Marvel’s Captain America: Civil War”. Victoria Alonso, Executive Vice President of Physical Production at Marvel Studios; Dan DeLeeuw, VFX Supervisor, and Swen Gillberg, Associate VFX Supervisor, both from Marvel Studios; Jean Lapointe, Compositing Supervisor from ILM; and Greg Steele, VFX Supervisor from Method Studios, discussed the production process of the movie.

As noted, the film was shot primarily in Atlanta, Georgia, and the filmmakers used VFX to bring locations from around the world to Atlanta digitally. The presenters detailed the stats for the film. The numbers for the 135-minute Captain America: Civil War were as follows: 2,782 final shots created; 2,745 finals used in the movie; 415 shots shared between multiple VFX teams; 194,608 frames. The production team created descriptions for the 12 characters who appear in the movie: Captain America, Falcon, Scarlet Witch, Winter Soldier, Hawkeye, Ant-Man, Iron Man, Black Widow, War Machine, Vision, Black Panther, and Spider-Man. Every character was evaluated on criteria such as Fighting, Agility, Strength, Endurance, Intuition, and Psyche. For example, the Winter Soldier: Fighting – Incredible; Agility, Strength, and Endurance – Remarkable; Intuition – Excellent; Psyche – Typical.

Those who think that developing the story for a film like Captain America is a linear process (Script > Look Development > Story Boards > Previs) are mistaken. The panelists showed that it is more like a matrix of those elements, with multiple teams working in parallel and a lot of material to be shot. Short post schedules sometimes required that they start on assets before the foreground was shot or the sequence was fully realized. They do not always stick to what was originally planned in the previs or storyboards, so a lot of additional material was shot to cover all options, such as time of day, weather, and any camera angle.

The production of Captain America: Civil War was the collaborative work of 18 teams worldwide on a single project.


SMPTE ETCA highlights creation and distribution

ETCA, June 2016 – This year SMPTE had a new venue for its Entertainment Technology in the Connected Age (ETCA) conference, relocating from Stanford University to Campbell, CA, in the center of Silicon Valley.  The event was opened by SMPTE President Robert Seidel, who is also the CTO of CBS.  He started by emphasizing that for the past 100 years the society has focused not only on global standards and methods for content creation, but also on the technologies behind getting that content to the people who can enjoy it.  This includes identifying and using new devices and methods, as well as defining how content can be shared among them.

Robert was followed by Pat Griffis of Dolby Labs, the VP of Education for SMPTE.  He gave a quick overview of the SMPTE Annual Technical Conference (ATC) being held in October.  The 100th-anniversary event will be at the Ray Dolby Theater in Hollywood, CA and will feature an opening-night Red Carpet event.

The two-day event covered traditional over-the-air broadcast as well as the addition of Internet-based streaming to mobile devices and computers.  A number of the discussions focused on the change from linear television, where programs are available on a set schedule, to on-demand viewing and “binge” watching of shows.  Sessions also covered how user-created content gets up to the cloud, and how that content then gets out to viewers globally.

The lunchtime keynote was from AMD, announcing their new consumer graphics card for VR applications.  The card, introduced by Roy Taylor, retails for $199 and is called the Radeon RX 480.  It has been qualified on all the major VR platforms: Oculus, Valve, and HTC.  The price point also makes the card viable for industries and IT departments that are rapidly adopting AR for their workflows.  The goal is to make the product available at a price acceptable not only to a large range of consumers, but also to both public and private schools.