Challenging Tech Requirements
The technology requirements for producing live-motion virtual reality journalism are burdensome, non-synergistic, rapidly evolving, and expensive.
First, the capture technology is burdensome, primarily because the necessary cameras are either DIY, prototype, or very high-end and proprietary. This project used a DIY camera setup, which was challenging both to operate and to train our filmmaker to use. Having 12 separate cameras in the rig also introduced extra risks around equipment malfunction, storage, and battery life, particularly when shooting in challenging settings.
Second, the suite of cameras, editing software, and viewing devices necessary to produce a live-motion virtual reality product is non-synergistic. Virtual reality components are produced by a range of different companies, and often involve experimental and DIY hacks. This means there is no common workflow or suite of products that integrate well, and the production demands a broad range of specialist technical expertise.
Third, the many technologies for creating and distributing live-motion virtual reality are rapidly evolving. During the course of this one project, we learned of new production cameras being developed and many new headsets—ranging from the sub-$10 Google Cardboard to the high-end Oculus consumer release. Google is developing a full-process, integrated suite including a new camera, auto-stitching program, and YouTube VR player. While this is exciting, the shifting landscape makes editorial and production planning very difficult. Technology and process decisions made early on can have a tangible effect on audience reach and relevance months down the line.
Finally, nearly every stage of the VR process is currently very expensive. This is particularly the case when working with CGI, which this project did. The high cost of the camera came from its hardware components, as well as compensation for the development team that designed, built, and refined the kit. Post-production requires a range of technical expertise for the stitching process, the programming of CGI components, and navigation design. This particular expertise is in high demand, so hourly rates are commensurately elevated.
The technology and steps used in the Ebola documentary project (detailed in the appendices) will be useful as a starting point for other teams who want to produce documentary VR, especially those incorporating immersive, 360-degree, live-action video. However, any project starting now would likely benefit from more recently released equipment and newly developed techniques. That being said, some high-level points seem worth reviewing. It is helpful to divide this process into three stages: capture, digital and post-production, and dissemination.
Stage 1: Capture
First is video capture. Using a camera rig of 12 GoPros introduced huge labor and logistical challenges in the field and in post-production. We were only able to produce a high-quality, immersive video under quite narrow circumstances.
Director Dan Edge reported difficulties in the field: The GoPros aren’t designed to work in synchronization, nor is it possible to coordinate exposure and color. The cameras require individual charging cables and it’s laborious to turn them on and off.
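One common field workaround for cameras that cannot record in hardware synchronization is to capture a shared marker (a clap or slate) and trim each clip so the marker lines up in post. The sketch below illustrates the offset arithmetic only; the camera names and timestamps are hypothetical, and this is not a description of the production team's actual method.

```python
# Hypothetical clap timestamps (seconds into each clip) for a 12-camera rig.
# In practice these come from audio waveform inspection or automated analysis.
clap_times = {
    "cam01": 3.21, "cam02": 2.98, "cam03": 3.40, "cam04": 3.05,
    "cam05": 3.33, "cam06": 2.87, "cam07": 3.12, "cam08": 3.50,
    "cam09": 2.95, "cam10": 3.28, "cam11": 3.18, "cam12": 3.02,
}

def trim_offsets(clap_times):
    """Seconds to trim from the head of each clip so all clips align.

    The clap is a single real-world event, so a camera that started
    recording earlier sees it later in its own clip. Aligning everything
    to the latest-starting camera (smallest clap timestamp) means
    trimming the difference from each of the others.
    """
    earliest = min(clap_times.values())
    return {cam: round(t - earliest, 3) for cam, t in clap_times.items()}

offsets = trim_offsets(clap_times)  # cam06 needs no trim; cam08 loses 0.63 s
```

With 12 clips per scene, even this simple alignment step must be repeated for every take, which is part of why the rig's lack of built-in synchronization was so costly in practice.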
The size, appearance, and unwieldiness of the camera are in marked contrast to other modern recording devices, which can be flexible and unobtrusive. This reduces the types of locations where footage can be easily collected (e.g., conflict settings, or places where discretion is paramount, such as hospitals). These restrictions pose a significant challenge for documentary filmmaking, where access and actuality are key.
There is some hope on this front, however. Participants in the VR marketplace are developing more integrated and elegant cameras. Some keep the strategy of using many lenses in left-and-right pairs of "eyes." (Paired lenses are necessary for stereoscopic footage, which can theoretically produce greater fidelity, and therefore immersion and presence.) Others are aiming for high-quality, 360-degree video with one lens per direction, abandoning stereoscopy in favor of simplicity, lower cost, and lower weight. It remains to be seen whether either strategy will become the norm, or whether both will remain viable depending on the videographer's context.
However, when this production started, the team reluctantly judged that a custom-built, 12-camera rig was the only viable strategy. One alternative was to work with a camera-making company that could also perform the stitching and authoring, but excluded outsiders from that process. The production team, which viewed authoring as a particularly important journalistic process, could not envision removing itself from that step. This view has only grown stronger. A second alternative, which has become more viable in the rapidly developing industry, would be to use a commercially released camera that produces 360-degree video, but not stereoscopic video. The question of whether stereoscopy is crucial to the effectiveness of the experience remains unanswered; however, the authors have spoken to teams planning current productions that have avoided it because of its high production overhead.
Stage 2: Digital and Post-Production
Second is the digital and post-production process. This VR project required significant post-production effort working with newly combined technologies. The workflows and tools are documented in the appendices, and the authors believe that those sections will be particularly useful for future producers.
The camera strategy had implications for the post-production process. Recording HD video with 12 GoPro cameras produces many gigabytes of data. Organizing those files, moving them from the field to the studio, storing them—each of these steps becomes more onerous as the amount of video data increases. The process of stitching the video together, and making the visual quality consistent, is likewise laborious. At the moment, our production team could find no technology capable of doing the job without very large amounts of human effort to supplement the computational pass.
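To make "many gigabytes" concrete, a back-of-envelope estimate of per-shoot footage volume can be sketched as follows. The bitrate and duration figures are illustrative assumptions, not figures from this production:

```python
def shoot_footage_gb(cameras=12, minutes=60, mbps=30.0):
    """Rough footage volume in gigabytes for one shoot.

    cameras: number of simultaneous camera streams in the rig
    minutes: recording time per camera
    mbps:    per-camera bitrate in megabits per second (HD action-camera
             modes commonly record in the tens of Mbit/s; 30 is an
             assumption, not a measured figure)
    """
    megabits = cameras * minutes * 60 * mbps
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes

# One hour of 12-camera recording at 30 Mbit/s per camera is roughly
# 12 * 3600 * 30 / 8 / 1000 = 162 GB, before any stitched or
# intermediate files are generated in post-production.
```

Even under these modest assumptions, a single hour of rolling cameras yields on the order of a hundred-plus gigabytes, which is why file organization, transfer from the field, and storage each became onerous steps.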
Working with 12 separate video streams for each stereoscopic, 360-degree scene required vast, specialized human effort. The resulting costs and delays pose a particular challenge to journalistic applications, which rely on timely release to audiences.
Stage 3: Dissemination
Third is product dissemination. The VR user base is currently relatively small and divided across a number of platforms. As detailed earlier, each platform has a unique set of strengths and weaknesses, such as the accessibility of the Google Cardboard versus the processing power of the Oculus Rift DK2.
The producers weighed the strengths of each platform and chose to release the VR experience for the Samsung Gear VR and Google Cardboard devices. While these devices lack the raw processing power of Oculus's DK2, and advanced features like positional tracking (a camera that tracks the user's head position in 3D space and reflects that position in VR), the Gear VR and Google Cardboard offer the greatest ease of use and accessibility to the growing VR audience.
The Gear VR and Google Cardboard head-mounted displays use individuals' smartphones for VR playback rather than a high-end PC like the DK2. This creates a much larger potential audience than the DK2. They also require less setup and configuration, and aren't tethered to a computer, making their use less arduous and intimidating to the average viewer. Knowing that the audience for our project would skew more toward the mainstream than the gamer-heavy demographics of the DK2, the producers felt it was of great importance that the VR equipment itself could get into as many hands as possible and be accessible to a relatively less tech-savvy audience.
The distribution networks for the Gear VR and Google Cardboard are also more mature than that of the DK2. Google Cardboard apps are distributed through the existing iOS and Android app stores, and the recently launched Gear VR app store follows a standardized installation process and hardware requirements, unlike the Oculus Share store for the DK2.
Finally, the Gear VR and Google Cardboard enjoy one hardware advantage over the DK2: the higher screen resolution of newer smartphones. Because the producers decided to focus on a more linear, narrative-driven experience, rather than the high levels of interactivity and user input that demand more processing power, the higher-resolution screen was a greater benefit than the features the DK2 could provide.