
Augmented Reality in Cinema

Tech
Apr 18, 2018


BY: DAVID CIHELNA

action!

Augmented Reality (AR) is one of the fastest-growing technologies this year, revitalized by cheap computing power and advances in consumer technology. We’ve been dreaming about AR through movies and sci-fi for a long time – now let’s explore how our favorite cinematic portrayals of AR match up with current technology.

A common AR trope in sci-fi is projecting 3D content into the world.

IRON MAN (2008)

Fact is, there is currently no way of doing this at the scale and flexibility presented in movies like Iron Man, and it doesn’t look like there will be any time soon. Researchers have recently used lasers to build 3D visuals. There have been attempts to use mist and multiple projectors. We can create visual illusions that appear as holograms using smartphone tricks. Still, these are very specific illusions – they aren’t useful everyday tools – and none of them are user-friendly, scalable or especially good. For the foreseeable future, we’re sticking with tools that attach to your body to augment the world around us – whatever those may be: glasses, headsets, transparent screens or contact lenses.

This is what AR should look like.

AVATAR (2009)

Photorealistic renders with all the data you need or want. Avatar raises a core issue we are facing in AR at the moment – who’s making all the content? Making AR relevant to consumers requires more than just low-res shapes. In most movies using AR (or any tech, really), characters merely press a button and a beautiful 3D rendering appears. To have photorealistic, flexible 3D assets to use in AR, we need the tools or libraries to deliver them. We’d need useful large-scale 3D sensors around the world streaming their data. A bunch of startups are working on making 3D content accessible – Google Poly, 8i, Geopipe and more. But we’re nowhere near the point of being able to “just pull up” an area of a forest in real time.

Star Wars has been doing AR since its inception.

STAR WARS (1977 – 2019)

Its blue low-fidelity projections are all over the story world. One of the most exciting elements, however, is the concept of multi-user real-time AR – in essence, teleconferencing in physical space.

Multiplayer AR is incredibly tricky. The bulk of the problem lies in sharing representations of physical space between devices. Your phone can’t match its position with other phones without some anchor in the real world. This anchor might be a precise location, a marker, an object, or a shared 3D representation of the space it’s in. The problem at the moment is how to efficiently share this data between devices to create a common space to play in.
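To make the coordinate-frame part of that concrete, here’s a minimal sketch: if every device can see one common anchor and knows its own pose relative to it, object positions can be expressed in the anchor’s frame and shared. This is purely illustrative – it assumes ideal anchor detection, uses plain numpy, and doesn’t correspond to any particular AR SDK’s API.

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

def to_shared_frame(anchor_in_device: np.ndarray, point_in_device: np.ndarray) -> np.ndarray:
    """Re-express a point from one device's local frame in the shared anchor frame.

    anchor_in_device: 4x4 pose of the common anchor as seen by this device.
    point_in_device:  xyz position of a virtual object in the device's frame.
    """
    device_to_anchor = np.linalg.inv(anchor_in_device)
    p = np.append(point_in_device, 1.0)          # homogeneous coordinates
    return (device_to_anchor @ p)[:3]

# Device A places a virtual object one metre in front of itself. Expressed relative
# to the shared anchor, any other device that can see the same anchor can render
# the object in the right place.
anchor_seen_by_a = pose_matrix(np.eye(3), np.array([0.5, 0.0, 2.0]))
object_in_a = np.array([0.0, 0.0, 1.0])
print(to_shared_frame(anchor_seen_by_a, object_in_a))   # position in the anchor frame
```

The hard part in practice isn’t this math – it’s getting two devices to agree on where that anchor is in the first place.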

That said, it’s happening this year. Niantic (Pokemon Go’s developer) purchased Escher Reality, a startup that allows multiple users to play together in AR. Startups like Ubiquity6 are also working on high-fidelity room tracking that is shared between devices, slated for release in 2018.

One of the most creative uses of AR in movies.

BLADE RUNNER 2049 (2017)

The virtual body-swapping scene in Blade Runner 2049 shows a virtual avatar using an actual human body as a stand-in. It’s the enhanced full-body version of a Snapchat face filter.

Tracking body features in real time is entirely possible. It’s been done before with a variety of tech – from computer vision and motion capture to Kinect sensors. It wouldn’t take much more to layer a virtual character on top of it; anyone wearing an AR headset would then see the two together.
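As a rough sketch of that pipeline, under the assumption that some tracker (a learned keypoint model, a mocap suit, a depth sensor) hands you 2D body keypoints per frame, the overlay step is mostly retargeting those keypoints onto a rigged character. Everything here is a stand-in, including the hard-coded “detections”:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Keypoints = Dict[str, Tuple[float, float]]   # joint name -> (x, y) in image space

@dataclass
class Avatar:
    """A toy rigged character: joints are just named 2D positions."""
    joints: Dict[str, Tuple[float, float]] = field(default_factory=dict)

    def retarget(self, keypoints: Keypoints) -> None:
        # Snap each avatar joint onto the tracked human's matching keypoint.
        self.joints.update(keypoints)

def estimate_pose(frame) -> Keypoints:
    """Stand-in for a real-time pose estimator; a real tracker would infer these from the image."""
    return {"head": (320.0, 80.0), "left_hand": (210.0, 240.0), "right_hand": (430.0, 235.0)}

avatar = Avatar()
for frame in range(3):                       # pretend each int is a camera frame
    avatar.retarget(estimate_pose(frame))
    # A renderer would now composite the character over the live body at these joints.
    print(frame, avatar.joints["head"])
```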

 

This one is kind of meta.

GAMER (2009)

Gamer is about prisoners who are forced to become remote-controlled, real-life video game characters. The prisoners are “played” by kids at home on giant screens. It’s technically just a Twitch-like live stream, but it’s also real, and content is augmented on top of it: a multi-layered AR/live-stream experience. An experience like this would rely heavily on fast, reliable real-time data – from location tracking (knowing where all the players are) and stable live video streams to knowing which gun a person is holding or what their vital signs are. Most importantly, it would require the processing power to recognize and track content from the video feed. We’d need some spatial understanding of the scene in front of us. Using machine learning and computer vision, we can track other players and whatever else we are “looking” for in the field of view.
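A toy illustration of that last step – naive tracking-by-detection, where an ML detector would normally supply the per-frame player positions (they’re hard-coded here so the sketch runs on its own):

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def assign_ids(tracks: Dict[int, Point], detections: List[Point],
               max_dist: float = 50.0) -> Dict[int, Point]:
    """Match each detection to the nearest known player if it is close enough,
    otherwise start a new track. Real systems use far more robust matching."""
    updated: Dict[int, Point] = {}
    next_id = max(tracks, default=0) + 1
    for det in detections:
        best_id, best_d = None, max_dist
        for pid, pos in tracks.items():
            d = math.dist(pos, det)
            if d < best_d and pid not in updated:
                best_id, best_d = pid, d
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        updated[best_id] = det
    return updated

# Two frames of fake detector output: two players moving slightly between frames.
frames = [[(100.0, 200.0), (400.0, 180.0)],
          [(110.0, 205.0), (395.0, 185.0)]]
tracks: Dict[int, Point] = {}
for detections in frames:
    tracks = assign_ids(tracks, detections)
    print(tracks)   # stable player IDs persist across frames
```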

However, this might need to go further than that. It may need to rely on physical sensors in the arena and in objects that stream live data back to us: tracking player locations with beacons, which direction their guns are pointed, when they pull the trigger, and so on. A crazy future version of what The Void is doing with its VR arcades. Here’s the thing, though – dystopian concept aside, we really can build something like this.
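Purely as a hypothetical, one of those sensor updates could be as simple as a small message fused with the video feed downstream – every field name here is invented for illustration:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PlayerTelemetry:
    """One update streamed from in-arena sensors; all fields are illustrative only."""
    player_id: int
    position: tuple          # (x, y, z) from beacons or tags, in metres
    gun_heading_deg: float   # which way the prop gun is pointed
    trigger_pulled: bool
    heart_rate_bpm: int
    timestamp: float

update = PlayerTelemetry(
    player_id=7,
    position=(12.4, 0.0, 3.1),
    gun_heading_deg=87.5,
    trigger_pulled=False,
    heart_rate_bpm=112,
    timestamp=time.time(),
)

# Serialized as JSON, this is the kind of lightweight message a broadcast layer
# could combine with the live stream to drive the augmented overlay.
print(json.dumps(asdict(update)))
```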

 

A graduate of NYU Tisch ITP and a fellow at NYU’s Cinema Research Institute, David Cihelna is an immersive director and entrepreneur based in Los Angeles.