The New York Times Mixed Reality Research

OVERVIEW

A 10-month research project by The New York Times’s R&D department exploring how journalists can leverage the capabilities of mixed reality headsets to add value to the way news is captured and delivered, both in the near term and in the more distant future.

I led the strategy and creative for the project, directing a cross-functional team of designers and engineers in creating a series of design and technical prototypes exploring key areas of opportunity for the company and journalism at large.

ROLE

Creative Lead

TEAM

Max Lauter, Fabio Piparo, Chloé Desaulles, Alexandre Devaux, Lydia Jessup, Jonathan Fletcher Moore, Nick Bartzokas, Matt Brown, Jon Chen

Project links

  1. Exploring The Future of Journalism For Mixed Reality Headsets

  2. Exploring Mixed Reality Tools for Journalists

  3. Exploring Design Patterns for Building Mixed Reality Stories

  4. Designing Mixed Reality Journalism For The Open World

 
 

The project was divided into five phases. Phase 1 consisted of understanding user needs within the organization and how MR could further The Times’s mission of seeking truth and helping people understand the world. Workshops, brainstorms, research, and interviews were conducted to gain an empathetic understanding of the problems MR could help solve. From there, four research areas were defined to investigate applications for the technology at The Times through MR prototypes.

 
 

The Foundational Elements of Mixed Reality Journalism

 

The team began by exploring the fundamental elements needed to create mixed reality experiences from the ground up. This included establishing the technical framework to build these types of experiences and exploring high-level questions of what MR can bring to journalism. How can hands-on interactions add depth to stories? How can we scale stories to the real world? How can your environment become part of the story? How can we design stories that work on any surface? How can we build stories readers can experience in the first person?

HoloLens 2 prototypes were made to explore these questions and to further define the research areas for the following project phases. For more on this foundational research, see the article “Exploring The Future of Journalism for Mixed Reality Headsets” on the NYTimes R&D website.

 

Representing how cough particles disperse in air at human scale can aid in reader comprehension of the importance of social distancing.

 

Scaling up and slicing through a coronavirus particle can aid in reader comprehension of the virus.

 
 

Mixed Reality Tools For Journalists

 

Mixed reality holds value in how news is delivered to and consumed by the public, but it also has untapped potential for how journalists could use the technology to capture news. MR technology can help expedite the gathering of information and the verification of what occurred. The team explored this area through a series of HoloLens 2 prototypes aimed at creating tools for journalists to capture and reconstruct news in the field.

 

A common space at The New York Times as seen by a Microsoft HoloLens during user testing.

 

Investigation & Annotation Tool

This HoloLens 2 prototype supplements reporters' workflow when investigating a news event on the scene by using an MR headset to quickly create a 1:1-scale 3D model of the location and place notes and annotations in space. This information is streamed to a web portal that can be accessed in real time or after a site visit to aid in verifying information, creating accurate graphical models of the scene, and publishing news stories.
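To make the streaming step concrete, here is a minimal sketch of what the headset-to-portal data flow could look like. The `/annotations` endpoint, field names, and types are all hypothetical; the prototype's actual schema and transport are not public.

```typescript
// Sketch of the annotation payload a headset client might stream to a
// web portal. The endpoint, field names, and types are hypothetical.

interface SpatialAnnotation {
  id: string;
  position: { x: number; y: number; z: number }; // meters, world space
  note: string;                                   // dictated or typed text
  mediaUrls: string[];                            // attached photos/videos
  capturedAt: string;                             // ISO 8601 timestamp
}

// Send a newly placed annotation to the portal so remote collaborators
// can see it in (near) real time.
async function publishAnnotation(
  portalBaseUrl: string,
  annotation: SpatialAnnotation,
): Promise<void> {
  const res = await fetch(`${portalBaseUrl}/annotations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(annotation),
  });
  if (!res.ok) {
    throw new Error(`Portal rejected annotation: ${res.status}`);
  }
}
```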

The team tested the tool at "The Morgue," The New York Times's photo archive, where we were able to spatially annotate a user's search for a specific photograph and create a rough 1:1 model of the room, which was later used to make a detailed model in Blender in the span of just a few hours.

 

Tool prototype showing spatial mapping and voice-based object annotation. Other features include photo and video capture, and marker placement and editing.

 
 

Photo taken with the investigation and annotation tool showing a reporter-placed spatial marker to tag a photo’s location within The Times’s archives.

 
 

The multicolored mesh of the Morgue shows what the HoloLens 2 captured, and the gray volumes show what the team was able to model from the HoloLens data.

 

Spatial Reconstruction Tool

Verifying what actually occurred during an event is an essential part of reporting. To explore how MR could aid in this process, the team built a WebXR prototype that explores a method for displaying and playing back media at the exact location it was captured.

This prototype enables an event’s chronology to be stitched together from footage taken by people who documented it from different vantage points. Such a method gives a reporter the ability to investigate a past event from multiple angles and moments in time, cross-referencing footage sources against one another and the physical environment to verify what happened in the context of the location.
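One way to picture the underlying mechanic: each clip is registered against a shared event timeline, so scrubbing one global time reveals which footage is active and where to seek within it. The sketch below is a simplified, hypothetical model of that alignment, not the prototype's code.

```typescript
// Sketch of aligning multiple clips on a shared event timeline so a
// reporter can scrub one global time and see which footage is active.
// Clip metadata and names are hypothetical.

interface Clip {
  id: string;
  startTime: number; // seconds since the event's reference time
  duration: number;  // seconds
}

// For a global scrub position, return each active clip together with
// the local playback offset to seek that clip to.
function activeClipsAt(
  clips: Clip[],
  globalTime: number,
): { id: string; localTime: number }[] {
  return clips
    .filter((c) => globalTime >= c.startTime && globalTime <= c.startTime + c.duration)
    .map((c) => ({ id: c.id, localTime: globalTime - c.startTime }));
}

// Example: two clips from different phones overlapping the same moment.
const clips: Clip[] = [
  { id: "phoneA", startTime: 0, duration: 30 },
  { id: "phoneB", startTime: 12, duration: 20 },
];
console.log(activeClipsAt(clips, 15));
// -> [ { id: "phoneA", localTime: 15 }, { id: "phoneB", localTime: 3 } ]
```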

A future version of this prototype could also enable site-specific experiences for readers to visit locations of significance and reconstruct that event through archival and found footage. 

For more on these prototypes, see the article “Exploring Mixed Reality Tools for Journalists” on the NYTimes R&D website.

The WebXR interface visualizes the paths and projections of multiple videos recording the same scene.

 

HoloLens 2 prototype showing a user scrubbing through the timelines and paths of two video clips that capture a person in black on a city street.

 
 

Design Patterns For Mixed Reality Stories

 

Exploring how to deliver news stories to the public in mixed reality was one of the most important elements of the project. Standardizing and scaling the delivery of news is a multiyear endeavor and is highly dependent on advances in the technology and widespread adoption of MR devices by the general public.

As an exercise and a step towards that eventual end goal, the team took the published interactive article “Why Ventilation Is a Key to Reopening Schools” and explored ways of adapting it to mixed reality. 

Using this story as a starting point, the team explored some of the key design questions at the heart of MR journalism: How do we make it as straightforward to consume MR journalism as it is to scroll a webpage? How do we balance user autonomy and narrative clarity? And how do we create moments of novel interaction that expand reader understanding? 

 
 

“Why Ventilation Is a Key to Reopening Schools,” New York Times, February 26, 2021.

 

Tabletop MR story container framework aka “the fish tank” showing the animated classroom 3D model and text from the article “Why Ventilation Is a Key to Reopening Schools.”

 
 

In mixed reality, very little standardization exists for presenting news. The team set out to investigate how to design stories that work in any space and can be produced at scale. Prototypes were made around the building blocks of bringing MR stories to life in a reader’s space: scene understanding; handoff to MR headsets from consumers’ devices (phone and computer); the narrative mechanics of moving through a story; user agency; interaction; and a tabletop framework for presenting news that is accessible and scalable for a large audience.
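As a rough illustration of the scene-understanding building block, here is a minimal WebXR hit-test sketch for anchoring the tabletop container to a real surface. The browser AR support and the placeStory() callback are assumptions; the team's actual implementation is not public, and a production version would likely wait for a user tap rather than the first detected surface.

```typescript
// Minimal WebXR hit-test sketch: find a real-world surface and anchor
// the tabletop story container ("fish tank") there. placeStory() is a
// hypothetical callback supplied by the rendering layer.

async function startTabletopPlacement(
  placeStory: (pose: XRPose) => void,
): Promise<void> {
  // Request an AR session with hit-testing enabled.
  const session = await navigator.xr!.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test", "local"],
  });
  const localSpace = await session.requestReferenceSpace("local");
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  // Poll each frame until the headset reports a surface hit.
  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource!);
    if (hits.length > 0) {
      const pose = hits[0].getPose(localSpace);
      if (pose) {
        placeStory(pose); // drop the story container at the surface pose
        return;
      }
    }
    session.requestAnimationFrame(onFrame);
  });
}
```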

 
 
 

These experiments were then brought together to create a passthrough experience for Quest 2 that plays the article on a user’s table from start to finish, with interactive moments that let readers go deeper into the story and better understand the subject matter.
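Start-to-finish playback with interactive pauses suggests a simple sequencer underneath. The sketch below is a hypothetical structure, not the team's code: a linear list of story beats, some of which auto-advance and some of which wait for the reader to engage.

```typescript
// Sketch of a linear story sequencer with interactive pauses, the kind
// of structure that could drive a start-to-finish tabletop article.
// Beat fields and behavior are hypothetical.

interface StoryBeat {
  id: string;
  text: string;           // narration or article excerpt shown in space
  autoAdvanceMs?: number; // advance automatically after this delay...
  interactive?: boolean;  // ...or wait for the reader to engage
}

class StorySequencer {
  private index = 0;
  constructor(
    private beats: StoryBeat[],
    private render: (beat: StoryBeat) => void,
  ) {}

  start(): void {
    this.show();
  }

  // Called when the reader completes an interactive moment (e.g. opening
  // a window in the classroom model to see the airflow change).
  advance(): void {
    if (this.index < this.beats.length - 1) {
      this.index += 1;
      this.show();
    }
  }

  private show(): void {
    const beat = this.beats[this.index];
    this.render(beat);
    if (!beat.interactive && beat.autoAdvanceMs) {
      setTimeout(() => this.advance(), beat.autoAdvanceMs);
    }
  }
}
```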

 
 
 

For more on these prototypes, see the article “Exploring Design Patterns for Building Mixed Reality Stories” on the NYTimes R&D website.

 
 
 

Designing Mixed Reality Journalism for the Open World

 

Mixed reality offers new ways for readers to learn about the physical spaces around them. As part of the team's research into headsets, we explored how to design location-based stories, surfacing Times journalism wherever readers roam in the open world.

One of mixed reality’s biggest strengths is its ability to sense and understand the world around the user, adding context to their space. What might news look like in this context? Using computer vision and geolocation, the team designed a prototype to explore how Times journalism might be delivered to readers in a way that rewards curiosity and exploration of their surroundings.

Mixed reality footage in Blender. 

 

Tracking a user’s HoloLens 2 video feed moving through 3D space towards the Washington Square Arch.

 

Prototype video of a user moving through Washington Square Park.

Using Washington Square Park as a test location, we placed Times articles in locations related to their content. In the future, this process could be automated using article tagging and/or AI to extract information from articles, videos, and images to determine the content's location and orientation in the world.

As a person walks up to an article anchor, an article preview is revealed. Since reading full articles on a headset is suboptimal with current technology, the article can be sent to the user's phone for ease of reading and comfort.
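A simple way to drive this reveal is a geofence check against each anchor. The sketch below uses a haversine distance test with a hypothetical 10-meter radius; the anchor format and threshold are assumptions, not the prototype's actual values.

```typescript
// Sketch of proximity-based reveal for geolocated article anchors.
// Anchor data and the reveal radius are hypothetical.

interface ArticleAnchor {
  headline: string;
  lat: number;
  lon: number;
}

const EARTH_RADIUS_M = 6_371_000;

// Great-circle distance in meters between two lat/lon points.
function haversineMeters(
  lat1: number, lon1: number,
  lat2: number, lon2: number,
): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Return the anchors close enough to the reader to reveal a preview.
function anchorsInRange(
  anchors: ArticleAnchor[],
  userLat: number,
  userLon: number,
  radiusM = 10,
): ArticleAnchor[] {
  return anchors.filter(
    (a) => haversineMeters(userLat, userLon, a.lat, a.lon) <= radiusM,
  );
}
```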

Mixed reality is inherently open-ended. Embracing open world design gives readers the control to choose how they interact with content and empowers them to see places from entirely new perspectives. For more on this prototype, see the article “Designing Mixed Reality Journalism for the Open World” on the NYTimes R&D website.

 
 

All images and videos from The New York Times R&D.