The challenges of authoring content for VR

November 5, 2015

Virtual and Augmented Reality (and Computer Vision in general) tease us with an exciting future – new kinds of entertainment, new ways of interacting with the world around us, new ways of intersecting the real and digital worlds. It all sounds amazing, but as we speak to people who are trying to build high-end content for these platforms, we keep hearing the same problems.

“How do we build these experiences?” or “The tools we have for creating digital content are not designed to build these kinds of applications.” or “Working within a traditional 2D medium (screen/keyboard/mouse) doesn’t really hit the spot when you’re trying to build interactive worlds and stories.” and finally, “Rapid iteration happens when the viewing medium is the same as the editing medium – we want to be able to load a set into VR, make live edits to that data, and then get those changes back to our editing application of choice.”

Game engines are fantastic when it comes to playing back rich content, but they struggle when it comes to highly iterative authoring. A game engine is highly optimized to play back at the best fidelity possible, striking a balance between richness of content and the ever-critical framerate. This means that data is crunched and massaged to perform at the best possible level, which unfortunately means it is no longer in an editable state. That’s fine when all you need to do is play back and experience the content, but it is limiting when you want to edit that content and make changes to it. This is where Fabric Engine can help.

Rather than making this optimization pass on data, Fabric loads the data natively, meaning you can make changes and save those results back out. Fabric can also be extended to support pretty much any file type – we ship with support for Alembic and FBX (and have been working with USD for a while), but we provide the tools for you to support your own data types, or data types from other industries.
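To give a rough feel for that pattern, here is a minimal Python sketch of a lossless load/edit/save round trip on a toy point-cache format. The format, class, and function names are made up for illustration – this is not Fabric’s actual extension API, just the shape of the idea: data stays in an editable form, and edits go straight back to the source file.

```python
# Illustrative only: a toy round-trip reader/writer for an editable scene payload.
# The PointCache format and these function names are hypothetical, not Fabric's API.
import json
from dataclasses import dataclass, field


@dataclass
class PointCache:
    """A toy scene payload: named objects with editable point positions."""
    objects: dict = field(default_factory=dict)  # name -> list of [x, y, z]


def load_point_cache(path: str) -> PointCache:
    """Load the data as-is, keeping it in a lossless, editable form."""
    with open(path) as f:
        return PointCache(objects=json.load(f))


def save_point_cache(cache: PointCache, path: str) -> None:
    """Write edits straight back to the source format - no baking step."""
    with open(path, "w") as f:
        json.dump(cache.objects, f, indent=2)


# Round trip: load, nudge a point while "in" the scene, save the change back.
# cache = load_point_cache("set.json")
# x, y, z = cache.objects["prop_chair"][0]
# cache.objects["prop_chair"][0] = [x + 0.1, y, z]
# save_point_cache(cache, "set.json")
```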

Fabric can also be easily extended to support different libraries – which means we can work with just about any hardware device. In the video below we’re hooking up a range of hardware: Oculus, Xbox controllers, Xsens motion capture, Hydra controllers – and all of it is accessible through the Canvas visual programming system as well.
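As a rough illustration of the general idea (not the actual Canvas or KL API), here is a Python sketch: each device is wrapped as a node with named output ports, and the graph simply polls those ports every frame. The DeviceNode base class and the fake head tracker are hypothetical stand-ins.

```python
# Illustrative only: a hardware device wrapped as a node whose outputs a visual
# graph polls every frame. These class names are hypothetical, not the Canvas API.
import random
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]


class DeviceNode:
    """A node that exposes named output ports, sampled once per frame."""
    def evaluate(self) -> Dict[str, Vec3]:
        raise NotImplementedError


class FakeHeadTracker(DeviceNode):
    """Stands in for an HMD: reports a head position to drive the camera."""
    def evaluate(self) -> Dict[str, Vec3]:
        jitter = 0.005 * random.random()  # fake sensor noise
        return {"head_pos": (0.0, 1.70 + jitter, 0.0)}


def tick(nodes: Dict[str, DeviceNode]) -> Dict[str, Dict[str, Vec3]]:
    """One frame of the graph: pull the latest values from every device node."""
    return {name: node.evaluate() for name, node in nodes.items()}


# frame = tick({"hmd": FakeHeadTracker()})
# print(frame["hmd"]["head_pos"])
```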

So how does this all come together? Well, the first use case we decided to build towards was storyboarding in VR. We wanted to make it possible for a content creator to load in a scene and then mark up that scene with a ‘3D stroke’ system. And most importantly, we wanted to capture that data and bring it back into an authoring application like Autodesk Maya. We’re pretty excited by the results!
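To make the data flow concrete, here is a small Python sketch of one possible round trip: a stroke captured as sampled controller positions, written out as a MEL script that rebuilds each stroke as a linear curve in Maya. This is illustrative only – the capture side is a stand-in, and the actual pipeline shown in the video lives inside Fabric.

```python
# Illustrative only: turn sampled VR "3D stroke" points into a MEL script that
# recreates each stroke as a linear curve when sourced in Maya.
from typing import List, Tuple

Point = Tuple[float, float, float]


def stroke_to_mel(points: List[Point], degree: int = 1) -> str:
    """Emit a MEL `curve` command that rebuilds the stroke in Maya."""
    flags = " ".join(f"-p {x:.4f} {y:.4f} {z:.4f}" for x, y, z in points)
    return f"curve -d {degree} {flags};"


def export_strokes(strokes: List[List[Point]], path: str) -> None:
    """Write one curve command per stroke to a .mel file."""
    with open(path, "w") as f:
        for stroke in strokes:
            f.write(stroke_to_mel(stroke) + "\n")


# export_strokes([[(0, 0, 0), (0.1, 0.2, 0.0), (0.2, 0.5, 0.1)]], "strokes.mel")
```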

Then we started playing around with different ideas. Could we hook up a motion capture suit and use it with the strokes system? Could we build a set dressing system so you could walk a virtual set and make live edits to it? How about a complete VCS system? The answer was ‘yes’ to all of these questions!
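For the set dressing case, one way to picture the data is as a list of per-object overrides recorded during the VR session, rather than a baked copy of the scene – the override list is what travels back to the authoring application. The sketch below uses hypothetical names, not Fabric’s API.

```python
# Illustrative only: in-VR set-dressing edits recorded as per-object transform
# overrides that can be replayed onto the original scene in Maya.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class TransformOverride:
    object_name: str
    translate: Vec3
    rotate_deg: Vec3 = (0.0, 0.0, 0.0)


@dataclass
class EditSession:
    """Collects edits made while walking the virtual set."""
    overrides: List[TransformOverride] = field(default_factory=list)

    def move(self, name: str, translate: Vec3) -> None:
        self.overrides.append(TransformOverride(name, translate))


# session = EditSession()
# session.move("prop_lamp", (0.25, 0.0, -0.1))
# session.overrides is what gets sent back to the authoring application.
```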

What’s especially compelling here is that these tools were built by one engineer in a few days. Fabric provides the rich foundation that allows you to spend time building specific functionality rather than spending time on well-solved problems.

So what’s coming next? Right now this is a prototype and we’re looking for studios and companies (in all industries) that are keen to work with us on these tools. If you have the resources to get involved, please drop us a line at info@fabricengine.com.