Wednesday, July 20, 2016

Guri VR: Virtual Reality for the Rest of Us

By Dan Zajdband


(Topher McCulloch, licensed under CC)

When Gustavo Cerati, a legendary Argentinian musician and songwriter, was asked to share his best advice for new musicians, he refused—saying instead that “experiences are not transferable.” You may or may not agree with his statement, but if you’ve ever worn an Oculus Rift or a similar virtual reality (VR) headset, you’ll know we are getting closer and closer to transferable experiences.

Journalism seems to be a good use case for VR technology. The New York Times has delivered a wide range of stories through their nytvr app, available on Android and iOS; the Washington Post took us to Mars; and The Guardian showed us what it’s like to survive solitary confinement.

On the other hand, companies like Google, Facebook, and Microsoft are pushing the limits of the technology in partnership with game makers, artists, and developers. We are still experimenting and trying to understand what virtual reality will look like in the near future.

With the rise of VR, Mozilla, Google, and other companies expressed interest in developing open standards for making the web the home of the VR ecosystem, which led to WebVR, a JavaScript API that provides access to virtual reality devices. The specification draft is available, and we can even use what is called a polyfill to try WebVR today, before the API is built into your mobile browser.

With WebVR, a web developer can access a wide range of device-specific information that’s needed to create VR apps: position, orientation, velocity, acceleration, field of view, and eye distance.
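To give a sense of what that looks like in code, here is a minimal sketch of detecting a headset and reading that data with the WebVR 1.0 draft API (the method and property names follow the draft, so they may change as the spec evolves):

// Minimal sketch: detect a VR display and read its pose (WebVR 1.0 draft API, or the polyfill mentioned above)
if (navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    if (!displays.length) return;
    var display = displays[0];
    var pose = display.getPose(); // position, orientation, velocity, acceleration
    var leftEye = display.getEyeParameters('left'); // field of view and eye offset
    console.log(pose.orientation, leftEye.fieldOfView, leftEye.offset);
  });
}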

I got really interested in the development of WebVR technology and the impact it could have on journalism. This led me to explore a variety of tools for making VR easier to develop, both for journalists and for developers without graphics programming knowledge.

Here are three tools for making VR on the open web, in descending order of required technical skill. Even though the first ones are harder for non-technical people to use, I’ll bet that if you can write just a couple of lines of JavaScript, you’ll be able to create amazing VR scenes.

A Note on Asset Generation and Usage

The first question I get in every workshop is, "How can I take my own pictures and videos?" And it makes a lot of sense. If we want to create our own experiences, we need to be able to capture our own world.

Taking 360° Panoramas

The preferred format for these 360° panoramas is called “equirectangular,” and the good news is that you can take them with your phone.

The Android camera app has a mode called Photo Sphere, with an active community around the feature. The Cardboard Camera app is also a good option, letting you record audio along with your panoramas.

On iOS, you can install the Google Street View app to take this kind of picture—there are more apps that will help you, but this one works really well.

If you don’t want to take your own pictures, you can always download one from the Flickr Equirectangular group. You can search among more than 15,000 equirectangular pictures (remember to credit the photographers and be careful with the licenses).

Taking 360° Videos

This gets tricky because you will usually need a special camera. Consumer 360° cameras are not perfect yet, but I can point to a couple that do the job:

  • The Ricoh Theta S and the Samsung Gear 360 are tiny 360° cameras that can make your life easier, and both come with their own apps and software for previewing, editing, and uploading videos. Battery life and heat are real issues, but you can get pretty decent videos.
  • For higher quality, VR makers sometimes use a rig with GoPros. There are a lot of different rig options, and more cameras and techniques are coming out.

Recording Audio for VR

Recording audio is easier, and you can use your regular audio recorder to do this. If you want to record in 360°, there are special mics, and you can take a look at the different options available.

Uploading Your Assets

Equirectangular pictures and videos are regular media files. Your panorama file can be opened and edited with your favorite photography app, meaning you can host your assets on the same services you already use for your images and videos.

The only catch is that if you host your pictures on a different domain than your VR app (your website), the hosting service needs to support CORS. If you don’t know what CORS is, don’t worry: you can upload your images to services like Imgur and your videos to a public Dropbox folder, and it will work. If you upload to AWS S3, make sure you have a CORS policy set up.
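For reference, a permissive S3 CORS policy is a small XML document you paste into the bucket’s permissions settings, roughly like this (the origin below is a placeholder you would replace with your own site):

<CORSConfiguration>
  <CORSRule>
    <!-- placeholder origin: replace with the domain that serves your VR app -->
    <AllowedOrigin>https://your-vr-site.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>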

WebVR Starter Kit

I was happy to discover that, thanks to the work of amazing developers, you can actually create simple VR scenes that work on the Oculus or Google Cardboard with just one or two lines of JavaScript or HTML.

The first project I want to highlight is the WebVR Starter Kit by Brian Chirls. This library allows developers (or people with very little JavaScript experience) to create VR scenes in seconds. The examples really show the power of the library.

The main idea is that by adding this library to your website (as a .js script), your page is automagically converted into a 3D VR scene. All you need to do is add objects (boxes, spheres, audio, panoramas, videos) with attributes like position, color, and material. You can even add simple animations.

You can, for example, play God and indicate that you want a wooden floor and a sky with two lines:

VR.floor().setMaterial('wood');

VR.sky(); 

Then you can add a green cylinder, position it, and tell everybody it’s yours:

VR.cylinder({ color: 'green' }).moveTo(0, 2, 0);

VR.text('This is my cylinder').moveTo(0, 1, 0);

Create a nice atmosphere for this little bunny video in just three lines of code.
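A rough sketch of what those three lines might look like, assuming the starter kit exposes a VR.video() constructor alongside the objects shown above (check the project’s README for the exact call; the file name and position here are placeholders):

VR.floor().setMaterial('wood');
VR.sky();
VR.video('bunny.mp4').moveTo(0, 2, -4); // placeholder video file, floated in front of the viewer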

A-Frame with HTML

After some time playing around with the amazing WebVR starter kit, I found out that a Mozilla team working on WebVR experiments, known as MozVR, was creating a framework that can be used by people who can’t write a single line of JavaScript.

It’s called A-Frame. The idea is to create VR scenes using HTML tags and properties, making the VR development process pretty much like building a website. The A-Frame project website is full of examples, the documentation is really good, and the community is active and growing. Inside every example on their website, there is a link to the source code.

This time, instead of using JavaScript to create our objects, we will use HTML-like tags. Most of the WebVR Starter Kit objects are available, along with handy extras like 3D model loaders and arrow controls for moving around inside the scene.

Let’s say we want a videosphere surrounding the scene and a box inside it. We would express that with the a-videosphere and a-box tags:

<a-scene>
  <a-assets>
    <video id="antarctica" autoplay loop="true" src="antarctica.mp4"></video>
  </a-assets>
  <a-videosphere src="#antarctica"></a-videosphere>
  <a-box color="red" depth="2" height="4" width="0.5"></a-box>
</a-scene>

As you can see, it’s really easy to create, for example, a spherical video with just a few lines of HTML.

Also, if you’re still not sure about the power of this tool, check out Mars: an interactive journey—a Washington Post VR experience made with A-Frame.

Guri VR, for Journalists and Non-Developers

After playing around with all these amazing tools available to create VR experiences, two questions popped into my mind:

  • Is there a way to add context to a story told with VR?

  • Is the A-Frame learning curve manageable for storytellers and journalists, or should I create something simpler for them?

Based on those questions, and after some experimentation, I created Guri VR. Guri is a set of tools for creating VR experiences from intuitive descriptions, targeted at journalists and other non-developers. The main tool is the Guri editor: an online tool that lets users describe in plain English what they want to experience and generates a shareable, embeddable link to the VR scene.

The output is an HTML file with autogenerated A-Frame markup. This is helpful if you want to create a prototype and then hand the code to a developer to modify.

You can play around with the editor at GuriVR.com, but let’s see how you can describe a basic scene. For example, I can write this into the editor:

My first scene lasts 5 seconds and has a skyblue background and text saying “This is my first scene”.

The second is 30 seconds and has just a panorama located at https://ucarecdn.com/8e6da182-c794-4692-861d-d43da2fd5507/ along with the audio https://ucarecdn.com/49f6a82b-30fc-4ab9-80b5-85f286d67830/

And this is the result.

Since one of the goals is to remove friction between users and VR generation, the editor includes a file uploader. To upload a file, you just drag it into the editor and choose where to place it based on the cursor position.


Since my goal was to present a friendly interface for VR-scene generation, I also wanted to interact with existing tools. For example, I started working on an A-Frame Chartbuilder component. You can feed the component the JSON output from Chartbuilder, and it will draw a 3D representation of the chart you just made. This works in any A-Frame scene, but I also added it to the Guri editor. There is also a guide to help you get started with the tool.
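In A-Frame, a component like this is attached to an entity as an HTML attribute. A purely illustrative sketch, with hypothetical attribute and property names (the real ones live in the component’s repo):

<a-scene>
  <!-- hypothetical names: the component would read the Chartbuilder JSON from "src" -->
  <a-entity chartbuilder="src: my-chart.json"></a-entity>
</a-scene>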


VR Tweetbot

Under the hood, Guri is an API that accepts a JSON file describing what we want and translates it into A-Frame, so the result can be easily modified afterward.
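Purely as an illustration of the idea, a description like the one typed into the editor above might translate into JSON along these lines (the field names here are hypothetical; the real schema is defined in the Guri VR repo):

{
  "scenes": [
    { "duration": 5,  "background": "skyblue", "text": "This is my first scene" },
    { "duration": 30, "panorama": "https://example.com/panorama.jpg", "audio": "https://example.com/narration.mp3" }
  ]
}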

Thinking about even easier ways to create a VR scene, I developed a proof of concept using Twitter. Using GuriVR and the Twitter API, I use @guri_vr as a VR bot. You can tweet an equirectangular picture, and if you mention @guri_vr in that tweet, it will tweet you back with the VR scene link embedded in a Twitter Card, so you can even watch the experience without leaving Twitter.

For now, it only works with a single picture, but it can be easily modified to allow multiple panoramas as scenes or intertitles.

Transferable Experiences, Open Web

Guri VR is open source, and it’s under heavy development. You can fork it and help make it great. I’m also looking for feedback from storytellers and other people interested in creating VR without coding skills.

VR opens up powerful new ways of telling meaningful stories, and it’s important to be able to prototype these stories easily. As with any new technology, there is a potentially steep learning curve and a lot of hype around VR. But I think that the right uses can be very beneficial for newsrooms and their readers, and by using open web standards, we are as close as possible to the public. I encourage you to try these tools—and see for yourself.



Read Full Story from Source https://source.opennews.org/articles/virtual-reality-rest-us/
This article by Dan Zajdband originally appeared on source.opennews.org on July 19, 2016 at 06:00PM
