VR with a Gamepad Sucks

I was kind of bummed my first day of Oculus Connect 2.


Last year’s Oculus Connect was revelatory to me. I had worked on two different Gear VR titles at the time, yet the Crescent Bay demo was incredible by comparison. From Oculus’ own vignette demos to Epic’s Showdown sequence, the leap in quality from DK2 to Crescent Bay was astounding. Everyone walked out of that demo with a huge smile on their faces.

The first demos I tried at OC2 were the gamepad demos. Oculus spent an absurd amount of time at their E3 keynote talking about how amazing it was that they were launching with the Xbox 360 controller as the input device. At Oculus Connect, I put this claim to the test.

Every game from EVE Valkyrie to Edge of Nowhere felt like playing a regular video game strapped to my face. I felt like I was playing an Xbox One through binoculars. In fact, a few of the games made me a little queasy, which I’m usually not susceptible to.

Maybe I’m just jaded from developing gamepad VR experiences on Gear VR for so long, I thought.

Later on I tried Toybox, which is a cool tech demo but doesn’t really illustrate how you’d play an actual game for any length of time with the Touch controllers. In fact, I found the controllers a little hard to use compared to the Vive’s. They have tons of confusing buttons, and getting the finger gestures right took a little bit of work.

I was leaving the demo area and getting ready to head home when a friend of mine who works for Oculus stopped to ask what I thought. I told him honestly that I felt last year’s demos were better: more immersive and more interesting. He was a little taken aback by my impressions, but strongly suggested I come by the next day for the new set of demos. He couldn’t tell me what they were, but promised they’d be awesome.

The Oculus Connect app sent a notification alerting me that new demo registrations would be available at 8 AM. I set my alarm and woke up the next morning to register for the Touch demos via my iPhone. I promptly slept through the keynote and arrived on the scene at noon for my demo.

We were only allowed to try two games, and it was heavily suggested I try Epic’s “Bullet Train” experience. Having not seen the keynote, I had no idea what I was getting into.

Bullet Train is mind blowing.

Bullet Train is essentially Time Crisis in VR. When I saw the Showdown demo last year I thought a game like this in VR would be a killer app. One of my favorite coin-ops of all time is Police 911–which motion tracks your body with a pair of cameras to duck behind obstacles. I thought doing this in VR would be amazing. However, last year there were no hand tracking controls–it was just a vague idea.

Here, Epic took the Touch controllers and built an incredible arcade shooter that could be a killer app if they choose to develop it further. Oculus needs to do everything in their power to get Epic to produce this as a launch title for the Touch controllers.

The Touch controls make all the difference. From handling weapons and grenades to plucking bullets out of the air in slow motion, Bullet Train really drives home how flexible they are. Unlike the Vive’s controllers, which feel like holding a set of tools, these let you reach out and grab stuff, even pump a shotgun.

The combination of standing in a motion-tracked volume and visceral interaction with the world using your hands, even with Touch’s primitive finger-gesture technology, immerses you in an experience way beyond what’s possible sitting in a chair with an Xbox controller.

It’s disappointing that Touch won’t launch with Oculus’ headset. Hand tracking is absolutely required for a truly immersive experience. Developing games that support both gamepad and Touch controls is going to be difficult without diluting features for one or the other. I’ve experienced similar issues developing games that work with both Gear VR’s touchpad and a Bluetooth gamepad.

I left Oculus Connect 2 reinvigorated, with the feeling that VR with hand tracking is the One True VR Experience. The gamepad is fine for mobile VR at the moment, but all of my PC and console VR projects are now being designed around Touch and hand-tracked input. It’s the only way!

How To Support Gear VR and Google Cardboard In One Unity3D Project

Google Cardboard is a huge success. Cardboard’s userbase currently dwarfs that of Gear VR. Users, investors, and collaborators who don’t have access to Gear VR often ask for Cardboard versions of my games. As part of planning what to do next with Caldera Defense, I decided to create a workflow to port between Gear VR and Cardboard.

I keep a Cardboard on me at ALL TIMES!

I used my VR Jam entry, Duck Pond VR, as a test bed for my Unity3D SDK switching scripts. It’s much easier to do this on a new project. Here’s how I did it:

Unity 4 vs. Unity 5

Google Cardboard supports Unity 4 and Unity 5. Although Oculus’ mobile SDK will technically work in Unity 5, you can’t ship with it: bugs in the current version of Unity 5 cause memory leaks and other issues on Gear VR hardware. Unity is working on a fix, but I haven’t heard an ETA for Gear VR support in Unity 5.

This is a bummer, since the Cardboard SDK for Unity 5 supports skyboxes and other features on top of the general improvements Unity 5 has over 4. Unfortunately, you’re stuck with Unity 4 when making a cross-platform Gear VR and Cardboard app.

Dealing With Cardboard’s Lack of Input

Although Gear VR’s simplistic touch controls are a challenge to develop for, the vast majority of Cardboards have no controls at all! Yes, Google Cardboard includes a clever magnetic trigger for a single input event. Yet the sad fact is that many Android phones can’t reliably detect it.

You have a few other control options that are universal to all Android devices: the microphone and Bluetooth controllers. By keeping the microphone open, you can use loud sounds (such as a shout) to trigger an action; something like the Pitch Detector plug-in can help here. Or, if your Cardboard has a head strap for hands-free operation, you can use a Bluetooth gamepad for controls.
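The shout trigger can be sketched with Unity’s built-in Microphone API. This is a rough illustration, not the Pitch Detector plug-in mentioned above; the loudness threshold is a made-up number you’d tune per device.

```csharp
// Rough sketch of a shout trigger: keep the mic open, sample recent
// audio, and fire when average loudness crosses a threshold.
using UnityEngine;

public class ShoutTrigger : MonoBehaviour
{
    public float threshold = 0.3f;   // illustrative value; tune per device
    AudioClip mic;

    void Start()
    {
        // Loop a 1-second buffer from the default microphone.
        mic = Microphone.Start(null, true, 1, 44100);
    }

    void Update()
    {
        int pos = Microphone.GetPosition(null);
        if (pos < 256) return;

        // Average absolute amplitude over the last 256 samples.
        var samples = new float[256];
        mic.GetData(samples, pos - 256);
        float sum = 0f;
        foreach (var s in samples) sum += Mathf.Abs(s);

        if (sum / samples.Length > threshold) OnShout();
    }

    void OnShout() { Debug.Log("Shout detected!"); }
}
```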

Because of this general lack of input, many Cardboard apps use what I call “stare buttons” for GUIs. These are buttons that trigger if you look at them long enough. I’ve implemented my own version. The prefab is here, the code is here. It even hooks into the new Unity UI event system so you can use it with my Oculus world space cursor code.
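The idea behind a stare button is simple dwell timing. Here’s a minimal sketch of the concept; it’s not the actual prefab code linked above, and the class name, dwell time, and gaze-enter/exit hooks are all illustrative.

```csharp
// A "stare button": fires a UI click event after the user's gaze has
// dwelled on it long enough. Wire OnGazeEnter/OnGazeExit to whatever
// raycaster follows the camera's forward vector.
using UnityEngine;
using UnityEngine.EventSystems;

public class StareButton : MonoBehaviour
{
    public float dwellTime = 2f;   // seconds of continuous gaze required
    float gazeTimer;
    bool gazedAt;

    void Update()
    {
        if (!gazedAt) return;
        gazeTimer += Time.deltaTime;
        if (gazeTimer >= dwellTime)
        {
            // Route through the Unity UI event system so normal Button
            // onClick handlers work unmodified.
            ExecuteEvents.Execute(gameObject,
                new PointerEventData(EventSystem.current),
                ExecuteEvents.pointerClickHandler);
            gazedAt = false;
            gazeTimer = 0f;
        }
    }

    public void OnGazeEnter() { gazedAt = true; gazeTimer = 0f; }
    public void OnGazeExit()  { gazedAt = false; gazeTimer = 0f; }
}
```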

Gear VR apps must be redesigned to fit within Cardboard’s constraints, whether that’s the limited controls or the performance of low-end devices. Most of my Cardboard ports are slimmed-down Gear VR experiences. In the case of Caldera Defense, I’m designing a simplified auto-firing survival mode for the Cardboard port. I’ll merge this mode back into the Gear VR version as an extra game mode in the next update.

Swapping SDKs

This is surprisingly easy. You can install the Cardboard and Gear VR SDKs in a single Unity project with almost no problems. The only conflict is that they both overwrite the Android manifest in the plugin folder. I wrote an SDK swapper that lets you switch between the Google Cardboard and Oculus manifests before you do a build. You can get it here. This editor script has you pick where each manifest file lives for Cardboard and Gear VR, and simply copies the appropriate file into the plugin folder. Of course, for iOS Cardboard apps this isn’t an issue.
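The core of the swap is just a file copy plus an asset refresh. Here’s a minimal sketch of that editor script; the menu item names and manifest paths are examples, not the ones from my actual tool.

```csharp
// Editor-only sketch: copy the chosen SDK's AndroidManifest.xml into
// Plugins/Android before building. Point the source paths at wherever
// you keep each SDK's manifest.
using System.IO;
using UnityEditor;
using UnityEngine;

public static class ManifestSwapper
{
    const string Target = "Assets/Plugins/Android/AndroidManifest.xml";

    [MenuItem("Build/Use Gear VR Manifest")]
    static void UseGearVR() { Swap("Assets/Manifests/GearVR/AndroidManifest.xml"); }

    [MenuItem("Build/Use Cardboard Manifest")]
    static void UseCardboard() { Swap("Assets/Manifests/Cardboard/AndroidManifest.xml"); }

    static void Swap(string source)
    {
        File.Copy(source, Target, true);   // overwrite the active manifest
        AssetDatabase.Refresh();           // let Unity pick up the change
        Debug.Log("Active Android manifest: " + source);
    }
}
```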

Supporting Both Prefabs

Both Oculus and Cardboard have their own prefabs that represent the player’s head and eye cameras. In Caldera Defense, I originally attached a bunch of game objects to the player’s head to use for traces, GUI positioning, HUDs, and other things that need the player’s head position and orientation. In order for these to work with both Cardboard’s and Oculus’ prefabs, I placed all objects attached to the head on another prefab, which is attached to the Cardboard or Oculus head model at runtime.

Wrapping Both APIs

Not only do both SDKs have similar prefabs for the head model, they also have similar APIs. In both the Cardboard and Oculus versions, I need to refer to the eye and head positions for various operations. To do this, I created a simple class that detects which prefab is present in the scene and grabs the respective class to wrap the eye position reference around. The script is in the prefab’s package.
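The detection logic can be as simple as looking for each SDK’s rig by name and exposing a single head transform. This is a sketch of the idea, not my actual script; the object names below match the prefabs each SDK shipped at the time, but treat them as assumptions.

```csharp
// Detect which SDK's camera rig is in the scene and expose one head
// transform for traces, HUD placement, and so on.
using UnityEngine;

public class VRHead : MonoBehaviour
{
    public static Transform Head { get; private set; }

    void Awake()
    {
        // Try the Oculus mobile rig first, then the Cardboard rig.
        var ovr = GameObject.Find("OVRCameraRig");   // Oculus SDK prefab
        var gvr = GameObject.Find("CardboardMain");  // Cardboard SDK prefab

        if (ovr != null)
            Head = ovr.transform.Find("TrackingSpace/CenterEyeAnchor");
        else if (gvr != null)
            Head = gvr.transform.Find("Head");
        else
            Head = Camera.main.transform;            // plain-camera fallback
    }
}
```

With this in place, gameplay code asks for `VRHead.Head` and never touches either SDK directly.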

Conclusion

For the final step, I made separate Cardboard versions of all my relevant Gear VR scenes, which include the Cardboard prefabs and the modified gameplay and interfaces. If no actual Oculus SDK code is referenced in any of the classes used in the Cardboard version, the Oculus SDK should be stripped from that build and you’ll have no problem running on Cardboard. This probably means I need to make Oculus- and Cardboard-specific versions of that CameraBody script.

The upcoming Unity 5.1 includes native Oculus support, which may make this process a bit more complicated. Until then, these steps are the best way I’ve found to support both Cardboard and Gear VR in one project. I’m a big fan of mobile VR, and I think it’s necessary for any developer at this early stage of the market to get content out to as many users as possible.

Why I’m All In On Mobile VR

Last month I released Caldera Defense, a virtual reality tower defense game, on Gear VR. This is the second Gear VR title I’ve worked on, and the first I’ve built and published from the ground up (not including my Oculus Mobile VR Jam submission). Caldera Defense is a free early access demo, basically a proof of concept of the full game, and the reaction has been great. Thousands of people have downloaded it, rated it, and given us valuable feedback. We’re busy incorporating it into the first update.

Caldera Defense featured on the Gear VR store

Originally I planned to use this as a demo to fund an expanded PC and Morpheus launch version of the game with greatly improved graphics, hours of gameplay, and additional features such as multiplayer and second-screen options.

However, pitching even a modestly budgeted console or PC VR game to publishers, or even to the platform holders themselves, is a tough sell. I’m sure we’ll see all sorts of AAA VR announcements at E3 next month. Yet many traditional funding avenues for games remain skeptical of the opportunity VR presents.

Since the Caldera project began last year, mobile VR has morphed into a unique opportunity. With over a million Google Cardboards in the wild and new versions of the Gear VR headset in retail stores worldwide, there will be millions of mobile VR users before there are comparable numbers on Oculus desktop, Vive, and Morpheus.

Is it possible that mobile VR will be a viable business before it is on PC and consoles? Most of my colleagues are skeptical. I’m not.

The economics work out. Due to the mobile nature of the experience, games and apps for these platforms tend toward the bite-sized. This greatly reduces the risk of mobile VR: assets optimized for mobile are simpler, and casual VR experiences require less content to be built overall.

I can make a dozen mobile VR minimum viable products for the same budget as one modestly scoped Morpheus experience. From these MVPs I can determine what types of content gain the most traction with VR users and move in that direction. I can even use this data to guide development of larger AAA VR experiences later.

By this time next year it will be possible to monetize these users significantly, whether through premium content or advertising. It may be more valuable to collect a lot of eyeballs in mobile VR than to break even on a multi-million dollar AAA launch title. As we’ve seen in the past, acquiring a huge audience of mobile players can lead to tremendous revenue streams.

Being on the Oculus desktop, Vive, or Sony’s Morpheus at launch is an enormous opportunity. In fact, I’m still searching for ways to produce the console and desktop version of Caldera Defense. However, if you lack the capital to produce at that scale, smaller mobile projects are much easier to bootstrap, and the upside is huge.

Samsung Gear VR Development Challenges with Unity3D

As you may know, I’m a huge fan of Oculus and Samsung’s Gear VR headset. The reason isn’t the opportunity Gear VR presents today. It’s the future of wearables, specifically self-contained wearable devices. In this category, Gear VR is really the first of its kind. The lessons you learn developing for Gear VR will carry over into the bright future of compact, self-contained wearable displays and platforms, many of which we’ve already started to see.

The Gear VR in the flesh (plastic).

Gear VR development can be a challenge. Rendering two cameras and a distortion mesh on a mobile device at a rock-solid 60fps requires a lot of optimization and development discipline. Now that Oculus’ mobile SDK is public, and having worked on a few launch titles (including my own original title recently covered in Vice), I figured I’d share some Unity3D development challenges I’ve dealt with.

THERMAL ISSUES

The biggest challenge in making VR performant on a mobile device is throttling due to heat produced by the chipset. Use too much power and the entire device will slow itself down to cool off and avoid damaging the hardware. Although the Note 4 approaches the Xbox 360 in performance characteristics, you only have a fraction of that power available, because the phone must take power and heat into account when deciding how fast to run the CPU and GPU.

With the Gear VR SDK you can independently tell the device how fast the GPU and CPU should run. This prevents you from eating up battery when you don’t need the extra cycles, and lets you tune your game for performance at lower clock speeds. Still, you have to be aware of what eats up CPU cycles and what consumes GPU resources. Ultimately, you must choose which to allocate more power to.
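In the Oculus Unity integration, the clock levels are exposed as small integer settings on `OVRManager`. The field names and value ranges have shifted between SDK versions, so take this as a sketch rather than gospel:

```csharp
// Sketch: bias the power budget toward the GPU for a GPU-bound game.
// Levels run roughly 0 (slowest, coolest) to 3 (fastest, hottest);
// check your SDK version's docs for the exact API.
using UnityEngine;

public class ClockLevels : MonoBehaviour
{
    void Start()
    {
        OVRManager.cpuLevel = 1;   // keep the CPU modest
        OVRManager.gpuLevel = 3;   // spend the thermal headroom on rendering
    }
}
```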

GRAPHICAL DETAIL

The obvious optimization is lowering graphical detail. Keep your polycount under 50k triangles. Avoid as much per-pixel and per-vertex processing as possible. Since you have tons of RAM but relatively little GPU power available, opt for more texture detail over geometry. This includes using lightmaps instead of dynamic lighting. And restrict your use of the alpha channel to a minimum: preferably for quick particle effects, not for things that stay on screen for a long period of time.

Effects you take for granted on modern mobile platforms, like skyboxes and fog, should be avoided on Gear VR. Find alternatives or design an art style that doesn’t need them. A lot of these restrictions can be made up for with texture detail.

A lot of standard optimizations apply here. For instance, use texture atlasing and batching to reduce draw calls. The target is under 100 draw calls, which is achievable if you plan your assets correctly. Naturally, there are plenty of resources in the Asset Store to get you there. Check out Pro Draw Call Optimizer for a good texture atlasing tool.

CPU OPTIMIZATIONS

There are less obvious optimizations you might not be familiar with until you’ve gone to extreme lengths to optimize a Gear VR application. One is removing as many Update methods as possible. Most Update code that just waits for something to happen (like an AI that waits 5 seconds to pick a new target) can be changed to a coroutine scheduled to run in the future. Converting Update loops to coroutines takes the burden of per-frame polling off the CPU. Even empty Update functions drain the CPU: death by a thousand cuts. Go through your code base and remove all unnecessary Update methods.
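The AI-retarget example above converts like this. A hypothetical sketch; the class and method names are placeholders:

```csharp
// Instead of decrementing a timer in Update every frame, sleep in a
// coroutine and only wake when there's real work to do.
using System.Collections;
using UnityEngine;

public class AIRetarget : MonoBehaviour
{
    void OnEnable() { StartCoroutine(RetargetLoop()); }

    IEnumerator RetargetLoop()
    {
        while (true)
        {
            PickNewTarget();
            // Unity skips this object entirely for the next 5 seconds,
            // rather than running an empty countdown 300 times.
            yield return new WaitForSeconds(5f);
        }
    }

    void PickNewTarget() { /* targeting logic here */ }
}
```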

As in any mobile game, you should be pooling prefabs. I use Path-o-Logical’s PoolManager, but it’s not too hard to write your own. Either way, by recycling pre-created instances of prefabs, you save memory and avoid hiccups due to instantiation.
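If you do roll your own, the core is just a queue of deactivated instances. A bare-bones sketch in the spirit of PoolManager (not its actual API):

```csharp
// Pre-instantiate a batch of prefab copies once, then recycle them
// instead of calling Instantiate/Destroy during gameplay.
using System.Collections.Generic;
using UnityEngine;

public class SimplePool : MonoBehaviour
{
    public GameObject prefab;
    public int size = 20;
    readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pay the instantiation cost up front, at load time.
        for (int i = 0; i < size; i++)
        {
            var go = Instantiate(prefab);
            go.SetActive(false);
            pool.Enqueue(go);
        }
    }

    public GameObject Spawn(Vector3 pos, Quaternion rot)
    {
        var go = pool.Count > 0 ? pool.Dequeue() : Instantiate(prefab);
        go.transform.position = pos;
        go.transform.rotation = rot;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        pool.Enqueue(go);   // back in the queue for reuse
    }
}
```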

IN CONCLUSION

There’s nothing really new here for most mobile developers, but Gear VR is definitely one of the bigger optimization challenges I’ve had in recent years. The fun part is that we’re kind of at the level of Dreamcast-era poly counts and effects, but using modern tools to create content. It’s better than the good old days!

It’s wiser to build from the ground up for Gear VR than to port existing applications, because making a VR experience that is immersive and performant within these parameters requires all disciplines (programming, art, and design) to build around the restrictions from the start of the project.