Debugging HoloLens Apps in Unity3D

I’ve been developing on HoloLens for a few weeks now, and I’m being re-acquainted with the tricky parts of debugging hardware-specific augmented reality apps in Unity3D. I went through a lot of these issues with my Google Tango project, InnAR Wars, so I’m somewhat used to it. However, having to wear the display on your head while testing code brings a whole new dimension of difficulty to debugging augmented reality applications. I figured I’d share a few tips I use when debugging Unity3D HoloLens apps, beyond the standard Unity3D remote debugging tools you’re used to from mobile development.


Debugging in the Editor vs. Device

The first thing you need to do is figure out how to test code without deploying to the device. Generating a Visual Studio project, compiling, and uploading your application to one (or more) HoloLens headsets is a real pain when you’re trying to iterate on simple code changes. It’s true that Unity3D can’t reproduce any of HoloLens’ AR features in the editor, but there are times when you just have to test basic gameplay code that doesn’t require spatialization, localization, or any other HoloLens-specific features. There are a few steps to make this easier.

Make A Debug Keyboard Input System

HoloLens relies mostly on simple gestures (Air Tap) and voice for input. The first thing you need in order to test HoloLens code in the Unity3D editor is a way to trigger, from the keyboard, whatever events normally fire off via Air Tap or speech commands. In my case, I wrote a tiny bit of code that uses the space bar to trigger the Air Tap. Basically, anywhere you add a delegate to handle an Air Tap or speech command, you need to add some input code that triggers that same method via the keyboard.
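Here’s roughly what that looks like. This is just a sketch, and the class and member names are my own, not anything from the HoloLens SDK: your gesture and speech callbacks call TriggerSelect(), and so does the space bar when you’re running in the editor.

```csharp
using System;
using UnityEngine;

// A minimal sketch: route the real Air Tap handler and a debug key through
// the same event so gameplay code doesn't care which one fired.
public class DebugInputRouter : MonoBehaviour
{
    // Gameplay code subscribes to this instead of the raw gesture event.
    public event Action OnSelect;

    // Call this from your gesture recognizer / speech command callbacks.
    public void TriggerSelect()
    {
        if (OnSelect != null)
            OnSelect();
    }

    void Update()
    {
#if UNITY_EDITOR
        // In the editor, the space bar stands in for an Air Tap.
        if (Input.GetKeyDown(KeyCode.Space))
            TriggerSelect();
#endif
    }
}
```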

Use An Oculus Rift Headset

I was pleasantly surprised to find out the Unity HoloLens Technical Preview supports the Oculus Rift. Keep your Rift plugged in when developing for HoloLens. When you run your application in the Unity editor, it will show up inside the Rift–albeit against a black background. This is extremely helpful for debugging code that uses gaze, positional audio, and even limited movement of the player via Oculus’ positional tracking.

Use The HoloLens Companion App

Microsoft provides a HoloLens companion app in the Windows Store with a few handy features. The app connects to the HoloLens over WiFi and lets you record video live from the headset (very useful for documenting reproducible bugs and crashes). It lets you stop and start apps remotely, which is useful when trying to launch an app on multiple HoloLenses at the same time. You can also use your PC’s keyboard to send input to a remote HoloLens. This is convenient for multiplayer testing: use Air Tap on the headset you’re wearing and the companion app to trigger input on the other device.

These tips may make building HoloLens apps a little easier, but I really hope Microsoft adds more debugging features to future versions of the SDK. There are some simple things Microsoft could do to make development more hassle-free, although there’s really a limit to what you can do in the Unity Editor versus the device.

My Favorite VR Experiences So Far

Now that I’ve had plenty of time to go through the launch content of both Oculus and Vive, I figured I’d highlight my favorite experiences you can try for both devices instead of a typical product review. Many of these games are available for both platforms, while some are exclusive.


My Retail Vive Finally Arrived!

Adr1ft (Oculus)

This is the flagship release for Oculus and deservedly so. Although not a pure VR experience (it also works as a standard game), it’s an absolutely wild trip in VR. Billed as a First Person Experience (FPX), it ranks somewhere between a walking simulator like Firewatch and an adventure such as Bioshock on the “Is It a Game?” scale.

This is consistently one of the top-selling Oculus titles, yet ranks near the bottom on comfort. I had no nausea issues at all, but I generally don’t feel uncomfortable in most VR games. Still, I can see how free-floating in zero gravity, desperately grasping at oxygen canisters as you slowly suffocate inside a claustrophobic space suit, could cause problems for those prone to simulation sickness. Regardless, this shows that it pays to be hardcore when making VR experiences–especially at this early adopter stage of the market.

A stunning debut for Adam Orth’s threeonezero studio.

Firma (Oculus)

This is perhaps one of my absolute favorite pure VR games so far. Think Lunar Lander, Space Taxi, or Thrust in VR. If this were a standard video game it would be mundane, but as a VR experience I really do feel like I have a job piloting a tiny lander craft on a desolate moon base. It actually sort of achieves presence for me–not the feeling of being in another reality, more like being in an ‘80s sci-fi movie.

Originally available via Oculus Share for years, it’s obvious that a lot of work has gone into getting this game ready for the commercial Oculus release. There are tons of missions, great voice acting, and a lot of fun mechanics and scenarios. This game is giving me plenty of ideas on how to adapt my old Ludum Dare game to VR.

Strangely, this game is in Oculus’ Early Access section, even though I consider it a complete game.

The Apollo 11 Virtual Reality Experience (Oculus, Vive)

An astounding educational journey through America’s moon landing told via VR. This is better than any field trip I took as a kid to the Boston Museum of Science, that’s for sure. This is just the tip of the spear when it comes to education and VR.

Hover Junkers (Vive)

Hover Junkers requires the most physical activity of any VR game I’ve played. So much so that after 20 minutes of shooting, cowering behind my hovercraft’s hull for cover, and frantically speeding around post-apocalyptic landscapes, my Vive was soaked in sweat. One thing is for sure: public VR arcades are going to need some kind of solution to keep these headsets sanitary. Hover Junkers is certainly the most exciting multiplayer experience I’ve had in VR so far.

Budget Cuts (Vive)

The absolute best example of room scale VR. I didn’t really get it when watching the videos, but when I was finally able to try the demo on my own Vive… wow. This is the game I let everyone try when they first experience Vive. It really nails the difference between seated, controller-based VR and a room scale hand-tracked experience. This is the first “real game” I’ve played that uses all of these elements. So many firsts here, and done so well.

The past month has been a very encouraging start for VR. At this early stage there are already several games that give me that “just one more try” lure. This is surprising given that many current VR titles are small in scope, and in some cases partially-finished early access experiences. With the launch of PSVR later this year, we’re sure to see more full-sized VR games…whatever that means.

The Basics of Hand Tracked VR Input Design

Ever since my revelation at Oculus Connect I’ve been working on a project using hand tracking and VR. For now, it’s using my recently acquired Vive devkit. However, I’ve been researching design techniques for PSVR and Oculus Touch to keep the experience portable across many different hand tracking input schemes. Hand tracking has presented a few new problems to solve, similar to my initial adventures in head tracking interfaces.

The Vive's hand controller

Look Ma, No Hands!

The first problem I came across when designing an application that works on both Vive and Oculus Touch is the representation of your hands in VR. With Oculus Touch, most applications feature a pair of “ghost hands” that mimic the current pose of your hands and fingers. Since Oculus’ controllers can track your thumb and first two fingers, and presumably the rest are gripped around the handle, these ghost hands tend to accurately represent what your hands are doing in real life.

Oculus Touch controller

This metaphor breaks down with Vive as it doesn’t track your hands, but the position of the rod-like controllers you are holding. Vive games I’ve tried that show your hands end up feeling like waving around hands on a stick–there’s a definite disconnect between the visual of your hands in VR and where your brain thinks they are in real life. PSVR has this problem as well, as the Move controllers used with the current devkit are similar to Vive’s controllers.

You can alleviate this somewhat. Because there is a natural way most users tend to grip Move and Vive controllers, you can model and position the “hand on a stick” in the most likely way the controllers are gripped. This can make static hands in VR more convincing.

In any case, you have a few problems when you grab an object.

For Oculus, the act of grabbing is somewhat natural–you can clench your first two fingers and thumb into a “grab” type motion to pick something up. In the case of Bullet Train, this is how you pick up guns. The translucent representation of your hands means you can still see your hand pose and the gripped object at the same time. There’s not much to think about other than where you attach the held object to the hand model.

It also helps that in Bullet Train the objects you can grab have obvious handles and holding points. You can pose the hand to match the most likely hand position on a grabbed object without breaking immersion.

With Vive and PSVR you have a problem if you are using the “hand on a stick” technique. When you “grab” a virtual object by pressing the trigger, how do you show the hand holding something? It seems like the best answer is: you don’t! Check out this video of Uber Entertainment’s awesome Wayward Sky PSVR demo:

Notice anything? When you grab something, the hand disappears. All you can see is the held object floating around in front of you.

This is a great solution for holding arbitrarily shaped items because you don’t have to create a potentially infinite number of hand grip animations. Because the user isn’t really grabbing anything and is instead clicking a trigger on a controller, there is no “real” grip position for the hand anyway. You also don’t have the problem of parts of the hand intersecting with the held object.

This isn’t a new technique. In fact, one of the earliest Vive demos, Job Simulator, does the exact same thing. Your brain fills in the gaps and it feels so natural that I just never noticed it!
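If you want to try it yourself, the core of the trick is only a few lines. This is a rough sketch, not code from any of the games above; the handRenderer and attachPoint fields are placeholders for whatever your own controller rig uses.

```csharp
using UnityEngine;

// Sketch of the "hand vanishes while holding" trick described above.
public class GrabVisuals : MonoBehaviour
{
    public Renderer handRenderer;   // the visible hand mesh
    public Transform attachPoint;   // where held objects snap to

    Transform held;

    public void Grab(Transform item)
    {
        held = item;
        held.SetParent(attachPoint, false);
        held.localPosition = Vector3.zero;
        handRenderer.enabled = false;   // hide the hand; the object stands in for it
    }

    public void Release()
    {
        if (held != null)
        {
            held.SetParent(null, true);
            held = null;
        }
        handRenderer.enabled = true;    // show the hand again
    }
}
```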

Virtual Objects, Real Boundaries

The next problem I encountered: what do you do when your hand can pass through virtual objects, but the object it’s holding can’t? For instance, you can be holding an object and physically move your real, tracked hand through a virtual wall. The held object, bound by the engine’s physics simulation, will hit the wall while your hand keeps dragging it through. Chaos erupts!

You can turn off collisions while an object is held, but what fun is that? You want to be able to knock things over and otherwise interact with the world while holding stuff. Plus, what happens when you let go of an object while inside a collision volume?

What I ended up doing is making the object detach, or fall out of your virtual hand, as soon as it hits something else. You can tweak this by making collisions with smaller, non-static objects less likely to detach the held object since they will be pushed around by your hand.
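Here’s a sketch of that detach rule. It assumes the hypothetical GrabVisuals grabber from the earlier snippet (substitute whatever your own grab script is) and uses a simple mass threshold to decide what’s big enough to knock the object loose.

```csharp
using UnityEngine;

// Attach this to the held object while it's gripped.
[RequireComponent(typeof(Rigidbody))]
public class DetachOnCollision : MonoBehaviour
{
    public GrabVisuals grabber;        // hypothetical component currently holding this object
    public float massThreshold = 2f;   // lighter props get pushed around instead of forcing a drop

    void OnCollisionEnter(Collision collision)
    {
        Rigidbody other = collision.rigidbody;

        // Static geometry (walls, tables) has no rigidbody: always let go.
        bool hitStatic = other == null;

        // Heavy dynamic objects also force a drop; light ones just get shoved.
        bool hitHeavy = other != null && other.mass >= massThreshold;

        if (grabber != null && (hitStatic || hitHeavy))
            grabber.Release();
    }
}
```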

For most VR developers these are the first two things you’ll run into when designing an experience for hand-tracked VR systems. It seems Oculus Touch makes a lot of these problems go away, but we’ve only scratched the surface of the issues that need to be solved when your real hands interact with a virtual world.

VR with a Gamepad Sucks

I was kind of bummed my first day of Oculus Connect 2.


Last year’s Oculus Connect was revelatory for me. Despite having worked on two different Gear VR titles at the time, the Crescent Bay demo was incredible in comparison. From Oculus’ own vignette demos to Epic’s Showdown sequence, the leap in quality from DK2 to Crescent Bay was astounding. Everyone walked out of that demo with a huge smile on their face.

The first demos I tried at OC2 were the gamepad demos. Oculus spent an absurd amount of time at their E3 keynote talking about how amazing it was that they were launching with the Xbox 360 controller as the input device. At Oculus Connect, I put this claim to the test.

Every game from EVE Valkyrie to Edge of Nowhere seemed like playing a regular video game strapped to my face. I felt like I was playing an Xbox One through binoculars. In fact, a few of the games made me a little queasy, which I’m usually not susceptible to.

Maybe I’m just jaded having been developing gamepad VR experiences on Gear VR for a while, I thought.

Later on I tried Toybox which is a cool tech demo but doesn’t really illustrate how you’d play an actual game for any length of time with the Touch controllers. In fact, I found the controllers a little hard to use compared to the Vive. They have tons of confusing buttons and getting the finger gestures right seemed to take a little bit of work.

I was leaving the demo area and getting ready to head home when a friend of mine who works for Oculus stopped to ask what I thought. I told him honestly that I felt last year’s demos were better–they were more immersive and interesting. Although a little taken aback at my impressions, he strongly suggested I come by tomorrow for the next set of demos. He couldn’t tell me what they were, but promised they’d be awesome.

The Oculus Connect app sent a notification alerting me that new demo registrations would be available at 8 AM. I set my alarm and woke up the next morning to register for the Touch demos via my iPhone. I promptly slept through the keynote and arrived on the scene at noon for my demo.

We were only allowed to try two games, and it was heavily suggested I try Epic’s “Bullet Train” experience. Having not seen the keynote, I had no idea what I was getting into.

Bullet Train is mind blowing.

Bullet Train is essentially Time Crisis in VR. When I saw the Showdown demo last year I thought a game like this in VR would be a killer app. One of my favorite coin-ops of all time is Police 911–which motion tracks your body with a pair of cameras to duck behind obstacles. I thought doing this in VR would be amazing. However, last year there were no hand tracking controls–it was just a vague idea.

Here, Epic took the Touch controllers and made an incredible arcade shooter that could be a killer app if they choose to develop it further. Oculus really needs to do everything in their power to get Epic to produce this as a launch title for the Touch controllers.

The Touch controls make all the difference. From handling weapons and grenades to plucking bullets out of the air in slow motion, Bullet Train really drives home how flexible the Touch controllers are. Unlike the Vive controllers, which feel like holding a set of tools, these let you reach out and grab stuff–even pump a shotgun.

The combination of standing in a motion tracked volume and visceral interaction with the world using your hands–even with Touch’s primitive finger gesture technology–really immerses you in an experience way beyond what’s possible sitting in a chair with an XBox controller.

It’s disappointing that Touch won’t launch with Oculus’ headset. Hand tracking is absolutely required for a truly immersive experience. Developing games that support both gamepad and Touch controls is going to be difficult without diluting features for one or the other. I’ve experienced similar issues developing games that work with both Gear VR’s touchpad and a Bluetooth gamepad.

I left Oculus Connect 2 reinvigorated, with the feeling that VR with hand tracking is the One True VR Experience. A gamepad is fine for mobile VR at the moment, but all of my PC and console VR projects are now being designed around Touch and hand-tracked input. It’s the only way!

How To Support Gear VR and Google Cardboard In One Unity3D Project

Google Cardboard is a huge success. Cardboard’s userbase currently dwarfs that of Gear VR. Users, investors, and collaborators who don’t have access to Gear VR often ask for Cardboard versions of my games. As part of planning what to do next with Caldera Defense, I decided to create a workflow to port between Gear VR and Cardboard.

Always keep a Cardboard on me at ALL TIMES!

I used my VR Jam entry, Duck Pond VR, as a test bed for my Unity3D SDK switching scripts. It’s much easier to do this on a new project. Here’s how I did it:

Unity 4 vs. Unity 5

Google Cardboard supports Unity 4 and Unity 5. Although Oculus’ mobile SDK will technically work on Unity 5, you can’t ship with it because bugs in the current version of Unity 5 cause memory leaks and other issues on the Gear VR hardware. Unity is working on a fix but I haven’t heard any ETA on Gear VR support in Unity 5.

This is a bummer since the Cardboard SDK for Unity 5 supports skyboxes and other features in addition to the improvements Unity 5 has over 4. Unfortunately you’re stuck with Unity 4 when making a cross-platform Gear VR and Cardboard app.

Dealing With Cardboard’s Lack of Input

Although Gear VR’s simplistic touch controls are a challenge to develop for, the vast majority of Cardboards have no controls at all! Yes, Google Cardboard includes a clever magnetic trigger for a single input event. Yet, the sad fact is most Android devices don’t have the necessary dock connector to use this.

You have a few other control options that are universal to all Android devices: the microphone and Bluetooth controllers. By keeping the microphone open, you can use loud sounds (such as a shout) to trigger an action. You can probably use something like the Pitch Detector plug-in for this. Or, if your Cardboard has a head strap for hands-free operation, you can use a Bluetooth gamepad for controls.
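For the shout trigger, Unity’s built-in Microphone API is enough to get a crude version working without any plug-ins. This is just a sketch; the threshold, sample window, and cooldown are arbitrary values, and a real game would want smoothing on top.

```csharp
using UnityEngine;

// A crude loudness trigger using Unity's built-in Microphone API.
public class ShoutTrigger : MonoBehaviour
{
    public float threshold = 0.4f;    // peak amplitude (0..1) that counts as a shout
    public float cooldown = 1f;       // seconds before the trigger can fire again
    public System.Action OnShout;     // hook your action up here

    AudioClip micClip;
    string device;
    float[] window = new float[256];
    float nextAllowed;

    void Start()
    {
        if (Microphone.devices.Length == 0) return;
        device = Microphone.devices[0];
        // Record into a one-second looping clip so we can sample continuously.
        micClip = Microphone.Start(device, true, 1, 16000);
    }

    void Update()
    {
        if (micClip == null || Time.time < nextAllowed) return;

        // Read the most recent chunk of samples behind the record head.
        int pos = Microphone.GetPosition(device) - window.Length;
        if (pos < 0) return;
        micClip.GetData(window, pos);

        float peak = 0f;
        for (int i = 0; i < window.Length; i++)
            peak = Mathf.Max(peak, Mathf.Abs(window[i]));

        if (peak > threshold)
        {
            nextAllowed = Time.time + cooldown;
            if (OnShout != null) OnShout();
        }
    }
}
```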

Because of this general lack of input, many Cardboard apps use what I call “stare buttons” for GUIs. These are buttons that trigger if you look at them long enough. I’ve implemented my own version. The prefab is here, the code is here. It even hooks into the new Unity UI event system so you can use it with my Oculus world space cursor code.
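Here’s a stripped-down version of the stare button idea, using a plain physics raycast from the camera rather than the UI event system hookup my actual prefab uses. The dwell time and ray length are arbitrary.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Fires an event after the player has gazed at this object long enough.
[RequireComponent(typeof(Collider))]
public class StareButton : MonoBehaviour
{
    public float dwellTime = 2f;       // seconds of sustained gaze required
    public UnityEvent onStareComplete; // hook up your action in the inspector

    float gazeTimer;

    void Update()
    {
        if (Camera.main == null) return;
        Transform head = Camera.main.transform;
        RaycastHit hit;

        bool gazedAt = Physics.Raycast(head.position, head.forward, out hit, 100f)
                       && hit.collider.gameObject == gameObject;

        if (gazedAt)
        {
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
            {
                onStareComplete.Invoke();
                gazeTimer = 0f;   // reset so it can fire again after another dwell
            }
        }
        else
        {
            gazeTimer = 0f;
        }
    }
}
```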

Gear VR apps must be redesigned to fit within Cardboard’s constraints, whether that means the limited controls or the performance limitations of low-end devices. Most of my Cardboard ports are slimmed-down Gear VR experiences. In the case of Caldera Defense, I’m designing a simplified auto-firing survival mode for the Cardboard port. I’ll merge this mode back into the Gear VR version as an extra game mode in the next update.

Swapping SDKs

This is surprisingly easy. You can install the Cardboard and Gear VR SDKs in a single Unity project with almost no problems. The only conflict is that they both overwrite the Android manifest in the plugin folder. I wrote an SDK swapper that lets you switch between the Google Cardboard and Oculus manifests before you do a build. You can get it here. This editor script has you pick where each manifest file lives for Cardboard and Gear VR, and it simply copies the appropriate file into the plugin folder. Of course, for iOS Cardboard apps this isn’t an issue.
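The core of it can be as small as this. It’s only a sketch: the source paths are examples, and the script has to live in an Editor folder.

```csharp
using System.IO;
using UnityEditor;
using UnityEngine;

// Bare-bones manifest swapper. Point the constants at wherever you keep
// each SDK's AndroidManifest.xml in your project.
public static class ManifestSwapper
{
    const string CardboardManifest = "Assets/Manifests/Cardboard/AndroidManifest.xml";
    const string GearVRManifest    = "Assets/Manifests/GearVR/AndroidManifest.xml";
    const string ActiveManifest    = "Assets/Plugins/Android/AndroidManifest.xml";

    [MenuItem("Tools/VR/Use Cardboard Manifest")]
    static void UseCardboard() { Swap(CardboardManifest); }

    [MenuItem("Tools/VR/Use Gear VR Manifest")]
    static void UseGearVR() { Swap(GearVRManifest); }

    static void Swap(string source)
    {
        // Overwrite the manifest the Android build actually uses.
        File.Copy(source, ActiveManifest, true);
        AssetDatabase.Refresh();
        Debug.Log("Copied " + source + " to " + ActiveManifest);
    }
}
```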

Supporting Both Prefabs

Both Oculus and Cardboard have their own prefabs that represent the player’s head and eye cameras. In Caldera Defense, I originally attached a bunch of game objects to the player’s head to use for traces, GUI positioning, HUDs, and other things that need the player’s head position and orientation. In order for these to work with both the Cardboard and Oculus prefabs, I placed all of the head-relative objects on another prefab, which gets attached to the Cardboard or Oculus head model at runtime.
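A minimal version of that runtime attachment looks something like this. It assumes the head can be found via the main camera, which both prefabs ultimately drive; swap in a direct reference if your setup differs.

```csharp
using UnityEngine;

// Put this on the shared "head rig" prefab (trace origins, HUD anchors, etc.).
public class AttachToHead : MonoBehaviour
{
    void Start()
    {
        Transform head = Camera.main != null ? Camera.main.transform : null;
        if (head == null)
        {
            Debug.LogWarning("No head camera found; leaving rig unattached.");
            return;
        }

        // Parent the rig under whichever SDK's head object is in the scene
        // so it inherits position and orientation every frame.
        transform.parent = head;
        transform.localPosition = Vector3.zero;
        transform.localRotation = Quaternion.identity;
    }
}
```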

Wrapping Both APIs

Not only do both SDKs have similar prefabs for the head model, they also have similar APIs. In both the Cardboard and Oculus versions, I need to refer to the eye and head positions for various operations. To do this, I created a simple class that detects which prefab is present in the scene and grabs the respective component to wrap the eye position reference around. The script is in the prefab’s package.
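My actual wrapper checks for each SDK’s camera script, but the shape of it is roughly this; the sketch below stays SDK-agnostic by using inspector references instead of the real Cardboard and OVR class names. Gameplay code then asks the wrapper for Head.position and Head.forward instead of touching either SDK directly.

```csharp
using UnityEngine;

// Exposes one head transform to the rest of the game, regardless of SDK.
public class VRHeadWrapper : MonoBehaviour
{
    public Transform cardboardHead;  // assign the Cardboard prefab's head here
    public Transform oculusHead;     // assign the Oculus prefab's center eye here

    public Transform Head
    {
        get
        {
            // Whichever prefab is actually active in this scene wins.
            if (cardboardHead != null && cardboardHead.gameObject.activeInHierarchy)
                return cardboardHead;
            if (oculusHead != null && oculusHead.gameObject.activeInHierarchy)
                return oculusHead;
            return Camera.main != null ? Camera.main.transform : null;
        }
    }
}
```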

Conclusion

For the final step, I made separate Cardboard versions of all my relevant Gear VR scenes, which include the Cardboard prefabs and modified gameplay and interfaces. If no actual Oculus SDK code is referenced in any of the classes used in the Cardboard version, the Oculus SDK should be stripped out of that build and you’ll have no problem running on Cardboard. This probably means I really need to make Oculus- and Cardboard-specific versions of that CameraBody script.

The upcoming Unity 5.1 includes native Oculus support which may make this process a bit more complicated. Until then, these steps are the best way I can find to support both Cardboard and Gear VR in one project. I’m a big fan of mobile VR, and I think it’s necessary for any developer at this early stage of the market to get content out to as many users as possible.

Why I’m All In On Mobile VR

Last month I released Caldera Defense, a virtual reality tower defense game on Gear VR. This is the second Gear VR title I’ve worked on, and the first I’ve built and published from the ground up (not including my Oculus Mobile VR Jam submission). Caldera Defense is a free early access demo, basically a proof of concept of the full game, and the reaction has been great. Thousands of people have downloaded it, rated it, and given us valuable feedback, which we’re busy incorporating into the first update.

Caldera Defense featured on the Gear VR store

Originally I planned to use this as a demo to fund an expanded PC and Morpheus launch version of the game with greatly improved graphics, hours of gameplay, and additional features such as multiplayer and second-screen options.

However, pitching even a modestly budgeted console and PC VR game experience to publishers, or even the platforms themselves, is a tough sell. I’m sure at E3 next month we will see all sorts of AAA VR announcements. Yet, many traditional funding avenues for games remain skeptical of the opportunity VR presents.

Since the Caldera project began last year, mobile VR has morphed into a unique opportunity. With over a million Google Cardboards in the wild and new versions of the Gear VR headset in retail stores worldwide, there will be millions of mobile VR users before there are comparable numbers on Oculus desktop, Vive, and Morpheus.

Is it possible that mobile VR will be a viable business before it is on PC and consoles? Most of my colleagues are skeptical. I’m not.

The economics work out. Due to the mobile nature of the experience, games and apps for these platforms tend towards the bite-sized. This greatly reduces the risk of mobile VR since assets optimized for mobile are simpler and casual VR experiences require less content to be built overall.

I can make a dozen mobile VR minimum viable products for the same budget as one modestly scoped Morpheus experience. From these MVPs I can determine what types of content gain the most traction with VR users and move in that direction. I can even use this data to guide development of larger AAA VR experiences later.

By this time next year it will be possible to monetize these users significantly, whether through premium content or advertising. It may be more valuable to collect a lot of eyeballs in mobile VR than to break even on a multi-million dollar AAA launch title. As we’ve seen in the past, acquiring a huge audience of mobile players can lead to tremendous revenue streams.

Being on the Oculus desktop, Vive, or Sony’s Morpheus deck at launch is an enormous opportunity. In fact, I’m still searching for ways to produce the console and desktop version of Caldera Defense. However, if you lack the capital to produce at that scale, smaller mobile projects are much easier to bootstrap and the upside is huge.

Samsung Gear VR Development Challenges with Unity3D

As you may know, I’m a huge fan of Oculus and Samsung’s Gear VR headset. The reason isn’t about the opportunity Gear VR presents today; it’s about the future of wearables, specifically self-contained wearable devices. In this category, Gear VR is really the first of its kind. The lessons you learn developing for Gear VR will carry over into the bright future of compact, self-contained, wearable displays and platforms, many of which we’ve already started to see.

The Gear VR in the flesh (plastic).

Gear VR development can be a challenge. Rendering two cameras and a distortion mesh on a mobile device at a rock-solid 60fps requires a lot of optimization and development discipline. Now that Oculus’ mobile SDK is public, and having worked on a few launch titles (including my own original title recently covered in Vice), I figured I’d share some of the Unity3D development challenges I’ve dealt with.

THERMAL ISSUES

The biggest challenge in making VR performant on a mobile device is throttling due to heat produced by the chipset. Use too much power and the entire device will slow itself down to cool off and avoid damaging the hardware. Although the Note 4 approaches the Xbox 360 in performance characteristics, you only have a fraction of that power available, because the phone must keep power and heat in mind when deciding how fast to run the CPU and GPU.

With the Gear VR SDK you can independently tell the device how fast the GPU and CPU should run. This prevents you from eating up battery when you don’t need the extra cycles, and lets you tune your game for performance at lower clock speeds. Still, you have to be aware of what types of things eat up CPU cycles or consume GPU resources, because ultimately you must choose which one to allocate more power to.
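In the Unity integration I’ve been using, these clock hints live on OVRManager as integer levels from 0 to 3. Treat the exact property names as an assumption from memory and check the docs for your SDK version; the point is just that you set them once at startup per scene.

```csharp
using UnityEngine;

// Clock level sketch for a GPU-heavy, CPU-light scene: give the GPU headroom
// and keep the CPU clocked down to save the thermal budget for rendering.
// OVRManager.cpuLevel / gpuLevel names are assumed from the mobile SDK's
// Unity integration; verify against your version.
public class ClockLevels : MonoBehaviour
{
    void Start()
    {
        OVRManager.cpuLevel = 1;
        OVRManager.gpuLevel = 3;
    }
}
```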

GRAPHICAL DETAIL

The obvious optimization is lowering graphical detail. Keep your polycount under 50k triangles. Avoid as much per-pixel and per-vertex processing as possible. Since you have tons of RAM but relatively little GPU power available, opt for more texture detail over geometry. This includes using lightmaps instead of dynamic lighting. Of course, keep your use of the alpha channel to a minimum–preferably for quick particle effects, not for things that stay on screen for a long period of time.

Effects you take for granted on modern mobile platforms, like skyboxes and fog, should be avoided on Gear VR. Find alternatives or design an art style that doesn’t need them. A lot of these restrictions can be made up for with texture detail.

A lot of standard optimizations apply here–for instance, use texture atlasing and batching to reduce draw calls. The target is under 100 draw calls, which is achievable if you plan your assets correctly. Naturally, there are plenty of resources in the Asset Store to get you there. Check out Pro Draw Call Optimizer for a good texture atlasing tool.

CPU OPTIMIZATIONS

There are less obvious optimizations you might not be familiar with until you’ve gone to extreme lengths to optimize a Gear VR application. One is removing as many Update methods as possible. Most Update code that just waits for something to happen (like an AI that waits 5 seconds to pick a new target) can be converted into a coroutine scheduled to run in the future. Converting these Update loops to coroutines takes the burden of constant polling off the CPU. Even empty Update functions can drain the CPU–death by a thousand cuts. Go through your code base and remove all unnecessary Update methods.
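Here’s the kind of conversion I mean, with the polled version left in as a comment for comparison. The retargeting logic itself is a placeholder.

```csharp
using System.Collections;
using UnityEngine;

// "Wait 5 seconds, then pick a new target" written as a coroutine instead of
// a timer checked in Update every frame.
public class Retargeter : MonoBehaviour
{
    // Before: polled every single frame.
    // float timer;
    // void Update()
    // {
    //     timer += Time.deltaTime;
    //     if (timer >= 5f) { PickNewTarget(); timer = 0f; }
    // }

    void Start()
    {
        StartCoroutine(RetargetLoop());
    }

    IEnumerator RetargetLoop()
    {
        while (true)
        {
            // Only wakes up when the wait is over.
            yield return new WaitForSeconds(5f);
            PickNewTarget();
        }
    }

    void PickNewTarget()
    {
        // Placeholder for whatever your AI actually does here.
    }
}
```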

As in any mobile game, you should be pooling prefabs. I use Path-o-Logical’s PoolManager, but it’s not too hard to write your own. Either way, by recycling pre-created instances of prefabs, you save memory and reduce hiccups due to instantiation.
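If you do roll your own, the core of a pool is tiny. This sketch is not PoolManager’s API, just the recycling idea: deactivate and reuse instances instead of churning through Instantiate and Destroy.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal prefab pool: Spawn() reuses an inactive instance when one exists.
public class SimplePool : MonoBehaviour
{
    public GameObject prefab;
    readonly Stack<GameObject> inactive = new Stack<GameObject>();

    public GameObject Spawn(Vector3 position, Quaternion rotation)
    {
        GameObject go = inactive.Count > 0 ? inactive.Pop()
                                           : (GameObject)Instantiate(prefab);
        go.transform.position = position;
        go.transform.rotation = rotation;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        // Instead of Destroy(go), park it for reuse.
        go.SetActive(false);
        inactive.Push(go);
    }
}
```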

IN CONCLUSION

There’s nothing really new here to most mobile developers, but Gear VR is definitely one of the bigger optimization challenges I’ve had in recent years. The fun part about it is we’re kind of at the level of Dreamcast-era poly counts and effects but using modern tools to create content. It’s better than the good old days!

It’s wiser to build from the ground up for Gear VR than to port existing applications, because making a VR experience that is immersive and performant within these parameters requires all disciplines (programming, art, and design) to work within these restrictions from the start of the project.