ARKit, ARCore, Facebook and Snapchat, or THE BATTLE FOR SMARTPHONE AR WORLD SUPREMACY

I haven’t written a blog post in a while. Over the past 6 months, I’d try to pontificate on the topic of Augmented Reality, but some major new development would always occur. I have a bunch of scrapped posts sitting in Google Drive that are now totally irrelevant. Cruising through December, I figured the coast was clear. I was considering writing a dull year-in-review post when the final paradigm shift occurred with Snap’s release of Lens Studio. So, let’s try and get this out before it’s obsolete!

The Return of Smartphone AR

Smartphone AR is definitely back.  After Apple’s announcement, everyone wanted to talk about ARKit. Despite developing the award-winning Holographic Easter Egg Hunt for HoloLens with Microsoft this past Spring, discussions with clients and investors became laser-focused on smartphone AR instead of mixed reality.

It looks like 2018 will be a big year for these platforms while mixed reality headset makers gear up for 2019 and beyond. Because of this renewed interest in smartphone AR, this is a good time to investigate your options if you’re looking to get into this platform.

ARKit and ARCore

Despite being announced after Facebook’s AR Camera Effects platform, it really was Apple’s ARKit’s announcement that set off this new hype cycle for smartphone AR. Google’s announcement of ARCore for Android was seemingly a me-too move, but also quite significant.

This isn’t about ARKit versus ARCore since there is no competition. They both do similar things on different platforms. ARCore and ARKit have a common set of features but implement them in ways that are subtly different from the user’s perspective. Because of this, it’s not super difficult to port applications between the two platforms if you are using Unity.

The biggest limitation of both ARKit and ARCore is that when you quit the application, it forgets where everything is. Although you can place anchors in the scene to position virtual objects in the real world, there is no persistence between sessions. I suspect ARCore might advance quicker in this department as Google’s ill-fated Tango technology had this in their SDK for years. I’m assuming we’ll see more and more Tango features merged into ARCore in 2018. Rumors suggest ARKit 2.0 will also see similar improvements.

ARKit does one up ARCore with the addition of face tracking for the iPhone X. This is the most advanced facial tracking system currently available on mobile phones. However, it’s only on one device–albeit a wildly popular one. ARKit’s facial tracking seems to produce results far beyond current mask filter SDKs as it builds a mesh out of your face using the TrueDepth camera. However, there doesn’t seem to be a reason why many of the basic facial tracking features can’t be brought over to phones with standard cameras. Maybe we’ll see a subset of these features trickle down into other iOS devices in the near future.

ARKit has far more penetration than ARCore. ARCore runs on a tiny fraction of Android devices, and this isn’t likely to improve. ARKit requires an iPhone 6S and above, but that’s still a large chunk of iOS devices. There probably is zero business case for focusing on ARCore first. If you truly need to develop a standalone AR app, your best bet is to target iOS primarily and Android second (if at all). If ARCore starts to get some of Tango’s features added to it ahead of ARKit, then there will be compelling use cases for ARCore exclusive apps.

Facebook Camera Effects Platform vs. Snapchat World Lens

When ARKit was first announced, I had a few meetings at large companies. They all thought it was cool, but didn’t want to develop standalone apps. Getting users to download yet another app is expensive and somewhat futile, as most go unused after a few tries. There’s a lot more interest in distributing AR experiences inside apps people already have installed. Before Facebook Camera Effects was announced, the only option was Blippar, which really isn’t an option since hardly anyone uses it.

I got access to Facebook Camera Effects early on and was really impressed with the tools. Leading up to the public release, Facebook added a lot of features. I’ve seen everything from simple masks to full-blown multiplayer games built with Facebook’s AR Studio.


Facebook’s AR Studio

Facebook developed an entire 3D engine inside the Facebook Camera. It has an impressive array of features such as a full-featured JavaScript API, facial tracking, SLAM/plane detection, bones (sadly only animated in code), 2D sprite animation, particles, shaders, UI, and advanced lighting and material options. You also can access part of the Facebook graph as well as any external URL you want. If you can fit it inside the filter’s size, poly count, and community guideline restrictions–you can make a fairly elaborate AR app far beyond simple masks.

The great thing about Camera Effects Platform is you can distribute an AR experience through an app that already has hundreds of millions of users. The flip side of that reach is that Facebook AR filters run on a huge range of devices–whether they have native AR SDKs or not–so a filter must be tested on a wide variety of phones to account for per-platform limitations and bugs.

What’s tricky is after getting approval for distribution of your filter, you still have to somehow tell users to use it. Facebook provides a few options, such as attaching a filter to a promoted Facebook page, but discovery is still a challenge.

As Camera Effects Platform opened to all, Snap released Lens Studio for both Windows and Mac. This platform allows developers to create World Lens effects for Snapchat. I was really excited about this because a lot of clients were just not very enthusiastic about Facebook’s offering. I kept hearing that the valuable eyeballs are all on Snapchat and not Facebook, despite Snapchat’s flatlining growth. Brands and marketers were chomping at the bit to produce content for Snapchat without navigating Snap’s opaque advertising platform.


Snap’s Lens Studio

Lens Studio shares many similarities to Facebook’s AR Studio, including the use of JavaScript as a language. The big difference here is that Lens Studio does not expose Snapchat’s facial tracking features. You can only make World Lenses–basically placing animated 3D objects on a plane recognized by the rear camera.

World Lenses also have much tighter size and polycount restrictions than Facebook Camera Effects. However, Lens Studio supports the importing of FBX bone animations and morph targets, along with a JavaScript API to play and blend simultaneous animations. Lens Studio also supports Substance Designer for texturing and a lot of great material and rendering options that make it easier to build a nice looking World Lens despite having lower detail than Facebook.

As for distribution, you still have to go through an approval process, which includes making sure your lens is performant on low-end devices as well as current phones. Once available, you can link your lens to a Snapcode, which you can distribute any way you want.

Which should you develop for? Unlike ARCore and ARKit, Facebook and Snapchat have wildly different feature sets. You could start with a Facebook Camera Effect and then produce a World Lens with a subset of features using detail reduced assets.

The easier path may be to port up. Start with a simple World Lens and then build a more elaborate Facebook AR filter with the same assets. Given how few people use Facebook’s stories feature, I feel that it may be smarter to target Snapchat first. Once Facebook’s Camera Effects Platform works on Instagram I’d probably target Facebook first. It really depends on what demographic you are trying to hit.

App vs. Filters

Should you develop a standalone AR app or a filter inside a social network platform? It really depends on what you’re trying to accomplish. If you want to monetize users, the only option is a standalone ARKit or ARCore app. You are free to add in-app purchases and ads in your experience as you would any other app. Facebook and Snap’s guidelines don’t allow this on their respective platforms. Are you using AR to create branded content? In the case of AR filters, they are usually ads in themselves. If you are trying to get as much reach as possible, a properly marketed and distributed AR filter is a no-brainer. A thorough mobile AR strategy may involve a combination of both native apps and filters–and in the case of Facebook’s Camera Effects Platform, they can even link to each other via REST calls.


How each platform ranks sorted by feature complexity

2018 is going to be an exciting year for smartphone AR. With the explosive growth of AR apps on the App Store and the floodgates opening for filters on social media platforms, you should be including smartphone AR in your mixed reality strategy. Give your users a taste of the real thing before the mixed reality revolution arrives.

VRLA Mixed Reality Easter Egg Hunt: Behind the Scenes

Late last year, John Root of Virtual Reality Los Angeles approached me with a crazy idea: what if we built a giant fake forest, placed it in the middle of the Los Angeles Convention Center, and used HoloLens to allow people to hunt for virtual Easter eggs inside a mixed reality experience? I wasn’t sure exactly how I’d go about creating it, but based on my time building and demoing Ether Wars at VRLA’s previous event, I knew it was possible and people would love it. I was all in.


The project got a late start–a mere matter of weeks before the show. Regardless, everything came together at the right time. Microsoft came on board as a partner, donating all the HoloLens devices for the event and bringing in their own developers to help with technical issues, project management, and the logistics of running a large public HoloLens experience.

 

LESSONS LEARNED

There haven’t been many projects of this kind. Certainly very few people have experience designing real-world mixed reality installations. I figured I’d give a brain dump of what I learned which might be useful for developers attempting similar feats.

DESIGNING FOR REALITY

Before anything could be built, we had to work out the math behind the user flow of the whole experience. People such as Disney Imagineers who design amusement park rides are very familiar with this process. We had to figure out how many people we could fit in the exhibit’s space and how long it would take to get users in and out of the experience. From here we determined how many people could move through the Easter Egg hunt per day, how much staff would be needed to run it, and how many HoloLens devices we’d need in total. We also had to design the set to accommodate potentially large lines that wouldn’t descend into chaos and mayhem by the middle of the day.
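To make the math concrete, here’s a rough back-of-the-napkin version of the kind of throughput calculation we did. The numbers below are placeholders for illustration, not the actual VRLA figures.

```csharp
using UnityEngine;

// Rough throughput estimate for a timed walkthrough attraction.
// All numbers are illustrative placeholders, not the real VRLA figures.
public class ThroughputEstimate : MonoBehaviour
{
    void Start()
    {
        int headsets = 12;            // HoloLens devices running at once
        float minutesPerGuest = 5f;   // fit headset + play + exit
        float resetMinutes = 1f;      // clean up and hand off to the next guest
        float hoursOpenPerDay = 8f;

        float cycleMinutes = minutesPerGuest + resetMinutes;
        float guestsPerHour = headsets * (60f / cycleMinutes);  // 12 * 10 = 120
        float guestsPerDay = guestsPerHour * hoursOpenPerDay;   // 960

        Debug.Log("~" + guestsPerHour + " guests/hour, ~" + guestsPerDay + " guests/day");
    }
}
```

From numbers like these you can back out staffing, headset count, and how much queue space the set needs to absorb.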

PHYSICAL SET DESIGN IS HARD

The VRLA Mixed Reality Easter Egg Hunt involved typical software development issues and the added challenge of building a unique physical set that works properly with mixed reality technology.

The first step was designing the set itself. Mike Murdock of TriHelix designed the set, using a Vive to make sure it fit inside the dimensions set by the booth. This allowed him to not only judge the size, but preview what it would be like to walk through the set before construction began.


Choosing paint colors to match Mike’s virtual set design.

It’s important to get the art directors of the set and the app on the same page–primarily to make sure the colors of the real world work well against the HoloLens’ additive display while still being highly trackable. Unifying the look of the virtual and physical objects is also critical to creating a seamless experience. This means you need to organize the art process far in advance. Nathan Fulton, my art director on the app, made sure the color swatches for the set matched his vision of the virtual objects in the experience.

Lighting is also very important. We spent a lot of time designing the placement and type of lights so the space would be as trackable as possible without having the display overpowered by the environment.

 

To build the physical set, we turned to Fonco Studios, an experienced Hollywood set and prop design company who constructed an amazing N64-stylized forest out of literally a ton of styrofoam. This set was then sliced up into chunks and transported to the LA Convention Center where it was reassembled.

ANCHORING TO THE REAL WORLD

One of the first challenges when designing the software was to determine how the virtual objects were going to be placed on the set. In the beginning I thought we’d simply drop spatial anchors on the physical location and then share these anchors with each HoloLens. This proved to be impractical for a number of reasons.

Firstly, I’ve had lots of issues sharing spatial anchors across devices; they often appear in the wrong places. Not only that, but anyone who has been to a conference will tell you Wi-Fi access is spotty, if available at all. Relying on the network to transfer spatial anchors around might simply not be an option.

Plus, without having the physical set to scan we would have no way to test the final layout of eggs and other objects until we got to the location a day or so before the event.

For most of the app’s development, we used Mike’s 3D model of the set design in Unity3D to place the objects on. Then we used AfterNow’s technique of placing three spatial anchors in the corners of the set, saving them, and then spawning the game level at the center of these points. This real world alignment process is a little error prone, but does work.
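Here’s a rough sketch of that alignment step, assuming the three anchors mark the front-left, front-right, and back-left corners of the set. The names and conventions are illustrative–this isn’t AfterNow’s actual code.

```csharp
using UnityEngine;

// Aligns the virtual level to the physical set using three anchored corner
// transforms. A rough sketch of the technique, not AfterNow's actual code.
public class SetAligner : MonoBehaviour
{
    public Transform frontLeft;   // spatial-anchored corner markers
    public Transform frontRight;
    public Transform backLeft;
    public Transform levelRoot;   // the game level to spawn/align

    public void Align()
    {
        // Center the level between the three corners.
        Vector3 center = (frontLeft.position + frontRight.position + backLeft.position) / 3f;

        // Derive the set's facing from the front edge, flattened to the floor plane.
        Vector3 right = frontRight.position - frontLeft.position;
        Vector3 forward = Vector3.Cross(right, Vector3.up).normalized;

        levelRoot.position = center;
        levelRoot.rotation = Quaternion.LookRotation(forward, Vector3.up);
    }
}
```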

These anchors are saved locally to the HoloLens so the setup process only has to be done once. Whenever players put on the headset after the app is restarted, they can jump right into the experience–no alignment necessary.


The final set, assembled at the Los Angeles Convention Center

We also attempted to match lighting as accurately as possible by taking spherical photos using a RICOH THETA camera at different points inside the set and using them as cubemaps in Unity3D. These cubemaps provided convincing reflections, and can also be used with IBL shaders to help make virtual objects match the real world environment.
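If you go the same route, wiring a captured cubemap in as Unity’s reflection source only takes a couple of lines. A minimal sketch–the cubemap asset itself comes from the stitched spherical photos:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Uses a cubemap captured on location (e.g. stitched from spherical photos)
// as the scene's reflection source. A minimal sketch of the idea.
public class LocationReflections : MonoBehaviour
{
    public Cubemap capturedCubemap;   // imported from the on-set spherical photos

    void Start()
    {
        RenderSettings.defaultReflectionMode = DefaultReflectionMode.Custom;
        RenderSettings.customReflection = capturedCubemap;
    }
}
```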

TO OCCLUDE OR NOT TO OCCLUDE

The added bonus of having a fixed set is we know the exact shape of the world. This meant we had the option of using a LIDAR scan of the environment as an occlusion mesh instead of the one built internally by HoloLens.

There are lots of advantages to this. Most importantly, the 3D scan can be optimized for performance by reducing the number of polygons and adding detail where the scan may have missed a few things. This makes the occlusion much more accurate while giving a bonus in performance.
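In practice the Unity setup is simple: drop in the retopologized scan, give it a collider, and render it with a depth-only material so it hides holograms without drawing anything itself. A minimal sketch, assuming a ColorMask-0 style occlusion material (such as the occlusion shader that ships with HoloToolkit):

```csharp
using UnityEngine;

// Turns a pre-scanned environment mesh into an invisible occluder: it writes
// to the depth buffer (hiding holograms behind it) and blocks physics, but
// renders no color. Assumes a depth-only material is assigned.
public class ScannedOccluder : MonoBehaviour
{
    public Material depthOnlyMaterial;   // shader that writes depth, no color

    void Start()
    {
        foreach (MeshFilter mf in GetComponentsInChildren<MeshFilter>())
        {
            mf.gameObject.AddComponent<MeshCollider>().sharedMesh = mf.sharedMesh;

            MeshRenderer r = mf.GetComponent<MeshRenderer>();
            if (r != null)
                r.sharedMaterial = depthOnlyMaterial;
        }
    }
}
```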

Mimic3D made an amazing scan of the set once it was finally assembled in the convention center. Our retopologized version of that scan had a far lower poly count than the mesh HoloLens generates dynamically. This highly optimized mesh also let us do a neat “The Matrix” style grid effect that appears to pour out over the world with a simple shader.


Mimic3D’s amazing LIDAR scan

Perhaps most importantly, with over a dozen HoloLens devices to set up before the show, not having to generate a good occlusion mesh on each headset saved a lot of preparation time.

WHERE DO I LOOK?

The limited FOV of the HoloLens made it very important to guide the user’s gaze to where the action is. In some cases, our assumptions about where users would look and how they would move through the environment didn’t exactly line up with reality. One prime example is the rabbits.

In the back right corner of the forest there is a group of three rabbits that scurry away when you get close. We spent a lot of time tweaking the trigger size, position, and animation of the rabbits to make sure people had plenty of time to notice them. We even had Somatone create a positional audio effect to call attention to this. However, it was far too easy for someone to back into the trigger volume and have the bunnies escape without seeing them. A lot of participants reported not seeing the rabbits at all–perhaps the most important creatures in the entire experience!
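In hindsight, one mitigation would be to only fire the scare trigger when the player is actually looking toward the rabbits. Here’s a rough sketch of that idea–the component and threshold are illustrative, not what we shipped:

```csharp
using UnityEngine;

// Only lets a trigger fire if the player's gaze is roughly toward the target.
// A rough illustration of gating triggers on gaze, not the shipped code.
public class GazeGatedTrigger : MonoBehaviour
{
    public Transform target;              // e.g. the rabbits
    public float maxGazeAngle = 30f;      // how far off-center still counts as "looking"

    void OnTriggerEnter(Collider other)
    {
        Transform head = Camera.main.transform;   // HoloLens camera = the user's head
        Vector3 toTarget = (target.position - head.position).normalized;

        if (Vector3.Angle(head.forward, toTarget) <= maxGazeAngle)
        {
            // Player can actually see the rabbits: kick off the scurry animation here.
            Debug.Log("Rabbits scurry!");
        }
        // Otherwise do nothing; the player backed in without looking.
    }
}
```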

IN CONCLUSION

The lines were huge but everyone seemed to have a smile on their face when leaving the booth. The press loved it, too. I consider this a huge success for all parties involved and a leading example of how Mixed Reality can be used for things far more interesting than enterprise apps.

Building on what we’ve learned at VRLA, I’m totally ready to tackle much more elaborate Mixed Reality entertainment experiences. Since the event, I’ve received a lot of interest in doing this type of project on a larger scale. Who knows what we might build next!

Designing HoloLens Apps For A Small FOV

In my recent VRDC talk I spent a slide talking about limitations of the platform. The most common complaint about HoloLens and just about any other AR or MR platform is the small window in which the augmentations appear. This low FOV is a hard physics problem that isn’t likely to improve on a Moore’s Law curve. Get used to it. We’re going to be stuck with it for a while. (Please, someone prove me wrong!)

It’s not the end of the world. It’s just that developers have to learn how to build applications around this limitation.

GUIDE THE USER

Most VR applications require the user to look around. After all, that’s the whole point of being immersed in a virtual environment. Even if it’s a seated experience, usually the player is encouraged to search the scene for things to look at or interact with.

In mixed reality, the lack of peripheral vision (or anything near it) due to FOV limitations makes visually searching for objects frustrating. A quick scan of the scene won’t catch your eye on something interesting; you have to look more deliberately for stuff in the scene.

HoloLens’ HoloToolkit provides a solution to this with the DirectionIndicator class. This is a directional indicator arrow attached to the cursor that points in the direction of a targeted object.

Perhaps a more natural version of this is used in Young Conker. The directional indicator is 3D, naturally sliding along and colliding with the environment.
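If you want to roll your own rather than use HoloToolkit’s component, the core idea is only a few lines: hide the arrow when the target is in view, otherwise point it toward the target within the camera’s view plane. A minimal sketch, not the HoloToolkit implementation:

```csharp
using UnityEngine;

// Minimal "where do I look" arrow: hidden when the target is roughly in view,
// otherwise rotated around the view center to point toward the target.
// Attach this a short distance in front of the camera; the arrow mesh is
// assumed to point along its local +Y axis.
public class SimpleDirectionIndicator : MonoBehaviour
{
    public Transform target;
    public Renderer arrow;              // arrow mesh parented under this object
    public float inViewAngle = 15f;     // within this cone, no arrow needed

    void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        Vector3 toTarget = target.position - cam.position;

        bool inView = Vector3.Angle(cam.forward, toTarget) < inViewAngle;
        arrow.enabled = !inView;
        if (inView) return;

        // Project the target direction into the camera plane and aim the arrow at it.
        Vector3 flat = Vector3.ProjectOnPlane(toTarget, cam.forward).normalized;
        transform.rotation = Quaternion.LookRotation(cam.forward, flat);
    }
}
```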

USE AUDIO CUES

Unity makes it incredibly easy to add spatial sound to a HoloLens app. Simply enable the Microsoft HRTF Spatializer plugin in the audio settings and check off “spatialize” on your positional audio sources. This is more than just a technique for immersion–the positional audio is so convincing you can use it to direct the user’s attention anywhere in the environment. If the object is way out of the user’s view, emit a sound from it to encourage the player to look at it.
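The per-source setup is just as simple. A minimal sketch of spatializing an attention-grabbing sound, assuming the HRTF spatializer is already selected in the project’s audio settings:

```csharp
using UnityEngine;

// Plays a fully spatialized one-shot from an object to pull the user's gaze
// toward it. Assumes the MS HRTF Spatializer is selected in Audio settings.
[RequireComponent(typeof(AudioSource))]
public class AttentionSound : MonoBehaviour
{
    public AudioClip cue;

    public void PlayCue()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialize = true;       // route through the spatializer plugin
        source.spatialBlend = 1f;       // fully 3D, not stereo
        source.clip = cue;
        source.Play();
    }
}
```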

DESIGN ART ACCORDINGLY

Having art break the limited FOV frame is a real problem. To a certain degree, this can’t be solved–get close enough to anything and it will be big enough to go beyond the FOV’s augmentation area.


Ether Wars uses small objects to prevent breaking the frame

This is why I design most HoloLens games to work with lots of smaller models instead of large game characters or objects. If the thing of interest to the user isn’t breaking the frame, he might not notice the rest of the graphics are getting clipped.  Also, Microsoft recommends keeping the clipping plane a few feet out from the user–so if you can design the game such that the player isn’t supposed to get close enough to the holograms, you might be able to prevent most frame-breaking cases.

CONCLUSION

For AR/MR developers, limited FOV is a fact of life. In enterprise apps where you are focused on a specific task, it’s not so bad. For games, most average players will be put off if they have to wrestle too much with this limitation. Microsoft’s showcase games still play very well with this restriction, and show some creative ways to get around it.

So, You Wanna Make A Pokemon Go Clone?

I told you not to do it.


But suddenly my 2013 blog post about displaying maps in Unity3D is now my top page of the month. There are lots of Pokemon Go clones being built right now.

Well, if you absolutely insist, here’s how I’d go about it.

Step 1: Raise tons of money

You’re going to need it. And it’s not just for user acquisition. You’ll need a lot of dry powder for scaling costs in the unlikely event this game is as successful as you’ve claimed to your investors. For small apps, accessing something like the Foursquare API may be free–but it will require an expensive licensing deal to use it at the scale you’re thinking of and without restrictions.

Step 2: Buy every single location based game you can

Just having access to a places API such as Foursquare or Factual isn’t enough. You need location data relevant to a game–such as granular details about places inside of larger locations that are of interest to players. Pokemon Go has this from years of Ingress players submitting and verifying locations around the world.

Nearly 10 years ago, there was a frenzy of investment in location based games. The App Store is now littered with dead husks of old LBS games and ones that are on life support. With that pile of money you raised, it should be easy to go on a shopping spree and buy up these games. Not for their users, or even the technology, but for the data. Most of these games may have been fallow for years, making their location data stale. Yet, it may be possible with machine learning or old fashioned elbow grease to work that data into a layer of interesting sub-locations for your game to be designed around.

Step 3: Plan for Database Hell

Designing for scale at the start is a classic mistake for any startup. You’re effectively building a football stadium for a carload of people. That doesn’t mean you shouldn’t entertain the idea of scaling up a service once it’s successful.

Full disclosure, I’ve never built an app at the scale of Pokemon Go. Few people have. I suspect many of the server issues are related to scaling a geospatial database with that many users. It’s much harder to optimize your data around location than other usage patterns. Don’t take my word for it, check out this analysis.

It’s been years since I’ve looked at geospatial databases. Despite some announcements, it doesn’t look like a lot has changed. A cursory search suggests PostGIS is still a solid choice. Plus, there are a lot of Postgres experts out there that can help with scaling issues. MongoDB’s relatively new spatial features may also be an option.
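To give a flavor of what the geospatial side looks like, here’s a rough sketch of a “what’s near this player” radius query against PostGIS from C# via the Npgsql driver. The spawn_points table and its columns are made up for illustration:

```csharp
using Npgsql;

// Rough sketch of a radius query against PostGIS via Npgsql.
// The spawn_points table and its columns are illustrative, not a real schema.
public static class NearbySpawns
{
    public static void Print(double lon, double lat, double radiusMeters)
    {
        const string sql =
            "SELECT name FROM spawn_points " +
            "WHERE ST_DWithin(location, ST_MakePoint(@lon, @lat)::geography, @radius)";

        using (var conn = new NpgsqlConnection("Host=localhost;Database=game;Username=game"))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("lon", lon);
                cmd.Parameters.AddWithValue("lat", lat);
                cmd.Parameters.AddWithValue("radius", radiusMeters);

                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        System.Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}
```

The query itself is trivial; the hard part is keeping it fast when millions of players are hammering it from constantly changing locations.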

As for fancier alternatives–Google App Engine is an easy way to “magically” scale an app. They have also started releasing really interesting new geospatial services. Not to mention some great support for mobile apps that may make integrating with Unity3D a bit easier. However, GAE  is very expensive at scale, and the location features are still in alpha. Choosing Google App Engine is a risky decision, but also may be an easy way to get started.

To avoid vendor lock-in, have a migration strategy in mind. One of which may be using your pile of money to recruit backend people from startups with large amounts of users.

Step 4: Get Ready for the Disappointing State of Mobile AR

Pokemon Go has sparked a lot of renewed interest in AR. Much like geospatial databases, not much has changed in the past 5 years as far as what your average smartphone can do. Sure, beefier processors and higher res cameras can get away with some limited SLAM functionality. But, these features are very finicky. Your best bet is to keep AR to a minimum, as Pokemon Go smartly did. Placing virtual objects on real world surfaces in precise locations, especially outdoors, is the realm of next generation hardware.

Step 5: ??????

Ok, this isn’t a precise recipe for a Pokemon Go clone. But hey, if you’ve completed step one, maybe you should contact me for more details?

Debugging HoloLens Apps in Unity3D

I’ve been developing on HoloLens for a few weeks now, and I’m being re-acquainted with the tricky part of debugging hardware-specific augmented reality apps in Unity3D. I went through a lot of these issues with my Google Tango project, InnAR Wars, so I’m somewhat used to it. However, having to wear the display on your head while testing code brings a whole new dimension of difficulty to debugging Augmented Reality applications. I figured I’d share a few tips I use when debugging Unity3D HoloLens apps, beyond the standard Unity remote debugging tools you’re used to from mobile development.


Debugging in the Editor vs. Device

The first thing you need to do is figure out how to test code without deploying to the device. Generating a Visual Studio project, compiling, and uploading your application to one (or more) HoloLens headsets is a real pain when trying to iterate on simple code changes. It’s true that Unity3D can’t run any of HoloLens’ AR features in the editor, but there are times when you just have to test basic gameplay code that doesn’t require spatialization, localization, or any HoloLens-specific features. There are a few steps to make this easier.

Make A Debug Keyboard Input System

HoloLens relies mostly on simple gestures (Air Tap) and voice for input. The first thing you need to test HoloLens code in the Unity3D editor is a simple way to trigger whatever event fires off via Air Tap or speech commands through the keyboard. In my case, I wrote a tiny bit of code to use the space bar to trigger the Air Tap. Basically–anywhere you add a delegate to handle an Air Tap or speech command, you need to add some input code to trigger that same method via keyboard.
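It’s only a few lines. A minimal sketch–the OnAirTap handler here stands in for whatever method your gesture recognizer already calls:

```csharp
using UnityEngine;

// Editor-only stand-in for HoloLens input: the space bar fires the same
// handler the Air Tap gesture would. OnAirTap is whatever your app already
// wires up to the gesture/speech recognizers.
public class EditorInputShim : MonoBehaviour
{
    public void OnAirTap()
    {
        // ... your existing Air Tap / "Select" handling ...
    }

#if UNITY_EDITOR
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            OnAirTap();
    }
#endif
}
```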

Use An Oculus Rift Headset

I was pleasantly surprised to find out the Unity HoloLens Technical Preview supports Oculus Rift. Keep your Rift plugged in when developing for HoloLens. When you run your application in the Unity editor, it will show up inside the Rift–albeit against a black background. This is extremely helpful when debugging code that uses gaze, positional audio, and even limited movement of the player via Oculus’ positional tracking.

Use The HoloLens Companion App

Microsoft provides a HoloLens companion app in the Windows Store with a few handy features. The app connects to HoloLens via WiFi and lets you record videos live from the headset (very useful for documenting reproducible bugs and crashes). It lets you stop and start apps remotely, which can be useful when trying to launch an app on multiple HoloLenses at the same time. You can also use your PC’s keyboard to send input to a remote HoloLens. This is convenient for multiplayer testing–use Air Tap on the headset you’re wearing and the companion app to trigger input on the other device.

These tips may make building HoloLens apps a little easier, but I really hope Microsoft adds more debugging features to future versions of the SDK. There are some simple things Microsoft could do to make development more hassle-free, although there’s really a limit to what you can do in the Unity Editor versus the device.

Developing Applications for HoloLens with Unity3D: First Impressions

I started work on HoloLens game development with Unity3D over the past week. This included going through all of the example projects, as well as building simple games and applications to figure out how all of the platform’s features work. Here are some takeaways from my first week as a HoloLens developer.


Baby steps…

The Examples Are Great, But Lack Documentation

If you go through all of the Holo Academy examples Microsoft provides, you’ll go from displaying a basic cube to a full-blown multi-user Augmented Reality experience. However, most of the examples involve dragging and dropping pre-made prefabs and scripts into the scene. Not a lot about the actual SDK is explained. The examples are a good way to get acquainted with HoloLens features, but you’re going to have to do more work to figure out how to write your own applications.

HoloToolkit is Incredibly Full Featured

All of the examples are based on HoloToolkit, Microsoft’s collection of scripts and prefabs that handle just about every major HoloLens application feature: input, spatial mapping, gesture detection, speech recognition, and even some networking.

I also found that features I needed (such as the placement of objects in the real world using the real-time mesh as a collider) exist in the examples and could easily be stripped out and modified for my own C# scripts. Using these techniques I was able to get a very simple carnival milk bottle game running in a single Saturday afternoon.
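The core of that placement trick is just a gaze raycast against the spatial mapping mesh. A stripped-down sketch–the layer name depends on how your spatial mapping prefab is configured:

```csharp
using UnityEngine;

// Places an object where the user's gaze hits the spatial mapping mesh.
// Assumes the spatial mapping surfaces live on a "SpatialMapping" layer.
public class GazePlacer : MonoBehaviour
{
    public Transform objectToPlace;
    public float maxDistance = 10f;

    void Update()
    {
        Transform cam = Camera.main.transform;
        int mask = LayerMask.GetMask("SpatialMapping");

        RaycastHit hit;
        if (Physics.Raycast(cam.position, cam.forward, out hit, maxDistance, mask))
        {
            // Snap to the real-world surface and align with its normal.
            objectToPlace.position = hit.point;
            objectToPlace.up = hit.normal;
        }
    }
}
```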

Multiplayer Gets Complicated

I’m working on moving my award winning Tango RTS, InnAR Wars, to HoloLens. However, multiplayer experiences work much differently on HoloLens than Tango. In the case of Tango, each device shares a single room scan file and is localized in the same coordinate space. This means that once the game starts, placing an object (like a floating planet or asteroid) at any position will make it appear in the same real-world location on both Tangos.

HoloLens shares objects between devices using what are called Spatial Anchors. Spatial Anchors mark parts of the scanned room geometry as an anchored position. You can then place virtual objects in the real world relative to this anchor. When you share a Spatial Anchor with another device, the other HoloLens will look for a similar location in its own scan of the room to position the anchor. These anchors are constantly being updated as the scan continues, which is part of the trick to how HoloLens’ tracking is so rock solid.
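Setting up and persisting an anchor on a single device is the straightforward half; sharing it across devices is where the refactoring work comes in. A sketch of the local half–note the namespaces moved to UnityEngine.XR.WSA in later Unity versions, and cross-device sharing additionally needs WorldAnchorTransferBatch:

```csharp
using UnityEngine;
using UnityEngine.VR.WSA;               // WorldAnchor (UnityEngine.XR.WSA in newer Unity)
using UnityEngine.VR.WSA.Persistence;   // WorldAnchorStore

// Locks this object to a real-world location and saves the anchor locally
// so it survives app restarts. Single-device sketch only.
public class AnchoredObject : MonoBehaviour
{
    public string anchorId = "asteroid_01";

    void Start()
    {
        WorldAnchorStore.GetAsync(store =>
        {
            // Try to restore a previously saved anchor first.
            if (store.Load(anchorId, gameObject) == null)
            {
                // Otherwise anchor the object here and persist it.
                WorldAnchor anchor = gameObject.AddComponent<WorldAnchor>();
                store.Save(anchorId, anchor);
            }
        });
    }
}
```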

Sure, having a single coordinate frame on the Tango is easier to deal with, but the Tango also suffers from drift and inaccuracies that may be symptomatic of its approach. Spatial Anchoring is a rather radical change from how Tango works–which means a lot of refactoring for InnAR Wars, or even a redesign.

First Week Down

This first week has been an enlightening experience. Progress has been fast but also made me aware of how much work it will be to produce a great HoloLens app. At least two independently published HoloLens games popped up in the Windows Store over the past few days. The race is on for the first great indie HoloLens application!

How To Support Gear VR and Google Cardboard In One Unity3D Project

Google Cardboard is a huge success. Cardboard’s userbase currently dwarfs that of Gear VR. Users, investors, and collaborators who don’t have access to Gear VR often ask for Cardboard versions of my games. As part of planning what to do next with Caldera Defense, I decided to create a workflow to port between Gear VR and Cardboard.

Always keep a Cardboard on me at ALL TIMES!

I used my VR Jam entry, Duck Pond VR, as a test bed for my Unity3D SDK switching scripts. It’s much easier to do this on a new project. Here’s how I did it:

Unity 4 vs. Unity 5

Google Cardboard supports Unity 4 and Unity 5. Although Oculus’ mobile SDK will technically work on Unity 5, you can’t ship with it because bugs in the current version of Unity 5 cause memory leaks and other issues on the Gear VR hardware. Unity is working on a fix but I haven’t heard any ETA on Gear VR support in Unity 5.

This is a bummer since the Cardboard SDK for Unity 5 supports skyboxes and other features in addition to the improvements Unity 5 has over 4. Unfortunately you’re stuck with Unity 4 when making a cross-platform Gear VR and Cardboard app.

Dealing With Cardboard’s Lack of Input

Although Gear VR’s simplistic touch controls are a challenge to develop for, the vast majority of Cardboards have no controls at all! Yes, Google Cardboard includes a clever magnetic trigger for a single input event. Yet, the sad fact is most Android devices don’t have the necessary dock connector to use this.

You have a few other control options that are universal to all Android devices: the microphone and Bluetooth controllers. By keeping the microphone open, you can use loud sounds (such as a shout) to trigger an action. You can probably use something like the Pitch Detector plug-in for this. Or, if your cardboard has a head strap for hands-free operation, you can use a Bluetooth gamepad for controls.

Because of this general lack of input, many Cardboard apps use what I call “stare buttons” for GUIs. These are buttons that trigger if you look at them long enough. I’ve implemented my own version. The prefab is here, the code is here. It even hooks into the new Unity UI event system so you can use it with my Oculus world space cursor code.
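The core of a stare button is just a dwell timer driven by a gaze raycast. A simplified sketch of my version–the timing and event wiring here are illustrative:

```csharp
using UnityEngine;
using UnityEngine.Events;

// "Stare button": fires an event after the user's gaze rests on this
// collider for dwellTime seconds. Simplified sketch of the idea.
[RequireComponent(typeof(Collider))]
public class StareButton : MonoBehaviour
{
    public float dwellTime = 2f;
    public UnityEvent onActivated;

    float gazeTimer;

    void Update()
    {
        Transform cam = Camera.main.transform;
        RaycastHit hit;
        bool gazedAt = Physics.Raycast(cam.position, cam.forward, out hit, 100f)
                       && hit.collider.gameObject == gameObject;

        gazeTimer = gazedAt ? gazeTimer + Time.deltaTime : 0f;

        if (gazeTimer >= dwellTime)
        {
            gazeTimer = 0f;
            onActivated.Invoke();   // hook up UI actions in the Inspector
        }
    }
}
```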

Gear VR apps must be redesigned to fit within Cardboard’s constraints, whether it’s the limited controls or the performance limits of low-end devices. Most of my Cardboard ports are slimmed-down Gear VR experiences. In the case of Caldera Defense, I’m designing a simplified auto-firing survival mode for the Cardboard port. I’ll merge this mode back into the Gear VR version as an extra game mode in the next update.

Swapping SDKs

This is surprisingly easy. You can install the Cardboard and Gear VR SDKs in a single Unity project with almost no problems. The only conflict is they both overwrite the Android manifest in the plugin folder. I wrote an SDK swapper that lets you switch between the Google Cardboard and Oculus manifests before you do a build. You can get it here. This editor script has you pick where each manifest file is for Cardboard and Gear VR and will simply copy over the appropriate file to the plugin folder. Of course for iOS Cardboard apps this isn’t an issue.
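The swapper itself is only a few lines of editor code. A simplified sketch of the approach–the source paths are examples, not the actual tool:

```csharp
using UnityEditor;
using System.IO;

// Copies the platform-specific AndroidManifest.xml into Plugins/Android
// before building. Place in an Editor folder. Simplified sketch; the
// source manifest paths below are just examples.
public static class ManifestSwapper
{
    const string Target = "Assets/Plugins/Android/AndroidManifest.xml";

    [MenuItem("Build/Use Gear VR Manifest")]
    static void UseGearVR()
    {
        Swap("Assets/Manifests/AndroidManifest.GearVR.xml");
    }

    [MenuItem("Build/Use Cardboard Manifest")]
    static void UseCardboard()
    {
        Swap("Assets/Manifests/AndroidManifest.Cardboard.xml");
    }

    static void Swap(string source)
    {
        File.Copy(source, Target, true);   // overwrite the active manifest
        AssetDatabase.Refresh();
    }
}
```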

Supporting Both Prefabs

Both Oculus and Cardboard have their own prefabs that represent the player’s head and eye cameras. In Caldera Defense, I originally attached a bunch of game objects to the player’s head to use for traces, GUI positioning, HUDs, and other things that need to use the player’s head position and orientation. In order for these to work on both Cardboard and Oculus’ prefabs, I placed all objects attached to the head on another prefab which is attached to the Cardboard or Oculus’ head model at runtime.

Wrapping Both APIs

Not only do both SDKs have similar prefabs for the head model, they also have similar APIs. In both the Cardboard and Oculus versions, I need to refer to the eye and head positions for various operations. To do this, I created a simple class that detects which prefab is present in the scene and grabs the respective class to wrap the eye position reference around. The script is in the prefab’s package.
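The wrapper boils down to “find whichever rig exists, expose one head transform.” A simplified sketch of the idea–the object names below are examples, not the exact prefab names, and the real script lives in the prefab’s package:

```csharp
using UnityEngine;

// Exposes a single "head" transform regardless of which VR rig is in the
// scene. The object names below are examples, not the exact prefab names.
public class HeadProvider : MonoBehaviour
{
    public Transform Head { get; private set; }

    void Awake()
    {
        // Look for the Cardboard head first, then the Oculus center eye.
        GameObject head = GameObject.Find("Head")
                          ?? GameObject.Find("CenterEyeAnchor");

        // Fall back to whatever the main camera is (e.g. in-editor testing).
        Head = head != null ? head.transform : Camera.main.transform;
    }
}
```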

Conclusion

For the final step, I made separate Cardboard versions of all my relevant Gear VR scenes which include the Cardboard prefabs and modified gameplay and interfaces. If no actual Oculus SDK code is referenced in any of the classes used in the Cardboard version, the Oculus SDK should be stripped out of that build and you’ll have no problem running on Cardboard. This probably means I really need to make Oculus- and Cardboard-specific versions of that CameraBody script.

The upcoming Unity 5.1 includes native Oculus support which may make this process a bit more complicated. Until then, these steps are the best way I can find to support both Cardboard and Gear VR in one project. I’m a big fan of mobile VR, and I think it’s necessary for any developer at this early stage of the market to get content out to as many users as possible.

My Week With Project Tango

A few weeks back I got into Google’s exclusive Project Tango developers program. I’ve had a Tango tablet for about a week and have been experimenting with the available apps and Unity3D SDK.

Project Tango uses Movidius’ Myriad 1 Vision Processor chip (or “VPU”), paired with a depth camera not too unlike the original Kinect for the XBOX 360. Except instead of being a giant hideous block, it’s small enough to stick in a phone or tablet.

I’m excited about Tango because it’s an important step in solving many of the problems I have with current Augmented Reality technology. What issues can Tango solve?

POSITIONAL TRACKING

First, the Tango tablet can determine its own pose. Sure, pretty much every mobile device out there can detect its precise orientation by fusing together compass and gyro information. But by using the Tango’s array of sensors, the Myriad 1 processor can also detect position and translation. You can walk around with the tablet and it knows how far and where you’ve moved. This makes SLAM algorithms much easier to develop and more precise than strictly optical solutions.

Also, another problem with AR as it exists now is that there’s no way to know whether you or the image target moved. Rendering-wise, there’s no difference. But, this poses a problem with game physics. If you smash your head (while wearing AR glasses) into a virtual box, the box should go flying. If the box is thrown at you, it should bounce off your head–big distinction!

Pose and position tracking has the potential to factor out the user’s movement and determine the motion of both the observer and the objects that are being tracked. This can then be fed into a game engine’s physics system to get accurate physics interactions between the observer and virtual objects.

OCCLUDING VIRTUAL CHARACTERS WITH THE REAL WORLD

Anyway, that’s kind of an esoteric problem. The biggest issue with AR is most solutions can only overlay graphics on top of a scene. As you can see in my Ether Drift project, the characters appear on top of specially designed trading cards. However, wave your hand in front of the characters, and they will still draw on top of everything.

Ether Drift uses Vuforia to superimpose virtual characters on top of trading cards.


With Tango, it is possible to reconstruct the 3D geometry of your surroundings using point cloud data received from the depth camera. Matterport already has an impressive demo of this running on the Tango. It allows the user to scan an area with the tablet (very slowly) and it will build a textured mesh out of what it sees. When meshing is turned off the tablet can detect precisely where it is in the saved environment mesh.

This geometry can possibly be used in Unity3D as a mesh collider which is also rendered to the depth buffer of the scene’s camera while displaying the tablet camera’s video feed. This means superimposed augmented reality characters can accurately collide with the static environment, as well as be occluded by real world objects. Characters can now not only appear on top of your table, but behind it–obscured by a chair leg.

ENVIRONMENTAL LIGHTING

Finally, this solves the challenge of how to properly light AR objects. Most AR apps assume there’s a light source on the ceiling and place a directional light pointing down. With a mesh built from local point cloud data, you can generate a panoramic render of where the observer is standing in the real world. This image can be used as a cube map for Image-based lighting systems like Marmoset Skyshop. This produces accurate lighting on 3D objects which when combined with environmental occlusion makes this truly a next generation AR experience.

A QUICK TEST

The first thing I did with the Unity SDK is drop the Tango camera in a Camera Birds scene. One of the most common requests for Camera Birds was to be able to walk through the forest instead of just rotating in place. It took no programming at all for me to make this happen with Tango.

This technology still has a long way to go–it has to become faster and more precise. Luckily, Movidius has already produced the Myriad 2, which is reportedly 3-5X faster and 20X more power efficient than the chip currently in the Tango prototypes. Vision Processing technology is a supremely nerdy topic–after all it’s literally rocket science. But it has far reaching implications for wearable platforms.

Samsung Gear VR Development Challenges with Unity3D

As you may know, I’m a huge fan of Oculus and Samsung’s Gear VR headset. The reason isn’t about the opportunity Gear VR presents today. It’s about the future of wearables–specifically of self-contained wearable devices. In this category, Gear VR is really the first of its kind. The lessons you learn developing for Gear VR will carry over into the bright future of compact, self-contained, wearable displays and platforms. Many of which we’ve already started to see.

The Gear VR in the flesh (plastic).



Gear VR development can be a challenge. Rendering two cameras and a distortion mesh on a mobile device at a rock solid 60fps requires a lot of optimization and development discipline. Now that Oculus’ mobile SDK is public and having worked on a few launch titles (including my own original title recently covered in Vice), I figured I’d share some Unity3D development challenges I’ve dealt with.

THERMAL ISSUES

The biggest challenge with making VR performant on a mobile device is throttling due to heat produced by the chipset. Use too much power and the entire device will slow itself down to cool off and avoid damaging the hardware. Although the Note 4 approaches the XBOX 360 in performance characteristics, you only have a fraction of its power available. This is because the phone must take power and heat considerations into account when deciding how fast to let the CPU and GPU run.

With the Gear VR SDK you can independently tell the device how fast the GPU and CPU should run. This prevents you from eating up battery when you don’t need the extra cycles, and lets you tune your game for performance at lower clock speeds. Still, you have to be aware of what types of things eat up CPU cycles or consume GPU resources. Ultimately, you must choose which of the two gets the bigger share of the power budget.
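For reference, the Unity integration exposes these as simple properties. A sketch only–the values range 0 to 3, and the exact entry point has shifted between SDK versions, so check the version you’re actually shipping against:

```csharp
using UnityEngine;

// Dials back the CPU/GPU clock levels when full speed isn't needed.
// Sketch only: recent Oculus Utilities expose OVRManager.cpuLevel/gpuLevel
// (0-3); older mobile SDK versions used a different entry point.
public class ClockTuning : MonoBehaviour
{
    void Start()
    {
        OVRManager.cpuLevel = 2;   // plenty for simple game logic
        OVRManager.gpuLevel = 2;   // raise only for heavier scenes
    }
}
```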

GRAPHICAL DETAIL

The obvious optimization is lowering graphical detail. Keep your polycount under 50k triangles. Avoid as much per pixel and per vertex processing as possible. Since you have tons of RAM but relatively little GPU power available–opt for more texture detail over geometry. This includes using lightmaps instead of dynamic lighting. Of course, restrict your usage of alpha channel to a minimum–preferably for quick particle effects, not for things that stay on the screen for a long period of time.

Effects you take for granted on modern mobile platforms, like skyboxes and fog, should be avoided on Gear VR. Find alternatives or design an art style that doesn’t need them. A lot of these restrictions can be made up for with texture detail.

A lot of standard optimizations apply here–for instance, use texture atlasing and batching to reduce draw calls. The target is under 100 draw calls, which is achievable if you plan your assets correctly. Naturally, there are plenty of resources in the Asset Store to get you there. Check out Pro Draw Call Optimizer for a good texture atlasing tool.

CPU OPTIMIZATIONS

There are less obvious optimizations you might not be familiar with until you’ve gone to extreme lengths to optimize a Gear VR application. This includes removing as many Update methods as possible. Most Update code that is just waiting for something to happen (like an AI that waits 5 seconds to pick a new target) can be changed to a coroutine scheduled to run in the future. Converting Update loops to coroutines will take the burden of waiting off the CPU. Even empty Update functions can drain the CPU–death by a thousand cuts. Go through your code base and remove all unnecessary Update methods.
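Here’s the kind of conversion I mean: a polling Update turned into a coroutine that only wakes up when it needs to. A simplified sketch:

```csharp
using UnityEngine;
using System.Collections;

// Instead of polling in Update every frame, schedule the work as a
// coroutine that sleeps between decisions. Simplified sketch.
public class AIRetarget : MonoBehaviour
{
    // Before: burns CPU every frame just to count down a timer.
    // float timer;
    // void Update()
    // {
    //     timer -= Time.deltaTime;
    //     if (timer <= 0f) { PickNewTarget(); timer = 5f; }
    // }

    // After: the engine wakes us up once every five seconds.
    IEnumerator Start()
    {
        while (true)
        {
            PickNewTarget();
            yield return new WaitForSeconds(5f);
        }
    }

    void PickNewTarget() { /* choose the next target here */ }
}
```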

As in any mobile game, you should be pooling prefabs. I use Path-o-Logical’s PoolManager; however, it’s not too hard to write your own. Either way, by recycling pre-created instances of prefabs, you save memory and reduce hiccups due to instantiation.
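If you do roll your own, a minimal pool is just a stack of inactive instances. A bare-bones sketch, nowhere near as featured as PoolManager:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Bare-bones prefab pool: pre-instantiates objects once, then recycles them
// instead of calling Instantiate/Destroy at runtime. A minimal sketch.
public class SimplePool : MonoBehaviour
{
    public GameObject prefab;
    public int preloadCount = 20;

    readonly Stack<GameObject> pool = new Stack<GameObject>();

    void Awake()
    {
        for (int i = 0; i < preloadCount; i++)
        {
            GameObject go = (GameObject)Instantiate(prefab);
            go.SetActive(false);
            pool.Push(go);
        }
    }

    public GameObject Spawn(Vector3 position, Quaternion rotation)
    {
        GameObject go = pool.Count > 0 ? pool.Pop() : (GameObject)Instantiate(prefab);
        go.transform.position = position;
        go.transform.rotation = rotation;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        pool.Push(go);
    }
}
```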

IN CONCLUSION

There’s nothing really new here to most mobile developers, but Gear VR is definitely one of the bigger optimization challenges I’ve had in recent years. The fun part about it is we’re kind of at the level of Dreamcast-era poly counts and effects but using modern tools to create content. It’s better than the good old days!

It’s wiser to build from the ground up for Gear VR than to port existing applications. This is because making a VR experience that is immersive and performant within these parameters requires all disciplines (programming, art, and design) to build around these restrictions from the start of the project.

A Weekend at Oculus Connect

I spent this past weekend at Oculus Connect and have just now had the time to process what I saw. For Oculus to go from a humble Kickstarter project a few years ago to a capacity filled conference rife with amazing demos and prototypes by countless developers is mind-boggling. I know I said VR in 2014 is like Mobile in 2002, but the pace of progress is staggering. The maturation path for VR is going to be MUCH quicker. Is it 2005 already?


…and all I got was this lousy t-shirt.

As I stated before, Gear VR is the most important wearable platform in the universe. I’ve been developing Gear VR games for a while and am thoroughly convinced this wireless, lightweight platform will have far more reach than VR tethered to your desktop.

The GearVR demo area.


The apps on display were great, but I even saw a few Gear VR demos from random developers in the hotel hallways that blew away what was officially shown in Samsung’s display area. Developer interest in Gear VR is very high. Once it’s commercially available, a flood of content will soon be upon us.

Despite the intense interest in the platform, I spoke to a few desktop and console developers who dismissed Gear VR as a distraction and are ignoring it–which I think is really short-sighted.

It’s true that there may be a division in audiences. Gear VR may be the larger, casual audience while apps built around Oculus’ astounding Crescent Bay platform could be for a highly monetizable market of core enthusiasts. Either route is smart business. Depending on how long you can hold out for customer traction, that is.

Oh, and Crescent Bay…was a revolution. There’s probably not much more to be said about it that hasn’t been said already–but the ridiculous momentum behind Oculus’ path from the DK1 to Crescent Bay makes me question the competition. Oculus has hired all of the smartest people I know and has billions of dollars to spend on VR R&D–which is its main business, not a side project. Will competitors like Sony really commit enough resources to compete with the relentless pace of Oculus’ progress?