My Week With Project Tango

A few weeks back I got into Google’s exclusive Project Tango developers program. I’ve had a Tango tablet for about a week and have been experimenting with the available apps and Unity3D SDK.

Project Tango uses Movidius' Myriad 1 Vision Processor chip (or "VPU"), paired with a depth camera not unlike the original Kinect for the Xbox 360. Except instead of being a giant hideous block, it's small enough to stick in a phone or tablet.

I’m excited about Tango because it’s an important step in solving many of the problems I have with current Augmented Reality technology. What issues can Tango solve?

POSITIONAL TRACKING

First, the Tango tablet can determine its own pose. Sure, pretty much every mobile device out there can detect its precise orientation by fusing compass and gyro data. But using Tango's array of sensors, the Myriad 1 processor can also detect position and translation. You can walk around with the tablet and it knows how far and where you've moved. This makes SLAM algorithms much easier to develop and more precise than strictly optical solutions.
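For the curious, consuming pose updates in the Tango Unity SDK looks roughly like the sketch below. I'm recalling the ITangoPose callback from the SDK of this era, so treat the exact names, fields, and coordinate conventions as assumptions to verify against your SDK version.

```csharp
using UnityEngine;
using Tango; // Tango Unity SDK namespace (as I recall it)

// Rough sketch: receive pose updates and move a Unity transform to match.
// ITangoPose / TangoPoseData follow the Tango Unity SDK of this era;
// exact names, fields, and axis conventions are assumptions.
public class PoseFollower : MonoBehaviour, ITangoPose
{
    void Start()
    {
        // Register for pose callbacks with the TangoApplication in the scene.
        FindObjectOfType<TangoApplication>().Register(this);
    }

    public void OnTangoPoseAvailable(TangoPoseData pose)
    {
        // Translation is in meters, relative to where the service started.
        // A real app must also convert Tango's right-handed coordinates to
        // Unity's left-handed ones; omitted here for brevity.
        transform.position = new Vector3(
            (float)pose.translation[0],
            (float)pose.translation[1],
            (float)pose.translation[2]);
    }
}
```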

Another problem with AR as it exists now is that there's no way to know whether you or the image target moved. Rendering-wise, there's no difference. But this poses a problem for game physics. If you smash your head (while wearing AR glasses) into a virtual box, the box should go flying. If the box is thrown at you, it should bounce off your head–big distinction!

Pose and position tracking has the potential to factor out the user's movement and determine the motion of both the observer and the objects being tracked. This can then be fed into a game engine's physics system to get accurate interactions between the observer and virtual objects.

OCCLUDING VIRTUAL CHARACTERS WITH THE REAL WORLD

Anyway, that's kind of an esoteric problem. The biggest issue with AR is that most solutions can only overlay graphics on top of a scene. As you can see in my Ether Drift project, the characters appear on top of specially designed trading cards. However, wave your hand in front of the characters and they will still draw on top of everything.

Ether Drift uses Vuforia to superimpose virtual characters on top of trading cards.

With Tango, it's possible to reconstruct the 3D geometry of your surroundings using point cloud data from the depth camera. Matterport already has an impressive demo of this running on Tango. It lets the user scan an area with the tablet (very slowly) and builds a textured mesh out of what it sees. When meshing is turned off, the tablet can detect precisely where it is within the saved environment mesh.

This geometry can then be used in Unity3D as a mesh collider that is also rendered to the depth buffer of the scene's camera while the tablet camera's video feed is displayed. Superimposed augmented reality characters can then accurately collide with the static environment, as well as be occluded by real-world objects. Characters can not only appear on top of your table, but behind it–obscured by a chair leg.
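The occlusion half of this is a classic depth-mask trick: render the reconstructed room mesh invisibly, but let it write to the depth buffer so any virtual character behind it fails the depth test. A minimal sketch in Unity's ShaderLab (the shader name is mine; apply it to the scanned mesh's material):

```
Shader "Custom/DepthOnlyOcclusion" {
    SubShader {
        // Render before regular geometry so occluded character pixels are rejected.
        Tags { "Queue" = "Geometry-10" }
        Pass {
            ColorMask 0  // write no color: the real-world camera feed shows through
            ZWrite On    // but fill the depth buffer with the room's geometry
        }
    }
}
```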

ENVIRONMENTAL LIGHTING

Finally, this solves the challenge of properly lighting AR objects. Most AR apps assume there's a light source on the ceiling and place a directional light pointing down. With a mesh built from local point cloud data, you can instead generate a panoramic render from where the observer is standing in the real world. That image can be used as a cube map for image-based lighting systems like Marmoset Skyshop. This produces accurate lighting on 3D objects, which, combined with environmental occlusion, makes for a truly next-generation AR experience.
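The capture step might look something like this in Unity. Camera.RenderToCubemap is a real (Pro-only, at the time) Unity API; the field names and the "_Cube" shader property are illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: render the reconstructed room into a cubemap at the user's position,
// then hand it to an image-based lighting material (e.g. Skyshop-style shaders).
public class EnvironmentProbe : MonoBehaviour
{
    public Cubemap environmentCube; // assign a writable cubemap asset
    public Material iblMaterial;    // material whose shader samples a cube map

    public void Capture(Vector3 observerPosition)
    {
        // Use a temporary, disabled camera so the main camera is untouched.
        GameObject go = new GameObject("ProbeCamera");
        Camera cam = go.AddComponent<Camera>();
        cam.enabled = false;
        cam.transform.position = observerPosition;
        cam.RenderToCubemap(environmentCube); // renders all six faces

        // "_Cube" is a common cube map property name, not guaranteed by Skyshop.
        iblMaterial.SetTexture("_Cube", environmentCube);
        Destroy(go);
    }
}
```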

A QUICK TEST

The first thing I did with the Unity SDK was drop the Tango camera into a Camera Birds scene. One of the most common requests for Camera Birds was the ability to walk through the forest instead of just rotating in place. It took no programming at all to make this happen with Tango.

This technology still has a long way to go–it has to become faster and more precise. Luckily, Movidius has already produced the Myriad 2, which is reportedly 3-5x faster and 20x more power efficient than the chip in the current Tango prototypes. Vision processing is a supremely nerdy topic–after all, it's literally rocket science. But it has far-reaching implications for wearable platforms.

Samsung Gear VR Development Challenges with Unity3D

As you may know, I'm a huge fan of Oculus and Samsung's Gear VR headset. The reason isn't the opportunity Gear VR presents today; it's the future of wearables–specifically, self-contained wearable devices. In this category, Gear VR is really the first of its kind, and the lessons you learn developing for it will carry over into the bright future of compact, self-contained wearable displays and platforms–many of which we've already started to see.

The Gear VR in the flesh (plastic).

Gear VR development can be a challenge. Rendering two cameras and a distortion mesh on a mobile device at a rock-solid 60fps requires a lot of optimization and development discipline. Now that Oculus' mobile SDK is public, and having worked on a few launch titles (including my own original title recently covered in Vice), I figured I'd share some Unity3D development challenges I've dealt with.

THERMAL ISSUES

The biggest challenge in making VR performant on a mobile device is throttling due to heat produced by the chipset. Use too much power and the entire device will slow itself down to cool off and avoid damaging the hardware. Although the Note 4 approaches the Xbox 360 in performance characteristics, you only have a fraction of that power available, because the phone must keep power and heat budgets in mind when deciding how fast to run the CPU and GPU.

With the Gear VR SDK you can independently tell the device how fast the GPU and CPU should run. This prevents you from eating up battery when you don't need the extra cycles, and lets you tune your game for performance at lower clock speeds. Still, you have to be aware of what eats up CPU cycles versus GPU resources, because ultimately you must choose which processor to allocate more power to.
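The exact API has moved around between SDK releases, but in the Oculus Unity integration setting these levels looks roughly like this (property names as I recall them from later Oculus Utilities; verify against your SDK version):

```csharp
using UnityEngine;

// Sketch: request fixed CPU/GPU clock levels to balance performance and heat.
// OVRManager.cpuLevel / gpuLevel follow the Oculus Unity integration as I
// recall it; names and valid ranges may differ in your SDK version.
public class ClockLevels : MonoBehaviour
{
    void Start()
    {
        // Levels are small integers (roughly 0-3), not MHz values.
        OVRManager.cpuLevel = 1; // light CPU work: simple game logic
        OVRManager.gpuLevel = 2; // heavier GPU work: two eye buffers at 60fps
    }
}
```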

GRAPHICAL DETAIL

The obvious optimization is lowering graphical detail. Keep your polycount under 50k triangles. Avoid as much per-pixel and per-vertex processing as possible. Since you have tons of RAM but relatively little GPU power available, opt for texture detail over geometry. This includes using lightmaps instead of dynamic lighting. And restrict your use of the alpha channel to a minimum–preferably for quick particle effects, not for things that stay on screen for a long time.

Effects you take for granted on modern mobile platforms, like skyboxes and fog, should be avoided on Gear VR. Find alternatives or design an art style that doesn’t need them. A lot of these restrictions can be made up for with texture detail.

A lot of standard optimizations apply here–for instance, use texture atlasing and batching to reduce draw calls. The target is under 100 draw calls, which is achievable if you plan your assets correctly. Naturally, there are plenty of resources in the Asset Store to get you there. Check out Pro Draw Call Optimizer for a good texture atlasing tool.

CPU OPTIMIZATIONS

There are less obvious optimizations you might not encounter until you've gone to extreme lengths to optimize a Gear VR application. One is removing as many Update methods as possible. Most Update code that just waits for something to happen (like an AI that waits 5 seconds to pick a new target) can be converted into a coroutine scheduled to run in the future, taking the burden of per-frame polling off the CPU. Even empty Update functions drain the CPU–death by a thousand cuts. Go through your code base and remove every unnecessary Update method.
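For example, the timer-polling AI above can sleep instead of checking every frame (class and method names here are illustrative):

```csharp
using UnityEngine;
using System.Collections;

public class EnemyAI : MonoBehaviour
{
    // Instead of checking a timer inside Update() every frame...
    void Start()
    {
        StartCoroutine(RetargetLoop());
    }

    IEnumerator RetargetLoop()
    {
        while (true)
        {
            PickNewTarget();
            // ...sleep for 5 seconds; Unity resumes the coroutine when it's time.
            yield return new WaitForSeconds(5f);
        }
    }

    void PickNewTarget()
    {
        // target-selection logic goes here
    }
}
```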

As in any mobile game, you should be pooling prefabs. I use Path-o-Logical's PoolManager, though it's not too hard to write your own. Either way, by recycling pre-created instances of prefabs, you save memory and reduce hiccups due to instantiation.
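If you do roll your own, the core of a pool is only a few lines. This is a minimal sketch, not PoolManager's API:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Minimal prefab pool: pre-instantiate, then reuse instead of Instantiate/Destroy.
public class PrefabPool : MonoBehaviour
{
    public GameObject prefab;
    public int preloadCount = 20;
    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pay the instantiation cost once, up front.
        for (int i = 0; i < preloadCount; i++)
        {
            GameObject go = (GameObject)Instantiate(prefab);
            go.SetActive(false);
            pool.Enqueue(go);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        GameObject go = pool.Count > 0 ? pool.Dequeue() : (GameObject)Instantiate(prefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        // Deactivate and return to the pool instead of destroying.
        go.SetActive(false);
        pool.Enqueue(go);
    }
}
```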

IN CONCLUSION

There's nothing really new here for most mobile developers, but Gear VR is definitely one of the bigger optimization challenges I've had in recent years. The fun part is that we're roughly at Dreamcast-era poly counts and effects, but creating content with modern tools. It's better than the good old days!

It's wiser to build from the ground up for Gear VR than to port existing applications, because making a VR experience that is immersive and performant within these constraints requires every discipline (programming, art, and design) to work around them from the start of the project.

A Weekend at Oculus Connect

I spent this past weekend at Oculus Connect and have only now had time to process what I saw. For Oculus to go from a humble Kickstarter project a few years ago to a packed-to-capacity conference full of amazing demos and prototypes from countless developers is mind-boggling. I know I said VR in 2014 is like mobile in 2002, but the pace of progress is staggering. The maturation path for VR is going to be MUCH quicker. Is it 2005 already?

…and all I got was this lousy t-shirt.

As I stated before, Gear VR is the most important wearable platform in the universe. I’ve been developing Gear VR games for a while and am thoroughly convinced this wireless, lightweight platform will have far more reach than VR tethered to your desktop.

The Gear VR demo area.

The apps on display were great, but I saw a few Gear VR demos from random developers in the hotel hallways that blew away what was officially shown in Samsung's display area. Developer interest in Gear VR is very high. Once it's commercially available, a flood of content will soon be upon us.

Despite the intense interest in the platform, I spoke to a few desktop and console developers who dismissed Gear VR as a distraction and are ignoring it–which I think is really short-sighted.

It's true that there may be a division in audiences: Gear VR may serve the larger, casual audience, while apps built around Oculus' astounding Crescent Bay prototype could serve a highly monetizable market of core enthusiasts. Either route is smart business–depending on how long you can hold out for customer traction, that is.

Oh, and Crescent Bay…was a revolution. There's probably not much more to be said about it that hasn't been said already–but the ridiculous momentum behind Oculus' path from the DK1 to Crescent Bay makes me question the competition. Oculus has hired all of the smartest people I know and has billions of dollars to spend on VR R&D–which is its main business, not a side project. Will competitors like Sony really commit enough resources to match the relentless pace of Oculus' progress?

Oculus Rift World Space Cursors for World Space Canvases in Unity 4.6

Unity 4.6 is here! (Well, in public beta form.) Finally–the GUI I've waited YEARS for is in my hands. Just in time, too: I've just started building the GUI for my latest Oculus Rift project.

The new GUI in action, from Unity's own demo.

One of the trickiest things to do in VR is a GUI. It seems easy at first, but many lessons learned from decades of designing for the web, apps, and general 2D interfaces have to be totally reinvented. Given that we don't know what the standard controls for the final consumer kit will be, many VR interfaces at least partially use your head as a mouse. This usually means a 3D cursor floating around in world space that bumps into or traces through GUI objects.

Unity 4.6's GUI features the World Space Canvas, which helps greatly. You can design beautiful, fluid 2D interfaces that exist on a plane in the game world, making them much more comfortable to view in VR. However, by default Unity's new GUI assumes you're using a mouse, keyboard, or gamepad as an input device. How do you get this GUI to work with your own custom world-space VR cursor?

The answer is Input Modules. However, in the current beta they're mostly undocumented. Luckily, Stramit at Unity has put up the source to many of the new GUI components as part of Unity's announced open source policy. Using this code, I managed to write a short VRInputModule class that takes the result of a trace from my world-space VR cursor and feeds it into the GUI. The code is here. Add this behavior to the EventSystem object, alongside the default input modules.

In my current project, I have a 3D crosshair object that floats around the world, following the user's view direction. The code that manages this object performs a trace to see whether it hit anything in the UI layer. I added box colliders to the buttons in my World Space Canvas. Whenever the cursor trace hits one of these objects, I call SetTargetObject in the VRInputModule and pass it the object the trace hit. VRInputModule does the rest.

Note that the Process function polls my own input code to see if a select button has been pressed–and if so, it executes the Submit action on that Button. I haven't hooked up any event callbacks to my Buttons yet, but visually the GUI is responding to events (highlighting, clicking, etc.).
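The skeleton of such a module is simple enough to sketch. The structure below mirrors my description rather than the actual file; BaseInputModule, GetBaseEventData, and ExecuteEvents are real Unity 4.6 APIs, while the input polling is a stand-in for your own:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch of a world-space VR input module: the cursor's trace code tells us
// what the cursor is over, and we fire Submit on it when "select" is pressed.
public class VRInputModule : BaseInputModule
{
    private GameObject target;

    // Called by the cursor's raycast code whenever it hits a UI collider.
    public void SetTargetObject(GameObject obj)
    {
        target = obj;
    }

    public override void Process()
    {
        if (target == null)
            return;

        // Replace with your own polling for the "select" action.
        if (Input.GetButtonDown("Fire1"))
        {
            BaseEventData data = GetBaseEventData();
            ExecuteEvents.Execute(target, data, ExecuteEvents.submitHandler);
        }
    }
}
```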

It’s quick and dirty, but this should give you a good start in building VR interfaces using Unity’s new GUI.

VR in 2014 = Mobile Games in 2002?

The first VRLA Meetup last week was awesome.  The performance capture studio at Digital Domain in Marina Del Rey hosted a series of impressive demos as well as live presentations on the current state and future of VR applications.  The venue could only hold 100 people, but 300 registered.  Mobs of interested VR consumers, developers, and producers had to be turned away at the door.

VRLA winding down. (Photo via John Root)

VRLA winding down. (Photo via John Root)

After this event, it struck me that VR in 2014 is reminiscent of mobile in the early 2000s.  Back in 2002 I attended the first GDC Mobile Gaming Summit.  It was at a jam-packed lecture hall in San Jose where presenters demoed the latest in technology and gave their thoughts on where the industry was heading.

At that point, mobile phone hardware was clunky and primitive.  Most phones were still sporting 80×50 monochrome screens with maybe 100k of RAM available for programs to run.  Even if you were ‘lucky’ enough to have one of these devices, it was nearly impossible to figure out how to download games.

In 2002 almost nobody knew how to monetize mobile games.  The hardware could barely run games anyway.  Yet, these people knew it was going to be a big deal.  The room was filled with excitement and anything could happen.

Since then, mobile gaming has created a huge new audience that has disrupted the traditional game industry, forcing a shift in how console games are designed and delivered.  Now mobile gaming seems obvious, but back in 2002 there were many naysayers–despite the fact that in Japan, iMode had been successfully delivering mobile games since the late '90s.

To me, VR in its current state feels the same way.  The hardware is huge and clumsy.  There is some precedent for VR applications stretching way back to the 1990s with Virtuality and Battletech Centers.  And there’s a lot of consumer interest–evidenced by all the successful VR and AR hardware kickstarters in addition to the attendance of VRLA this month.

The top question on everyone's mind is "how do I make money in VR?"  It's the same question many asked about mobile in 2002.  Back then, the path was more obvious: Qualcomm's BREW and Japan's iMode had already established billing models for mobile content.  Right now, it's unknown who will pay for VR experiences and what form they will take.  A lot of this is a hardware question.  Nobody really knows what the iPhone of wearable gaming will look like–but when it arrives, it will be revolutionary.

These definitely are uncertain and exciting times for this new medium–which makes it much more fun to develop for than established platforms.

Towerfall: The Re-Return of Social Gaming

Social gaming was hot.  Then it 'died'.  And now it's hot again?  The fact is, video games have always been social.  In the earliest era of computer games there weren't enough CPU cycles (or CPUs at all!) for AI.  Players had to move everything themselves–Steve Russell's Spacewar being the earliest example.  Just look at classic coin-ops like Pong, Warlords, Sprint, etc.  Same-screen multiplayer was simply how things were done.  Arcades in the '80s weren't solely the domain of nerds–a broad spectrum of people showed up and played games together.  Imagine that!

Towerfall

Local multiplayer ruled well into the '90s.  Games like GoldenEye, Mario Party, and Bomberman ensured there was always something to do when you had people over at your place.  Yet once Internet multiplayer hit in the early '00s, console games became strangely anti-social.  Today, when someone comes over to my house and wants to play a game with me–well, it's complicated.  There really aren't many games on the market that people can play together.

That's why Towerfall Ascension is so interesting to me.  At first I thought it was yet another pixel-art indie game over-promoted by Ouya for lack of content.  After playing it with others, its significance dawned on me.  Finally, there's something to play with other people!  It had been so long since I'd had a local multiplayer experience that it took actually playing one to recognize a simple fact:  the local multiplayer brawler may very well be where the MOBA was when DOTA was merely a Warcraft III mod.

At GDC I noticed the beginning of this trend.  There were a few Towerfall clones already in progress or on the market–in fact, some similar games even shortly preceded Towerfall.  Not to mention that Towerfall's release on PS4 and Steam has been highly successful.  I really think a new (old) genre has been born.


Unity3D vs. Unreal 4 vs. Crytek: GDC 2014 Engine Wars

GDC 2014 is over, and one thing is clear:  The engine wars are ON!

Morpheus

For at least a few years, Unity has clearly dominated the game engine field.  Starting with browser and mobile games, then gobbling up the entire ecosystem Innovator's Dilemma-style, Unity has become the engine of choice for startups, mobile game companies, and downloadable console titles.

Until now, Unreal seemed unfazed.  The rise of an entire generation of studios built on Unity technology seemed to pass Epic by completely as Unreal continued to be licensed for high fees and revenue share by AAA studios cranking out $50 million blockbusters.

Lately, the AAA market has been contracting–leaving only a handful of high-budget tentpole games in development each year.  Many of those mega-studios have started using their own internal engine tech, avoiding Epic's licensing fees altogether.  Surely this trend was a big wake-up call.

This year Epic strikes back with a new business model aimed at the small mammals scurrying beneath the AAA dinosaurs.  Offering Unreal 4 on desktop and mobile platforms for a mere $19 a month plus a 5% revenue cut seems like a breakthrough, but it really isn't.

One of Unity's biggest obstacles for new teams is its $1,500 per-seat, per-platform license fee.  When you need to buy 20 licenses of Unity for 3 platforms, things get costly.  Unity's monthly plan can lower the initial cost, though over time it ends up far more expensive than paying for the licenses up front.  Even so, once you add up all the monthly costs for each platform subscription, it's still a better deal than Unreal.
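To put numbers on it:  at $1,500 per seat per platform, a 20-person team shipping on 3 platforms is looking at 20 × 3 × $1,500 = $90,000 in license fees alone.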

Giving up 5% of your revenue to Epic when profit margins are razor-thin is a non-starter for me.  Unreal's AAA feature set produces unparalleled results, even against Unity 5's upgrades, but that 5% revenue cut still makes it an unattractive choice.

Epic is also aping Unity's Asset Store with its Unreal Marketplace.  This is absolutely critical.  The Asset Store is Unity's trojan horse–letting developers extend the engine's functionality and providing pre-made graphics and other assets invaluable for rapid prototyping or full production.  While Unreal's Marketplace is starting out rather empty, this is a big move for the survival of the engine.

Unreal 4 throws a lot of tried-and-true Unreal technologies out the window, starting with UnrealScript.  The reason Unreal now ships with source is that you have to write your game code in native C++, not a scripting language.  The new Blueprints feature is intended to partly replace UnrealScript for designers, but this is completely new territory.  Unreal advertises full source as a benefit over Unity, yet source-level access in Unity is almost never necessary.  Still, now that Unreal 4's source is on GitHub, the community can patch engine bugs before Epic does; Unity developers have to wait for Unity to ship fixes itself.

Unreal 4 is so radically different from previous versions that a lot of Unreal developers may have very good reasons to escape to Unity or other competing engines.  For some, learning Unreal 4's new features may not be any easier than switching to a new engine altogether.

Oh, and Crytek is basically giving its stuff away.  At $10 a month with no revenue share, I'm not sure why they're charging at all–that can't possibly cover even the marketing costs.  I'm not very familiar with Crytek's tools, but my biggest issue with the current offering is that Crytek for mobile is a completely different engine.  The mobile engine Crytek built its iOS games with is not yet publicly available to developers.

Which brings me to the latest version of Unity.  I'm sure it's getting harder to come up with new features that justify a point release, and I need almost none of the ones announced in Unity 5.  But that's irrelevant:  Unity has won the war for developers, which is why it's moving on to the next problem–making money for developers.

Unity Cloud is Unity's new service, starting out as a referral network for Unity games.  Developers can trade traffic between games within a huge network of Unity apps on both Android and iOS.  Unity's purchase of Applifier shows it is dead serious about solving monetization and discovery–two of the biggest problems in mobile right now.

While other engines are still focused on surpassing Unity's features or business model, Unity has moved into an entirely different space.  Ad networks and app traffic services may start to worry that what happened to Epic and Crytek is about to happen to them.

Anyone who reads this blog knows I’m a huge Unity fanboy.  But having one insanely dominant engine is not healthy for anyone.  I’m glad to see the other engine providers finally make a move.  I still don’t think any of them have quite got it right yet.

Oh–and in other news, YoYo Games' GameMaker announcement at GDC, along with some more recent examples of its capabilities, makes me wonder why I even bothered to get a computer science degree in the first place!