Samsung Gear VR Development Challenges with Unity3D

As you may know, I’m a huge fan of Oculus and Samsung’s Gear VR headset. The reason isn’t the opportunity Gear VR presents today–it’s the future of wearables, specifically self-contained wearable devices. In this category, Gear VR is really the first of its kind, and the lessons you learn developing for it will carry over into the bright future of compact, self-contained wearable displays and platforms, many of which we’ve already started to see.

The Gear VR in the flesh (plastic).


Gear VR development can be a challenge. Rendering two cameras and a distortion mesh on a mobile device at a rock-solid 60fps requires a lot of optimization and development discipline. Now that Oculus’ mobile SDK is public, and having worked on a few launch titles (including my own original title recently covered in Vice), I figured I’d share some of the Unity3D development challenges I’ve dealt with.

THERMAL ISSUES

The biggest challenge in making VR performant on a mobile device is throttling due to heat produced by the chipset. Use too much power and the entire device will slow itself down to cool off and avoid damaging the hardware. Although the Note 4 approaches the Xbox 360 in performance characteristics, you only have a fraction of that power available, because the phone must balance power and heat against keeping the CPU and GPU running at full speed.

With the Gear VR SDK you can independently tell the device how fast the GPU and CPU should run. This prevents you from eating up battery when you don’t need the extra cycles and lets you tune your game for performance at lower clock speeds. Still, you have to be aware of what eats up CPU cycles versus GPU resources, because ultimately you must choose which of the two to allocate more power to.
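A minimal sketch of what this looks like in practice, assuming the mobile SDK’s Unity integration exposes the clock hints as OVRManager.cpuLevel and OVRManager.gpuLevel (0–3), as the integrations of this era did; the class name and level values here are illustrative:

```csharp
using UnityEngine;

// Hint the device clocks per scene: ask for only as much CPU/GPU
// speed as the scene actually needs, saving battery and heat headroom.
public class SceneClockLevels : MonoBehaviour
{
    void Start()
    {
        // Example for a CPU-light, GPU-heavy scene. Levels range 0-3.
        OVRManager.cpuLevel = 1;
        OVRManager.gpuLevel = 3;
    }
}
```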

GRAPHICAL DETAIL

The obvious optimization is lowering graphical detail. Keep your polycount under 50k triangles. Avoid as much per-pixel and per-vertex processing as possible. Since you have tons of RAM but relatively little GPU power available, opt for more texture detail over geometry. This includes using lightmaps instead of dynamic lighting. And of course, restrict your use of the alpha channel to a minimum–preferably to quick particle effects, not to things that stay on screen for a long time.

Effects you take for granted on modern mobile platforms, like skyboxes and fog, should be avoided on Gear VR. Find alternatives or design an art style that doesn’t need them. A lot of these restrictions can be made up for with texture detail.

A lot of standard optimizations apply here–for instance, use texture atlasing and batching to reduce draw calls. The target is under 100 draw calls, which is achievable if you plan your assets correctly. Naturally, there are plenty of resources in the Asset Store to get you there. Check out Pro Draw Call Optimizer for a good texture atlasing tool.
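Batching the non-moving scenery can be as simple as grouping it under one root once your materials are atlased. A hedged sketch using Unity’s built-in StaticBatchingUtility (the class name is mine):

```csharp
using UnityEngine;

// Combine all child meshes that share (atlased) materials into static
// batches at startup, cutting draw calls for non-moving scenery.
public class BatchSceneryRoot : MonoBehaviour
{
    void Start()
    {
        // Children must not move after this call.
        StaticBatchingUtility.Combine(gameObject);
    }
}
```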

CPU OPTIMIZATIONS

There are less obvious optimizations you might not be familiar with until you’ve gone to extreme lengths to optimize a Gear VR application. One is removing as many Update methods as possible. Most update code that just waits for something to happen (like an AI that waits 5 seconds to pick a new target) can be changed to a coroutine scheduled to run in the future. Converting Update loops to coroutines takes the burden of waiting off the CPU. Even empty Update functions drain the CPU–death by a thousand cuts. Go through your code base and remove every unnecessary Update method.
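For example, a hypothetical AI that polls a countdown in Update sixty times a second can sleep in a coroutine instead:

```csharp
using UnityEngine;
using System.Collections;

// Instead of checking a timer in Update() every frame, wake up only
// when it's actually time to act.
public class Retargeter : MonoBehaviour
{
    IEnumerator Start()
    {
        while (true)
        {
            // Idle frames cost nothing while the coroutine sleeps.
            yield return new WaitForSeconds(5f);
            PickNewTarget();
        }
    }

    void PickNewTarget()
    {
        // ...target-selection logic (hypothetical)...
    }
}
```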

As in any mobile game, you should be pooling prefabs. I use Path-o-Logical’s PoolManager, though it’s not too hard to write your own. Either way, by recycling pre-created instances of prefabs, you avoid runtime allocations and reduce hiccups from instantiation and garbage collection.
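If you do roll your own, a bare-bones pool is just a stack of deactivated instances–a minimal sketch:

```csharp
using UnityEngine;
using System.Collections.Generic;

// Pre-instantiate a batch of prefab instances, then recycle them
// instead of calling Instantiate/Destroy during gameplay.
public class PrefabPool : MonoBehaviour
{
    public GameObject prefab;
    public int preload = 20;
    private readonly Stack<GameObject> pool = new Stack<GameObject>();

    void Awake()
    {
        for (int i = 0; i < preload; i++)
        {
            GameObject go = (GameObject)Instantiate(prefab);
            go.SetActive(false);
            pool.Push(go);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        GameObject go = pool.Count > 0 ? pool.Pop()
                                       : (GameObject)Instantiate(prefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        pool.Push(go);
    }
}
```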

IN CONCLUSION

There’s nothing really new here to most mobile developers, but Gear VR is definitely one of the bigger optimization challenges I’ve had in recent years. The fun part about it is we’re kind of at the level of Dreamcast-era poly counts and effects but using modern tools to create content. It’s better than the good old days!

It’s wise to build from the ground up for Gear VR rather than port existing applications, because making a VR experience that is immersive and performant within these parameters requires all disciplines (programming, art, and design) to build around the restrictions from the start of the project.

Oculus Rift World Space Cursors for World Space Canvases in Unity 4.6

Unity 4.6 is here! (Well, in public beta form). Finally–the GUI that I’ve waited YEARS for is in my hands. Just in time, too. I’ve just started building the GUI for my latest Oculus Rift project.

The new GUI in action, from Unity’s own demo.

One of the trickiest things to do in VR is a GUI. It seems easy at first, but many lessons learned from decades of designing for the web, apps, and general 2D interfaces have to be totally reinvented. Given that we don’t know what the standard controls will be for the final kit, many VR interfaces at least partially use your head as a mouse. This usually means having a 3D cursor floating around in world space which bumps into or traces through GUI objects.

Unity 4.6’s GUI features the World Space Canvas–which helps greatly. You can design beautiful, fluid 2D interfaces that exist on a plane in the game world making it much more comfortable to view in VR. However, by default Unity’s new GUI assumes you’re using a mouse, keyboard, or gamepad as an input device. How do you get this GUI to work with your own custom world-space VR cursor?

The answer is Input Modules, which in the current beta are mostly undocumented. Luckily, Stramit at Unity has put up the source to many of the new GUI components as part of Unity’s announced open source policy. Using this code, I managed to write a short VRInputModule class that takes the result of a trace from my world space VR cursor and feeds it into the GUI. The code is here. Add this behavior to the EventSystem object alongside the default input modules.

In my current project, I have a 3D crosshair object that floats around the world, following the user’s view direction. The code that manages this object performs a trace, seeing if it hit anything in the UI layer. I added box colliders to the buttons in my World Space Canvas. Whenever the cursor trace hits one of these objects, I call SetTargetObject in the VRInputModule and pass it the object the trace hit. VRInputModule does the rest.

Note that the Process function polls my own input code to see if a select button has been pressed–and if so, it executes the Submit action on that Button. I haven’t hooked up any event callbacks to my Buttons yet, but visually the GUI is responding to events (highlighting, clicking, etc.).
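Here’s a condensed sketch of that approach (not my exact class), assuming the 4.6 beta’s EventSystems API; the “Fire1” button is a stand-in for whatever select input you actually poll:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Minimal world-space VR input module. An external cursor script calls
// SetTargetObject() with whatever its trace hit; Process() drives the
// hover highlights and fires Submit when the select button is pressed.
public class VRInputModule : BaseInputModule
{
    private GameObject currentTarget;
    private PointerEventData pointerData;

    public void SetTargetObject(GameObject target)
    {
        currentTarget = target;
    }

    public override void Process()
    {
        if (pointerData == null)
            pointerData = new PointerEventData(eventSystem);

        // Drives highlight state (hover in/out) on Selectables.
        HandlePointerExitAndEnter(pointerData, currentTarget);

        // "Fire1" is a placeholder for your own select input.
        if (currentTarget != null && Input.GetButtonDown("Fire1"))
        {
            ExecuteEvents.ExecuteHierarchy(currentTarget, GetBaseEventData(),
                ExecuteEvents.submitHandler);
        }
    }
}
```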

It’s quick and dirty, but this should give you a good start in building VR interfaces using Unity’s new GUI.

Unity3D vs. Unreal 4 vs. Crytek: GDC 2014 Engine Wars

GDC 2014 is over, and one thing is clear:  The engine wars are ON!

Sony’s Project Morpheus headset, unveiled at GDC 2014.

For at least a few years, Unity has clearly dominated the game engine field.  Starting with browser and mobile games, then gobbling up the entire ecosystem Innovator’s Dilemma style, Unity has become the engine of choice for startups, mobile game companies, and downloadable console titles.

Until now, Unreal seemed unfazed.  The creation of an entire generation of studios based on Unity technology seemed to pass Epic by completely, as Unreal continued to be licensed out for high fees and revenue share by AAA studios cranking out $50 million blockbusters.

Lately, the AAA market has been contracting–leaving only a handful of high-budget tent pole games in development every year.  Many of those mega studios have started to use their own internal engine tech, avoiding Epic’s licensing fees altogether.  Surely this trend was a big wakeup call.

This year Epic strikes back with a new business model aimed at the small mammals scurrying beneath the feet of the AAA dinosaurs.  Offering Unreal 4 on desktop and mobile platforms for a mere $19 a month and a 5% revenue cut seems like a breakthrough, but it really isn’t.

One of Unity’s biggest obstacles for new teams is its $1,500 per-seat, per-platform license fee.  When you need to buy 20 seats of Unity across 3 platforms, things get costly.  Unity’s monthly plan can lower the initial cost, though over time it can be far more expensive than paying for the licenses up front.  Even so, when you add up all the monthly platform subscriptions, Unity is still a better deal than Unreal.

Giving up 5% of your revenue to Epic when profit margins are razor-thin is a non-starter for me.  Unreal’s AAA feature set produces unparalleled results, even next to Unity 5’s upgrades, but that 5% cut still makes it an unattractive choice.

Epic is also aping Unity’s Asset Store with their Unreal Marketplace.  This is absolutely critical.  The Asset Store is Unity’s trojan horse–allowing developers to add to the engine’s functionality as well as provide pre-made graphics and other items invaluable for rapid prototyping or full production.  While Unreal’s Marketplace is starting out rather empty, this is a big move for the survival of the engine.

Unreal 4 throws a lot of tried-and-true Unreal technologies out the window, starting with UnrealScript.  The reason Unreal now comes with its source is that you have to write your game code in native C++, not a scripting language.  The new Blueprints feature is intended to partly replace UnrealScript for designers, but this is completely new territory.  Unreal advertises full source as a benefit over Unity, though source-level access to Unity is almost always unnecessary.  Still, now that the Unreal 4 source is on GitHub, the community can patch bugs in the engine before Epic does; Unity developers have to wait for Unity to ship fixes itself.

Unreal 4 is so radically different from previous versions that a lot of Unreal developers may have very good reasons for escaping to Unity or other competing engines.  For some, learning Unreal 4’s new features may not be any easier than switching to a new engine altogether.

Oh, and Crytek is basically giving their stuff away.  At $10 a month with no revenue share, I’m not sure why they’re charging for it at all–that can’t possibly cover even the marketing costs.  I’m not very familiar with Crytek’s tools, but my biggest issue with the current offering is that Crytek for mobile is a completely different engine, and the mobile engine Crytek built their iOS games with is not yet publicly available to developers.

Which brings me to the latest version of Unity.  I’m sure it’s getting harder to come up with new stuff that justifies a point release, and I need almost none of the features announced in Unity 5.  But that’s largely irrelevant: Unity has won the war for developers, which is why it’s moving on to the next problem–making money for developers.

Unity Cloud is Unity’s new service that is starting as a referral network for Unity games.  Developers can trade traffic between games within a huge network of Unity apps on both Android and iOS.  Unity’s purchase of Applifier shows they are dead serious about solving monetization and discovery–two of the biggest problems in mobile right now.

While other engines are still focused on surpassing Unity’s features or business model, Unity has moved into an entirely different space.  Ad networks and app traffic services may start to worry that what happened to Epic and Crytek is about to happen to them.

Anyone who reads this blog knows I’m a huge Unity fanboy.  But having one insanely dominant engine is not healthy for anyone.  I’m glad to see the other engine providers finally make a move.  I still don’t think any of them have quite got it right yet.

Oh–and in other news, YoYo Games’ GameMaker announcement at GDC, along with some more recent examples of its capabilities, makes me wonder why I even bothered to get a computer science degree in the first place!

The Next Problems to Solve in Augmented Reality

I’m totally amped up about Project Tango. Having worked with augmented reality for a few years, I’ve found that most of the problems with current platforms could be solved by a miniaturized, depth-sensing, Kinect-style sensor. The Myriad 1 is a revolutionary chip that will dramatically change the quality of experience you get from augmented reality applications–both on mobile devices and on wearables.

There are a few other issues in AR I’d like to see addressed. Perhaps they’re covered in research papers, but I haven’t seen anything real yet. Maybe they require some custom hardware as well.

Real-world lighting simulation.

One of the reasons virtual objects in augmented reality look fake is that AR APIs can’t simulate the real-world lighting environment in a 3D engine. For most applications, you place a directional light pointing down and turn up the ambient light for a vague approximation of overhead lighting. This assumes the orientation of the object you’re tracking is upright, of course.

Camera Birds AR mode using an overhead directional light.

What I’d really like to use is image-based lighting (IBL), a computationally efficient way to simulate environmental lighting without filling a scene with dynamic lights. It combines cube maps built from HDR photos with custom shaders to produce great results. A good example is the Marmoset Skyshop plug-in for Unity3D.

Perhaps with a combination of sensors and 360-degree cameras you could build HDR cubemaps of the viewer’s local environment in real time to match the environmental lighting. Feeding these into image-based lighting would give a far more accurate lighting model than what’s currently available. Maybe building rudimentary cubemaps out of the video feed is a decent half-measure.
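As a sketch of that half-measure, you could crop the live camera feed onto all six faces of a low-resolution cubemap each frame and hand it to an IBL shader. This is a rough approximation under stated assumptions: the "_Cube" property name is a placeholder for whatever sampler your shader expects, and real capture would stitch HDR views per face.

```csharp
using UnityEngine;

// Rudimentary environment cubemap built from the device camera feed.
public class VideoFeedCubemap : MonoBehaviour
{
    public int faceSize = 64; // keep tiny; this copy runs on the CPU
    private WebCamTexture feed;
    private Cubemap envMap;

    void Start()
    {
        feed = new WebCamTexture(256, 256);
        feed.Play();
        envMap = new Cubemap(faceSize, TextureFormat.RGB24, false);
    }

    void Update()
    {
        if (!feed.didUpdateThisFrame) return;

        // Grab a centered faceSize x faceSize crop of the feed.
        Color[] crop = feed.GetPixels((feed.width - faceSize) / 2,
                                      (feed.height - faceSize) / 2,
                                      faceSize, faceSize);
        for (int face = 0; face < 6; face++)
            envMap.SetPixels(crop, (CubemapFace)face);
        envMap.Apply();

        // "_Cube" stands in for whatever your IBL shader samples.
        GetComponent<Renderer>().material.SetTexture("_Cube", envMap);
    }
}
```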

Which object is moving?

In a 3D engine, virtual objects drawn on top of image targets are rendered with one of two camera setups: either the camera moves around the object, or the object moves around the camera. In real life, the ‘camera’ is your eye–so it should move when you move your head. If you move an image target, that effectively moves the virtual object.

Current AR APIs have no way of knowing whether the camera or the object is moving. With Qualcomm’s Vuforia, you can either tell it to always move the camera around the object, or to move the objects around the camera. This can cause problems with lighting and physics.

For instance, on one project I was asked to make liquid pour out of a virtual glass when you tilt the image target it rests upon. To do this I had to force Vuforia to assume the image target was moving–so when the image target tilted, so would the 3D object in the game engine, and the liquid would pour. The only problem is that this would also happen if I moved the phone instead. Vuforia can’t tell what’s actually moving.

There needs to be a way to accurately track the ‘camera’ movement of either the wearable or mobile device so that in the 3D scene the camera and objects can be positioned accurately. This will allow for lighting to be realistically applied and for moving trackable objects to behave properly in a 3D engine. Especially with motion tracking advances such as the M7 chip, I suspect there are some good algorithmic solutions to factoring out the movement of the object and the observer to solve this problem.

Anyway, these are the kinds of problems you begin to think about after staring at augmented reality simulations for years. Once you get over the initial appeal of AR’s gimmick, the practical implications of the technology pose many questions. I’ve applied for my Project Tango devkit and really hope I get my hands on one soon!

Ludum Dare: Ten Seconds of Thrust

This past weekend I participated in Ludum Dare, a contest where you make a game by yourself in 48 hours. The theme is revealed at the start of the contest–Iron Chef style. All code, graphics, and sounds have to be made from scratch. Voting began on Sunday night and will extend for a few weeks. I’m not even sure what you win, but that’s not the point. It’s an awesome experience in GETTING IT DONE.

Ten Seconds of Thrust!

My entry is the Lunar Lander-esque Ten Seconds of Thrust. (Please rate it!) Attempt to land at the bottom of increasingly difficult randomly generated space caverns with only ten seconds of thruster time. It’s crude, ugly, and buggy–especially on Windows where it doesn’t seem to detect landing. I didn’t have time to fix this bug as I only discovered it in the last half hour, but it does seem like a strange Unity Web Player bug since it works fine on OSX browsers. (PROTIP: Make sure you have a few friends around during the weekend to test your game!)

One of the best things about the contest is watching games evolve quickly through Twitter, Vine, Facebook, and Instagram posts. I put up a few videos in progress over the weekend.

I used a lot of the tools mentioned in my rapid prototyping posts, including a new tool I found called Sprite Gen, which creates randomly generated animated character and tile sprites in tiny 12×12 blocks. Naturally, the game was developed in Unity, with 2DToolkit and HOTween as plug-ins.

I’d like to fix the landing bug as it makes the game useless on Windows, but the rules are somewhat unclear on bug-fixes that don’t add any content. This game was actually based on an idea for a Lunar Lander roguelike I was developing earlier this year. The LD48 version is highly simplified and way more fun. I abandoned my prototype in disgust back in February. This quick and dirty version is much better–I might run with it and make a full game.

A Few Quick Notes: GDC 2013 Edition

Before we get started, vote for evolve.la

Blatant plug!–please vote for evolve.la in the My LA2050 grant contest. I’m in the running to build a social gaming experiment that will attempt to analyze the social media activity of Los Angelenos to determine how they want the future of Los Angeles to look. I need your votes to get evolve.la off the ground! We now continue with your irregularly scheduled blog post.

GDC 2013 Rundown

GDC has become increasingly irrelevant over the past 5 years or so, as influence has moved away from the realm of cloistered AAA console game teams and toward so-called “indie” developers and the disruptive platforms of mobile and social. Because of this, you can get much better information from conversations with other developers. I spent most of GDC talking to people–you can always watch the good presentations on the GDC Vault.

The trend for 2013 is an industry-wide panic over free-to-play. Presentations and panels worried over whether f2p games are ethical and how the game industry is supposed to survive this disruption. Considering this is a conversation game developers have been having since 2009, it just goes to show how long it takes GDC to catch on to major trends.

“Indie” developers were the big celebrities this year. So much so that formerly closed platforms from Nintendo and Sony bent over backwards to encourage garage developers to create content. Nintendo greatly loosened the requirements for its development program and even revealed HTML5 support for the Wii U. Sony eliminated concept approval. This shows there are some radical changes ahead for the next generation–changes I suggested years ago on this blog.

The biggest star of the show was Oculus VR. The wait time to try the Oculus Rift headset grew to over two and a half hours by the final day of GDC. I got in to see it and came away hopeful, but unimpressed. The current prototype headset is uncomfortable, though I didn’t spend much time adjusting it. The display resolution is low, causing a screen door effect. When I turned my head, the screen smeared to the point where I couldn’t see anything.

These problems are being addressed. They showed me the physical part for the new screen–the retail version of Oculus will fix the resolution and latency issues. The current kit is strictly for developers and mega-nerdy early adopters. It’s pretty neat for a $300 prototype, but far from a finished product.

I was more impressed with Infinite Z’s zSpace virtual holography system that was on display at Unity3D’s booth. It costs over 10X what Oculus does for no apparent reason. Still, being able to draw 3D splines in thin air and look around them was really cool.

Overall, GDC had a lot of opportunity on display as far as new devices, markets, and tools–but a lot of uncertainty on how to actually make money producing games.

Favorite Quotes of GDC

  • “Cokeheads are better than publishers.”

  • “They said they’d publish my game if I turn it into a Skinner-box.”

  • “The reason why you won’t close the deal is because you’re too competent.”

Displaying Maps in Unity3D

There have been a few recent examples of real-world maps displayed in Unity3D apps. The first one I noticed was the playfield in the infamous Halo 4 iPhone app that came out late last year. For unknown reasons, I was really into this game for a few months. I hung around my local 7-11 scanning bags of Doritos so much that I thought I was going to get arrested for shoplifting. Eventually this obsession led to me wanting to duplicate the map display used in the game. Here’s how I did it.

Google Maps Plug-In

Naturally, the first place I looked was the Asset Store. It turns out there is a free Google Maps plug-in available. The only catch is that it requires UniWeb to work. UniWeb lets you call REST APIs and generally gives you more control over HTTP requests than Unity’s own WWW class allows. It can be a necessity if you’re making REST API calls, but it restricts your code-stripping options, which will bump up your binary size.

This asset’s sample scene works flawlessly. It downloads a map from the Google Static Map API and textures it on a cube. The code is clean and well documented, featuring the ability to request paths and markers to be added to the static map. Most attributes can be tweaked through the inspector–such as map resolution, location, etc.

I made a lot of changes to this package, and I really wish it were open source. Free code assets really should be open source in most cases. I will try to isolate my changes into another C# file and post a Gist.

The first change I made was to add support for themed Static Maps. If you look at this wizard, you can see there are a lot of styling options. This appears to be the same technique used in the Halo 4 app, because with the right set of options you can get something that looks really close. Supporting styling in Unity3D is just a matter of appending the style parameters to the end of the URL used by the Google Maps plug-in.
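A hypothetical helper showing the idea–style rules are just extra query-string entries on the Static Maps URL. The feature/element/value syntax comes from the Static Maps styling docs; the specific rules and colors below are illustrative:

```csharp
// Appends Static Maps style parameters to a URL the plug-in built.
// (URL-encode the pipes as %7C if your HTTP stack requires it.)
public static class MapStyling
{
    public static string AddStyle(string mapUrl)
    {
        return mapUrl
            + "&style=feature:water|element:geometry|color:0x1b2a36"
            + "&style=feature:road|element:labels|visibility:off"
            + "&style=feature:landscape|element:geometry|color:0x2c3e50";
    }
}
```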

Displaying Markers in 3D

The next thing I wanted to do was display the markers as 3D objects on top of the map instead of having them baked into the texture itself. This requires three steps:

  1. Determine where the markers are in pixel coordinates in the static map texture.
  2. Calculate the UV coordinate of the pixel coordinate.
  3. Calculate the world coordinate of the texel the UV coordinate resides at.

Step 1 can be tricky. You have to project the latitude and longitude of the marker with the Mercator projection Google Maps uses to get the pixel coordinate. Luckily, this guy already did it in PHP to create image maps from static maps. I adapted this code to C# and it works perfectly. You can grab the Google Maps utility functions here. (All this great free code on the net is making me lazy–but I digress)
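Those utility functions boil down to the standard Web Mercator math: the world is a 256×256-pixel square at zoom 0 that doubles each zoom level. A sketch (class and method names are mine):

```csharp
using System;

// Web Mercator: project lat/lon to world pixel coordinates at a zoom
// level. Subtract the map center's world pixel from the marker's to
// get its pixel offset inside the static map texture.
public static class MercatorMath
{
    public static void LatLonToWorldPixel(double lat, double lon, int zoom,
                                          out double px, out double py)
    {
        double scale = 256.0 * Math.Pow(2.0, zoom);
        px = (lon + 180.0) / 360.0 * scale;
        double sinLat = Math.Sin(lat * Math.PI / 180.0);
        py = (0.5 - Math.Log((1.0 + sinLat) / (1.0 - sinLat))
                    / (4.0 * Math.PI)) * scale;
    }
}
```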

Step 2 is easy. This code snippet does the trick. The only catch is that you have to flip the V so that it matches with how Unity uses UV coordinates.
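Something along these lines (a sketch, not the exact snippet):

```csharp
using UnityEngine;

public static class MapUV
{
    // Normalize by texture size and flip V: image pixel rows count
    // down from the top, while Unity's UV origin is the bottom-left.
    public static Vector2 PixelToUV(Vector2 pixel, float texWidth, float texHeight)
    {
        return new Vector2(pixel.x / texWidth, 1f - pixel.y / texHeight);
    }
}
```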

Step 3 is also tricky. However, someone with much better math skills than I wrote a JavaScript method to compute the world coordinate from a UV coordinate. It searches through each triangle in the mesh and sees if the UV coordinate is contained inside it. If so, it then calculates the resultant world coordinate. The key to using this is to put the static map on a plane (the default scene in the plug-in uses a cube) and use the C# version of this function I wrote here.
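Here’s a condensed C# sketch of how such a function works, using barycentric coordinates in UV space (names are mine, not the linked code’s):

```csharp
using UnityEngine;

public static class MapMeshUtil
{
    // Find the triangle whose UV footprint contains 'uv', then blend
    // its vertex positions with the same barycentric weights.
    public static bool UVToWorld(MeshFilter mf, Vector2 uv, out Vector3 world)
    {
        Mesh mesh = mf.sharedMesh;
        int[] tris = mesh.triangles;
        Vector2[] uvs = mesh.uv;
        Vector3[] verts = mesh.vertices;

        for (int i = 0; i < tris.Length; i += 3)
        {
            Vector2 u0 = uvs[tris[i]], u1 = uvs[tris[i + 1]], u2 = uvs[tris[i + 2]];

            // Barycentric coordinates of 'uv' within this UV triangle.
            float denom = (u1.y - u2.y) * (u0.x - u2.x) + (u2.x - u1.x) * (u0.y - u2.y);
            if (Mathf.Approximately(denom, 0f)) continue;
            float a = ((u1.y - u2.y) * (uv.x - u2.x) + (u2.x - u1.x) * (uv.y - u2.y)) / denom;
            float b = ((u2.y - u0.y) * (uv.x - u2.x) + (u0.x - u2.x) * (uv.y - u2.y)) / denom;
            float c = 1f - a - b;
            if (a < 0f || b < 0f || c < 0f) continue; // outside this triangle

            // Apply the same weights to vertex positions, then to world space.
            Vector3 local = a * verts[tris[i]] + b * verts[tris[i + 1]] + c * verts[tris[i + 2]];
            world = mf.transform.TransformPoint(local);
            return true;
        }
        world = Vector3.zero;
        return false;
    }
}
```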

3D objects floating over marker locations on a Google Static Map.

Here’s the end result–in this case it’s a display for the Donut Dazzler prototype. 3D donuts are floating over real-world donut shops and cupcakes over cupcake bakeries. I got the locations from the Foursquare API. This is quite easy to do using UniWeb.

Slippy Maps

The aforementioned technique works great if you just want a static map to display stuff around the user’s current location. What if you want to be able to scroll around and see more map tiles, just like Google Maps when you move around with your mouse? This is called a Slippy Map. Slippy Maps are much more elaborate–they require dynamically downloading map tiles and stitching them together as the user moves around the world.

Thankfully Jonathan Derrough wrote an amazing free Slippy Map implementation for Unity3D. It really is fantastic. It displays markers in 3D and pulls map tiles from multiple sources–including OpenStreetMap and Bing/VirtualEarth. It doesn’t use Google Maps because of possible TOS violations.

I couldn’t find a way to style map tiles like Google Static Maps can. So the end result was impressive but kind of ugly. It is possible with OpenStreetMap to run your own tile server and run a custom renderer to draw styled tiles. I suspect that’s how Rescue Rush styles their OpenStreetMap tiles–unless they are doing some image processing on the client.

Either Or

For my prototype I ended up using Google Static Maps because Slippy Maps were overkill. Also, pulling tiles down from the servers seemed much slower than grabbing a single static map. I suppose I could add some tile caching, but in the end static maps worked fine for my purposes.

Keep in mind that Google Maps has some pretty fierce API usage costs. If your app goes viral, you will likely be on the hook for a huge bill–which is why it might be worth figuring out how to style free OpenStreetMap tiles.