Developing Applications for HoloLens with Unity3D: First Impressions

I started working on HoloLens game development with Unity3D this past week. That included going through all of the example projects, as well as building simple games and applications to figure out how the platform’s features work. Here are some takeaways from my first week as a HoloLens developer.


Baby steps…

The Examples Are Great, But Lack Documentation

If you go through all of the Holo Academy examples Microsoft provides, you’ll go from displaying a basic cube to a full-blown multi-user Augmented Reality experience. However, most of the examples involve dragging and dropping pre-made prefabs and scripts into the scene; not a lot about the actual SDK is explained. The examples are a good way to get acquainted with HoloLens features, but you’re going to have to do more work to figure out how to write your own applications.

HoloToolkit is Incredibly Full Featured

All of the examples are based on HoloToolkit, Microsoft’s collection of scripts and prefabs that handle just about every major HoloLens application feature: input, spatial mapping, gesture detection, speech recognition, and even some networking.

I also found that the features I needed (such as placing objects in the real world using the real-time meshing as a collider) were already present in the examples, and I could easily strip them out and adapt them in my own C# scripts. Using these techniques, I got a very simple carnival milk bottle game running in a single Saturday afternoon.
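
For instance, here’s a minimal sketch of the core technique (not HoloToolkit’s exact code, and the “SpatialMapping” layer name is an assumption standing in for wherever your project puts the mapping mesh): raycast from the user’s gaze against the spatial mapping mesh and drop an object where it hits.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input; // UnityEngine.VR.WSA.Input on older Unity versions

public class TapToPlaceBottle : MonoBehaviour
{
    public GameObject bottlePrefab;       // the milk bottle to drop in
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            // Raycast along the gaze ray, hitting only the spatial mapping mesh.
            int mask = 1 << LayerMask.NameToLayer("SpatialMapping");
            RaycastHit hit;
            if (Physics.Raycast(headRay, out hit, 10f, mask))
                Instantiate(bottlePrefab, hit.point, Quaternion.identity);
        };
        recognizer.StartCapturingGestures();
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```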

Multiplayer Gets Complicated

I’m working on moving my award-winning Tango RTS, InnAR Wars, to HoloLens. However, multiplayer experiences work much differently on HoloLens than on Tango. On Tango, each device shares a single room scan file and is localized in the same coordinate space. This means that once the game starts, placing an object (like a floating planet or asteroid) at any position will make it appear in the same real-world location on both Tangos.

HoloLens shares objects between devices using what are called Spatial Anchors. Spatial Anchors mark parts of the scanned room geometry as an anchored position. You can then place virtual objects in the real world relative to this anchor. When you share a Spatial Anchor with another device, the other HoloLens will look for a similar location in its own scan of the room to position the anchor. These anchors are constantly being updated as the scan continues, which is part of the trick to how HoloLens’ tracking is so rock solid.
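
In Unity terms, the flow looks roughly like the sketch below. This assumes Unity’s WSA anchor APIs (the namespaces have moved between Unity versions) and leaves the actual networking as a stub:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;         // WorldAnchor
using UnityEngine.XR.WSA.Sharing; // WorldAnchorTransferBatch

public class AnchorSharing : MonoBehaviour
{
    // Host: anchor an object to the scanned room, then serialize the anchor.
    public void ExportAnchor(GameObject root)
    {
        WorldAnchor anchor = root.AddComponent<WorldAnchor>();
        var batch = new WorldAnchorTransferBatch();
        batch.AddWorldAnchor("game-root", anchor);

        WorldAnchorTransferBatch.ExportAsync(batch,
            data => SendToOtherDevice(data), // ship the bytes over your transport
            reason => Debug.Log("Export finished: " + reason));
    }

    // Client: import the bytes; the other HoloLens matches the anchor against
    // its own scan of the room and locks the object to that spot.
    public void ImportAnchor(byte[] data, GameObject root)
    {
        WorldAnchorTransferBatch.ImportAsync(data, (reason, batch) =>
        {
            if (reason == SerializationCompletionReason.Succeeded)
                batch.LockObject("game-root", root);
        });
    }

    void SendToOtherDevice(byte[] data) { /* assumption: your own networking layer */ }
}
```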

Sure, having a single coordinate frame on the Tango is easier to deal with, but the Tango also suffers from drift and inaccuracies that may be symptomatic of its approach. Spatial Anchoring is a rather radical change from how Tango works–which means a lot of refactoring for InnAR Wars, or even a redesign.

First Week Down

This first week has been an enlightening experience. Progress has been fast, but it has also made me aware of how much work it will take to produce a great HoloLens app. At least two independently published HoloLens games popped up in the Windows Store over the past few days. The race is on for the first great indie HoloLens application!

My Week With HoloLens


Microsoft ships the HoloLens and Clicker accessory in the box

My HoloLens development kits finally arrived a week ago. I’ve spent a great deal of time using the device over the past week. I figured I’d post my impressions here.

This Really Works

When I first put my HoloLens on, I placed an application window floating above my kitchen table. Suddenly, I realized I hadn’t taken out the garbage. Still wearing the device, I ran downstairs to put something in the trash. I narrowly missed my neighbor–successfully avoiding an awkward conversation about what this giant contraption on my face does.

When I returned to my kitchen, the application was still there–hovering in space.

As I’ve stated before, Microsoft blew me away. HoloLens is an absolutely incredible leap from previous-generation AR glasses (and MUCH cheaper, believe it or not). It also does everything Tango does but at a much higher level of performance and precision, which means most applications built on Tango can be moved directly over to HoloLens.

HoloLens is fast and intuitive enough to attempt getting actual work done with it. Yet a lot of my time is spent just trying to make silly videos like this.

It’s A Full Blown Windows Machine

HoloLens isn’t just a prototype headset–it’s a full featured desktop Windows PC on your face. Not only can you run “Windows Holographic” apps, but any Universal Windows App from the Windows Store. Instead of these applications running in a window on a monitor, they float around in space–positioned wherever you choose.

HoloLens really does need a taskbar of some kind, though. It’s way too easy to forget where Skype physically is because you launched it in the bathroom.

It also helps to connect a Bluetooth keyboard and mouse when running standard applications. Gestures can’t give you the input fidelity of a traditional mouse, and typing in the air is a chore.

HoloLens’ narrow FOV makes using a regular Windows app problematic, as the window gets cut off and you have to move your head around to see most of it. And if you push a window far enough into the background to see the whole thing, you’ll notice HoloLens’ resolution is a little too low to read small text. We’re going to need a next-generation display for HoloLens to really be useful for everyday computing.

Microsoft Has Created A New Input Paradigm

HoloLens can seemingly only recognize two gestures: bloom and “air tap”. Blooming is cool–I feel like a person in a sci-fi movie making the Windows start menu appear in the air by tossing it up with a simple gesture.

The air tap can be unintuitive. Most people I let try the HoloLens poke at the icons by stabbing them with a finger. That’s not how the air tap works. You still have to gaze at a target by moving your head, then perform the lever-like air tap gesture within the HoloLens camera’s view to select whatever the reticle is on.

HoloLens can track the motion of your finger and use it as input to move things around (such as application windows), but it can’t detect collisions between your finger and virtual objects. It’s as if it can detect how far your finger moves but not its precise location in 3D space.

Using apps while holding your hand out in front of the headset is tiring, which is why Microsoft includes the Clicker–a simple Bluetooth button that triggers the air tap gesture when pressed. Disappointingly, the Clicker isn’t trackable, so you can’t use it as a true finger replacement.

Microsoft has adapted Windows to the holographic model successfully. This is the first full blown window manager and gesture interface for augmented reality I’ve ever seen and it’s brilliant. After a few sessions with the device, most people I’ve let use it are launching apps and moving windows around the room like a pro.

This Thing Is Heavy

Although the industrial design is cool in a retro ’90s way, this thing is really uncomfortable to use for extended periods of time. Maybe I don’t have it strapped on correctly, but after a 20-minute Skype session I had to take the device off. I felt pain above the bridge of my nose, and when I looked in the mirror, I saw what can only be described as ‘HoloHead’.


The unfortunate symptom of “HoloHead”

The First Generation Apps Are Amazing

There are already great free apps in the Windows Store that show off the power of the HoloLens platform. Many are made by Asobo Studio, a leader in Augmented Reality game development.

Young Conker

Young Conker is a great example of HoloLens as a games platform. The game is simple: after scanning your surroundings, play a familiar platform game over the floors, walls, tables and chairs as Conker runs, jumps and collects coins scattered about your room.

Conker will jump on top of your coffee table, run into walls, or be occluded by a chair as if he were walking behind it–well, depending on how accurate your scan is. The fact that this works as well as it does is amazing to me.

Fragments

Fragments is one of the first true game experiences I’ve ever played in augmented reality. You play the part of a futuristic detective, revisiting memories of crimes as their events are re-created holographically in your location. Characters sit on your furniture. You’ll hunt for pieces of evidence scattered about your room–even under tables. It really is an incredible experience. As with Conker, it requires some pre-scanning of your environment. However, applications can apparently share scans with each other, as Fragments was able to re-use a scan of my office I had previously made with another app.

Skype

When Skyping with a person not using HoloLens, you simply place their video on a wall in your surroundings. It’s almost like talking to someone on the bridge of the Enterprise, depending on how big you make the video window.

When Skyping with another HoloLens user, you can swap video feeds so either participant can see through the other’s first-person view. While looking at someone else’s video feed as a floating window, you can sketch over it with drawing tools or even place pictures from your photos folder in the other person’s environment. 2D lines drawn over the video feed will form around the other user’s real-world environment in 3D–bending around corners or sticking to the ceiling.

Conclusion

As a consumer electronics device, HoloLens is clearly beta–maybe even alpha, but surprisingly slick. It needs more apps. With Wave 2 underway, developers are working on just that. In my case, I’m moving all of my Tango projects to HoloLens–so you’ll definitely be seeing cool stuff soon!

HoloLens is Ready for Prime Time

Microsoft recently invited me to try out HoloLens at their Venice space on Abbot Kinney. Having just won the AR/VR award in the Tango App Contest for InnAR Wars, I jumped at the chance. After developing for Google Tango for over a year, I had a long wishlist of features I wanted in an AR platform. In the unlikely event that HoloLens was for real, I could make InnAR Wars exactly how I envisioned it at the start of the project.

Entering the HoloLens Demo Zone


My skepticism was well warranted. Having worked in AR for the past 5 years, I’ve seen my share of fake demo videos and smoke-and-mirrors pitches. Every AR company seems obligated to put out faked demo videos that the real product inevitably fails to live up to. Just look at this supercut of utterly ridiculous promotional videos. I suspected the staged HoloLens demos weren’t much better.

I had heard many polarizing opinions about HoloLens from people who have tried it. Some felt it was an incredible experience while others told me it was the single most overhyped disappointment in consumer electronics history.

After trying it for myself, I can say HoloLens is the real deal.

The demo I tried was the “X-Ray” game shown at BUILD not too long ago. This version was a little simpler than the staged demonstration–your arm isn’t magically turned into a plasma cannon. Instead, you hold your hand out in front of the device and “air tap” to fire at enemies that appear to be coming out of cracks in the walls. Occasionally you can give a voice command to freeze time, Matrix-style, and take out enemies while they’re vulnerable.

A simple game, for sure, but a great demonstration of the HoloLens’ capabilities.  

The device is clearly a prototype. It’s bulky, looks like a vision of the future from a ’90s sci-fi flick, and it even BSODed on me which was kind of hilarious.  Still, I was thoroughly impressed with HoloLens and here’s why:

Meshing

When the game starts, you have to look around the room and watch it build geometry out of the walls and other objects in the area. HoloLens uses a depth camera to build a mesh of the real world–kind of like constructing a video game level out of all the walls and floors in your room. This mesh is then used to place virtual wall cracks and spawn points for enemies during gameplay. Once you’ve built a complete mesh of the room, the game begins.

This same meshing process is possible with Google Tango, but it’s slow and temperamental. Still, very impressive given it’s in a tablet you can buy right now. In fact, I used Tango’s meshing capabilities to place floor traps in InnAR Wars.

I was impressed with HoloLens’ rapid meshing as I moved around my environment. It even handled dynamic objects, such as the woman guiding me through the demo. When I looked at her during the meshing phase, she quickly transformed into a blocky form of red polygons.

Display

Initially I was disappointed in the display. Much like Epson’s Moverio or ODG’s R7 glasses, it projects the augmentation on a part of the “glass” in front of your eyes. This means you see a distracting bright rectangle in the middle of your vision where the augmentation will appear. Compared to ODG’s R7s, HoloLens seemed to have higher contrast between the part of the display that’s augmented and the part that’s not. There also is an annoying reflection that looks like a blue and green “shadow” of the augmentation above and below the square.

While playing the game, this didn’t matter at all. Although everything is somewhat translucent, virtual objects appear solid if their colors are bright enough. Combined with rock solid tracking on the environment, I soon forgot all about contrast issues and internal reflection problems on the display. All of these issues can be dealt with through art and the lighting of the room you’re playing in. Plus, a Microsoft representative assured me the display is even better in the current version still in their labs.

Field of View

The top issue people have with HoloLens is the field of view. People complain that it only shows graphics in a postcard-sized space in front of your vision. I had rock-bottom expectations here, having developed applications on previous-generation wearable AR displays. Although HoloLens’ augmentation is limited to a small square in front of your vision, this space is easily 2X the size of other platforms’. In the heat of the action while playing X-Ray, I mostly forgot about this restriction.

Field of view is not an easy thing to solve–it’s a fundamental problem with the physics of waveguide optics. I’m not sure how much progress Microsoft can make here. But the FOV is already wide enough for a lot of applications.

I’m All In

Part of the pitch is the “Windows Holographic” platform. That is, in the future you won’t have screens. With HoloLens you’ll be able to place Windows applications in mid-air anywhere in your office. Applications will float around you like monitors hovering in space. (Magic Leap’s fake demo video shows an example of how this would work.) This can actually be done right now with HoloLens and existing Windows applications. Supposedly, some Microsoft engineers wear HoloLens and have integrated the “holographic” workspace into their daily use.

Beyond gaming applications, I am on board with this screen-free future. Your tablet, computer, even your phone or smartwatch will merely be a trackable object to display virtual screens on. Resolution and screen size will be unlimited, and you can choose to share these virtual screens with other AR users for collaboration.  No need to re-arrange your office for that huge 34 inch monitor. You can simply place it in 3D space and overlay it on top of reality. Think of all the extra stuff your phone could do if it didn’t have to power a giant LCD display! It’s definitely on its way. I’m just not sure exactly when.

The Challenge of Building Augmented Reality Games In The Real World


Last week I submitted the prototype build of my latest augmented reality project, InnAR Wars, to Google’s Build a Tango App Contest. It’s an augmented reality multiplayer space RTS built for Google’s Tango tablet that utilizes the environment around you as a game map. The game uses the Tango’s camera and Area Learning capabilities to superimpose an asteroid-strewn space battlefield over your real-world environment. Two players holding Tangos walk around the room hunting for each other’s bases while sending attack fleets at the other player’s structures.

Making InnAR Wars fun is tricky because I essentially have no control over the map. The battlefield has to fit inside the confines of the real-world environment the tablets are in. Using the Tango’s Area Learning capabilities with the positions of players, I know the rough size of the play area. With this information I adjust the density of planetoids and asteroids based on the size of the room. It’s one small way I can make sure the game at least has an interesting number of objects in the playfield regardless of the size of the area. As you can see from the videos in this post, it’s already being played in a variety of environments.
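
The density adjustment itself is simple. Here’s a sketch of the idea; the Bounds passed in is a stand-in for whatever rough play-area measurement you derive from Area Learning and the player positions:

```csharp
using UnityEngine;

public class AsteroidSpawner : MonoBehaviour
{
    public GameObject asteroidPrefab;
    public float asteroidsPerCubicMeter = 0.5f; // tuning knob

    public void Populate(Bounds playArea)
    {
        // Scale the object count to the room so small rooms aren't crowded
        // and big rooms aren't empty.
        float volume = playArea.size.x * playArea.size.y * playArea.size.z;
        int count = Mathf.Max(8, Mathf.RoundToInt(volume * asteroidsPerCubicMeter));

        for (int i = 0; i < count; i++)
        {
            var position = new Vector3(
                Random.Range(playArea.min.x, playArea.max.x),
                Random.Range(playArea.min.y, playArea.max.y),
                Random.Range(playArea.min.z, playArea.max.z));
            Instantiate(asteroidPrefab, position, Random.rotation);
        }
    }
}
```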

This brings up the biggest challenge of augmented reality games–how do you make a game fun when you have absolutely no control over the environment in which it’s played? One way is to require the user to set up the play space as if she were playing a board game. By using Tango’s depth camera, you could detect the shapes and sizes of objects on a table and use those as the playfield. It’s up to the user to set it up in a way that’s fun–much like playing a tabletop war game.

For the final release, I’m planning on using Tango’s depth camera to figure out where the room’s walls, ceilings, and floors are. Then I can have ships launch from portals that appear to open on the surfaces of the room. Dealing with the limited precision and performance of the Tango depth camera along with the linear algebra involved in plane estimation is a significant challenge. Luckily, there are a few third-party solutions for this I’m evaluating.
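
To give a flavor of what’s involved, here’s a toy RANSAC sketch of plane estimation (not one of the third-party solutions I’m evaluating): hypothesize planes from random triples of points and keep the one with the most inliers.

```csharp
using UnityEngine;

public static class PlaneEstimator
{
    // Fit a dominant plane (e.g. the floor) to a depth-camera point cloud.
    public static Plane Estimate(Vector3[] points, int iterations = 100, float tolerance = 0.05f)
    {
        Plane best = new Plane();
        int bestInliers = 0;

        for (int i = 0; i < iterations; i++)
        {
            // Hypothesize a plane through three random points.
            Vector3 a = points[Random.Range(0, points.Length)];
            Vector3 b = points[Random.Range(0, points.Length)];
            Vector3 c = points[Random.Range(0, points.Length)];
            var candidate = new Plane(a, b, c);

            // Count the points that lie within tolerance of the plane.
            int inliers = 0;
            foreach (var p in points)
                if (Mathf.Abs(candidate.GetDistanceToPoint(p)) < tolerance)
                    inliers++;

            if (inliers > bestInliers)
            {
                bestInliers = inliers;
                best = candidate;
            }
        }
        return best;
    }
}
```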

Especially when looking at augmented reality startups’ obligatory fake demo videos, the future of AR gaming seems exciting. But the practical reality of designing a game to be played in reality–which is itself rather poorly designed–can prevent even the most amazing technology from enabling great games. It’s probably going to take a few more hardware generations to not only make the technology usable, but also develop the design language to make great games that work in AR.

If you want to try out the game, I’ll have a few Tangos on hand at FLARB’s VRLA Summer Expo table. Stop by and check it out!

The Coming Public Point Cloud

One of the most important elements of Augmented Reality is the ability to seamlessly mesh 3D graphics with the real world.  Current AR technology simply overlays graphics on top of video–even when tracking and recognizing objects like cards and markers. The AR SDK gives the position and orientation of the tracked object to a 3D engine which then renders geometry on top of the video frame coming from the device’s camera.

A 3D scan of myself overlaid on an AR card with Vuforia.

New technologies like Google’s Tango Tablet use Kinect-style depth cameras to store not only the color of each pixel, but the depth and position, too. (Well, sort of–the depth camera’s resolution is much lower than that of the color camera). This means that you can build a 3D model out of what the tablet’s camera sees as you move around an environment.

Tango displaying point cloud data of what it currently sees.

This feature has huge ramifications. Tango uses this data to do what is called “localization.” This means once an area is scanned, the tablet can compare the internal 3D model of the current environment it has stored with what the camera is currently seeing. When fused with compass and gyro data, the Tango tablet can compute its precise location inside the scanned area. This doesn’t take long either. Tango starts building the model immediately. Walk back to where you started using the tablet, and Tango knows where it is.

Usually this 3D data is stored as a point cloud. This is basically a 3D point for every position the 3D camera records.  Hence, a sufficiently complicated area will look like a cloud of dots–a point cloud. You can see an example of the Tango building a point cloud with the Room Scanner Tango app.

These point clouds are important not only for localization, but also for AR graphical effects such as occluding rendered 3D objects with the real world. A 3D mesh can be built out of these points and used for occlusion, collision, and other features. Having objects between you and the augmentation occlude the 3D render is essential to nailing the feeling that an AR object is really there.

Point clouds are awesome, but building them can be frustrating. Current point cloud scanners are bulky and slow, not to mention their accuracy issues can lead to jitter and other artifacts. Also, some depth cameras run at a frame rate low enough to make it hard to create point clouds without moving very slowly through an environment. Who wants to play a game where you have to walk around and meticulously scan a room before you can start?

In order for AR games and apps to succeed, devices need to effortlessly be able to sense and detect the 3D geometry of their surroundings. Yet, quick and instant generation of point clouds is far beyond the capabilities of current mobile sensor technology.

That’s where the public point cloud comes in.

A truly great Augmented Reality platform needs to upload point clouds generated by devices to the cloud. Then, when a user puts on some hot new wearable AR glasses, the device can pull down a pre-made point cloud for the current location from a server and use that until it can update the cloud from its own sensors. The device then uploads a fresh point cloud, which is used to refine the version stored online.

You can kind of see this already–Google and Apple Maps’ 3D satellite modes use similar point cloud reconstruction techniques, presumably from aerial photos and other sources. Though these 3D models often look like something you’d see on the original PlayStation, the public point cloud will have to be much more detailed. As sensors on mobile devices become more advanced, the crowdsourced point cloud data will become incredibly detailed.

Apple Maps’ 3D reconstruction kind of looks like an original PlayStation game. The public point cloud will have to be higher resolution. Oh, this is also where you can get the best pastrami sandwich on the planet.

A massive, publicly accessible point cloud is necessary not just for the next generation of AR wearable devices, but also for self-driving cars, drone navigation, and robotics (which is indeed where many of these algorithms came from in the first place). Privacy implications do exist, but perhaps no more so than with Google Maps’ Street View or other current technologies that reveal very precise information about your location.

In the near future, almost every public place on the planet will be stored in the cloud as 3D reconstructed geometry–passively built up and constantly refined by sensors embedded on countless mobile and wearable devices, perhaps without the user even knowing.

My Week With Project Tango

A few weeks back I got into Google’s exclusive Project Tango developers program. I’ve had a Tango tablet for about a week and have been experimenting with the available apps and Unity3D SDK.

Project Tango uses Movidius’ Myriad 1 Vision Processor chip (or “VPU”), paired with a depth camera not unlike the original Kinect for the XBOX 360–except instead of being a giant hideous block, it’s small enough to stick in a phone or tablet.

I’m excited about Tango because it’s an important step in solving many of the problems I have with current Augmented Reality technology. What issues can Tango solve?

POSITIONAL TRACKING

First, the Tango tablet can determine its own pose. Sure, pretty much every mobile device out there can detect its precise orientation by fusing together compass and gyro information. But by using the Tango’s array of sensors, the Myriad 1 processor can also detect position and translation. You can walk around with the tablet and it knows how far and where you’ve moved. This makes SLAM algorithms much easier to develop and more precise than strictly optical solutions.

Another problem with AR as it exists now is that there’s no way to know whether you or the image target moved. Rendering-wise, there’s no difference. But this poses a problem for game physics. If you smash your head (while wearing AR glasses) into a virtual box, the box should go flying. If the box is thrown at you, it should bounce off your head–big distinction!

Pose and position tracking has the potential to factor out the user’s movement and determine the motion of both the observer and the objects that are being tracked. This can then be fed into a game engine’s physics system to get accurate physics interactions between the observer and virtual objects.

OCCLUDING VIRTUAL CHARACTERS WITH THE REAL WORLD

Anyway, that’s kind of an esoteric problem. The biggest issue with AR is that most solutions can only overlay graphics on top of a scene. As you can see in my Ether Drift project, the characters appear on top of specially designed trading cards. However, wave your hand in front of the characters and they will still draw on top of everything.

Ether Drift uses Vuforia to superimpose virtual characters on top of trading cards.

With Tango, it is possible to reconstruct the 3D geometry of your surroundings using point cloud data received from the depth camera. Matterport already has an impressive demo of this running on the Tango. It allows the user to scan an area with the tablet (very slowly) and builds a textured mesh out of what it sees. With meshing turned off, the tablet can detect precisely where it is within the saved environment mesh.

This geometry can possibly be used in Unity3D as a mesh collider which is also rendered to the depth buffer of the scene’s camera while displaying the tablet camera’s video feed. This means superimposed augmented reality characters can accurately collide with the static environment, as well as be occluded by real world objects. Characters can now not only appear on top of your table, but behind it–obscured by a chair leg.
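
A sketch of that wiring in Unity, assuming you already have the reconstructed mesh and a depth-only material (a shader that writes to the depth buffer with ColorMask 0):

```csharp
using UnityEngine;

public class RoomMeshSetup : MonoBehaviour
{
    public Material occlusionMaterial; // assumption: depth-only (ColorMask 0) shader

    public void Apply(Mesh reconstructedMesh)
    {
        var room = new GameObject("RoomMesh");
        room.AddComponent<MeshFilter>().mesh = reconstructedMesh;

        // Physics: virtual characters can stand on and bump into real surfaces.
        room.AddComponent<MeshCollider>().sharedMesh = reconstructedMesh;

        // Rendering: fill the depth buffer without drawing color, so the
        // camera feed shows through while characters behind real geometry
        // are clipped away.
        room.AddComponent<MeshRenderer>().material = occlusionMaterial;
    }
}
```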

ENVIRONMENTAL LIGHTING

Finally, Tango can help solve the challenge of properly lighting AR objects. Most AR apps assume there’s a light source on the ceiling and place a directional light pointing down. With a mesh built from local point cloud data, you can instead generate a panoramic render of where the observer is standing in the real world. This image can be used as a cube map for image-based lighting systems like Marmoset Skyshop, producing accurate lighting on 3D objects–which, combined with environmental occlusion, makes for a truly next-generation AR experience.
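
In Unity you could capture such a cube map by rendering the reconstructed room mesh from the observer’s position, something like this sketch (the "_Cube" property name is an assumption; Skyshop has its own capture workflow):

```csharp
using UnityEngine;

public class EnvironmentProbe : MonoBehaviour
{
    public Cubemap cubemap;      // e.g. a small 128x128 cubemap asset
    public Material iblMaterial; // material using an image-based lighting shader

    public void Capture(Vector3 observerPosition)
    {
        // Render the room mesh into all six cubemap faces from where the user stands.
        var go = new GameObject("ProbeCamera");
        var cam = go.AddComponent<Camera>();
        cam.transform.position = observerPosition;
        cam.RenderToCubemap(cubemap);
        Destroy(go);

        iblMaterial.SetTexture("_Cube", cubemap);
    }
}
```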

A QUICK TEST

The first thing I did with the Unity SDK is drop the Tango camera in a Camera Birds scene. One of the most common requests for Camera Birds was to be able to walk through the forest instead of just rotating in place. It took no programming at all for me to make this happen with Tango.

This technology still has a long way to go–it has to become faster and more precise. Luckily, Movidius has already produced the Myriad 2, which is reportedly 3-5X faster and 20X more power efficient than the chip currently in the Tango prototypes. Vision Processing technology is a supremely nerdy topic–after all, it’s literally rocket science. But it has far-reaching implications for wearable platforms.

Samsung Gear VR Development Challenges with Unity3D

As you may know, I’m a huge fan of Oculus and Samsung’s Gear VR headset. The reason isn’t the opportunity Gear VR presents today; it’s the future of wearables–specifically, self-contained wearable devices. In this category, Gear VR is really the first of its kind. The lessons you learn developing for Gear VR will carry over into the bright future of compact, self-contained wearable displays and platforms, many of which we’ve already started to see.

The Gear VR in the flesh (plastic).

Gear VR development can be a challenge. Rendering two cameras and a distortion mesh on a mobile device at a rock solid 60fps requires a lot of optimization and development discipline. Now that Oculus’ mobile SDK is public and having worked on a few launch titles (including my own original title recently covered in Vice), I figured I’d share some Unity3D development challenges I’ve dealt with.

THERMAL ISSUES

The biggest challenge with making VR performant on a mobile device is throttling due to heat produced by the chipset. Use too much power and the entire device will slow itself down to cool off and avoid damaging the hardware. Although the Note 4 approaches the XBOX 360 in performance characteristics, you only have a fraction of that power available, because the phone must take power and heat considerations into account when keeping the CPU and GPU running at full speed.

With the Gear VR SDK you can independently tell the device how fast the GPU and CPU should run. This prevents you from eating up battery when you don’t need the extra cycles, and it lets you tune your game for performance at lower clock speeds. Still, you have to be aware of what eats up CPU cycles and what consumes GPU resources; ultimately, you must choose which to allocate more power to.
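
In the Oculus Unity integration of this era, that tuning looks roughly like this; cpuLevel and gpuLevel are the names OVRManager exposed, though they may differ in your SDK version:

```csharp
using UnityEngine;

public class ClockTuning : MonoBehaviour
{
    void Start()
    {
        // Levels run 0-3. Favor the GPU, which is usually the bottleneck when
        // rendering two eyes plus the distortion mesh at 60fps.
        OVRManager.cpuLevel = 1;
        OVRManager.gpuLevel = 3;
    }
}
```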

GRAPHICAL DETAIL

The obvious optimization is lowering graphical detail. Keep your polycount under 50k triangles. Avoid as much per-pixel and per-vertex processing as possible. Since you have tons of RAM but relatively little GPU power available, opt for more texture detail over geometry. This includes using lightmaps instead of dynamic lighting. Of course, restrict your use of the alpha channel to a minimum–preferably to quick particle effects, not things that stay on screen for a long period of time.

Effects you take for granted on modern mobile platforms, like skyboxes and fog, should be avoided on Gear VR. Find alternatives or design an art style that doesn’t need them. A lot of these restrictions can be made up for with texture detail.

A lot of standard optimizations apply here–for instance, use texture atlasing and batching to reduce draw calls. The target is under 100 draw calls, which is achievable if you plan your assets correctly. Naturally, there are plenty of resources in the Asset Store to get you there. Check out Pro Draw Call Optimizer for a good texture atlasing tool.

CPU OPTIMIZATIONS

There are less obvious optimizations you might not be familiar with until you’ve gone to extreme lengths to optimize a Gear VR application. One is removing as many Update methods as possible. Most Update code that just waits for something to happen (like an AI that waits 5 seconds to pick a new target) can be changed to a coroutine scheduled to run in the future. Converting Update loops to coroutines takes the burden of waiting off the CPU. Even empty Update functions drain the CPU–death by a thousand cuts. Go through your code base and remove all unnecessary Update methods.
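
Here’s the pattern in miniature, using the 5-second AI retarget example from above: the Update version pays a per-frame cost just to wait, while the coroutine version wakes up only when there’s work to do.

```csharp
using System.Collections;
using UnityEngine;

public class RetargetAI : MonoBehaviour
{
    // Wasteful: this runs every frame even though it acts once per 5 seconds.
    // float timer;
    // void Update()
    // {
    //     timer += Time.deltaTime;
    //     if (timer >= 5f) { PickNewTarget(); timer = 0f; }
    // }

    // Cheaper: schedule the work and let the engine skip the waiting.
    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForSeconds(5f); // no per-frame cost while idle
            PickNewTarget();
        }
    }

    void PickNewTarget() { /* choose a new target */ }
}
```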

As in any mobile game, you should be pooling prefabs. I use Path-o-Logical’s PoolManager, though it’s not too hard to write your own. Either way, by recycling pre-created instances of prefabs, you save memory and reduce hiccups due to instantiation.
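
If you do roll your own, the core of a pool is only a few lines. A bare-bones sketch (PoolManager does far more than this):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class SimplePool : MonoBehaviour
{
    public GameObject prefab;
    public int initialSize = 32;
    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pay the instantiation cost once, up front.
        for (int i = 0; i < initialSize; i++)
        {
            var go = Instantiate(prefab);
            go.SetActive(false);
            pool.Enqueue(go);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        // Grow the pool if we run dry (the simplest possible policy).
        var go = pool.Count > 0 ? pool.Dequeue() : Instantiate(prefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        pool.Enqueue(go);
    }
}
```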

IN CONCLUSION

There’s nothing really new here to most mobile developers, but Gear VR is definitely one of the bigger optimization challenges I’ve had in recent years. The fun part about it is we’re kind of at the level of Dreamcast-era poly counts and effects but using modern tools to create content. It’s better than the good old days!

It’s wiser to build from the ground up for Gear VR than to port existing applications, because making a VR experience that is immersive and performant within these constraints requires all disciplines (programming, art, and design) to build around them from the start of the project.