ARKit, ARCore, Facebook, and Snapchat, or THE BATTLE FOR SMARTPHONE AR WORLD SUPREMACY

I haven’t written a blog post in a while. Over the past six months, I’d try to pontificate on the topic of Augmented Reality, but some major new development would always occur. I have a bunch of scrapped posts sitting in Google Drive that are now totally irrelevant. Cruising through December, I figured the coast was clear. I was considering writing a dull year-in-review post when the final paradigm shift occurred with Snap’s release of Lens Studio. So, let’s try to get this out before it’s obsolete!

The Return of Smartphone AR

Smartphone AR is definitely back. After Apple’s announcement, everyone wanted to talk about ARKit. Even though I developed the award-winning Holographic Easter Egg Hunt for HoloLens with Microsoft this past spring, my discussions with clients and investors became laser-focused on smartphone AR instead of mixed reality.

It looks like 2018 will be a big year for these platforms while mixed reality headset makers gear up for 2019 and beyond. Because of this renewed interest in smartphone AR, this is a good time to investigate your options if you’re looking to get into this platform.

ARKit and ARCore

Despite being announced after Facebook’s AR Camera Effects platform, it was Apple’s ARKit announcement that really set off this new hype cycle for smartphone AR. Google’s announcement of ARCore for Android seemed like a me-too move, but it was also quite significant.

This isn’t about ARKit versus ARCore, since there is no real competition: they do similar things on different platforms. ARCore and ARKit have a common set of features but implement them in ways that are subtly different from the user’s perspective. Because of this, it’s not terribly difficult to port applications between the two platforms if you are using Unity.
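
If you go that route, a thin abstraction layer keeps gameplay code out of platform-specific details. Below is a minimal sketch with hypothetical type names; the real hit-test and anchor calls would come from the ARKit and ARCore Unity plugins.

```csharp
using UnityEngine;

// Hypothetical abstraction: gameplay code talks to this interface, and a
// per-platform implementation wraps the ARKit or ARCore Unity plugin.
public interface IARPlatform
{
    // Raycast a screen point against detected planes.
    bool TryGetPlaneHit(Vector2 screenPoint, out Pose hitPose);

    // Create an anchor at the given pose and return its transform.
    Transform AddAnchor(Pose hitPose);
}

public class ObjectPlacer : MonoBehaviour
{
    public GameObject Prefab;

    // Wired up at startup by platform-specific bootstrap code (hypothetical).
    private IARPlatform platform;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Pose hit;
            if (platform.TryGetPlaneHit(Input.mousePosition, out hit))
            {
                // Parenting under an anchor keeps the object registered to
                // the real world as the SDK refines its tracking.
                Instantiate(Prefab, platform.AddAnchor(hit), false);
            }
        }
    }
}
```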

The biggest limitation of both ARKit and ARCore is that when you quit the application, it forgets where everything is. Although you can place anchors in the scene to position virtual objects in the real world, there is no persistence between sessions. I suspect ARCore might advance quicker in this department, as Google’s ill-fated Tango technology had this in its SDK for years. I’m assuming we’ll see more and more Tango features merged into ARCore in 2018. Rumors suggest ARKit 2.0 will see similar improvements.

ARKit does one-up ARCore with the addition of face tracking on the iPhone X. This is the most advanced facial tracking system currently available on mobile phones; however, it’s only on one device–albeit a wildly popular one. ARKit’s facial tracking seems to produce results far beyond current mask-filter SDKs, as it builds a mesh of your face using the TrueDepth camera. That said, there doesn’t seem to be a reason many of the basic facial tracking features can’t be brought over to phones with standard cameras. Maybe we’ll see a subset of these features trickle down to other iOS devices in the near future.

ARKit also has far more market penetration than ARCore. ARCore runs on a tiny fraction of Android devices, and this isn’t likely to improve soon. ARKit requires an iPhone 6S or above, but that’s still a large chunk of iOS devices. There is probably zero business case for focusing on ARCore first. If you truly need to develop a standalone AR app, your best bet is to target iOS primarily and Android second (if at all). If ARCore starts getting Tango features added to it ahead of ARKit, then there will be compelling use cases for ARCore-exclusive apps.

Facebook Camera Effects Platform vs. Snapchat World Lens

When ARKit was first announced, I had a few meetings at large companies. They all thought it was cool but didn’t want to develop standalone apps. Getting users to download yet another app is expensive and somewhat futile, as most apps go unused after a few tries. There’s a lot more interest in distributing AR experiences inside apps people already have installed. Before Facebook Camera Effects was announced, the only option was Blippar, which really isn’t an option since hardly anyone uses it.

I got access to Facebook Camera Effects early on and was really impressed with the tools. Leading up to the public release, Facebook added a lot of features. I’ve seen everything from simple masks to full-blown multiplayer games built with Facebook’s AR Studio.

Facebook’s AR Studio

Facebook developed an entire 3D engine inside the Facebook Camera. It has an impressive array of features, such as a full-featured JavaScript API, facial tracking, SLAM/plane detection, bones (sadly only animatable in code), 2D sprite animation, particles, shaders, UI, and advanced lighting and material options. You can also access part of the Facebook graph, as well as any external URL you want. If you can fit inside the filter’s size, poly count, and community guideline restrictions, you can make a fairly elaborate AR app far beyond simple masks.

The great thing about the Camera Effects Platform is that you can distribute an AR experience through an app that already has hundreds of millions of users. The flip side of that reach is that a filter must be tested on a wide variety of phones to account for per-platform limitations and bugs, because Facebook AR filters run on a huge number of devices, whether they have native AR SDKs or not.

What’s tricky is that after getting approval to distribute your filter, you still have to somehow tell users to use it. Facebook provides a few options, such as attaching a filter to a promoted Facebook page, but discovery is still a challenge.

As the Camera Effects Platform opened to all, Snap released Lens Studio for both Windows and Mac. This platform lets developers create World Lens effects for Snapchat. I was really excited about this because a lot of clients were just not very enthusiastic about Facebook’s offering. I kept hearing that the valuable eyeballs are all on Snapchat and not Facebook, despite Snapchat’s flatlining growth. Brands and marketers were chomping at the bit to produce content for Snapchat without navigating Snap’s opaque advertising platform.

Snap’s Lens Studio

Lens Studio shares many similarities with Facebook’s AR Studio, including the use of JavaScript as its scripting language. The big difference is that Lens Studio does not expose Snapchat’s facial tracking features. You can only make World Lenses–basically, animated 3D objects placed on a plane recognized by the rear camera.

World Lenses also have much tighter size and poly count restrictions than Facebook Camera Effects. However, Lens Studio supports importing FBX bone animations and morph targets, along with a JavaScript API to play and blend simultaneous animations. Lens Studio also supports Substance Designer for texturing, plus a lot of great material and rendering options that make it easier to build a nice-looking World Lens despite the lower detail budget.

As for distribution, you still have to go through an approval process, which includes making sure your lens is performant on low-end devices as well as current phones. Once approved, you can link your lens to a Snapcode, which you can distribute any way you want.

Which should you develop for? Unlike ARCore and ARKit, Facebook and Snapchat have wildly different feature sets. You could start with a Facebook Camera Effect and then produce a World Lens with a subset of its features using detail-reduced assets.

The easier path may be to port up: start with a simple World Lens and then build a more elaborate Facebook AR filter from the same assets. Given how few people use Facebook’s Stories feature, it may be smarter to target Snapchat first. Once Facebook’s Camera Effects Platform works on Instagram, I’d probably target Facebook first. It really depends on what demographic you are trying to hit.

App vs. Filters

Should you develop a standalone AR app or a filter inside a social network platform? It really depends on what you’re trying to accomplish. If you want to monetize users, the only option is a standalone ARKit or ARCore app: you are free to add in-app purchases and ads to your experience as you would in any other app, while Facebook’s and Snap’s guidelines don’t allow this on their respective platforms. If you are using AR to create branded content, filters are usually ads in themselves. If you are trying to get as much reach as possible, a properly marketed and distributed AR filter is a no-brainer. A thorough mobile AR strategy may involve a combination of native apps and filters–and in the case of Facebook’s Camera Effects Platform, the two can even talk to each other via REST calls.

How each platform ranks, sorted by feature complexity

2018 is going to be an exciting year for smartphone AR. With the explosive growth of AR apps on the App Store and the floodgates opening for filters on social media platforms, you should be including smartphone AR in your mixed reality strategy. Give your users a taste of the real thing before the mixed reality revolution arrives.

Designing HoloLens Apps For A Small FOV

In my recent VRDC talk, I spent a slide on the limitations of the platform. The most common complaint about HoloLens, and just about any other AR or MR platform, is the small window in which the augmentations appear. This low-FOV issue is a hard physics problem that isn’t going to be solved on a Moore’s Law timetable. Get used to it. We’re going to be stuck with it for a while. (Please, someone prove me wrong!)

It’s not the end of the world. It’s just that developers have to learn how to build applications around this limitation.

GUIDE THE USER

Most VR applications require the user to look around. After all, that’s the whole point of being immersed in a virtual environment. Even if it’s a seated experience, usually the player is encouraged to search the scene for things to look at or interact with.

In mixed reality, the lack of peripheral vision (or anything near it) due to FOV limitations makes visually searching for objects frustrating. A quick scan of the scene won’t catch your eye on something interesting; you have to look for things more deliberately.

HoloLens’ HoloToolkit provides a solution to this with the DirectionIndicator class: a directional arrow, attached to the cursor, that points toward a targeted object.
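
If you’d rather roll your own, the core idea fits in a few lines: hide the arrow while the target is on screen, otherwise rotate it to point at the target. A minimal homegrown sketch, not the HoloToolkit implementation itself:

```csharp
using UnityEngine;

// Minimal sketch of the direction-indicator idea: attach this to your
// cursor object, assign a target and an arrow mesh parented to the cursor.
public class SimpleDirectionIndicator : MonoBehaviour
{
    public Transform Target;  // the object we want the user to find
    public GameObject Arrow;  // arrow mesh parented to the cursor

    void Update()
    {
        Camera cam = Camera.main;
        Vector3 vp = cam.WorldToViewportPoint(Target.position);
        bool onScreen = vp.z > 0 && vp.x > 0 && vp.x < 1 && vp.y > 0 && vp.y < 1;

        // Only show the arrow while the target is outside the view.
        Arrow.SetActive(!onScreen);
        if (!onScreen)
        {
            Vector3 toTarget = Target.position - transform.position;
            Arrow.transform.rotation = Quaternion.LookRotation(toTarget, cam.transform.up);
        }
    }
}
```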

Perhaps a more natural version of this is used in Young Conker. The directional indicator is 3D, naturally sliding along and colliding with the environment.

USE AUDIO CUES

Unity makes it incredibly easy to add spatial sound to a HoloLens app. Simply enable the Microsoft HRTF Spatializer plugin in the audio settings and check “spatialize” on your positional audio sources. This is more than just a technique for immersion–the positional audio is so convincing you can use it to direct the user’s attention anywhere in the environment. If an object is way out of the user’s view, emit a sound from it to encourage the player to look at it.
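
The per-source setup is tiny. A sketch of an attention-cue component; it assumes the HRTF spatializer plugin is already selected in the project’s audio settings:

```csharp
using UnityEngine;

// Sketch: a positional audio cue used to pull the user's gaze toward an
// off-screen object. Assumes the Microsoft HRTF Spatializer is selected
// under Edit > Project Settings > Audio > Spatializer Plugin.
[RequireComponent(typeof(AudioSource))]
public class AttentionCue : MonoBehaviour
{
    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialize = true;    // route the source through the HRTF plugin
        src.spatialBlend = 1.0f;  // fully 3D, no 2D bleed
        src.loop = true;
        src.Play();               // sound now appears to come from this object
    }
}
```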

DESIGN ART ACCORDINGLY

Having art break the limited FOV frame is a real problem. To a certain degree, this can’t be solved–get close enough to anything and it will be big enough to go beyond the FOV’s augmentation area.

Ether Wars uses small objects to prevent breaking the frame

This is why I design most HoloLens games around lots of smaller models instead of large game characters or objects. If the thing of interest isn’t breaking the frame, the user might not notice that the rest of the graphics are getting clipped. Also, Microsoft recommends keeping the near clipping plane a few feet out from the user–so if you can design the game such that the player isn’t supposed to get close enough to the holograms, you can prevent most frame-breaking cases.
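
The clipping-plane half of this is a one-line camera tweak in Unity. Microsoft’s guidance at the time pointed to a near clip of roughly 0.85 meters; treat the exact value as tunable:

```csharp
using UnityEngine;

// Sketch: push the near clip plane out so holograms get culled before they
// are close enough to fill (and break) the limited FOV frame. 0.85m is the
// commonly cited HoloLens recommendation; adjust for your content.
public class NearClipSetup : MonoBehaviour
{
    public float NearClipMeters = 0.85f;

    void Start()
    {
        Camera.main.nearClipPlane = NearClipMeters;
    }
}
```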

CONCLUSION

For AR/MR developers, limited FOV is a fact of life. In enterprise apps where you are focused on a specific task, it’s not so bad. For games, most average players will be put off if they have to wrestle too much with this limitation. Microsoft’s showcase games still play very well with this restriction, and show some creative ways to get around it.

How To Demo HoloLens Apps In Public

Last week’s VRLA Summer Expo was the first time the public got a look at my current HoloLens project, Ether Wars. Tons of people lined up to try it–I must have done well over 100 demos over the two-day event. Since then, I’ve shown it to a variety of developers, executives, and investors, ranging from those with zero experience to those who have used HoloLens quite a bit. Combined with all the demoing done at HoloHacks a few months ago, I’ve picked up a lot of common-sense tips for demoing mixed reality apps. I figured I’d sum up some of my presentation tricks here.

Know Your Space

HoloLens can be a very temperamental device. Although it features the most robust tracking I’ve ever seen with an AR headset, areas with a lot of moving objects (pets, crowds of people), featureless walls, windows, and mirrors can really mess things up. Also, rooms that are too dark or too bright can make the display look not so great.

If you are traveling somewhere to show your app, try to find out ahead of time what the room you’ll be demoing in is like. It might be possible to ask for an alternative room if the space they’ve got you in is inappropriate.

And how do you know if the space is inappropriate? Scan the room before the demo starts. In the case of Ether Wars, you have to scan the room before you play the game. This scan is saved, so subsequent games don’t have to go through that process. When I demo the game, I scan the room myself before letting anyone else use it. This confirms the room actually works and lets every other user skip a sometimes lengthy step.

Consider building demo-specific safety features. For instance, Ether Wars needs ceilings to spawn space stations from. For a room with a vaulted ceiling that HoloLens can’t scan, a safety feature could automatically spawn the bases at a fixed ceiling height for demo purposes.
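
A sketch of what such a fallback might look like; the timeout, prefab, and height are all made up for illustration:

```csharp
using UnityEngine;

// Hypothetical demo-mode safety net: if the scan hasn't produced a usable
// ceiling after a timeout, spawn the bases at an assumed ceiling height.
public class CeilingSpawnFallback : MonoBehaviour
{
    public GameObject BasePrefab;
    public float AssumedCeilingY = 2.7f; // world-space height, made up
    public float ScanTimeout = 30f;      // seconds to wait for a real scan

    private bool spawned;

    void Update()
    {
        if (spawned || Time.timeSinceLevelLoad < ScanTimeout)
            return;

        spawned = true;
        Vector3 pos = Camera.main.transform.position;
        pos.y = AssumedCeilingY; // pretend the ceiling is here
        Instantiate(BasePrefab, pos, Quaternion.identity);
    }
}
```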

Teach The Air Tap

Microsoft’s mantra for HoloLens interfaces is “Gaze, Gesture, and Voice”–essentially a controller-free interface for all HoloLens apps. Very cool in concept, but I find at least half the people who try the device can’t reliably perform the air tap. It’s a tricky and unnatural gesture. Most people want to reach out and poke the holograms with their finger. It takes quite a bit of explanation to teach users that they must aim with their head and perform that weird air tap motion to click on whatever is highlighted by the cursor.

Teach the user how to perform the air tap before the demo–perhaps by having them actually launch and pin the app on a wall. It might help to put a training exercise in the app itself. For instance, to start Ether Wars you have to gaze and air tap on a button to start the experience. I use this moment to teach the player how to navigate menus and use the air tap.
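
For reference, here’s a sketch of what that gaze-and-tap start button can look like in Unity, using the era-appropriate GestureRecognizer API (the namespace moved to UnityEngine.XR.WSA.Input in later Unity versions):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input; // UnityEngine.XR.WSA.Input in newer Unity

// Sketch of a "training" start button: the demo doesn't begin until the
// user successfully gazes at this object and performs one air tap.
public class StartButton : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            // Was the user's gaze on this button when they tapped?
            RaycastHit hit;
            if (Physics.Raycast(headRay, out hit) && hit.transform == transform)
            {
                Debug.Log("Air tap learned! Starting the experience...");
            }
        };
        recognizer.StartCapturingGestures();
    }
}
```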

Worst case scenario, you can stick your arm over the player’s shoulder in view of the HoloLens and perform the air tap yourself if the user just can’t figure it out.

Check The Color Stack

Unlike VR, it’s difficult to see what the user is viewing when demoing a HoloLens app. You can get a live video preview from the Windows Device Portal; however, this can affect the speed and resolution of the app, degrading the performance of your demo. One trick I use to figure out where the user is in the demo is learning what the colors of the stacked display look like on different screens.

Each layer of the display shows different colors

If you look at the side of the HoloLens display you’ll see a stack of colored lights. These colors change depending on what is being shown on the screen. By observing this while people are playing Ether Wars, I’ve learned to figure out what screen people are on based on how the lights look on the side of the device. Now I don’t have to annoyingly ask “what are you seeing right now” during the demo.

None of this is rocket science–just some tips and tricks I’ve learned while demoing HoloLens projects over the past month or so. Let me know if you’ve got any others to add to the list.

So, You Wanna Make A Pokemon Go Clone?

I told you not to do it.

But suddenly my 2013 blog post about displaying maps in Unity3D is now my top page of the month. There are lots of Pokemon Go clones being built right now.

Well, if you absolutely insist, here’s how I’d go about it.

Step 1: Raise tons of money

You’re going to need it. And it’s not just for user acquisition. You’ll need a lot of dry powder for scaling costs in the unlikely event this game is as successful as you’ve claimed to your investors. For small apps, accessing something like the Foursquare API may be free–but it will require an expensive licensing deal to use it at the scale you’re thinking of and without restrictions.

Step 2: Buy every single location-based game you can

Just having access to a places API such as Foursquare or Factual isn’t enough. You need location data relevant to a game–such as granular details about places inside of larger locations that are of interest to players. Pokemon Go has this from years of Ingress players submitting and verifying locations around the world.

Nearly 10 years ago, there was a frenzy of investment in location-based games. The App Store is now littered with the dead husks of old LBS games, plus ones on life support. With that pile of money you raised, it should be easy to go on a shopping spree and buy up these games–not for their users, or even the technology, but for the data. Most of these games may have been fallow for years, making their location data stale. Yet it may be possible, with machine learning or old-fashioned elbow grease, to work that data into a layer of interesting sub-locations for your game to be designed around.

Step 3: Plan for Database Hell

Designing for scale at the start is a classic mistake for any startup. You’re effectively building a football stadium for a carload of people. That doesn’t mean you shouldn’t entertain the idea of scaling up a service once it’s successful.

Full disclosure: I’ve never built an app at the scale of Pokemon Go. Few people have. I suspect many of the server issues are related to scaling a geospatial database with that many users. It’s much harder to optimize your data around location than around other usage patterns. Don’t take my word for it; check out this analysis.

It’s been years since I’ve looked at geospatial databases. Despite some announcements, it doesn’t look like a lot has changed. A cursory search suggests PostGIS is still a solid choice, and there are a lot of Postgres experts out there who can help with scaling issues. MongoDB’s relatively new spatial features may also be an option.
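
For flavor, the hot-path query tends to look like this: find everything within some radius of the player, backed by a spatial (GiST) index. The table, columns, and connection string below are made up; the driver is Npgsql and the radius test is PostGIS’s ST_DWithin:

```csharp
using Npgsql;

class NearbyQuery
{
    // Hypothetical: list spawn points within 500m of the player.
    // Assumes: CREATE INDEX ON spawn_points USING GIST (geog);
    static void Main()
    {
        using (var conn = new NpgsqlConnection("Host=localhost;Database=game"))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(
                @"SELECT id, name
                  FROM spawn_points
                  WHERE ST_DWithin(geog,
                        ST_MakePoint(@lon, @lat)::geography,
                        @radius)", conn))
            {
                cmd.Parameters.AddWithValue("lon", -118.2437);
                cmd.Parameters.AddWithValue("lat", 34.0522);
                cmd.Parameters.AddWithValue("radius", 500.0); // meters
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        System.Console.WriteLine(reader.GetString(1));
                }
            }
        }
    }
}
```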

As for fancier alternatives, Google App Engine is an easy way to “magically” scale an app. Google has also started releasing really interesting new geospatial services, not to mention some great support for mobile apps that may make integrating with Unity3D a bit easier. However, GAE is very expensive at scale, and the location features are still in alpha. Choosing Google App Engine is a risky decision, but it may also be an easy way to get started.

To avoid vendor lock-in, have a migration strategy in mind. One option: use your pile of money to recruit backend engineers from startups that have handled large user bases.

Step 4: Get Ready for the Disappointing State of Mobile AR

Pokemon Go has sparked a lot of renewed interest in AR. Much like with geospatial databases, not much has changed in the past five years as far as what your average smartphone can do. Sure, beefier processors and higher-res cameras can get away with some limited SLAM functionality, but these features are very finicky. Your best bet is to keep AR to a minimum, as Pokemon Go smartly did. Placing virtual objects on real-world surfaces in precise locations, especially outdoors, is the realm of next-generation hardware.

Step 5: ??????

OK, this isn’t a precise recipe for a Pokemon Go clone. But hey, if you’ve completed step one, maybe you should contact me for more details?

There’s Nothing To Be Learned From Pokemon Go

Pokemon Go is a watershed moment in gaming. I’ve never seen a game gain this much traction this fast. My neighborhood is filled with wandering players of all demographics, strolling around with phone in hand looking for Pokemon. Since the game’s launch, every day has looked like Halloween without the costumes.

In general, the job of a venture capitalist is really easy. For most, you simply wait around for another firm to invest in something and then add to that round. Or, you can wait for something to be really successful and cultivate clones of it. I can guarantee there are now a few VCs with deals in motion to build a “fast follow” mimic of Pokemon Go.

Please don’t.

There is absolutely no way another developer can duplicate the success of this game. In fact, it remains to be seen if this game will be a success beyond its initial pop. No game has ever had an opening weekend of this scale–but still, remember Draw Something or maybe even Fallout Shelter? I’m enjoying Pokemon Go myself, but many of my colleagues are questioning whether it has legs. Regardless of that, any location-based game you may be thinking of making is probably missing a few key ingredients to Pokemon Go’s success.

My pathetically low-level character

Niantic has the Best Location Data in the Business

I’ve spent time building location-based service apps in the past. The biggest problem with making games that play out over the real world is populating the map with interesting stuff to do. First, there’s access to map data–on Pokemon Go’s scale, this is not cheap (although there are open source solutions). But simply having a map is only one piece of the puzzle–you need information about how the locations are used. Which places are busiest? Where do players like to group up?

Niantic has this data from years of running Ingress–pretty much the largest location-based game ever made. During the years Ingress ran as a project fully funded and supported by Google, Niantic built an incredibly valuable data layer on top of the real world, and that layer has been repurposed for Pokemon Go.

You could possibly license similar information from other companies (Foursquare comes to mind), but Niantic’s data is probably more geared toward the activity patterns of mobile gamers than those who want to Instagram their lunch. (Granted, there’s a lot of overlap there.)

Pokemon Is One of the Biggest IPs in the World

Prior to Pokemon Go, and even prior to Ingress, there were plenty of location-based games. Anyone remember Shadow Cities? Or Booyah? They may have just been too early–back then there weren’t enough smartphones to solve the density problem you have with location-based games. Now that smartphones are ubiquitous, how do you get enough players to fill up the world map? One way is to use one of the biggest video game IPs on the planet.

The demand for Nintendo IPs on other platforms is unprecedented. The fervor for Pokemon in particular is huge, with lots of fake Pokemon apps pulled from Google Play and the App Store over the years. Investors have responded to this craze, with Nintendo’s stock jumping 25% since the release of Pokemon Go.

There really isn’t another IP as big as Pokemon that can be applied to a game of this scale. Sprinkle a little Pokemon onto a little Ingress and the results are explosive.

There’s nobody else on the planet who can do this.

HoloHacks Is A Step Towards Mixed Reality Domination for Microsoft

Microsoft has been holding a series of hackathons for their new HoloLens platform in a number of cities across the US. I couldn’t make it to the original event in Seattle, but managed to compete in HoloHacks Los Angeles at the Creative Technology Center downtown.

The event went from Friday evening to Sunday afternoon, concluding with final presentations and the judges awarding prizes to the winning apps. My team’s app, “A Day at the Museum,” won the visual design prize with a mixed reality replacement for museum audio tour earpieces.

The event worked like most hackathons I’ve attended: come up with ideas, form teams, and get to hacking. Microsoft had us covered on the hardware front–not only did each team get to borrow two HoloLens devices, but you could even get a loaner Surface Book for the event if you didn’t have a Windows PC to hack on. (As a MacBook Pro user, that option was a lifesaver.)

I was lucky to sit at a table with Leone Ermer, Edward Dawson-Taylor, Chris Horton, Ed Hougardy, and Steven Winston. After brainstorming a few ideas we came up with the museum tour concept and got to work. We worked brilliantly as a team and everyone was critical in taking this project across the finish line.

HoloHacks isn’t just a promotional event; it’s an important step in building a community around the platform. By iterating with refreshing openness and letting even complete novices build apps and learn about HoloLens’ capabilities, Microsoft is placing itself way ahead of the competition, not just in technology but in the ecosystem that supports it.

No other augmented reality platform can operate at this scale. By the time other platforms catch up to HoloLens’ features, Microsoft will have a thriving ecosystem of Windows Holographic developers that other hardware vendors just can’t compete with. Why use some other platform when you can easily find developers and tools for HoloLens to get the job done? If the competition were smart, they’d focus on the developer community just as much as the technology.

Debugging HoloLens Apps in Unity3D

I’ve been developing on HoloLens for a few weeks now, and I’m being reacquainted with the tricky parts of debugging hardware-specific augmented reality apps in Unity3D. I went through a lot of these issues with my Google Tango project, InnAR Wars, so I’m somewhat used to it. However, having to wear the display on your head while testing code brings a whole new dimension of difficulty to debugging augmented reality applications. I figured I’d share a few tips I use when debugging Unity3D HoloLens apps, beyond the standard Unity3D remote debugging tools you’re used to from mobile development.

Debugging in the Editor vs. Device

The first thing you need to do is figure out how to test code without deploying to the device. Generating a Visual Studio project, compiling, and uploading your application to one (or more) HoloLens headsets is a real pain when trying to iterate on simple code changes. It’s true that Unity3D can’t run any of HoloLens’ AR features in the editor, but there are times when you just have to test basic gameplay code that doesn’t require spatialization, localization, or any HoloLens-specific features. There are a few steps that make this easier.

Make A Debug Keyboard Input System

HoloLens relies mostly on simple gestures (Air Tap) and voice for input. The first thing you need for testing HoloLens code in the Unity3D editor is a way to trigger, via the keyboard, whatever event normally fires off an Air Tap or speech command. In my case, I wrote a tiny bit of code that uses the space bar to trigger the Air Tap. Basically, anywhere you add a delegate to handle an Air Tap or speech command, you need to add some input code that triggers that same method via the keyboard.
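
A minimal sketch of the idea; the handler name is made up, and on device it would be the same method your GestureRecognizer’s tap delegate calls:

```csharp
using UnityEngine;

// Sketch: in the editor, the space bar stands in for the Air Tap.
public class DebugTapInput : MonoBehaviour
{
    // Hypothetical shared handler: both the gesture delegate (on device)
    // and the keyboard (in the editor) call this.
    public void HandleAirTap()
    {
        // ...select whatever the gaze cursor is currently targeting...
    }

    void Update()
    {
#if UNITY_EDITOR
        if (Input.GetKeyDown(KeyCode.Space))
        {
            HandleAirTap();
        }
#endif
    }
}
```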

Use An Oculus Rift Headset

I was pleasantly surprised to find that the Unity HoloLens Technical Preview supports Oculus Rift. Keep your Rift plugged in when developing for HoloLens: when you run your application in the Unity editor, it will show up inside the Rift–albeit against a black background. This is extremely helpful for debugging code that uses gaze, positional audio, and even limited movement of the player via Oculus’ positional tracking.

Use The HoloLens Companion App

Microsoft provides a HoloLens companion app in the Windows Store with a few handy features. The app connects to HoloLens via WiFi and lets you record videos live from the headset (very useful for documenting reproducible bugs and crashes). It lets you stop and start apps remotely, which can be useful when trying to launch an app on multiple HoloLenses at the same time. You can also use your PC’s keyboard to send input to a remote HoloLens. This is convenient for multiplayer testing–use Air Tap on the device on your face and the companion app to trigger input on the other device.

These tips may make building HoloLens apps a little easier, but I really hope Microsoft adds more debugging features to future versions of the SDK. There are some simple things Microsoft could do to make development more hassle-free, although there’s really a limit to what you can do in the Unity Editor versus the device.