Location-Based VR World Tour, or THE VOID VS ZERO LATENCY VS VRCADE VS IMAX VR

Ever since developing last year’s Holographic Easter Egg Hunt with Microsoft for VRLA, I’ve been interested in creating location-based VR and AR experiences. These are cool projects to me since you can build hardware specific to the experience, design software for one fixed hardware configuration, and really go wild within the constraints of your budget, location, and audience. Plus, there’s the additional challenge of keeping the event profitable based on the number of customers you can run through the exhibit per hour.

Throughout the past year, I’ve managed to try most major location-based VR experiences. After finally trying The VOID this week at Disneyland, I figured I’d write up a quick series of impressions of all the ones I’ve tried.

The VOID / Secrets of the Empire

The newest location-based VR I’ve experienced is “Secrets of the Empire” by The VOID installed at Downtown Disney in Anaheim. Taking place before the events of Rogue One, this is a Star Wars adventure that puts you and a friend in the roles of two Rebel Alliance agents disguised as Stormtroopers who have to sneak into an Imperial base on Mustafar and retrieve critical intelligence for the Rebellion’s survival.

The VOID uses a custom headset and vest with backpack PC. The first thing I noticed is that it was really heavy–it felt like I was wearing at least 20 pounds of gear. However, the vest and headset have a lot of innovative features. My favorite is the force feedback pads placed all around your body. When you are hit by blaster fire you can feel the impact and know where it’s coming from.

The headset has image quality comparable to the Oculus Rift and uses Leap Motion so you can see your hands. This is important because you can reach out and grab real-world objects such as blaster rifles that are tracked in VR when you pick them up, or even hit real buttons on virtual control panels to unlock doors. If you see a droid, reach out and touch it! It’s really there! The hands don’t quite line up with the real-world positions of the objects you see in VR, but it’s close enough.

The game itself is a roughly 20-minute experience where you team up with another player to infiltrate an Imperial base. While sneaking around, you’ll be shot at by Stormtroopers, clamber out on perilous ledges over lakes of molten lava (you can feel the heat!), and use teamwork to solve puzzles and defend against waves of enemies.

The graphics are great and tracking for both the player and your weapon is rock solid. The redirected walking and other tricks done with space and movement effectively give the sensation of exploring a small section of a large Imperial base. Everything does kind of feel cramped and constrained, but this adds to the tension of firefights when you and your partner are jammed up in a room with hordes of Stormtroopers firing through the door.


Mission complete!

I really enjoyed Secrets of the Empire–it’s perhaps less ambitious than Zero Latency’s offering, but executed FAR better than anything else I’ve tried. At $30 a pop (not to mention merch sales), they’re supposedly doing 700-800 people a day on weekends, which works out to over $20,000 a day in ticket sales alone. I’m not sure how the throughput math adds up, but this seems like a success to me.

Zero Latency / Singularity

I tried Zero Latency’s “Singularity” experience at LevelUp in the Las Vegas MGM Grand several months back. Zero Latency’s “Free Roam VR” platform shares similarities with The VOID in that it uses a backpack PC with a positionally tracked weapon. However, instead of teams of two moving around inside a constrained area that you can reach out and touch, Zero Latency accommodates up to 8 players at once in a large, empty trackable space.

Singularity is a shooter where your team has to exit a shuttlecraft and venture into a dangerous base infested with killer robots and ruled by a hostile AI. Armed with a gun that can be switched between various ammo types (shotgun, laser, blaster, etc.), you and your team must journey to the core and take out the AI once and for all in an epic boss battle.

The experience amounts to a lot of mindless shooting. The gameplay itself doesn’t seem very well designed as robots get stuck on parts of the scenery, different weapon types don’t seem to do much, and the visuals at times can be just downright bad. I guess it has positional audio, but it’s not very well done as I kept getting surprised by enemies firing from behind that I simply didn’t notice.

There are flashes of brilliance–and, dare I say, ambition. Zero Latency does some pretty crazy things with redirected walking and developed one particularly thrilling scenario where your party gets split in half and both groups must fend off drone attacks while carefully walking along a catwalk suspended hundreds of feet in the air. There’s even a part that does the whole 2001 thing where you walk up a wall in zero gravity. They take a lot of chances in this experience which makes those parts of Singularity very memorable.

Zero Latency’s backpack is much lighter than The VOID’s. However, they use vastly inferior OSVR headsets with terrible positional tracking on both the player and the weapon. I’m assuming the backpack PC has a much lower spec, because the visuals are quite a step down from The VOID’s.

Tracking is an issue. Singularity was a jittery, janky mess. Characters skidded all around while their IK contorted them into unnatural poses. The game also blares a klaxon in your ear when someone is in the wrong position or close to touching another player. This got super annoying after a while.


After finishing the 30-minute experience, I came to the conclusion that it’s a really solid alpha. I can’t tell if the game is underwhelming because of weak game development or because there isn’t enough juice in the hardware. I tend to think it’s the former, given the quality of VR I’ve experienced on far less powerful platforms. Content aside, the tracking is just so awful that I can’t imagine a better game alone would fix things. They need to upgrade the hardware, too.

VRStudios / VR Showdown in Ghost Town

On the lower end is VRStudios’ “VR Showdown in Ghost Town,” which you can currently play at Knott’s Berry Farm in Southern California. This has to be judged on a different scale because it’s much smaller in scope: a six-minute, $6 experience using much simpler hardware in a single-room-sized tracking volume. It seems much less expensive for the operator to install and maintain, and cheaper for the user to play (although the price per minute is about the same as The VOID).

It uses VRStudios’ VRCade platform, which seems to be like Gear VR on steroids. You wear a somewhat unwieldy, self-contained VR headset with tracking balls on it, along with a gun tracked by the same technology. Two players in the same room defend against a seemingly infinite number of zombies attacking an Old West town. You can pick up power-ups that give you more effective shots and some cool bullet-time effects, but at the end of 6 minutes, it’s over regardless.


The headset is clunky with a low refresh rate and narrow FOV, and the game itself really isn’t very good. But it’s a cheap way for people to try VR for the first time and a seemingly inexpensive way for locations to provide a VR experience. Still, you can have far better experiences at home with a game like Farpoint.

IMAX VR

IMAX VR is perhaps the most disappointing, as it has the ambiance of a dentist’s office with a bunch of VR you can largely experience at home on Rift, Vive, or PSVR. IMAX VR is notable for being one of the few places you can try Starbreeze’s wide-FOV StarVR headset. However, the John Wick StarVR game I tried isn’t even as good as Time Crisis, and that came out over 20 years ago! Honestly, they need to gut this place and start over. Doing something ambitious like what The VOID or Zero Latency has done makes more sense than a bunch of kiosks playing games you can already get at home.


The sterile, featureless waiting room at IMAX VR

Then again, maybe the economics work out–it might be easier to sell individual tickets to solo experiences than to wait to fill an 8-player co-op session at a premium price. Last year they were bragging about how much money the site was bringing in, but $15,000 a week isn’t a lot. I bet a Starbucks in the same location would do 3 times the business. In fact, The VOID does 3 times that on any given Saturday.

The Future of Location-Based VR

I’m really encouraged by the range of experiences I’ve tried at these different VR facilities. Many of these platforms seem to boast a similar set of features–including the ability to update the physical location with a new experience in a matter of minutes. A representative from The VOID told me it would be possible to swap out Secrets of the Empire for a new game (say, Ghostbusters) in about 15 minutes.

I can’t help but think a lot of the companies building these locations will be disrupted by a new generation of developers who can use off-the-shelf tracking solutions and next-generation backpack computers to build far more compelling experiences. With the Vive Pro’s vastly improved Lighthouse tracking and the Vive Wireless Adapter removing the need for cables, we might see a generational leap in quality as experienced game developers enter the market, displacing companies that merely shoehorned a tracking solution into whatever random mall storefront they had access to.

ARKit, ARCore, Facebook, and Snapchat, or THE BATTLE FOR SMARTPHONE AR WORLD SUPREMACY

I haven’t written a blog post in a while. Over the past 6 months, every time I tried to pontificate on the topic of Augmented Reality, some major new development would occur. I have a bunch of scrapped posts sitting in Google Drive that are now totally irrelevant. Cruising through December, I figured the coast was clear. I was considering writing a dull year-in-review post when the final paradigm shift occurred with Snap’s release of Lens Studio. So, let’s try and get this out before it’s obsolete!

The Return of Smartphone AR

Smartphone AR is definitely back.  After Apple’s announcement, everyone wanted to talk about ARKit. Despite developing the award-winning Holographic Easter Egg Hunt for HoloLens with Microsoft this past Spring, discussions with clients and investors became laser-focused on smartphone AR instead of mixed reality.

It looks like 2018 will be a big year for these platforms while mixed reality headset makers gear up for 2019 and beyond. Because of this renewed interest in smartphone AR, this is a good time to investigate your options if you’re looking to get into this platform.

ARKit and ARCore

Despite being announced after Facebook’s AR Camera Effects platform, it really was Apple’s ARKit announcement that set off this new hype cycle for smartphone AR. Google’s announcement of ARCore for Android was seemingly a me-too move, but also quite significant.

This isn’t about ARKit versus ARCore, since there is no real competition: they do similar things on platforms that don’t overlap. ARCore and ARKit share a common set of features but implement them in ways that are subtly different from the user’s perspective. Because of this, it’s not super difficult to port applications between the two if you’re using Unity.
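
To make that concrete, here’s the kind of abstraction layer I mean. This is a minimal sketch, not production code: the interface and both tracker classes are hypothetical names, and the actual plugin calls are left as stubs.

```csharp
using UnityEngine;

// A sketch of the usual porting strategy: hide the per-SDK calls behind one
// interface so gameplay code never touches ARKit or ARCore directly.
public interface IPlaneTracker
{
    // Returns true and fills 'pose' when a detected plane is hit by a ray
    // cast from the given screen point.
    bool TryHitPlane(Vector2 screenPoint, out Pose pose);
}

#if UNITY_IOS
class ArKitPlaneTracker : IPlaneTracker
{
    public bool TryHitPlane(Vector2 screenPoint, out Pose pose)
    {
        pose = default(Pose);
        return false; // TODO: call the ARKit plugin's hit test here
    }
}
#elif UNITY_ANDROID
class ArCorePlaneTracker : IPlaneTracker
{
    public bool TryHitPlane(Vector2 screenPoint, out Pose pose)
    {
        pose = default(Pose);
        return false; // TODO: call the ARCore SDK's raycast here
    }
}
#endif

public class ArBootstrap : MonoBehaviour
{
    public static IPlaneTracker Tracker { get; private set; }

    void Awake()
    {
        // Pick the platform-specific implementation once at startup.
#if UNITY_IOS
        Tracker = new ArKitPlaneTracker();
#elif UNITY_ANDROID
        Tracker = new ArCorePlaneTracker();
#endif
    }
}
```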

The biggest limitation of both ARKit and ARCore is that when you quit the application, it forgets where everything is. Although you can place anchors in the scene to position virtual objects in the real world, there is no persistence between sessions. I suspect ARCore might advance quicker in this department, as Google’s ill-fated Tango technology had this in its SDK for years. I’m assuming we’ll see more and more Tango features merged into ARCore in 2018. Rumors suggest ARKit 2.0 will see similar improvements.

ARKit does one up ARCore with the addition of face tracking for the iPhone X. This is the most advanced facial tracking system currently available on mobile phones. However, it’s only on one device–albeit a wildly popular one. ARKit’s facial tracking seems to produce results far beyond current mask filter SDKs as it builds a mesh out of your face using the TrueDepth camera. However, there doesn’t seem to be a reason why many of the basic facial tracking features can’t be brought over to phones with standard cameras. Maybe we’ll see a subset of these features trickle down into other iOS devices in the near future.

ARKit has far more penetration than ARCore. ARCore runs on a tiny fraction of Android devices, and this isn’t likely to improve. ARKit requires an iPhone 6S and above, but that’s still a large chunk of iOS devices. There probably is zero business case for focusing on ARCore first. If you truly need to develop a standalone AR app, your best bet is to target iOS primarily and Android second (if at all). If ARCore starts to get some of Tango’s features added to it ahead of ARKit, then there will be compelling use cases for ARCore exclusive apps.

Facebook Camera Effects Platform vs. Snapchat World Lens

When ARKit was first announced, I had a few meetings at large companies. They all thought it was cool, but didn’t want to develop standalone apps. Getting users to download yet another app is expensive and somewhat futile, as most go unused after a few tries. There’s a lot more interest in distributing AR experiences inside apps people already have installed. Before Facebook Camera Effects was announced, the only option was Blippar, which really isn’t an option since hardly anyone uses it.

I got access to Facebook Camera Effects early on and was really impressed with the tools. Leading up to the public release, Facebook added a lot of features. I’ve seen everything from simple masks to full-blown multiplayer games built with Facebook’s AR Studio.


Facebook’s AR Studio

Facebook developed an entire 3D engine inside the Facebook Camera. It has an impressive array of features: a full-featured JavaScript API, facial tracking, SLAM/plane detection, bones (sadly only animatable in code), 2D sprite animation, particles, shaders, UI, and advanced lighting and material options. You can also access part of the Facebook graph, as well as any external URL you want. If you can fit within the filter’s size, poly count, and community guideline restrictions, you can make a fairly elaborate AR app that goes far beyond simple masks.

The great thing about the Camera Effects Platform is that you can distribute an AR experience through an app that already has hundreds of millions of users. That reach cuts both ways: because Facebook AR filters run on a huge number of devices, whether they have native AR SDKs or not, a filter must be tested on a wide variety of phones to account for per-platform limitations and bugs.

What’s tricky is after getting approval for distribution of your filter, you still have to somehow tell users to use it. Facebook provides a few options, such as attaching a filter to a promoted Facebook page, but discovery is still a challenge.

As the Camera Effects Platform opened to all, Snap released Lens Studio for both Windows and Mac. This platform allows developers to create World Lens effects for Snapchat. I was really excited about this because a lot of clients were just not very enthusiastic about Facebook’s offering. I kept hearing that the valuable eyeballs are all on Snapchat and not Facebook, despite Snapchat’s flatlining growth. Brands and marketers were champing at the bit to produce content for Snapchat without navigating Snap’s opaque advertising platform.


Snap’s Lens Studio

Lens Studio shares many similarities with Facebook’s AR Studio, including the use of JavaScript as its scripting language. The big difference is that Lens Studio does not expose Snapchat’s facial tracking features. You can only make World Lenses–basically, animated 3D objects placed on a plane recognized by the rear camera.

World Lenses also have much tighter size and poly count restrictions than Facebook Camera Effects. However, Lens Studio supports importing FBX bone animations and morph targets, along with a JavaScript API to play and blend simultaneous animations. Lens Studio also supports Substance Designer for texturing, plus a lot of great material and rendering options that make it easier to build a nice-looking World Lens despite the lower detail budget.

As for distribution, you still have to go through an approval process which includes making sure your lens is performant on low-end devices as well as current phones. Once available you can link your lens to a Snapcode which you can distribute any way you want.

Which should you develop for? Unlike ARCore and ARKit, Facebook and Snapchat have wildly different feature sets. You could start with a Facebook Camera Effect and then produce a World Lens with a subset of its features, using detail-reduced assets.

The easier path may be to port up: start with a simple World Lens, then build a more elaborate Facebook AR filter from the same assets. Given how few people use Facebook’s Stories feature, I feel it may be smarter to target Snapchat first. Once Facebook’s Camera Effects Platform works on Instagram, I’d probably target Facebook first. It really depends on what demographic you’re trying to hit.

App vs. Filters

Should you develop a standalone AR app or a filter inside a social network platform? It really depends on what you’re trying to accomplish. If you want to monetize users, the only option is a standalone ARKit or ARCore app. You are free to add in-app purchases and ads in your experience as you would any other app. Facebook and Snap’s guidelines don’t allow this on their respective platforms. Are you using AR to create branded content? In the case of AR filters, they are usually ads in themselves. If you are trying to get as much reach as possible, a properly marketed and distributed AR filter is a no-brainer. A thorough mobile AR strategy may involve a combination of both native apps and filters–and in the case of Facebook’s Camera Effects Platform, they can even link to each other via REST calls.


How each platform ranks sorted by feature complexity

2018 is going to be an exciting year for smartphone AR. With the explosive growth of AR apps on the App Store and the floodgates opening for filters on social media platforms, you should be including smartphone AR in your mixed reality strategy. Give your users a taste of the real thing before the mixed reality revolution arrives.

VRLA Mixed Reality Easter Egg Hunt: Behind the Scenes

Late last year, John Root of Virtual Reality Los Angeles approached me with a crazy idea: what if we built a giant fake forest, placed it in the middle of the Los Angeles Convention Center, and used HoloLens to allow people to hunt for virtual Easter eggs inside a mixed reality experience? I wasn’t sure exactly how I’d go about creating it, but based on my time building and demoing Ether Wars at VRLA’s previous event, I knew it was possible and people would love it. I was all in.


The project got a late start–a mere matter of weeks before the show. Regardless, everything came together at the right time. Microsoft came on as a partner, donating all the HoloLens devices for the event and bringing in their own developers to help with technical issues, project management, and the logistics of running a large public HoloLens experience.


LESSONS LEARNED

There haven’t been many projects of this kind. Certainly very few people have experience designing real-world mixed reality installations. I figured I’d give a brain dump of what I learned which might be useful for developers attempting similar feats.

DESIGNING FOR REALITY

Before anything could be built, we had to work out the math behind the user flow of the whole experience. People such as Disney Imagineers who design amusement park rides are very familiar with this process. We had to figure out how many people we could fit in the exhibit’s space and how long it would take to get users in and out of the experience. From here we determined how many people could move through the Easter Egg hunt per day, how much staff would be needed to run it, and how many HoloLens devices we’d need in total. We also had to design the set to accommodate potentially large lines that wouldn’t descend into chaos and mayhem by the middle of the day.
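
To give a flavor of it, here’s the kind of back-of-the-envelope math involved. All the numbers below are made up for illustration, not the real VRLA figures.

```csharp
// Back-of-the-envelope throughput math with hypothetical numbers.
class Throughput
{
    static void Main()
    {
        int headsets        = 12;  // devices running at once
        int sessionMinutes  = 5;   // time inside the experience
        int turnoverMinutes = 3;   // fitting, instructions, cleaning
        int hoursOpen       = 8;

        int cyclesPerHour = 60 / (sessionMinutes + turnoverMinutes); // 7 cycles
        int guestsPerDay  = headsets * cyclesPerHour * hoursOpen;    // 12*7*8 = 672

        System.Console.WriteLine($"Maximum daily throughput: {guestsPerDay} guests");
    }
}
```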

PHYSICAL SET DESIGN IS HARD

The VRLA Mixed Reality Easter Egg Hunt involved typical software development issues and the added challenge of building a unique physical set that works properly with mixed reality technology.

The first step was designing the set itself. Mike Murdock of TriHelix designed the set, using a Vive to make sure it fit inside the dimensions set by the booth. This allowed him to not only judge the size, but preview what it would be like to walk through the set before construction began.


Choosing paint colors to match Mike’s virtual set design.

It’s important to get the art director of the set and the art director of the app on the same page–primarily to make sure the colors of the real world work well against the HoloLens’ additive display while still being highly trackable. Unifying the look of the virtual and physical objects is also critical to creating a seamless experience. This means you need to organize the art process far in advance. Nathan Fulton, my art director on the app, made sure the color swatches for the set matched his vision of the virtual objects in the experience.

Lighting is also very important. We spent a lot of time designing the placement and type of lights so the space would be as trackable as possible without having the display overpowered by the environment.


To build the physical set, we turned to Fonco Studios, an experienced Hollywood set and prop design company who constructed an amazing N64-stylized forest out of literally a ton of styrofoam. This set was then sliced up into chunks and transported to the LA Convention Center where it was reassembled.

ANCHORING TO THE REAL WORLD

One of the first challenges when designing the software was to determine how the virtual objects were going to be placed on the set. In the beginning I thought we’d simply drop spatial anchors on the physical location and then share these anchors with each HoloLens. This proved to be impractical for a number of reasons.

Firstly, I’ve had lots of issues sharing spatial anchors across devices. They often appear in the wrong places. Not only that, but anyone who has been to a conference will tell you wifi access is spotty, if available at all. Having to use the Internet to transfer spatial anchors around might not be possible.

Plus, without having the physical set to scan we would have no way to test the final layout of eggs and other objects until we got to the location a day or so before the event.

For most of the app’s development, we placed objects on Mike’s 3D model of the set design inside Unity3D. Then we used AfterNow’s technique of placing three spatial anchors in the corners of the set, saving them, and spawning the game level at the center of these points. This real-world alignment process is a little error prone, but it does work.
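
Roughly, the alignment math looks like this. This is my own reconstruction of the idea, not AfterNow’s actual code, and it assumes the three corner positions come from the saved anchors.

```csharp
using UnityEngine;

// Given three corner points captured during setup (each pinned by its own
// spatial anchor), spawn the level at their center, rotated to match the set.
public static class SetAlignment
{
    public static void PlaceLevel(Transform level,
                                  Vector3 cornerA, Vector3 cornerB, Vector3 cornerC)
    {
        Vector3 center = (cornerA + cornerB + cornerC) / 3f;

        // Use the A->B edge of the set as the level's forward direction,
        // flattened so the level stays upright even if the points aren't
        // perfectly coplanar. Assumes A and B aren't stacked vertically.
        Vector3 forward = cornerB - cornerA;
        forward.y = 0f;

        level.position = center;
        level.rotation = Quaternion.LookRotation(forward.normalized);
    }
}
```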

These anchors are saved locally to the HoloLens so the setup process only has to be done once. Whenever players put on the headset after the app is restarted, they can jump right into the experience–no alignment necessary.
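
The save-once behavior comes from Unity’s 2017-era WorldAnchorStore API for HoloLens. A minimal sketch of the pattern, with the anchor id being an arbitrary name of our choosing:

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;              // WorldAnchor (Unity 2017-era API)
using UnityEngine.XR.WSA.Persistence;  // WorldAnchorStore

// Attach one of these to each corner marker, each with a unique anchorId.
public class CornerAnchor : MonoBehaviour
{
    public string anchorId = "corner_0";

    void Start()
    {
        WorldAnchorStore.GetAsync(store =>
        {
            // Try to restore the anchor saved during setup; if it isn't
            // in the store yet, anchor this object in place and save it.
            WorldAnchor anchor = store.Load(anchorId, gameObject);
            if (anchor == null)
            {
                anchor = gameObject.AddComponent<WorldAnchor>();
                store.Save(anchorId, anchor);
            }
        });
    }
}
```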


The final set, assembled at the Los Angeles Convention Center

We also attempted to match lighting as accurately as possible by taking spherical photos using a RICOH THETA camera at different points inside the set and using them as cubemaps in Unity3D. These cubemaps provided convincing reflections, and can also be used with IBL shaders to help make virtual objects match the real world environment.
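
Wiring a captured cubemap in as the scene’s reflection source is nearly a one-liner in Unity. A sketch, assuming the THETA shot has already been converted into a Cubemap asset:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Swap in a cubemap built from the RICOH THETA captures so reflective
// materials pick up the real set instead of a generic skybox.
public class SetReflections : MonoBehaviour
{
    public Cubemap thetaCapture; // spherical photo converted to a cubemap

    void Start()
    {
        RenderSettings.defaultReflectionMode = DefaultReflectionMode.Custom;
        RenderSettings.customReflection = thetaCapture;
    }
}
```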

TO OCCLUDE OR NOT TO OCCLUDE

The added bonus of a fixed set is that we know the exact shape of the world. This meant we had the option of using a LIDAR scan of the environment as an occlusion mesh instead of the one built internally by HoloLens.

There are lots of advantages to this. Most importantly, the 3D scan can be optimized by reducing the number of polygons and adding detail where the scan missed a few things. The result is occlusion that is both more accurate and cheaper to render.

Mimic3D made an amazing scan of the set once it was finally assembled in the convention center. Our own retopologized mesh had a much lower poly count than the dynamically generated HoloLens one. This highly optimized mesh also let us do a neat “The Matrix”-style grid effect that appears to pour out over the world with a simple shader.
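
The script side of an effect like that can be as simple as animating a radius for the shader to compare against. A sketch: the shader property names here are placeholders for whatever your own shader exposes.

```csharp
using UnityEngine;

// Drives the "pouring grid" reveal: a shader on the scan mesh draws grid
// lines only within _GridRadius of the origin point, so animating the
// radius makes the grid appear to spill out across the real room.
// "_GridRadius" and "_GridOrigin" are hypothetical property names.
public class GridReveal : MonoBehaviour
{
    public Material scanMeshMaterial;
    public float speed = 2f; // meters per second

    float radius;

    void OnEnable()
    {
        radius = 0f;
        scanMeshMaterial.SetVector("_GridOrigin", transform.position);
    }

    void Update()
    {
        radius += speed * Time.deltaTime;
        scanMeshMaterial.SetFloat("_GridRadius", radius);
    }
}
```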


Mimic3D’s amazing LIDAR scan

Perhaps most importantly, with over a dozen HoloLens devices to set up before the show, not having to generate a good occlusion mesh on each headset saved a lot of preparation time.

WHERE DO I LOOK?

The limited FOV of the HoloLens made it very important to guide the user’s gaze to where the action is. In some cases, our assumptions about where users would look and how they would move through the environment didn’t line up with reality. One prime example is the rabbits.

In the back right corner of the forest there is a group of three rabbits that scurry away when you get close. We spent a lot of time tweaking the trigger size, position, and animation of the rabbits to make sure people had plenty of time to notice them. We even had Somatone create a positional audio effect to call attention to this. However, it was far too easy for someone to back into the trigger volume and have the bunnies escape unseen. A lot of participants reported not seeing the rabbits at all–perhaps the most important creatures in the entire experience!

IN CONCLUSION

The lines were huge but everyone seemed to have a smile on their face when leaving the booth. The press loved it, too. I consider this a huge success for all parties involved and a leading example of how Mixed Reality can be used for things far more interesting than enterprise apps.

Building on what we’ve learned at VRLA, I’m totally ready to tackle much more elaborate Mixed Reality entertainment experiences. Since the event, I’ve received a lot of interest in doing this type of project on a larger scale. Who knows what we might build next!

Designing HoloLens Apps For A Small FOV

In my recent VRDC talk I spent a slide talking about limitations of the platform. The most common complaint about HoloLens, and just about any other AR or MR platform, is the small window in which the augmentations appear. This low-FOV issue is a hard physics problem that isn’t going to be solved on a Moore’s Law schedule. Get used to it. We’re going to be stuck with it for a while. (Please, someone prove me wrong!)

It’s not the end of the world. It’s just that developers have to learn how to build applications around this limitation.

GUIDE THE USER

Most VR applications require the user to look around. After all, that’s the whole point of being immersed in a virtual environment. Even if it’s a seated experience, usually the player is encouraged to search the scene for things to look at or interact with.

In mixed reality, the lack of peripheral vision (or anything near it) due to FOV limitations makes visually searching for objects frustrating. A quick scan of the scene won’t catch your eye on something interesting; you have to look for things deliberately.

HoloLens’ HoloToolkit provides a solution to this with the DirectionIndicator class: an arrow attached to the cursor that points in the direction of a targeted object.

Perhaps a more natural version of this is used in Young Conker. The directional indicator is 3D, naturally sliding along and colliding with the environment.
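
To show how little it takes, here’s a stripped-down version of the idea. This is my own minimal take, not the HoloToolkit source.

```csharp
using UnityEngine;

// Minimal version of the gaze direction indicator concept: park an arrow
// in front of the user's head and rotate it to point at an off-screen target.
public class GazeDirectionIndicator : MonoBehaviour
{
    public Transform target;      // the object we want the user to find
    public Transform arrow;       // arrow mesh, modeled pointing along +Z
    public float distance = 2f;   // how far in front of the head to hover

    void LateUpdate()
    {
        Transform head = Camera.main.transform;

        // Hide the arrow once the target is roughly in view (within ~25°).
        Vector3 toTarget = (target.position - head.position).normalized;
        bool inView = Vector3.Dot(head.forward, toTarget) > 0.9f;
        arrow.gameObject.SetActive(!inView);
        if (inView) return;

        arrow.position = head.position + head.forward * distance;
        arrow.rotation = Quaternion.LookRotation(toTarget, head.up);
    }
}
```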

USE AUDIO CUES

Unity makes it incredibly easy to add spatial sound to a HoloLens app. Simply enable the Microsoft HRTF Spatializer plugin in the audio settings and check off “spatialize” on your positional audio sources. This is more than just a technique for immersion–the positional audio is so convincing you can use it to direct the user’s attention anywhere in the environment. If the object is way out of the user’s view, emit a sound from it to encourage the player to look at it.
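
In code, that amounts to a couple of properties on the AudioSource. A minimal sketch, assuming the HRTF spatializer plugin is already selected in the project’s audio settings:

```csharp
using UnityEngine;

// Attach to any object you want the user to find by ear.
public class AttentionBeacon : MonoBehaviour
{
    public AudioClip chime;

    void Start()
    {
        AudioSource src = gameObject.AddComponent<AudioSource>();
        src.clip = chime;
        src.loop = true;
        src.spatialize = true;   // route through the HRTF spatializer plugin
        src.spatialBlend = 1f;   // fully 3D, no 2D bleed
        src.Play();              // the sound now localizes to this object
    }
}
```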

DESIGN ART ACCORDINGLY

Having art break the limited FOV frame is a real problem. To a certain degree, this can’t be solved–get close enough to anything and it will be big enough to go beyond the FOV’s augmentation area.


Ether Wars uses small objects to prevent breaking the frame

This is why I design most HoloLens games around lots of smaller models instead of large game characters or objects. If the thing the user cares about isn’t breaking the frame, they might not notice that the rest of the graphics are getting clipped. Also, Microsoft recommends keeping the near clipping plane a few feet out from the user–so if you can design the game such that the player is never supposed to get close to the holograms, you can prevent most frame-breaking cases.
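
If memory serves, the recommended clip distance was around 0.85 meters, and setting it takes one line:

```csharp
using UnityEngine;

public class ComfortClipPlane : MonoBehaviour
{
    void Start()
    {
        // Roughly 0.85m per Microsoft's comfort guidance (as I recall):
        // clip holograms before they get close enough to break the frame.
        Camera.main.nearClipPlane = 0.85f;
    }
}
```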

CONCLUSION

For AR/MR developers, limited FOV is a fact of life. In enterprise apps where you are focused on a specific task, it’s not so bad. For games, most average players will be put off if they have to wrestle too much with this limitation. Microsoft’s showcase games still play very well with this restriction, and show some creative ways to get around it.

Why I Don’t Care About Your New Mixed Reality Headset

I’m often approached by entrepreneurs in the AR/MR space offering me demos of new hardware.  Competition in this space is fierce. You need three major elements for me to take a new platform seriously.


You Need These Three Things To Have A Successful Mixed Reality Device

The three requirements for any successful AR (or, more specifically, MR) device are a display, computer vision, and an operating system.

Display

This is the first element of an AR/MR wearable, and usually it’s the one thing all hardware companies have. There are a number of different displays out there, but they all seem to share the same limitations: additive translucent graphics, small FOV, and relatively low resolution. Oftentimes devices that claim wider FOVs compromise with even lower-resolution visuals. Both the low and high resolution displays I’ve seen are all additive, so images appear translucent. Some companies claim to have solved these problems; as far as I’ve seen, we’re a long way off from a commercial reality.


Operating System

When I got my HoloLens devkits, the first thing that impressed me is that Microsoft ported the entirety of Windows 10 to Mixed Reality. Up until now, most AR headsets had simple gaze-optimized skins for Android. Windows Holographic lets even traditional 2D applications run in mixed reality as windows floating in space or attached to your walls. It’s all tied to a bulletproof content delivery ecosystem (the Windows Store), so distribution is solved as well.


Your device needs to be more than something you wear to run one specific app. Mixed reality wearables will one day replace your computer, your phone, and just about anything with a screen. That inevitable use case demands a complete Mixed Reality operating system that can run everything from the latest games to a browser and your email client.

Computer Vision

I can’t tell you how many device manufacturers have shown me their new display but “just don’t have the computer vision stuff in.” Sorry, but this is the most important element of mixed reality. Amazing localization, spatialization, tracking, and surface reconstruction are what put HoloLens light years ahead of its nearest competition.

This stuff is hard to do. Computer Vision was formerly an obscure avenue of computer science not many people studied. Now augmented reality has created a war for talent in this sector, with a small (but growing) number of Computer Vision PhDs commanding huge salaries from well funded startups. There are very few companies that have the Computer Vision expertise to make mixed reality work, and this talent is jealously guarded.

[BONUS] Cloud Super-intelligence

The AR headset of the future is a light, comfortable, truly mobile device you wear everywhere. This requires a constant, fast connection to the Internet. HoloLens is Wi-Fi only for now, but LTE support must be on the horizon. Not only is this critical for everyday, everywhere use, but many advanced computer vision functions such as object recognition need cloud-based AI systems to analyze images and video. With the explosion of deep learning and machine learning technology, a fast 5G connection to these services will make Mixed Reality glasses something you never want to leave the house without.

Don’t Waste My Time

A lot of people seem impressed with highly staged demos of half-baked hardware. It’s only when you begin to develop mixed reality apps that you understand what’s really needed to make these platforms successful. As more people become familiar with the technology, demos missing the critical elements listed in this post will impress fewer and fewer of them.

My Week with PSVR

Full disclosure, I’ve had a PSVR devkit for some time now, so this isn’t my first experience with the device. However, this certainly is my first taste of most PSVR launch content. I figured I’d post my impressions after a week with my PSVR launch bundle.

Best Optics In the Business

PSVR does not use Fresnel lenses, so you don’t see the god rays and glare on high-contrast scenes that Vive and Rift both suffer from, which makes PSVR look a lot better than the competition. Many cite the lower resolution of the PSVR display as a problem, but I don’t think the numbers tell the whole story. The screen door effect is barely noticeable; I suspect the way PSVR packs its pixels together (likely its full RGB subpixel layout, versus the PenTile displays in the competition) makes the slightly lower resolution a non-issue. PSVR looks great.

Fully Integrated With Sony’s Ecosystem

The great thing about the platform is they are combining a mature online store and gaming social network with VR. In many cases PSVR is ahead of the competition in community features. When you first don the PSVR headset, you’ll see the standard PlayStation 4 interface hovering in front of you as a giant virtual screen. Thus, all current PSN features are available to you in VR already. You can even click the Share button and stream VR gameplay live. There’s also a pop up menu to manage your friends list, invites, etc. inside any VR experience. The only weird thing is when you get an achievement you hear the sound, but don’t see any overlay telling you what you did.

Tracking Issues

PSVR uses colored LED lights for optical tracking–essentially the same solution Sony created for their PS3 Move controllers in 2010. In fact, the launch bundle comes with what seem to be new, deadstock Move controllers as its hand tracking solution.

Tracking is iffy. Lamps, bright lights, and sunlight streaking through windows can all throw PSVR’s tracking off. I find it works much better at night, with any room lights visible to the camera turned off. I also replaced my original PlayStation 4 camera with the V2 version in the launch bundle, to no avail.

Even more annoying is calibration. Holding the PSVR up in precise positions so that the lights are visible to the camera can be quite a pain. Not only that, but many games require their own calibration involving standing in a place where your head fits inside a camera overlay representing the best position to play in.

The hand controllers are jittery even under the best circumstances. Some games seem to have smoother tracking than others–probably via filtering Move input data. Still, given the price of the bundle, Move is an acceptable solution. Just not ideal.

One advantage to this approach is that PSVR can also track the DualShock 4 via its previously annoying light bar. Having a positionally tracked controller adds an element of immersion previously unseen in non-hand-tracked games.

The Content

Despite the PS4 paling in power compared to, say, a juiced-up Oculus-ready PC, the PSVR launch experiences are second to none. Sony is an old pro at lining up strong titles to launch a new platform, and they have made some great choices here.

Worlds

The amount of free content you get with the Launch Bundle is staggering. In addition to the new VR version of Playroom and a disc filled with free demos, you also get Worlds–Sony London’s brilliant showcase of VR mini games and experiences. The Deep is a perfect beginner’s VR introduction–a lush, underwater experience that rivals anything I’ve seen on Rift or Vive. London Heist is my favorite, combining storytelling and hand-tracked action in what is often compared to a VR Guy Ritchie film.

Arkham VR

This is the single coolest VR experience I’ve ever had. It’s really more like a narrative experience with some light gameplay elements. Some are complaining that this barely qualifies as a game and is way too short for $20, but I disagree. This is the gold standard in VR storytelling–a truly unique experience that a lot of developers can learn from. It combines puzzle solving, story, interactive props, and immersive environments into a VR experience that makes you really feel like the Caped Crusader. This is the game I use to showcase PSVR and nobody has left disappointed.


Battlezone

Battlezone is my other favorite launch title right now, if I can find other people online (a definite problem given the small, but growing PSVR user base). This is a VR update to Atari’s coin-op classic in the form of a co-op multiplayer vehicle shooter. Guide a team of futuristic tank pilots over a randomly generated hexagonal map as you journey on a quest to destroy the enemy base. This game requires great teamwork and voice communication, which makes it all the more immersive. The positionally tracked DualShock 4 adds to the immersion in the cockpit as well.

Rigs

Guerrilla defies all VR conventions here and does everything wrong (including uninterruptible tutorials). I have no problems with it, but it makes almost everyone I know violently ill; apparently I am immune to VR sickness. Rigs is probably unplayable by the vast majority of players even with all the comfort modes turned on. If you want to test your so-called “VR legs,” try this game. If you can manage to play it without puking, you’re in for a great competitive online experience–that is, if you can find other players easily.

Wayward Sky

This game started out last year as a Gear VR launch title called Ikarus, which was pulled from the store shortly after its release. Uber’s small mobile VR demo has now reappeared on PSVR as the expanded and enhanced Wayward Sky–an innovative take on the point-and-click adventure in VR. The first stage is essentially a remixed and remastered version of that short Gear VR demo; once you complete it, the game opens up with many more levels and an all-new story line. This is another gentle introduction to VR, as it doesn’t involve a lot of movement or complicated mechanics. It’s largely a point-and-click puzzle-solving affair, with a few areas that require you to use your hands to manipulate objects.

In Conclusion


My dream VR platform would be PSVR’s optics, Vive’s tracking, and Oculus’ controllers. Until that singularity happens, we’re stuck with all of these different systems. PSVR is incredibly compelling, and the platform I recommend to most people. It’s cheap and surprisingly good. Most of my current favorite VR games are on PSVR right now. I personally don’t find its limitations a problem–but it will be interesting to see how the average gaming public responds. Initial sales are promising, and there is way more high profile VR content on the horizon. Dare I say Sony has won this first round?

How To Demo HoloLens Apps In Public

Last week’s VRLA Summer Expo was the first time the public got a look at my current HoloLens project, Ether Wars. Tons of people lined up to try it, and I must have done well over 100 demos over the two-day event. Since then, I’ve shown it to a variety of developers, executives, and investors, ranging from people with zero experience to those who have used HoloLens quite a bit. Combined with all the demoing done at HoloHacks a few months ago, I’ve picked up a lot of common-sense tips for demoing mixed reality apps. I figured I’d sum up some of my presentation tricks here.

Know Your Space

HoloLens can be a very temperamental device. Although it features the most robust tracking I’ve ever seen with an AR headset, areas with a lot of moving objects (pets, crowds of people), featureless walls, windows, and mirrors can really mess things up. Also, rooms that are too dark or too bright can make the display look not so great.

If you are travelling somewhere to show your app, try to find out ahead of time what the room you’ll be demoing in is like. It might be possible to ask for an alternative room if the space they’ve got you in is inappropriate.

And, how do you know if the space is inappropriate? Scan the room before the demo starts. In the case of Ether Wars, you have to scan the room before you play the game. This scan is saved, so subsequent games don’t have to go through that process. When I demo the game, I scan the room myself to make sure the room works before I let others use it. This not only lets me know if the room works but allows the rest of the users to skip this sometimes lengthy step.

Consider building demo-specific safety features. For instance, Ether Wars needs ceilings to spawn space stations from. For a room with a vaulted ceiling the HoloLens can’t scan, a good safety feature would be one that automatically spawns the bases at a fixed ceiling height for demo purposes.

Teach The Air Tap

Microsoft’s mantra for the HoloLens interface is “Gaze, Gesture, and Voice”–essentially a controller-free interface for all HoloLens apps. Very cool in concept, but I find at least half the people who try the device can’t reliably perform the air tap. It’s a tricky and unnatural gesture. Most people want to reach out and poke the holograms with their finger. It takes quite a bit of explanation to teach users that they must aim with their head and perform that weird air tap motion to click on whatever is highlighted by the cursor.


Teach the user how to perform the air tap before the demo–perhaps by having them actually launch and pin the app on a wall. It might help to put a training exercise in the app itself. For instance, to start Ether Wars you have to gaze and air tap on a button to start the experience. I use this moment to teach the player how to navigate menus and use the air tap.

Worst case scenario, you can stick your arm over the player’s shoulder in view of the HoloLens and perform the air tap yourself if the user just can’t figure it out.

Check The Color Stack

Unlike VR, it’s difficult to see what the user is viewing when demoing a HoloLens app. You can get a live video preview from the Windows Device Portal, but this can affect the speed and resolution of the app, degrading the performance of your demo. One trick I’ve used to figure out where the user is in the demo is to learn what the colors of the stacked display look like on different screens.


Each layer of the display shows different colors

If you look at the side of the HoloLens display you’ll see a stack of colored lights. These colors change depending on what is being shown on the screen. By observing this while people are playing Ether Wars, I’ve learned to figure out what screen people are on based on how the lights look on the side of the device. Now I don’t have to annoyingly ask “what are you seeing right now” during the demo.

None of this is rocket science–just some tips and tricks I’ve learned while demoing HoloLens projects over the past month or so. Let me know if you’ve got any others to add to the list.