Designing HoloLens Apps For A Small FOV

In my recent VRDC talk I spent a slide on the limitations of the platform. The most common complaint about HoloLens, and just about any other AR or MR platform, is the small window in which the augmentations appear. This low FOV is a problem of physics, and not one likely to be solved on a Moore's Law timetable. Get used to it. We're going to be stuck with it for a while. (Please, someone prove me wrong!)

It’s not the end of the world. It’s just that developers have to learn how to build applications around this limitation.

GUIDE THE USER

Most VR applications require the user to look around. After all, that’s the whole point of being immersed in a virtual environment. Even if it’s a seated experience, usually the player is encouraged to search the scene for things to look at or interact with.

In mixed reality, the FOV limitation leaves you with nothing close to peripheral vision, which makes visually searching for objects frustrating. A quick scan of the scene won't catch your eye on something interesting; you have to look for things in the scene deliberately.

HoloLens' HoloToolkit offers one solution with the DirectionIndicator class: an arrow attached to the cursor that points toward a targeted object.

Perhaps a more natural version of this is used in Young Conker. The directional indicator is 3D, naturally sliding along and colliding with the environment.
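If you want to roll your own, the core of the idea is small enough to sketch. Below is a minimal, assumed implementation in Unity C#, not the actual HoloToolkit code: an arrow parented to the camera that only appears while the target sits outside a rough FOV cone.

```csharp
using UnityEngine;

// Minimal sketch of a directional indicator -- not the HoloToolkit
// DirectionIndicator class itself. Attach to an arrow model that is
// a child of the main camera, floating near the gaze cursor.
public class SimpleDirectionIndicator : MonoBehaviour
{
    public Transform target;         // the hologram to point at
    public Renderer arrowRenderer;   // the arrow's mesh renderer
    public float fovHalfAngle = 15f; // assumed half-angle of the usable FOV

    void LateUpdate()
    {
        if (target == null) return;

        Transform cam = Camera.main.transform;
        Vector3 toTarget = target.position - cam.position;

        // Show the arrow only while the target is outside the view cone.
        bool offscreen = Vector3.Angle(cam.forward, toTarget) > fovHalfAngle;
        arrowRenderer.enabled = offscreen;

        if (offscreen)
        {
            // Rotate the arrow so its forward axis points at the target.
            transform.rotation = Quaternion.LookRotation(toTarget);
        }
    }
}
```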

USE AUDIO CUES

Unity makes it incredibly easy to add spatial sound to a HoloLens app. Simply enable the Microsoft HRTF Spatializer plugin in the audio settings and check off “spatialize” on your positional audio sources. This is more than just a technique for immersion–the positional audio is so convincing you can use it to direct the user’s attention anywhere in the environment. If the object is way out of the user’s view, emit a sound from it to encourage the player to look at it.
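As a sketch of the trick (the component and method names here are mine, not from any toolkit):

```csharp
using UnityEngine;

// Hedged sketch: an off-screen hologram emits a spatialized ping so the
// user turns toward it. Assumes the Microsoft HRTF Spatializer is
// selected under Edit > Project Settings > Audio.
[RequireComponent(typeof(AudioSource))]
public class AttentionSound : MonoBehaviour
{
    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        source.spatialize = true;  // route through the spatializer plugin
        source.spatialBlend = 1f;  // fully 3D positional audio
        source.loop = true;
    }

    // Call with 'true' while the object needs the user's attention.
    public void Ping(bool on)
    {
        if (on && !source.isPlaying) source.Play();
        else if (!on && source.isPlaying) source.Stop();
    }
}
```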

DESIGN ART ACCORDINGLY

Having art break the limited FOV frame is a real problem. To a certain degree, this can’t be solved–get close enough to anything and it will be big enough to go beyond the FOV’s augmentation area.

Ether Wars uses small objects to prevent breaking the frame

This is why I design most HoloLens games around lots of smaller models instead of large game characters or objects. If the thing of interest to the user isn't breaking the frame, they might not notice the rest of the graphics getting clipped. Also, Microsoft recommends keeping the near clipping plane a few feet out from the user–so if you can design the game such that the player isn't supposed to get close to the holograms, you can prevent most frame-breaking cases.
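The clipping plane part is a one-line setup. 0.85 meters is the figure I've seen in Microsoft's guidance, but treat the exact number as an assumption:

```csharp
using UnityEngine;

// Sketch: push the near clip plane out so holograms clip less jarringly
// when the user leans in. The 0.85m value is an assumption based on
// Microsoft's published recommendations.
public class HoloCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera.main.nearClipPlane = 0.85f;
    }
}
```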

CONCLUSION

For AR/MR developers, limited FOV is a fact of life. In enterprise apps where you are focused on a specific task, it’s not so bad. For games, most average players will be put off if they have to wrestle too much with this limitation. Microsoft’s showcase games still play very well with this restriction, and show some creative ways to get around it.

The Beginner’s Guide: Dave the Madman Edition

I recently played The Beginner's Guide after buying it during the annual Holiday Steam Sale over the break. It's a quick playthrough, and an interesting way to tell a story within a game. Without giving too much away, the experience reminded me of a similar event from my young adulthood: when I encountered an amazing game developer who created incredible works I couldn't hope to match. I've since forgotten his real name and don't know much about him. But I do have the 4 double-sided floppy disks he sent me of all his games at the time.

Madsoft 1-4, recovered in great condition

This was the early ‘90s–I’d say around 1990-1991. I had made a bunch of Commodore 64 games (often with my late friend Justin Smith) using Shoot ‘Em Up Construction Kit: an early game development tool that let you build neat scrolling shooters without any programming knowledge.

Adventures in Stupidity, one of my SEUCK creations

I used to upload my games to local BBSes in the New England area and wait for responses on the message boards. In the process, I downloaded some games made by a user known by the handle "MADMAN." Some of his games also used the moniker "Dave the Madman." He made seemingly professional-quality games using Garry Kitchen's Game Maker. Not to be confused with YoYo's GameMaker Studio.

Garry Kitchen's Game Maker was an early game development tool published by Activision in 1985. I actually got it for my birthday in 1986, thinking that this was my key to becoming a superstar game designer. The thing is, Game Maker was a full-blown programming language that, strangely, used the joystick to edit. It also included a sprite designer, music editor, and other tools. Everything a budding game developer would need to get started, right?

Although I did make a few simple games in Game Maker, its complexity was beyond my grasp at the time. Which is why Madman’s creations blew me away. They were so polished! He had developed so many completely different types of games! They all had cool graphics, animation, music, and effects I couldn’t figure out how to duplicate! My favorite was Space Rage: a sprawling, multi-screen space adventure that I simply could not comprehend. I had so many questions about how these games were made!

SPACE RAGE!

We messaged each other on a local BBS. I blathered about how much of a fan I was of his work and he said he liked my games, too. I figured he was just being kind. After all, this was a MASTER saying this! We eventually exchanged phone numbers.

I have vague memories of talking to him on the phone, asking how he accomplished such amazing feats with Game Maker. I think he was a little older than me, but many of his games had a 1987 copyright date; considering I was probably about the same age at the time of our calls as he'd been in 1987, this made me feel quite inadequate.

As I recall, Madman was humble and didn’t have many aspirations beyond distributing his little games on BBSes. He seemed like a hobbyist that figured out Game Maker and really liked making games with it–nothing more, nothing less.

Fat Cat probably has the best animation of them all

After our call, he mailed me a complete collection of his games. A few years ago I found these floppy disks and copied them to my Mac using a 1541 transfer cable. The disks bear his handwriting, labeled “Madsoft” 1 – 4. I was able to rescue all of the disks, converting them to d64 format.

Playing through his creations was a real trip down memory lane. The most shocking thing I discovered is on the 2nd side of the 4th disk. His Archon-like game, Eliminators, features the text "Distributed by Atomic Revolution" on the bottom of the title screen. Atomic Revolution was a game 'company' I briefly formed with childhood friend Cliff Bleszinski around 1990 or so. It was a merger of sorts between my label, "Atomic Games," and Cliff's, "Revolution Games." (The story about the C64 game he made in my parents' basement is a whole other post!)

An Atomic Revolution production?

I must have discussed handling the distribution of Eliminators with Dave: uploading and promoting his awesome game all over the local BBS scene and sending it to mail-order shareware catalogs. At least that's my best guess–I really have no recollection of how closely we worked together. I must have done a terrible job, since this game was almost completely lost to the mists of time.

I think we talked about meeting up and making a game together–but I didn’t even have my learner’s permit yet. On-line communication tools were primitive if they existed at all. We never really collaborated. I wonder what happened to “Dave the Madman” and his “Madsoft” empire? Is he even still alive? Did he go on to become a game developer, or at least a software engineer? Maybe he’ll somehow see this post and we’ll figure out the answer to this mystery!

I remember he was most proud of his Ataxx homage

Until then, I'll add the disk images of Madsoft 1-4 to this post. Check the games out and let me know what you think. I've also put up some screenshots and videos of his various games–but I'm having trouble finding a truly accurate C64 emulator for OS X. If anyone has any suggestions, let me know!

Here’s the link to the zip file. Check these games out for yourself!

 

The Basics of Hand Tracked VR Input Design

Ever since my revelation at Oculus Connect I’ve been working on a project using hand tracking and VR. For now, it’s using my recently acquired Vive devkit. However, I’ve been researching design techniques for PSVR and Oculus Touch to keep the experience portable across many different hand tracking input schemes. Hand tracking has presented a few new problems to solve, similar to my initial adventures in head tracking interfaces.

The Vive's hand controller

Look Ma, No Hands!

The first problem I came across when designing an application that works on both Vive and Oculus Touch is the representation of your hands in VR. With Oculus Touch, most applications feature a pair of “ghost hands” that mimic the current pose of your hands and fingers. Since Oculus’ controllers can track your thumb and first two fingers, and presumably the rest are gripped around the handle, these ghost hands tend to accurately represent what your hands are doing in real life.

Oculus Touch controller

This metaphor breaks down with Vive as it doesn’t track your hands, but the position of the rod-like controllers you are holding. Vive games I’ve tried that show your hands end up feeling like waving around hands on a stick–there’s a definite disconnect between the visual of your hands in VR and where your brain thinks they are in real life. PSVR has this problem as well, as the Move controllers used with the current devkit are similar to Vive’s controllers.

You can alleviate this somewhat. Most users tend to grip Move and Vive controllers in the same natural way, so you can model and pose the "hand on a stick" to match that most likely grip. This makes static hands in VR more convincing.

In any case, you have a few problems when you grab an object.

For Oculus, the act of grabbing is somewhat natural–you can clench your first two fingers and thumb into a “grab” type motion to pick something up. In the case of Bullet Train, this is how you pick up guns. The translucent representation of your hands means you can still see your hand pose and the gripped object at the same time. There’s not much to think about other than where you attach the held object to the hand model.

It also helps that in Bullet Train the objects you can grab have obvious handles and holding points. You can pose the hand to match the most likely hand position on a grabbed object without breaking immersion.

With Vive and PSVR you have a problem if you are using the “hand on a stick” technique. When you “grab” a virtual object by pressing the trigger, how do you show the hand holding something? It seems like the best answer is, you don’t! Check this video of Uber Entertainment’s awesome Wayward Sky PSVR demo:

Notice anything? When you grab something, the hand disappears. All you can see is the held object floating around in front of you.

This is a great solution for holding arbitrarily shaped items because you don't have to create a potentially infinite number of hand-grip animations. Because the user isn't really grabbing anything and is instead clicking a trigger on a controller, there is no "real" grip position for the hand anyway. You also don't have the problem of parts of the hand intersecting with the held object.
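A minimal sketch of the technique in Unity C#. The names are mine, and the controller wiring (what calls Grab and Release, and when) is left to whatever SDK you're using:

```csharp
using UnityEngine;

// Sketch of the "hide the hand while holding" technique. Attach to the
// tracked controller transform; your input code calls Grab()/Release().
public class GrabHand : MonoBehaviour
{
    public Renderer handRenderer;   // the visible hand model
    private Transform heldObject;

    public void Grab(Transform obj)
    {
        heldObject = obj;
        heldObject.SetParent(transform, true); // follow the controller
        handRenderer.enabled = false;          // the brain fills in the rest
    }

    public void Release()
    {
        if (heldObject == null) return;
        heldObject.SetParent(null, true);
        heldObject = null;
        handRenderer.enabled = true;
    }
}
```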

This isn’t a new technique. In fact, one of the earliest Vive demos, Job Simulator, does the exact same thing. Your brain fills in the gaps and it feels so natural that I just never noticed it!

Virtual Objects, Real Boundaries

The next problem I encountered: your real hand can pass through virtual objects, but the objects you hold can't. For instance, you can be holding an object and physically move your real, tracked hand through a virtual wall. The held object, bound by the engine's physics simulation, will hit the wall while your hand continues to drag it through. Chaos erupts!

You can turn off collisions while an object is held, but what fun is that? You want to be able to knock things over and otherwise interact with the world while holding stuff. Plus, what happens when you let go of an object while inside a collision volume?

What I ended up doing is making the object detach, or fall out of your virtual hand, as soon as it hits something else. You can tweak this by making collisions with smaller, non-static objects less likely to detach the held object since they will be pushed around by your hand.
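Roughly, the rule looks like this, reusing the hypothetical GrabHand from the earlier sketch. The thresholds are pure tuning assumptions, and it presumes the held object keeps a non-kinematic Rigidbody while held so collision callbacks still fire:

```csharp
using UnityEngine;

// Sketch: drop a held object when it hits something solid, but let it
// shove small dynamic props out of the way instead of detaching.
[RequireComponent(typeof(Rigidbody))]
public class HeldObject : MonoBehaviour
{
    public GrabHand hand;               // the hand currently holding this
    public float impulseThreshold = 2f; // how hard a hit forces a drop
    public float lightMass = 1f;        // props lighter than this get pushed

    void OnCollisionEnter(Collision collision)
    {
        Rigidbody other = collision.rigidbody;

        // Small, movable objects get knocked around; they don't trigger a drop.
        if (other != null && !other.isKinematic && other.mass < lightMass)
            return;

        // Walls, floors, and heavy objects knock the item out of your hand.
        if (hand != null && collision.impulse.magnitude > impulseThreshold)
            hand.Release();
    }
}
```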

For most VR developers, these are the first two things you encounter when designing an experience for hand-tracked VR systems. It seems Oculus Touch makes a lot of these problems go away, but we've just scratched the surface of the issues that need to be solved when your real hands interact with a virtual world.

My Week With Project Tango

A few weeks back I got into Google’s exclusive Project Tango developers program. I’ve had a Tango tablet for about a week and have been experimenting with the available apps and Unity3D SDK.

Project Tango uses Movidius' Myriad 1 vision processor chip (or "VPU"), paired with a depth camera not unlike the original Kinect for the Xbox 360. Except instead of being a giant hideous block, it's small enough to fit in a phone or tablet.

I’m excited about Tango because it’s an important step in solving many of the problems I have with current Augmented Reality technology. What issues can Tango solve?

POSITIONAL TRACKING

First, the Tango tablet can determine its own pose. Sure, pretty much every mobile device out there can detect its precise orientation by fusing compass and gyro data. But with Tango's array of sensors, the Myriad 1 processor can also detect position and translation: you can walk around with the tablet and it knows how far and where you've moved. This makes SLAM algorithms much easier to develop and more precise than purely optical solutions.

Another problem with AR as it exists now is that there's no way to know whether you or the image target moved. Rendering-wise, there's no difference. But this poses a problem for game physics. If you smash your head (while wearing AR glasses) into a virtual box, the box should go flying. If the box is thrown at you, it should bounce off your head–big distinction!

Pose and position tracking has the potential to factor out the user’s movement and determine the motion of both the observer and the objects that are being tracked. This can then be fed into a game engine’s physics system to get accurate physics interactions between the observer and virtual objects.

OCCLUDING VIRTUAL CHARACTERS WITH THE REAL WORLD

Anyway, that's kind of an esoteric problem. The biggest issue with AR is that most solutions can only overlay graphics on top of a scene. As you can see in my Ether Drift project, the characters appear on top of specially designed trading cards. However, wave your hand in front of the characters and they will still draw on top of everything.

Ether Drift uses Vuforia to superimpose virtual characters on top of trading cards.

With Tango, it is possible to reconstruct the 3D geometry of your surroundings using point cloud data received from the depth camera. Matterport already has an impressive demo of this running on the Tango. It allows the user to scan an area with the tablet (very slowly) and it will build a textured mesh out of what it sees. When meshing is turned off the tablet can detect precisely where it is in the saved environment mesh.

This geometry can possibly be used in Unity3D as a mesh collider which is also rendered to the depth buffer of the scene’s camera while displaying the tablet camera’s video feed. This means superimposed augmented reality characters can accurately collide with the static environment, as well as be occluded by real world objects. Characters can now not only appear on top of your table, but behind it–obscured by a chair leg.
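In Unity terms, the plumbing might look like the sketch below. "Custom/DepthMask" is an assumed shader name for a pass that writes depth but no color (ColorMask 0 in the opaque queue):

```csharp
using UnityEngine;

// Sketch: turn a scanned environment mesh into both a collider and an
// occluder. Real-world geometry then hides holograms behind it while
// the camera's video feed still shows through.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class EnvironmentOccluder : MonoBehaviour
{
    void Start()
    {
        // Let virtual objects collide with the reconstructed world.
        MeshCollider meshCollider = gameObject.AddComponent<MeshCollider>();
        meshCollider.sharedMesh = GetComponent<MeshFilter>().sharedMesh;

        // Render depth only, so holograms are occluded but no color is drawn.
        GetComponent<MeshRenderer>().material =
            new Material(Shader.Find("Custom/DepthMask"));
    }
}
```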

ENVIRONMENTAL LIGHTING

Finally, this solves the challenge of how to properly light AR objects. Most AR apps assume there’s a light source on the ceiling and place a directional light pointing down. With a mesh built from local point cloud data, you can generate a panoramic render of where the observer is standing in the real world. This image can be used as a cube map for Image-based lighting systems like Marmoset Skyshop. This produces accurate lighting on 3D objects which when combined with environmental occlusion makes this truly a next generation AR experience.
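The cubemap half of this is already built into Unity. A hedged sketch, where putting the scanned mesh on an "Environment" layer is my own assumption:

```csharp
using UnityEngine;

// Sketch: render the reconstructed environment into a cubemap at the
// observer's position, then hand it to an image-based lighting system.
public class EnvironmentProbe : MonoBehaviour
{
    public Cubemap environmentMap; // pre-created cubemap asset to fill

    public void Capture(Vector3 observerPosition)
    {
        GameObject rig = new GameObject("ProbeCamera");
        rig.transform.position = observerPosition;

        Camera cam = rig.AddComponent<Camera>();
        cam.cullingMask = 1 << LayerMask.NameToLayer("Environment");
        cam.RenderToCubemap(environmentMap); // built-in Unity API

        Destroy(rig);
        // Assign environmentMap to your IBL material (e.g., Skyshop) here.
    }
}
```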

A QUICK TEST

The first thing I did with the Unity SDK is drop the Tango camera in a Camera Birds scene. One of the most common requests for Camera Birds was to be able to walk through the forest instead of just rotating in place. It took no programming at all for me to make this happen with Tango.

This technology still has a long way to go–it has to become faster and more precise. Luckily, Movidius has already produced the Myriad 2, which is reportedly 3-5X faster and 20X more power efficient than the chip currently in the Tango prototypes. Vision Processing technology is a supremely nerdy topic–after all it’s literally rocket science. But it has far reaching implications for wearable platforms.

Oculus Rift World Space Cursors for World Space Canvases in Unity 4.6

Unity 4.6 is here! (Well, in public beta form). Finally–the GUI that I’ve waited YEARS for is in my hands. Just in time, too. I’ve just started building the GUI for my latest Oculus Rift project.

The new GUI in action from Unity’s own demo.

One of the trickiest things to do in VR is a GUI. It seems easy at first but many lessons learned from decades of designing for the web, apps, and general 2D interfaces have to be totally reinvented. Given we don’t know what the standard controls may be for the final kit, many VR interfaces at least partially use your head as a mouse. This usually means having a 3D cursor floating around in world space which bumps into or traces through GUI objects.

Unity 4.6’s GUI features the World Space Canvas–which helps greatly. You can design beautiful, fluid 2D interfaces that exist on a plane in the game world making it much more comfortable to view in VR. However, by default Unity’s new GUI assumes you’re using a mouse, keyboard, or gamepad as an input device. How do you get this GUI to work with your own custom world-space VR cursor?

The answer is Input Modules. However, in the current beta these are mostly undocumented. Luckily, Stramit at Unity has put up the source to many of the new GUI components as part of Unity's announced open source policy. Using this code, I managed to write a short VRInputModule class that takes the result of a trace from my world-space VR cursor and feeds it into the GUI. The code is here. Add this behavior to the EventSystem object, alongside the default ones.

In my current project, I have a 3D crosshair object that floats around the world, following the user’s view direction. The code that manages this object performs a trace, seeing if it hit anything in the UI layer. I added box colliders to the buttons in my World Space Canvas. Whenever the cursor trace hits one of these objects, I call SetTargetObject in the VRInputModule and pass it the object the trace hit. VRInputModule does the rest.

Note that the Process function polls my own input code to see if a select button has been hit–and if so, it executes the Submit action on that Button. I haven't hooked up any event callbacks to my Buttons yet, but visually the GUI is responding to events (highlighting, clicking, etc.).
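Roughly, the class looks like this. It's a simplified sketch from memory rather than the exact code at the link, with Input.GetButtonDown standing in for my own input polling:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Simplified sketch of the VRInputModule idea. The cursor code calls
// SetTargetObject() with whatever UI object its trace hit this frame.
public class VRInputModule : BaseInputModule
{
    private GameObject targetObject;

    public void SetTargetObject(GameObject target)
    {
        targetObject = target;
    }

    public override void Process()
    {
        if (targetObject == null)
            return;

        // "Fire1" stands in for your own select-button polling.
        if (Input.GetButtonDown("Fire1"))
        {
            BaseEventData data = GetBaseEventData();
            ExecuteEvents.Execute(targetObject, data, ExecuteEvents.submitHandler);
        }
    }
}
```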

It’s quick and dirty, but this should give you a good start in building VR interfaces using Unity’s new GUI.

Towerfall: The Re-Return of Social Gaming

Social gaming was hot. Then it 'died.' And now it's hot again? The fact is, video games have always been social. In the earliest era of computer games there weren't enough CPU cycles (or CPUs at all!) for AI. Players had to move everything themselves–Steve Russell's Spacewar being the earliest example. But just look at classic coin-ops like Pong, Warlords, Sprint, etc. Same-screen multiplayer was just how things were done. Arcades in the '80s weren't solely the domain of nerds–a broad spectrum of people showed up and played games together. Imagine that!

Local multiplayer ruled well into the '90s. Games like GoldenEye, Mario Party, and Bomberman ensured there was always something to do when you had people over. Yet, once Internet multiplayer hit in the early '00s, console games became strangely anti-social. Today, when someone comes over to my house and wants to play a game with me–well, it's complicated. There really aren't many games on the market that people can play together.

That's why Towerfall Ascension is so interesting to me. At first I thought it was yet another pixel-art indie game over-promoted by Ouya due to a lack of content. After playing it with others, its significance dawned on me. Finally, there's something to play with other people! It had been so long since I'd had a local multiplayer experience that it took actually playing it for me to recognize one fact: the local multiplayer brawler may very well be where the MOBA was when DOTA was merely a Warcraft III mod.

At GDC I noticed the beginning of this trend.  There were a few Towerfall clones already in progress or on the market.  In fact, some similar games even shortly preceded Towerfall.  Not to mention Towerfall’s release on the PS4 and Steam has been highly successful.  I really think a new (old) genre is born.

 

Unity3D vs. Unreal 4 vs. Crytek: GDC 2014 Engine Wars

GDC 2014 is over, and one thing is clear:  The engine wars are ON!

For at least a few years, Unity has clearly dominated the game engine field.  Starting with browser and mobile games, then gobbling up the entire ecosystem Innovator’s Dilemma style, Unity has become the engine of choice for startups, mobile game companies, and downloadable console titles.

Until now, Unreal seemed unfazed. The creation of an entire generation of studios built on Unity technology seemed to pass Epic by completely as Unreal continued to be licensed for high fees and revenue share by AAA studios cranking out $50 million blockbusters.

Lately, the AAA market has been contracting–leaving only a handful of high-budget tent pole games in development every year.  Many of those mega studios have started to use their own internal engine tech, avoiding Epic’s licensing fees altogether.  Surely this trend was a big wakeup call.

This year Epic strikes back with a new business model aimed at the small mammals scurrying beneath the feet of the AAA dinosaurs. Offering Unreal 4 on desktop and mobile platforms for a mere $19 a month and a 5% revenue cut seems like a breakthrough, but it really isn't.

One of Unity's biggest obstacles for new teams is its $1,500 per-seat, per-platform license fee. When you need to buy 20 licenses of Unity for 3 platforms, things get costly. Unity's monthly plan can lower initial costs, though over time it can be far more expensive than paying for the licenses up front. Even so, when you add up the monthly costs for every platform subscription, Unity is still a better deal than Unreal.

Giving up 5% of your revenue to Epic when profit margins are razor-thin is a non-starter for me. Unreal's AAA feature set produces results Unity can't quite match, even with Unity 5's upgrades–but that 5% revenue cut still makes it an unattractive choice for me.

Epic is also aping Unity’s Asset Store with their Unreal Marketplace.  This is absolutely critical.  The Asset Store is Unity’s trojan horse–allowing developers to add to the engine’s functionality as well as provide pre-made graphics and other items invaluable for rapid prototyping or full production.  While Unreal’s Marketplace is starting out rather empty, this is a big move for the survival of the engine.

Unreal 4 throws a lot of tried-and-true Unreal technologies out the window, starting with UnrealScript. The reason Unreal comes with its source is that you now have to write your game code in native C++, not a scripting language. The new Blueprints feature is intended to partially replace UnrealScript for designers, but this is completely new territory. Unreal advertises full source as a benefit over Unity, but source-level access in Unity is almost always unnecessary. That said, now that Unreal 4's source is on GitHub, the community can patch engine bugs before Epic does; Unity developers have to wait for Unity to ship fixes itself.

Unreal 4 is so radically different from previous versions, that a lot of Unreal developers may have very good reasons for escaping to Unity or other competing engines.  For some, learning Unreal 4’s new features may not be any easier than switching to a new engine altogether.

Oh, and Crytek is basically giving their stuff away. At $10 a month and no revenue share, I'm not sure why they're charging for this at all; that can't possibly cover even the marketing costs. I'm not very familiar with Crytek's tech, but my biggest issue with the current offering is that CryEngine for mobile is a completely different engine. The mobile engine Crytek built their iOS games with is not yet publicly available to developers.

Which brings me to the latest version of Unity. I'm sure it's getting harder to come up with new stuff that justifies a point release, and I need almost none of the features announced in Unity 5. But that's almost irrelevant: Unity has won the war for developers. Which is why Unity is moving on to the next problem: making money for developers.

Unity Cloud is Unity’s new service that is starting as a referral network for Unity games.  Developers can trade traffic between games within a huge network of Unity apps on both Android and iOS.  Unity’s purchase of Applifier shows they are dead serious about solving monetization and discovery–two of the biggest problems in mobile right now.

While other engines are still focused on surpassing Unity's features or business model, Unity has moved into an entirely different space. Ad networks and app-traffic services may start to worry that what happened to Epic and Crytek is about to happen to them.

Anyone who reads this blog knows I’m a huge Unity fanboy.  But having one insanely dominant engine is not healthy for anyone.  I’m glad to see the other engine providers finally make a move.  I still don’t think any of them have quite got it right yet.

Oh, and in other news: YoYo Games' GameMaker announcement at GDC, as well as some more recent examples of its capabilities, makes me wonder why I even bothered to get a computer science degree in the first place!