Oculus Rift World Space Cursors for World Space Canvases in Unity 4.6

Unity 4.6 is here! (Well, in public beta form). Finally–the GUI that I’ve waited YEARS for is in my hands. Just in time, too. I’ve just started building the GUI for my latest Oculus Rift project.


The new GUI in action from Unity’s own demo.

One of the trickiest things to do in VR is a GUI. It seems easy at first, but many lessons learned from decades of designing for the web, apps, and general 2D interfaces have to be totally reinvented. Given that we don’t know what the standard controls may be for the final kit, many VR interfaces at least partially use your head as a mouse. This usually means having a 3D cursor floating around in world space which bumps into or traces through GUI objects.

Unity 4.6’s GUI features the World Space Canvas–which helps greatly. You can design beautiful, fluid 2D interfaces that exist on a plane in the game world making it much more comfortable to view in VR. However, by default Unity’s new GUI assumes you’re using a mouse, keyboard, or gamepad as an input device. How do you get this GUI to work with your own custom world-space VR cursor?

The answer is Input Modules. However, in the current beta these are mostly undocumented. Luckily, Stramit at Unity has put up the source to many of the new GUI components as part of Unity’s announced open source policy. Using this code, I managed to write a short VRInputModule class that takes the result of a trace from my world space VR cursor and feeds it into the GUI. The code is here. Add this behavior to the EventSystem object alongside the default input modules.

In my current project, I have a 3D crosshair object that floats around the world, following the user’s view direction. The code that manages this object performs a trace, seeing if it hit anything in the UI layer. I added box colliders to the buttons in my World Space Canvas. Whenever the cursor trace hits one of these objects, I call SetTargetObject in the VRInputModule and pass it the object the trace hit. VRInputModule does the rest.
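The cursor-side trace amounts to a per-frame ray cast against the UI layer. Here’s a minimal sketch of how that might look (the class and field names here are my own inventions; only SetTargetObject comes from the actual project):

```csharp
using UnityEngine;

// Hypothetical sketch of the cursor trace described above.
public class VRCursor : MonoBehaviour
{
    public VRInputModule inputModule;  // the custom module on the EventSystem
    public float maxDistance = 10f;

    void Update()
    {
        // Trace along the user's view direction.
        Ray gaze = new Ray(transform.position, transform.forward);
        RaycastHit hit;
        int uiLayerMask = 1 << LayerMask.NameToLayer("UI");

        if (Physics.Raycast(gaze, out hit, maxDistance, uiLayerMask))
        {
            // Tell the input module which widget sits under the gaze.
            inputModule.SetTargetObject(hit.collider.gameObject);
        }
        else
        {
            inputModule.SetTargetObject(null);
        }
    }
}
```

This assumes the Canvas buttons have box colliders on a layer named “UI”, as described above.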

Note that the Process function polls my own input code to see if a select button has been hit–and if so, it executes the Submit action on that Button. I haven’t hooked up any event callbacks to my Buttons yet–but visually the GUI is responding to events (highlighting, clicking, etc.).
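For reference, the polling logic might look roughly like this–a hedged sketch based on the description above, not a copy of the linked source (the “Select” button name is an assumption):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Rough sketch of a VR input module. The real class linked above is built
// from Unity's BaseInputModule source; this only shows the general shape.
public class VRInputModule : BaseInputModule
{
    private GameObject targetObject;

    // Called by the cursor code whenever the gaze trace hits a UI collider.
    public void SetTargetObject(GameObject target)
    {
        targetObject = target;
    }

    public override void Process()
    {
        if (targetObject == null)
            return;

        // Poll whatever counts as the 'select' button in your input scheme.
        if (Input.GetButtonDown("Select"))
        {
            BaseEventData data = GetBaseEventData();
            // Fire the Submit action on the gazed-at Button.
            ExecuteEvents.Execute(targetObject, data, ExecuteEvents.submitHandler);
        }
    }
}
```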

It’s quick and dirty, but this should give you a good start in building VR interfaces using Unity’s new GUI.

VR in 2014 = Mobile Games in 2002?

The first VRLA Meetup last week was awesome.  The performance capture studio at Digital Domain in Marina Del Rey hosted a series of impressive demos as well as live presentations on the current state and future of VR applications.  The venue could only hold 100 people, but 300 registered.  Mobs of interested VR consumers, developers, and producers had to be turned away at the door.

VRLA winding down. (Photo via John Root)


After this event, it struck me that VR in 2014 is reminiscent of mobile in the early 2000s.  Back in 2002 I attended the first GDC Mobile Gaming Summit.  It was at a jam-packed lecture hall in San Jose where presenters demoed the latest in technology and gave their thoughts on where the industry was heading.

At that point, mobile phone hardware was clunky and primitive.  Most phones were still sporting 80×50 monochrome screens with maybe 100k of RAM available for programs to run.  Even if you were ‘lucky’ enough to have one of these devices, it was nearly impossible to figure out how to download games.

In 2002 almost nobody knew how to monetize mobile games.  The hardware could barely run games anyway.  Yet, these people knew it was going to be a big deal.  The room was filled with excitement and anything could happen.

Since then, mobile gaming has created a huge new audience for games that has disrupted the traditional game industry, forcing a shift in how console games are designed and delivered.  Now mobile gaming is obvious, but back in 2002 there were many naysayers–despite the fact that in Japan iMode had been successfully delivering mobile games since the late ‘90s.

To me, VR in its current state feels the same way.  The hardware is huge and clumsy.  There is some precedent for VR applications stretching way back to the 1990s with Virtuality and Battletech Centers.  And there’s a lot of consumer interest–evidenced by all the successful VR and AR hardware kickstarters in addition to the attendance of VRLA this month.

The top question on everyone’s mind is “how do I make money in VR?”  This was the same question asked by many about mobile in 2002.  Back then, the path was more obvious.  Qualcomm’s BREW and Japan’s iMode already had established billing models for mobile content.  Right now, it’s unknown who will pay for VR experiences and what form they will take. A lot of this is a hardware question. Nobody really knows what the iPhone of wearable gaming will be like–but when it arrives, it will be revolutionary.

These definitely are uncertain and exciting times for this new medium–which makes it much more fun to develop for than established platforms.

Towerfall: The Re-Return of Social Gaming

Social gaming was hot.  Then it ‘died’.  And now it’s hot?  The fact is, video games have always been social.  In the earliest era of computer games there weren’t enough CPU cycles (or CPUs at all!) for AI.  Players had to move everything themselves–Steve Russell’s Spacewar being the earliest example.  But just look at classic coin-ops like Pong, Warlords, Sprint, etc.  Same-screen multiplayer was just how things were done.  Arcades in the ‘80s weren’t solely the domain of nerds–a broad spectrum of people showed up and played games together.  Imagine that!

Towerfall

Local multiplayer ruled well into the ‘90s.  Games like GoldenEye, Mario Party, and Bomberman ensured there was always something to do when you had people over at your place.  Yet, once Internet multiplayer hit in the early ‘00s, console games became strangely anti-social.  Today when someone comes over to my house and wants to play a game with me–well, it’s complicated.  There really aren’t many games on the market that people can play together.

That’s why Towerfall Ascension is so interesting to me.  At first I thought it was yet another pixel-art indie game over-promoted by Ouya due to a lack of content.  After playing it with others, its significance dawned on me.  Finally there’s something to play with other people!  It had been so long since I’d had a local multiplayer experience that it took actually playing it for me to recognize one fact:  the local multiplayer brawler may very well be where the MOBA was when DOTA was merely a Warcraft III mod.

At GDC I noticed the beginning of this trend.  There were a few Towerfall clones already in progress or on the market.  In fact, some similar games even shortly preceded Towerfall.  Not to mention Towerfall’s release on the PS4 and Steam has been highly successful.  I really think a new (old) genre has been born.

 

Unity3D vs. Unreal 4 vs. Crytek: GDC 2014 Engine Wars

GDC 2014 is over, and one thing is clear:  The engine wars are ON!

Morpheus

For at least a few years, Unity has clearly dominated the game engine field.  Starting with browser and mobile games, then gobbling up the entire ecosystem Innovator’s Dilemma style, Unity has become the engine of choice for startups, mobile game companies, and downloadable console titles.

Until now, Unreal seemed unfazed.  The creation of an entire generation of studios based on Unity technology seemed to pass Epic by completely as Unreal continued to be licensed out for high fees and revenue share to AAA studios cranking out $50 million blockbusters.

Lately, the AAA market has been contracting–leaving only a handful of high-budget tent pole games in development every year.  Many of those mega studios have started to use their own internal engine tech, avoiding Epic’s licensing fees altogether.  Surely this trend was a big wakeup call.

This year Epic strikes back with a new business model aimed at the small mammals scurrying under the feet of the AAA dinosaurs.  Offering Unreal 4 on desktop and mobile platforms for a mere $19 a month plus a 5% revenue cut seems like a breakthrough, but it really isn’t.

One of Unity’s biggest obstacles for new teams is its $1500 per-seat platform fee.  When you need to buy 20 licenses of Unity for 3 platforms, things get costly.  Unity’s monthly plan can lower initial costs, though over time it can work out far more expensive than paying for the license up front.  Even so, when you add up all the monthly costs for each platform license subscription, it’s still a better deal than Unreal.

Giving up 5% of your revenue to Epic when profit margins are razor-thin is a non-starter for me.  Unreal’s AAA feature set produces unparalleled results, even next to Unity 5’s upgrades, but that revenue cut still makes it an unattractive choice.

Epic is also aping Unity’s Asset Store with their Unreal Marketplace.  This is absolutely critical.  The Asset Store is Unity’s trojan horse–allowing developers to add to the engine’s functionality as well as provide pre-made graphics and other items invaluable for rapid prototyping or full production.  While Unreal’s Marketplace is starting out rather empty, this is a big move for the survival of the engine.

Unreal 4 throws a lot of tried and true Unreal technologies out the window, starting with UnrealScript.  The reason Unreal comes with the source is that you have to write your game code in native C++, not a scripting language.  The new Blueprints feature is intended to somewhat replace UnrealScript for designers, but this is completely new territory.  Unreal advertises full source as a benefit over Unity, but source-level access for Unity is almost always unnecessary.  Still, now that the Unreal 4 source is on GitHub, the community can patch bugs in the engine before Epic does.  Unity developers have to wait for Unity to ship fixes themselves.

Unreal 4 is so radically different from previous versions that a lot of Unreal developers may have very good reasons for escaping to Unity or other competing engines.  For some, learning Unreal 4’s new features may not be any easier than switching to a new engine altogether.

Oh, and Crytek is basically giving their stuff away.  At $10 a month and no revenue share, I’m not sure why they are charging for this at all.  That can’t possibly cover even the marketing costs.  I’m not very familiar with Crytek, but my biggest issue with the current offering is that Crytek for mobile is a completely different engine.  The mobile engine Crytek built their iOS games with is not yet publicly available to developers.

Which brings me to the latest version of Unity.  I’m sure it’s getting harder to come up with new stuff that justifies a point release.  Still, I need almost none of the features announced in Unity 5.  That hardly matters, though–Unity has already won the war for developers.  Which is why Unity is moving on to the next problem:  making money for developers.

Unity Cloud is Unity’s new service that is starting as a referral network for Unity games.  Developers can trade traffic between games within a huge network of Unity apps on both Android and iOS.  Unity’s purchase of Applifier shows they are dead serious about solving monetization and discovery–two of the biggest problems in mobile right now.

While other engines are still focused on surpassing Unity’s features or business model, Unity has moved into an entirely different space.  Ad networks and app traffic services may start to worry that what happened to Epic and Crytek is about to happen to them.

Anyone who reads this blog knows I’m a huge Unity fanboy.  But having one insanely dominant engine is not healthy for anyone.  I’m glad to see the other engine providers finally make a move.  I still don’t think any of them have quite got it right yet.

Oh–and in other news, YoYo Games’ GameMaker announcement at GDC, as well as some more recent examples of its capabilities, makes me wonder why I even bothered to get a computer science degree in the first place!

Why Consoles Didn’t Die

Yeah, I was wrong. But hey, so were a lot of people. The PS4 barrels ahead as the fastest-selling console ever. Microsoft is making a lot of similar but highly qualified statements about the Xbox One, which leads me to believe it’s lagging behind significantly. Still, the recent European price cut and upcoming tent pole releases may perk things up.


Regardless, most console doom predictions haven’t come true. This is because Microsoft, and primarily Sony, changed their business models in response to the looming threat from mobile and tablets. If consoles had kept going in the direction they were headed in 2008, we would be seeing a totally different story.

What changed?

No more loss leaders.

Consoles historically launched as high-end hardware sold at a loss–but still quite expensive. This peaked last generation with the ‘aspirational’ PS3 debuting at nearly $600 in 2006. The idea behind this business model was that they’d make it up in software sales and eventually cost reduce the hardware.

This time, Sony took a page out of Nintendo’s book and built lower cost hardware that can at least be sold close to breakeven at launch. The downside being that the tech specs are somewhat mundane. Price sensitivity wins over performance.

Dropping the gates.

The tightly gated ecosystem that dominated consoles for decades would have been absolutely disastrous if left to stand. Sony has largely obliterated their gate and gone for a more authoritarian version of Apple’s curated model. Surely the most significant evidence of mobile’s influence on console to date. Microsoft has also adopted this posture with their ID program. The indie revolution is heavily influencing games, and allowing this movement to continue on consoles is a smart move. Especially when fewer and fewer studios can execute at a AAA level.

Users didn’t move.

A lot of analysts mistook stagnant console numbers for lagging demand. It turns out, there really was just nothing else to buy. Despite hype about core games on mobile–that transition has yet to happen. Most titles console players would recognize as ‘core’ games have utterly failed to gain traction on tablets. Core gamers want core games exclusively on console or desktop while reserving mobile for a completely different experience.

Eventually we’ll see a major disruption in how and where games are consumed. It’s going to take longer than one console generation to transform core gamer habits. It also may be too early to tell. After all, we’re only a few months into this generation.

The fact is, the AAA economy isn’t sustainable. Massive layoffs, even while Sony is basking in post-hardware-launch success, show that not all is well with the AAA end of the spectrum.

The Next Problems to Solve in Augmented Reality

I’m totally amped up about Project Tango. After having worked with augmented reality for a few years, most of the problems I’ve seen with current platforms could be solved with a miniaturized depth-sensing Kinect-style sensor. The Myriad 1 is a revolutionary chip that will dramatically change the quality of experience you get from augmented reality applications–both on mobile devices and wearables.

There’s a few other issues in AR I’d like to see addressed. Perhaps they are in research papers, but I haven’t seen anything real yet. Maybe they require some custom hardware as well.

Real-world lighting simulation.

One of the reasons virtual objects in augmented reality look fake is that AR APIs can’t simulate the real-world lighting environment in a 3D engine. For most applications, you place a directional light pointing down and turn up the ambient light for a vague approximation of overhead lighting. This assumes the orientation of the object you’re tracking is upright, of course.

Camera Birds AR mode using an overhead directional light.


What I’d really like to use is Image Based Lighting. Image Based Lighting is a computationally efficient way to simulate environmental lighting without filling a scene up with dynamic lights. It uses a combination of cube maps built from HDR photos with custom shaders to produce great results. A good example of this is the Marmoset Skyshop plug-in for Unity3D.

Perhaps with a combination of sensors and 360 cameras you can build HDR cubemaps out of the viewer’s local environment in real-time to match environmental lighting. Using these with Image Based Lighting will be a far more accurate lighting model than what’s currently available. Maybe building rudimentary cubemaps out of the video feed is a decent half-measure.
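As a concrete example of that half-measure, here’s a rough sketch–entirely my own, not an existing API–that averages the device camera feed into a single ambient color each frame. It’s nowhere near real Image Based Lighting, but it lets virtual objects react to gross changes in real-world lighting:

```csharp
using UnityEngine;

// Hypothetical sketch: drive the scene's ambient light from the average
// color of the live camera feed.
public class AmbientFromCamera : MonoBehaviour
{
    private WebCamTexture feed;

    void Start()
    {
        feed = new WebCamTexture(64, 64);  // low-res is plenty for an average
        feed.Play();
    }

    void Update()
    {
        if (!feed.didUpdateThisFrame)
            return;

        // Average every pixel of the (small) camera frame.
        Color32[] pixels = feed.GetPixels32();
        float r = 0f, g = 0f, b = 0f;
        foreach (Color32 p in pixels)
        {
            r += p.r; g += p.g; b += p.b;
        }
        float n = pixels.Length * 255f;
        RenderSettings.ambientLight = new Color(r / n, g / n, b / n);
    }
}
```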

Which object is moving?

In a 3D engine, virtual objects drawn on top of image targets are rendered with two types of cameras. Either the camera is moving around the object, or the object is moving around the camera. In real life, the ‘camera’ is your eye–so it should move if you move your head. If you move an image target, that is effectively moving the virtual object.

Current AR APIs have no way of knowing whether the camera or the object is moving. With Qualcomm’s Vuforia, you can either tell it to always move the camera around the object, or to move the objects around the camera. This can cause problems with lighting and physics.

For instance, on one project I was asked to make liquid pour out of a virtual glass when you tilt the image target it rests upon. To do this I had to force Vuforia to assume the image target was moving–so when the image target tilted, so would the 3D object in the game engine, and liquid would pour. The only problem is, this would also happen if I moved the phone. Vuforia can’t tell what’s actually moving.

There needs to be a way to accurately track the ‘camera’ movement of either the wearable or mobile device so that in the 3D scene the camera and objects can be positioned accurately. This will allow for lighting to be realistically applied and for moving trackable objects to behave properly in a 3D engine. Especially with motion tracking advances such as the M7 chip, I suspect there are some good algorithmic solutions to factoring out the movement of the object and the observer to solve this problem.
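To illustrate, here’s one possible shape such an algorithm could take–purely a hypothetical sketch using Unity’s gyroscope input, not a Vuforia feature: if the device’s IMU reports the phone is essentially still while the tracked pose is changing, attribute the motion to the image target; otherwise attribute it to the camera.

```csharp
using UnityEngine;

// Hypothetical sketch: decide whether the device or the tracked object
// is moving by checking the IMU while the tracked pose changes.
public class MotionAttribution : MonoBehaviour
{
    public float deviceStillThreshold = 0.02f;  // g's of user acceleration

    void Start()
    {
        Input.gyro.enabled = true;
    }

    // Returns true if the tracked object (not the device) is likely moving,
    // given how much the tracked pose changed this frame.
    public bool TargetIsMoving(float trackedPoseDelta)
    {
        bool deviceStill =
            Input.gyro.userAcceleration.magnitude < deviceStillThreshold;

        // Pose changed while the phone sat still: the target itself moved,
        // so it's safe to tilt the virtual glass and pour the liquid.
        return deviceStill && trackedPoseDelta > 0f;
    }
}
```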

Anyway, these are the kinds of problems you begin to think about when staring at augmented reality simulations for years. Once you get over the initial appeal of AR’s gimmick, the practical implications of the technology pose many questions. I’ve applied for my Project Tango devkit and really hope I get my hands on one soon!

From Bits to Atoms: Creating A Game In The Physical World

Some of you may recall last year’s post about 3D printing and my general disappointment with consumer-grade additive manufacturing technology. This was the start of my year-long quest to turn bits into atoms. Since that time there has been much progress in the technology and I’ve learned a lot about manufacturing. But first, a little about why I’m doing this, and my new project titled: Ether Drift.

Ether Drift AR App

A little over a year ago, I met a small team of developers who had a jaw-dropping trailer for a property they tried to get funded as a AAA console game. After failing to get the game off the ground it was mothballed until I accidentally saw their video one fateful afternoon.

With the incredible success of wargaming miniatures and miniature-based board game campaigns on Kickstarter, I thought one way to launch this awesome concept would be to turn the existing game assets into figurines. These toys would work with an augmented reality app that introduces the world and the characters as well as light gameplay elements. This would be a way to gauge interest in the property before going ahead with a full game production.

A lot of this was based on my erroneous assumption that I could just 3D print game models and ship them as toys. I really knew nothing about manufacturing. Vague memories of Ed Fries’ 3D printing service that made figurines out of World of Warcraft avatars guided my first steps.

3D printers are great prototyping tools. Still, printing the existing game model took over 20 hours and cost hundreds of dollars in materials and machine time. Plus, 3D prints are fragile and require a lot of hand-finishing to smooth out. When manufacturing in quantity, you need to go back to old-school molding.

You can 3D print just about any shape, but molding and casting have strict limitations. You have to minimize undercuts by breaking the model up into smaller pieces that can be molded and assembled. The game model I printed out was way too complicated to be broken down into a manageable set of parts.

Most of these little bits on the back and underside would have to be individual molded parts to be re-assembled later–an expensive process!


So I scrapped the idea of using an existing game property. Instead, I developed an entirely new production process. I now create new characters from scratch that are designed to be molded. This starts as a high-detail 3D model that is printed out in parts from which molds are made. Then, I have that 3D model turned into something that can be textured and rigged for Unity3D. There are some sacrifices made in character design since the more pieces there are, the more expensive it is to manufacture. Same goes for the painting process–the more detailed the game texture is, the more costly it becomes to duplicate in paint on a plastic toy.

We're working on getting a simple paint job that matches the in-game texture.


So, what is Ether Drift? In short: it’s Skylanders for nerds. I love the concept of Skylanders–but, grown adult geeks like toys too. The first version of this project features a limited set of figures and an augmented reality companion app.

The app uses augmented reality trading cards packed with each figure to display your toy in real-time 3D as well as allowing you to use your characters with a simple card battle game. I’m using Qualcomm’s Vuforia for this feature–the gold standard in AR.

The app lets you add characters to your collection via a unique code on the card. These characters will be available in the eventual Ether Drift game, as well as others. I’ve secured a deal to have these characters available in at least one other game.

If you are building a new IP today, it’s extremely important to think about your physical goods strategy. Smart indies have already figured this out. The workflow I created for physical to digital can be applied to any IP, but planning it in advance can make the process much simpler.

In essence, I’m financing the development of a new IP by selling individual assets as toys while it is being built. For me, it’s also a throwback to the days before everything was licensed from movies or comic books and toy store shelves were stocked with all kinds of crazy stuff. Will it work? We’ll see next month! I am planning a Kickstarter for the first series in mid-March. Stay tuned to the Ether Drift site, Facebook page, or Twitter account. Selling atoms instead of bits is totally new ground for me. I’m open to all feedback on the project, as well as people who want to collaborate.