Unity3D vs. Unreal 4 vs. Crytek: GDC 2014 Engine Wars

GDC 2014 is over, and one thing is clear:  The engine wars are ON!


For at least a few years, Unity has clearly dominated the game engine field.  Starting with browser and mobile games, then gobbling up the entire ecosystem Innovator’s Dilemma style, Unity has become the engine of choice for startups, mobile game companies, and downloadable console titles.

Until now, Unreal seemed unfazed.  The creation of an entire generation of studios based on Unity technology seemed to pass Epic by completely as Unreal continued to be licensed out for high fees and revenue share to AAA studios cranking out $50 million blockbusters.

Lately, the AAA market has been contracting–leaving only a handful of high-budget tent pole games in development every year.  Many of those mega studios have started to use their own internal engine tech, avoiding Epic’s licensing fees altogether.  Surely this trend was a big wakeup call.

This year Epic strikes back with a new business model aimed at the small mammals scurrying underfoot the AAA dinosaurs.  Offering Unreal 4 on desktop and mobile platforms for a mere $19 a month and a 5% revenue cut seems like a breakthrough, but it really isn’t.

One of Unity’s biggest obstacles for new teams is its $1500 per-seat, per-platform license fee.  When you need to buy 20 licenses of Unity for 3 platforms, things get costly.  Unity’s monthly plan can lower initial costs, but over time subscribing can be far more expensive than just paying for the license up front.  Still, even when you add up all the monthly costs for each platform license subscription, it’s a better deal than Unreal.

Giving up 5% of your revenue to Epic when profit margins are razor-thin is a non-starter for me.  Unreal’s AAA feature set produces unparalleled results, even against Unity 5’s upgrades, but that 5% revenue cut still makes it an unattractive choice to me.

Epic is also aping Unity’s Asset Store with their Unreal Marketplace.  This is absolutely critical.  The Asset Store is Unity’s Trojan horse–allowing developers to add to the engine’s functionality as well as providing pre-made graphics and other items invaluable for rapid prototyping or full production.  While Unreal’s Marketplace is starting out rather empty, this is a big move for the survival of the engine.

Unreal 4 throws a lot of tried-and-true Unreal technologies out the window, starting with UnrealScript.  Unreal now ships with full source because you write your game code in native C++, not a scripting language.  The new Blueprints feature is intended to partly replace UnrealScript for designers, but this is completely new territory.  Epic advertises full source access as a benefit over Unity, yet source-level access in Unity is almost never necessary.  That said, now that the Unreal 4 source is on GitHub, the community can patch engine bugs before Epic does–Unity developers have to wait for Unity to ship fixes itself.

Unreal 4 is so radically different from previous versions that a lot of Unreal developers may have very good reasons to escape to Unity or other competing engines.  For some, learning Unreal 4’s new features may be no easier than switching to a new engine altogether.

Oh, and Crytek is basically giving their stuff away.  At $10 a month with no revenue share, I’m not sure why they are charging for this at all–that can’t possibly cover even the marketing costs.  I’m not very familiar with Crytek, but my biggest issue with the current offering is that Crytek for mobile is a completely different engine.  The mobile engine Crytek built their iOS games with is not yet publicly available to developers.

Which brings me to the latest version of Unity.  I’m sure it’s getting harder to come up with new stuff that justifies a point release.  Still, I need almost none of the features announced in Unity 5.  This is irrelevant as Unity has won the war for developers.  Which is why Unity is moving on to the next problem:  making money for developers.

Unity Cloud is Unity’s new service that is starting as a referral network for Unity games.  Developers can trade traffic between games within a huge network of Unity apps on both Android and iOS.  Unity’s purchase of Applifier shows they are dead serious about solving monetization and discovery–two of the biggest problems in mobile right now.

While other engines are still focused on surpassing Unity’s features or business model, Unity has moved into an entirely different space.  Ad networks and app traffic services may start to worry that what happened to Epic and Crytek is about to happen to them.

Anyone who reads this blog knows I’m a huge Unity fanboy.  But having one insanely dominant engine is not healthy for anyone.  I’m glad to see the other engine providers finally make a move.  I still don’t think any of them have quite got it right yet.

Oh–and in other news, YoYo Games’ GameMaker announcement at GDC, as well as some more recent examples of its capabilities, makes me wonder why I even bothered to get a computer science degree in the first place!

The Next Problems to Solve in Augmented Reality

I’m totally amped up about Project Tango. After having worked with augmented reality for a few years, most of the problems I’ve seen with current platforms could be solved with a miniaturized depth-sensing Kinect-style sensor. The Myriad 1 is a revolutionary chip that will dramatically change the quality of experience you get from augmented reality applications–both on mobile devices and wearables.

There are a few other issues in AR I’d like to see addressed. Perhaps they’re covered in research papers, but I haven’t seen anything real yet. Maybe they require some custom hardware as well.

Real-world lighting simulation.

One of the reasons virtual objects in augmented reality look fake is that AR APIs can’t simulate the real-world lighting environment in a 3D engine. For most applications, you place a directional light pointing down and turn up the ambient light for a vague approximation of overhead lighting. This assumes the orientation of the object you’re tracking is upright, of course.
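That crude approximation can be sketched in a few lines of Unity C#. The component name and values here are illustrative, not taken from any AR SDK:

```csharp
using UnityEngine;

// A rough stand-in for real-world lighting: one overhead directional
// light plus boosted ambient. Purely illustrative values.
public class FakeOverheadLighting : MonoBehaviour
{
    void Start()
    {
        // Point a directional light straight down at the tracked object.
        GameObject lightObj = new GameObject("Overhead Light");
        Light overhead = lightObj.AddComponent<Light>();
        overhead.type = LightType.Directional;
        lightObj.transform.rotation = Quaternion.Euler(90f, 0f, 0f);

        // Turn up the ambient so unlit faces aren't pitch black.
        RenderSettings.ambientLight = new Color(0.5f, 0.5f, 0.5f);
    }
}
```

This only looks plausible when the tracked target really is lit from above; tilt the target or shoot indoors under a lamp and the illusion falls apart.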

Camera Birds AR mode using an overhead directional light.

What I’d really like to use is Image Based Lighting. Image Based Lighting is a computationally efficient way to simulate environmental lighting without filling a scene with dynamic lights. It uses cube maps built from HDR photos, combined with custom shaders, to produce great results. A good example of this is the Marmoset Skyshop plug-in for Unity3D.

Perhaps with a combination of sensors and 360 cameras you can build HDR cubemaps out of the viewer’s local environment in real-time to match environmental lighting. Using these with Image Based Lighting will be a far more accurate lighting model than what’s currently available. Maybe building rudimentary cubemaps out of the video feed is a decent half-measure.
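As a sketch of that half-measure, Unity Pro’s Camera.RenderToCubemap could re-capture the local environment into a cubemap that an IBL shader samples. Everything here–the component name and the assumption that the camera feed is mapped onto geometry around the environment camera–is hypothetical, not working AR code:

```csharp
using UnityEngine;

// Hypothetical half-measure: re-render the surroundings (e.g. geometry
// textured with the device's video feed) into a cubemap each frame and
// hand it to an Image Based Lighting shader. RenderToCubemap is Pro-only.
public class LiveCubemapCapture : MonoBehaviour
{
    public Camera envCamera;   // camera placed at the virtual object's position
    public Cubemap envCubemap; // assigned to the IBL material's cube slot

    void LateUpdate()
    {
        // Re-render all six faces; in practice you'd stagger faces
        // across frames to keep the cost down.
        envCamera.RenderToCubemap(envCubemap);
    }
}
```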

Which object is moving?

In a 3D engine, virtual objects drawn on top of image targets are rendered with one of two camera setups: either the camera moves around the object, or the object moves around the camera. In real life, the ‘camera’ is your eye–so it should move when you move your head. If you move an image target, that is effectively moving the virtual object.

Current AR APIs have no way of knowing whether the camera or the object is moving. With Qualcomm’s Vuforia, you can either tell it to always move the camera around the object, or to move the objects around the camera. This can cause problems with lighting and physics.

For instance, on one project I was asked to make liquid pour out of a virtual glass when you tilt the image target it rests upon. To do this I had to force Vuforia to assume the image target was moving–so when the image target tilted, so would the 3D object in the game engine, and liquid would pour. The only problem is, this would also happen if I moved the phone. Vuforia can’t tell what’s actually moving.

There needs to be a way to accurately track the ‘camera’ movement of either the wearable or mobile device so that in the 3D scene the camera and objects can be positioned accurately. This will allow for lighting to be realistically applied and for moving trackable objects to behave properly in a 3D engine. Especially with motion tracking advances such as the M7 chip, I suspect there are some good algorithmic solutions to factoring out the movement of the object and the observer to solve this problem.

Anyway, these are the kinds of problems you begin to think about after staring at augmented reality simulations for years. Once you get over the initial appeal of AR’s gimmick, the practical implications of the technology pose many questions. I’ve applied for my Project Tango devkit and really hope I get my hands on one soon!

From Bits to Atoms: Creating A Game In The Physical World

Some of you may recall last year’s post about 3D printing and my general disappointment with consumer-grade additive manufacturing technology. This was the start of my year-long quest to turn bits into atoms. Since that time there has been much progress in the technology and I’ve learned a lot about manufacturing. But first, a little about why I’m doing this, and my new project titled: Ether Drift.

Ether Drift AR App

A little over a year ago, I met a small team of developers who had a jaw-dropping trailer for a property they tried to get funded as a AAA console game. After failing to get the game off the ground it was mothballed until I accidentally saw their video one fateful afternoon.

With the incredible success of wargaming miniatures and miniature-based board game campaigns on Kickstarter, I thought one way to launch this awesome concept would be to turn the existing game assets into figurines. These toys would work with an augmented reality app that introduces the world and the characters as well as light gameplay elements. This would be a way to gauge interest in the property before going ahead with a full game production.

A lot of this was based on my erroneous assumption that I could just 3D print game models and ship them as toys. I really knew nothing about manufacturing. Vague memories of Ed Fries’ 3D printing service that made figurines out of World of Warcraft avatars guided my first steps.

3D printers are great prototyping tools. Still, printing the existing game model took over 20 hours and cost hundreds of dollars in materials and machine time. Plus, 3D prints are fragile and require a lot of hand-finishing to smooth out. When manufacturing in quantity, you need to go back to old-school molding.

You can 3D print just about any shape, but molding and casting have strict limitations. You have to minimize undercuts by breaking the model up into smaller pieces that can be molded and assembled. The game model I printed out was way too complicated to be broken down into a manageable set of parts.

Most of these little bits on the back and underside would have to be individual molded parts to be re-assembled later–an expensive process!

So I scrapped the idea of using an existing game property. Instead, I developed an entirely new production process. I now create new characters from scratch that are designed to be molded. This starts as a high-detail 3D model, which is printed out in parts that molds are made from. Then I have that 3D model turned into something that can be textured and rigged for Unity3D. There are some sacrifices in character design, since the more pieces there are, the more expensive the figure is to manufacture. The same goes for the painting process–the more detailed the game texture is, the more costly it becomes to duplicate in paint on a plastic toy.

We’re working on getting a simple paint job that matches the in-game texture.

So, what is Ether Drift? In short: it’s Skylanders for nerds. I love the concept of Skylanders–but, grown adult geeks like toys too. The first version of this project features a limited set of figures and an augmented reality companion app.

The app uses augmented reality trading cards packed with each figure to display your toy in real-time 3D as well as allowing you to use your characters with a simple card battle game. I’m using Qualcomm’s Vuforia for this feature–the gold standard in AR.

The app lets you add characters to your collection via a unique code on the card. These characters will be available in the eventual Ether Drift game, as well as others. I’ve secured a deal to have these characters available in at least one other game.

If you are building a new IP today, it’s extremely important to think about your physical goods strategy. Smart indies have already figured this out. The workflow I created for physical to digital can be applied to any IP, but planning it in advance can make the process much simpler.

In essence, I’m financing the development of a new IP by selling individual assets as toys while it is being built. For me, it’s also a throwback to the days before everything was licensed from movies or comic books and toy store shelves were stocked with all kinds of crazy stuff. Will it work? We’ll see next month! I am planning a Kickstarter for the first series in mid-March. Stay tuned to the Ether Drift site, Facebook page, or Twitter account. Selling atoms instead of bits is totally new ground for me. I’m open to all feedback on the project, as well as to people who want to collaborate.

Ludum Dare: Ten Seconds of Thrust

This past weekend I participated in Ludum Dare, a contest where you make a game by yourself in 48 hours. The theme is revealed at the start of the contest–Iron Chef style. All code, graphics, and sounds have to be made from scratch. Voting began on Sunday night and will extend for a few weeks. I’m not even sure what you win, but that’s not the point. It’s an awesome experience in GETTING IT DONE.

Ten Seconds of Thrust!

My entry is the Lunar Lander-esque Ten Seconds of Thrust. (Please rate it!) Attempt to land at the bottom of increasingly difficult, randomly generated space caverns with only ten seconds of thruster time. It’s crude, ugly, and buggy–especially on Windows, where it doesn’t seem to detect landing. I didn’t have time to fix this bug as I only discovered it in the last half hour, but it does seem like a strange Unity Web Player bug since it works fine in OS X browsers. (PROTIP: Make sure you have a few friends around during the weekend to test your game!)

One of the best things about the contest is watching games evolve quickly through Twitter, Vine, Facebook, and Instagram posts. I put up a few videos in progress over the weekend.

I used a lot of the tools mentioned in my rapid prototyping posts, including a new tool I found called Sprite Gen which creates randomly generated animated character and tile sprites in tiny 12×12 blocks. Naturally, the game was developed in Unity along with 2DToolkit and HOTween for plug-ins.

I’d like to fix the landing bug as it makes the game useless on Windows, but the rules are somewhat unclear on bug-fixes that don’t add any content. This game was actually based on an idea for a Lunar Lander roguelike I was developing earlier this year. The LD48 version is highly simplified and way more fun. I abandoned my prototype in disgust back in February. This quick and dirty version is much better–I might run with it and make a full game.

Displaying Maps in Unity3D

There have been a few recent examples of real-world maps displayed in Unity3D apps. The first one I noticed was the playfield in the infamous Halo 4 iPhone app that came out late last year. For unknown reasons, I was really into this game for a few months. I hung around my local 7-11 scanning bags of Doritos so much that I thought I was going to get arrested for shoplifting. Eventually this obsession led to me wanting to duplicate the map display used in the game. Here’s how I did it.

Google Maps Plug-In

Naturally, the first place I looked was the Asset Store. It turns out there is a free Google Maps plug-in available. The only catch is that it requires UniWeb to work. UniWeb lets you call REST APIs and generally gives you more control over HTTP requests than Unity’s own WWW class allows. It can be a necessity if you’re making REST API calls, but it restricts your code-stripping options, which will bump up your binary size.

This asset’s sample scene works flawlessly. It downloads a map from the Google Static Map API and textures it on a cube. The code is clean and well documented, featuring the ability to request paths and markers to be added to the static map. Most attributes can be tweaked through the inspector–such as map resolution, location, etc.

I made a lot of changes to this package, and I really wish it were open source. Free code assets really should be, in most cases. I will try to isolate my changes into another C# file and post a Gist.

The first change I made was to add support for themed Static Maps. If you look at this wizard, you can see that there are a lot of styling options. This appears to be the same technique used in the Halo 4 app because with the right set of options you can get something that looks really close. Supporting styling in Unity3D is just a simple act of appending the style parameters to the end of the URL used by the Google Maps plug-in.
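For illustration, a styled Static Map request looks something like this. The center, zoom, and style rules are made-up examples (the parameter syntax follows Google’s Static Maps API, with style rules of the kind the wizard generates):

```csharp
// Hypothetical example: a base static map request with two style rules
// appended. Each "style" parameter carries feature/element selectors and
// rule values separated by pipes.
string url = "http://maps.googleapis.com/maps/api/staticmap" +
    "?center=34.05,-118.25&zoom=13&size=512x512&sensor=false" +
    "&style=feature:all|element:geometry|invert_lightness:true" +
    "&style=feature:water|element:geometry|color:0x000000";
```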

Displaying Markers in 3D

The next thing I wanted to do is display the markers as 3D objects on top of the map instead of having them inside the texture itself. This requires 3 steps:

  1. Determine where the markers are in pixel coordinates in the static map texture.
  2. Calculate the UV coordinate of the pixel coordinate.
  3. Calculate the world coordinate of the texel the UV coordinate resides at.

Step 1 can be tricky. You have to project the latitude and longitude of the marker with the Mercator projection Google Maps uses to get the pixel coordinate. Luckily, this guy already did it in PHP to create image maps from static maps. I adapted this code to C# and it works perfectly. You can grab the Google Maps utility functions here. (All this great free code on the net is making me lazy–but I digress)

Step 2 is easy. This code snippet does the trick. The only catch is that you have to flip the V so that it matches with how Unity uses UV coordinates.

Step 3 is also tricky. However, someone with much better math skills than I wrote a JavaScript method to compute the world coordinate from a UV coordinate. It searches through each triangle in the mesh and sees if the UV coordinate is contained inside it. If so, it then calculates the resultant world coordinate. The key to using this is to put the static map on a plane (the default scene in the plug-in uses a cube) and use the C# version of this function I wrote here.
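Steps 1 and 2 can be condensed into something like the following sketch. The class and method names are mine, and the math is the standard Web Mercator projection rather than the exact code linked above:

```csharp
using UnityEngine;

// Sketch: project a marker's lat/lng with the Web Mercator projection
// Google Maps uses (step 1), then convert that pixel into a UV on the
// static map texture (step 2). Names are illustrative.
public static class MapMarkerMath
{
    const int TileSize = 256;

    // Absolute pixel position of a lat/lng at a given zoom level.
    public static Vector2 LatLngToPixel(double lat, double lng, int zoom)
    {
        double scale = TileSize * System.Math.Pow(2, zoom);
        double x = (lng + 180.0) / 360.0 * scale;
        double sinLat = System.Math.Sin(lat * System.Math.PI / 180.0);
        double y = (0.5 - System.Math.Log((1 + sinLat) / (1 - sinLat))
                    / (4.0 * System.Math.PI)) * scale;
        return new Vector2((float)x, (float)y);
    }

    // UV of a marker inside a static map of mapWidth x mapHeight pixels
    // centered on mapCenter (also in absolute pixels at the same zoom).
    public static Vector2 PixelToUV(Vector2 marker, Vector2 mapCenter,
                                    int mapWidth, int mapHeight)
    {
        float u = (marker.x - mapCenter.x) / mapWidth + 0.5f;
        float v = (marker.y - mapCenter.y) / mapHeight + 0.5f;
        return new Vector2(u, 1f - v); // flip V for Unity's UV convention
    }
}
```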

3D objects floating over marker locations on a Google Static Map.

Here’s the end result–in this case it’s a display for the Donut Dazzler prototype. 3D donuts are floating over real-world donut shops and cupcakes over cupcake bakeries. I got the locations from the Foursquare API. This is quite easy to do using UniWeb.

Slippy Maps

The aforementioned technique works great if you just want a static map to display stuff around the user’s current location. What if you want to be able to scroll around and see more map tiles, just like Google Maps when you move around with your mouse? This is called a Slippy Map. Slippy Maps are much more elaborate–they require dynamically downloading map tiles and stitching them together as the user moves around the world.

Thankfully Jonathan Derrough wrote an amazing free Slippy Map implementation for Unity3D. It really is fantastic. It displays markers in 3D and pulls map tiles from multiple sources–including OpenStreetMap and Bing/VirtualEarth. It doesn’t use Google Maps because of possible TOS violations.

I couldn’t find a way to style map tiles like Google Static Maps can. So the end result was impressive but kind of ugly. It is possible with OpenStreetMap to run your own tile server and run a custom renderer to draw styled tiles. I suspect that’s how Rescue Rush styles their OpenStreetMap tiles–unless they are doing some image processing on the client.

Either Or

For my prototype I ended up using Google Static Maps because Slippy Maps were overkill. Also, pulling tiles down from the servers seemed much slower than grabbing a single static map. I suppose I could add some tile caching, but in the end static maps worked fine for my purposes.

Keep in mind that Google Maps has some pretty fierce API usage costs. If your app goes viral, you will likely be on the hook for a huge bill. Which is why it might be worth figuring out how to style free OpenStreetMap tiles.

Unity3D 4 Pet Peeves

I’ve been updating my older apps to use the newly released Unity3D 4 engine, as well as starting an entirely new project. I haven’t used many of Unity3D 4’s new features yet, but I figured this is as good a time as any to list a few of my pet peeves with Unity3D 4 as I did with Unity3D 3 a few years back.

It’s time Unity3D had a package manager.

Unity3D plug-ins and assets purchased from the Asset Store are invaluable. The Asset Store is becoming the most important feature making Unity3D the superior choice. However, managing projects with multiple plug-ins can be a nightmare. A lot of this comes down to how Unity3D handles file deletions.

If you click the “update” button to overwrite an existing plug-in with the latest version from the Asset Store, it may wreak havoc upon your entire project. Unity3D’s file hashing system will sometimes fail to overwrite files with the same name, even if you are importing a newer one. You’ll end up with a mess of old and new plug-in files causing chaos and mayhem. The only way to prevent this is to manually find and delete all the old plug-in files before updating to the latest version.

Not to mention that native plug-ins either require you to manually set up your own Xcode project with external libraries, or come with their own proprietary scripts that edit your Xcode project. Unity3D should provide an API and package manager that lets plug-ins forcibly delete and update their own files, as well as modify settings in the Xcode project Unity3D generates.

Let me import files with arbitrary extensions.

A minor annoyance is that Unity3D will only accept files with specific extensions in your project. If you want a custom binary data file, you HAVE to give it the .txt extension–it’s the only way you can drag the file into the project. Unity3D should allow you to import files with any extension you want, and provide a method in the AssetPostprocessor API that gets called when an unknown file extension is detected.

Where’s the GUI?

Come on now. It’s 2013. The new GUI has been “coming soon” for years. Unity hired the NGUI guy, which leads me to believe the mythical Unity3D 4 GUI is merely the stuff of legends and fantasies. I like NGUI but I’m really looking forward to an official solution from Unity. Although I’m not looking forward to re-writing all my GUIs once it arrives. Let’s just get it over with. Bring it on.

Monodevelop sucks.

My god. Monodevelop sucks. Lots of people use other text editors for code, but you still can’t avoid touching Monodevelop for debugging on OS X. I’m sure it could be whipped into shape with a minor overhaul, but it’s been awful for so long that this seems unlikely. Aside from the crashes and interface weirdness, how much human productivity has been destroyed waiting for Monodevelop to reload the solution every time so much as a single file is moved to a different folder?

Is it time to update Mono?

While we’re at it, Mono recently updated to C# 5.0. I’m not sure if this is a big performance drag or not, but I’d love to see Unity3D’s Mono implementation updated to the latest. There are some C# 5.0 features I’ve been dying to use in Unity3D.

Tough Love

Don’t take it personally, Unity3D is still my engine of choice. This list of annoyances is pretty minor compared to previous ones. Every year, Unity gives me fewer and fewer things to whine about. It seems competing solutions are having trouble keeping up.

Native Code is Dead

Although Android has a larger market share when you count it by pure number of devices and users, iOS still dominates monetization. Research I did earlier this year about apathetic Android users still rings true. However, vast improvements in Android as an OS and Google Play as a way to monetize apps are changing this. Not to mention changes to the new iOS 6 App Store are making app discovery even more difficult on Apple devices. A lot of developers are grumbling about their fortunes on iOS and are looking elsewhere.

Platforms are volatile. Five years ago, Facebook was the ultimate destination for game developers. Now it’s a ghost town. iOS is the hot ticket now, but Android is becoming increasingly competitive. As a developer you need to be prepared to move platforms in an instant. For this reason, native code is dead.

Using a solution such as Unity3D, Flash, or HTML5 allows you to easily move apps from one device to another. Or from mobile to the Web. Or from Web to desktop. You get the idea. It’s true that each of these solutions trades off features or performance to achieve frictionless cross-platform porting. However, most studios can’t afford to double their engineering staff to multiply the number of platforms they deploy on.

If you’re starting a new project from scratch, you have to consider your cross-platform options:


Unity3D

As one of the (very few) detractors said of my recent GDCO 2012 presentation on Unity3D: “This guy was a bigger Unity fan-boy than the company would have been.” It’s true! I am a self-declared Unity3D zealot. My experience moving between platforms has been incredibly easy. You can check this older post on the process I went through to bring Brick Buddies to Android. Unity3D has issues, but it’s the best solution I’ve found yet.


Corona

I’ve not used Corona myself, but I did research it a bit when deciding which platform to hang my hat on. I know other developers who have created very successful apps with it. The major drawbacks are that it uses Lua as its scripting language and that it still doesn’t allow native code extensions. Yeah, I know I said native was dead–but not totally dead. I occasionally have to write native code plug-ins for Unity3D to access parts of a platform’s API that aren’t abstracted in Unity itself. This is a critical feature. Also, Corona can’t be used on web or desktop platforms.


Flash

Flash has a tragic branding problem. The declaration of mobile Flash’s death doesn’t mean Flash is dead on mobile. This means the browser plug-in on Android is defunct. Good riddance. Flash made the mobile browsing experience on Android unusable.

Adobe has stepped their game up with Flash’s iOS and Android exporters. The packager allows Flash projects to be exported as apps on the target device. Flash’s CS5 exporter was atrocious, but I’ve seen some impressive work with the latest version. Flash even supports native code extensions. Adobe’s extortionate demands for revenue share mean Flash is out of the running for me if I intend to use their more advanced features. Otherwise, it is a superior option over Corona.


HTML5

For game development, HTML5 is insane. If you really want to give it a shot, there are some relatively performant libraries such as impact.js that might help you out. I don’t recommend it. HTML5 isn’t much of a standard, needing a lot of workarounds for various browsers. Not to mention its horrible performance on mobile browsers. You just can’t win.

For non-game GUI-based apps (like Yelp or Evernote) HTML5 makes a lot of sense. PhoneGap/Cordova makes this possible by providing a framework for running HTML5/CSS/JS based applications inside a mobile web view and packaged as a native app. Coming from a native code background, constructing interfaces in HTML5/CSS seems absolutely insane. Friends don’t let friends write HTML/CSS. It should remain purely the output of tools such as Handheld Designer. HTML/CSS is becoming the Assembly Language of the web–it’s good to know, but hopefully you’ll never have to touch it.


Everything Else

There are plenty of other options I haven’t mentioned: Moai, Marmalade, Titanium Studio, UDK, and the list goes on. The important thing is to research your platform-independent options and find what’s best for you. For games, I’m biased towards Unity–but other options are just as valid…I guess. Obviously there are applications for which native code will always be the solution. Yet, for the incredible glut of dying console game studios “pivoting to mobile,” it’s an increasingly remote option.

Detecting Android Tablets and Phones in Unity3D

There have been a few cases where I’ve needed to know whether one of my apps is running on an Android phone or a tablet. With Camera Birds’ gyro virtual camera, I ran into the fact that orientations are flipped differently on Android tablets and phones. By default, an Android tablet’s “natural” orientation is landscape, while a phone’s is portrait. This means a 90-degree rotation is landscape on a phone, while on a tablet it becomes portrait. Get it? Neither do I. It’s another supremely awful decision that is simply par for the course with Android.

I ended up adapting a method from a Stack Overflow post used to determine natural orientation. I wrote it with Unity3D’s ability to access Android’s Java classes via the AndroidJavaClass object. This is a great feature of Unity3D that allows you to access the Android API through JNI without having to write a native plug-in.

The code is here. With this, you can tell if you are running on a tablet or a phone by checking for the natural orientation: landscape on a tablet, portrait on a phone. Even if you don’t need to flip gyro rotations, you might want to do this to separate tablet ad units from phone ads, for instance.
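The gist of that method looks something like the following–a hedged adaptation rather than the exact linked code, and it assumes it runs before your app forces its own screen orientation:

```csharp
using UnityEngine;

// Sketch: determine the device's natural orientation via JNI by comparing
// the current Configuration.orientation against the display rotation.
public static class DeviceClassifier
{
    // Returns true if the natural orientation is landscape (i.e. a tablet).
    public static bool IsTablet()
    {
#if UNITY_ANDROID
        using (AndroidJavaClass unityPlayer =
                   new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
        using (AndroidJavaObject activity =
                   unityPlayer.GetStatic<AndroidJavaObject>("currentActivity"))
        using (AndroidJavaObject resources =
                   activity.Call<AndroidJavaObject>("getResources"))
        using (AndroidJavaObject config =
                   resources.Call<AndroidJavaObject>("getConfiguration"))
        {
            // Configuration.ORIENTATION_LANDSCAPE == 2
            bool currentLandscape = config.Get<int>("orientation") == 2;

            // Surface.ROTATION_0 == 0, ROTATION_180 == 2
            int rotation = activity.Call<AndroidJavaObject>("getWindowManager")
                .Call<AndroidJavaObject>("getDefaultDisplay")
                .Call<int>("getRotation");
            bool atNaturalOrientation = (rotation == 0 || rotation == 2);

            // At 90/270 degrees the natural orientation is the opposite
            // of what Configuration reports.
            return atNaturalOrientation ? currentLandscape : !currentLandscape;
        }
#else
        return false;
#endif
    }
}
```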

How To Prevent Performance Spikes in Unity3D When a Model is First Visible

In my latest Unity3D app I dynamically load assets from the Resources folder and place them in the world after the initial scene load. These assets use new materials and textures that must be uploaded to the GPU. I thought I was being slick by caching prefabs to prevent a loading hiccup when I needed to instantiate. However, that’s only part of the problem. After placing the object in the scene, my game would freeze up for a frame or two when the newly created object first became visible. The profiler showed this spike attributed to a function called AwakeFromLoad.

It turns out Unity3D does not upload your new object’s assets to the GPU until the object is first visible. Apparently, this is what AwakeFromLoad does. This is presumably an optimization to avoid thrashing the GPU with assets that won’t be visible immediately. The downside is you’ll see a pause as Unity3D uploads data to the GPU. From what I can tell, this can even mean compiling the shader if it hasn’t been used in the scene yet.

Unity doesn’t provide a function to force the GPU to load assets. From looking at Unity forum threads, the most common solution is to put up a loading screen and show newly instantiated assets to the main camera for a frame to force a GPU load. Once all the assets have been made visible, the loading screen is dropped.

Putting up a loading screen just seemed like a huge pain in the ass, not to mention an ugly hack. So, I came up with a solution using Unity Pro’s RenderTexture and a second camera. Now, my game scene has two cameras: the Main Camera and a disabled secondary camera with a tiny 32×32 RenderTexture as its target. Whenever I instantiate a new asset in the world, I position the second camera in front of it and render a frame to this texture. This forced rendering does the trick of uploading all necessary data to the GPU. Yes, there still is a loading spike, but you decide when it’s going to happen and you don’t have to reposition your object in view of the main camera for a frame.
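A minimal sketch of that setup might look like this. The names are mine, not the published behavior, and RenderTexture requires Unity Pro:

```csharp
using UnityEngine;

// Sketch of the approach described above: a disabled second camera with a
// tiny RenderTexture target renders a newly spawned object once, forcing
// its textures, meshes, and shaders onto the GPU at a time you control.
public class GPUPreloader : MonoBehaviour
{
    Camera preloadCamera;

    void Awake()
    {
        GameObject go = new GameObject("Preload Camera");
        preloadCamera = go.AddComponent<Camera>();
        preloadCamera.targetTexture = new RenderTexture(32, 32, 16);
        preloadCamera.enabled = false; // never renders on its own
    }

    // Call this right after instantiating a new asset in the world.
    public void Preload(GameObject asset)
    {
        // Point the hidden camera at the asset and render a single frame.
        preloadCamera.transform.position =
            asset.transform.position - asset.transform.forward * 5f;
        preloadCamera.transform.LookAt(asset.transform);
        preloadCamera.Render();
    }
}
```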

I put this in a behavior called AssetGPULoader, you can grab it here. It only works with Unity3D Pro as it needs RenderTexture. As far as I can tell, this does the trick. It has removed my unpredictable performance spikes. For an alternative solution, I also found this technique in the Unity forums.

3 Ways To Capture A Screenshot In Unity3D

For my equally ridiculous follow-up to Brick Buddies, I need to save a screenshot both as a texture and as a file in Unity3D. Although the game I’m currently writing has screenshots as an integral gameplay element, it’s still useful to integrate screenshots into any Unity3D project. Using Prime31’s Social Networking plug-in, it’s possible to Tweet pictures or post screens to users’ Facebook galleries. Having a screenshot capture feature can boost your viral reach–especially if you design your application in such a way that people want to share screenshots. In Unity3D, there are a number of ways to do this.


Application.CaptureScreenshot

CaptureScreenshot is a method in the Application class that does exactly what it says: it saves the screenshot as a PNG file. On iOS devices, this screenshot will be put in the Documents folder. On other platforms you can specify an absolute path to put the file anywhere.

What the documentation doesn’t tell you is that CaptureScreenshot is asynchronous, because capturing and saving the screen can take a while. The API call itself isn’t a coroutine, so there’s no easy way to monitor its progress. One hack is to write your own method that checks for the existence of the screenshot file; once the screenshot has finished saving, the file will be there.

Also note that it’s good practice to put CaptureScreenshot calls in the LateUpdate method. This way you capture the contents of the frame as it will look at the end of that update. If you have made any objects active or inactive during that frame, the results of those operations will be seen in LateUpdate.
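Putting those two tips together, a wrapper might look like this–the file-polling hack plus the LateUpdate timing. The path handling assumes iOS, where the file lands in the Documents folder (persistentDataPath):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

// Sketch: fire CaptureScreenshot from LateUpdate, then poll the disk for
// the file since the API offers no completion callback.
public class ScreenshotSaver : MonoBehaviour
{
    bool capturePending;

    public void RequestScreenshot()
    {
        capturePending = true;
    }

    void LateUpdate()
    {
        if (!capturePending) return;
        capturePending = false;

        string path = Path.Combine(Application.persistentDataPath, "shot.png");
        if (File.Exists(path)) File.Delete(path); // avoid a stale hit

        Application.CaptureScreenshot("shot.png");
        StartCoroutine(WaitForFile(path));
    }

    IEnumerator WaitForFile(string path)
    {
        // Poll until the asynchronous save finishes writing the file.
        while (!File.Exists(path))
            yield return null;
        Debug.Log("Screenshot written to " + path);
    }
}
```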


RenderTexture

One convoluted way to take a screenshot is to use the RenderTexture feature in Unity Pro. You can create a RenderTexture object and tell any camera to write to it. You can then access the color buffer of the RenderTexture if you want to write out the pixels to a PNG.

CaptureScreenshot was just way too slow for my needs (I needed immediate access to the frame buffer) so I started writing a RenderTexture solution, until I found an easier way. If you want to check out this technique, I suggest this example.


Texture2D.ReadPixels

ReadPixels reads the pixel data from a specified rectangle of the screen and copies it into the texture you call it on. It’s fairly fast and works with a few lines of code. Much like CaptureScreenshot, it’s a good idea to queue up the action somehow and then actually call ReadPixels in LateUpdate. Works like a charm:

// Copy the current frame buffer into a screen-sized texture.
Texture2D tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
tex.Apply(); // upload the pixel data so the texture is usable

ReadPixels still introduces some delay, so in the end I might try to see if RenderTexture is faster. If you are using the screenshots as a real-time texture effect, then RenderTexture is your best bet.