
Unity GPU Instancing

By Jason Weimann / April 25, 2017

Unity 5.6 is available now and has some great performance improvements you can take advantage of.  Today I’m starting a new series covering some of these enhancements and the performance benefits you can gain.  These simple tips will keep your FPS high and your players happy.  This post will cover the new Unity GPU Instancing system.

Introducing Unity GPU Instancing

GPU instancing allows you to render a large number of copies of the same mesh/material in just a few draw calls.

While it was possible to get instancing working partially in 5.5, full support is now available in 5.6.

Now, the standard shaders all support GPU instancing with the check of a box.

How do I use it?

Enabling Instancing is actually very simple in most cases.  If you’re using the Standard shader, look to the bottom and check the “Enable Instancing” box.

That’s all you need to do!

If you want instancing for other shaders, you can check out more details on how to set that up here: https://docs.unity3d.com/Manual/GPUInstancing.html

 

When should I use it?

It’s important to note that GPU instancing doesn’t work on skinned mesh renderers, so you can’t just drop it on any character and see the gains.

Where you’ll see the most improvement is with a large number of objects which share a mesh but have some variety in scale.

For my testing, I set up an asteroid field with 1000 asteroids from one of the free space packs on the Asset Store.

I generated them with random locations and scales, then compared the draw data.

Here’s the script I used to generate the asteroids:
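A minimal sketch of what such a generator can look like. It assumes a single asteroid prefab; the field radius and scale range are placeholder values, so tune them to your scene:

```csharp
using UnityEngine;

public class AsteroidFieldGenerator : MonoBehaviour
{
    [SerializeField] private GameObject asteroidPrefab;
    [SerializeField] private int asteroidCount = 1000;
    [SerializeField] private float fieldRadius = 100f; // placeholder value

    private void Start()
    {
        for (int i = 0; i < asteroidCount; i++)
        {
            // Random position and rotation inside a sphere of fieldRadius.
            var position = Random.insideUnitSphere * fieldRadius;
            var asteroid = Instantiate(asteroidPrefab, position, Random.rotation);

            // Random scale variety - exactly the case where GPU instancing
            // helps and dynamic batching falls short.
            asteroid.transform.localScale = Vector3.one * Random.Range(0.5f, 3f);
        }
    }
}
```

Drop the script on an empty GameObject, assign the asteroid prefab (with "Enable Instancing" checked on its material), and compare the draw calls in the Stats window with instancing on and off.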

The Results

Instancing Off

Instancing On

That’s right, the draw calls drop from 4165 to 11!  It’s a huge difference.

What about dynamic batching?

You may be wondering how this is different from dynamic batching.

There are 2 things that really stand out to me.

#1. Dynamic batching requires the objects to be the same scale.  In many cases that won’t matter, but for things like asteroids, trees, or other objects where you want scale variety, instancing comes to the rescue.

#2. GPU instancing is done on the GPU while dynamic batching is on the CPU.  Moving this load over to the GPU frees up CPU time and makes for more efficient use of the system overall.

Conclusion

Instancing is here, it’s great, and you should definitely start checking that box whenever you’re going to have many instances of a mesh in the scene.  It’s an easy-to-use optimization and may be one of my favorite Unity 5.6 changes.


360 Video in Unity 5.6

Unity 5.6 introduced a bunch of great new features, including a new video player.  If you tried playing video in Unity 5.5, you know it was a bit of a pain and generally required an expensive plugin.  I tried doing some Unity 360 video projects in the past and ran into nothing but pain.  I’m happy to say though that the latest version makes it much easier to render 360 video in Unity.  The contents of this guide were written with the GearVR and Google Daydream in mind, but should be applicable to any setup.

Note on 3D 360

This post covers 360 video but not stereo 3D video.

If you’re looking to play 3D 360 Video check out this post: https://unity3d.college/2017/07/31/how-to-play-stereoscopic-3d-360-video-in-vr-with-unity3d/

How to get started

Before you can render 360 video, you’ll need a surface to render it on.  Typically, we use a sphere with inverted normals (the sphere renders on the inside only).

You can create your own, search online, or download this one here: https://unity3dcollegedownloads.blob.core.windows.net/vrgames/invertedsphere.fbx

To create an inverted sphere, follow the steps outlined here – https://unity3d.college/2017/05/15/building-an-interactive-mobile-360-video-player-gearvr/

Setup

Create a new scene

Drag your inverted sphere to the Hierarchy.

Reset its position to (0, 0, 0), if it’s not already.

Set the sphere’s scale to (-3000, 3000, 3000).

If you’re using a different sphere than the one provided above, you may need a different scale value.

If the scale’s X value weren’t set negative here, the video would appear backward.

Reset your camera’s position to (0, 0, 0)

Select a 360 video to play.

You can shoot your own or download one.

I used this video to test – https://www.youtube.com/watch?v=a5goYOaPzAo, but most 360 videos should work.

If you aren’t sure how to download videos from YouTube, this is the site I use to do it: http://keepvid.com/

Drag the video onto the sphere in the scene view.

Select the Inverted Sphere

You should see the Video Player component added to the sphere.

You’ll also see the Video Clip assigned with your 360 video.

Because Play on Awake is selected, you can just hit play now and watch your video start playing.

Play Controls

Ready to add some basic controls to the player?

Let’s add keyboard controls to pause and play the video.

Create a script named VideoPlayerInput.cs

Replace the contents with this:
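A minimal sketch of what VideoPlayerInput.cs can contain, assuming the Video Player component lives on the same sphere; the script name matches the post, but the body is a reconstruction:

```csharp
using UnityEngine;
using UnityEngine.Video;

[RequireComponent(typeof(VideoPlayer))]
public class VideoPlayerInput : MonoBehaviour
{
    private VideoPlayer _videoPlayer;

    private void Awake()
    {
        // Grab the VideoPlayer that was added when we dragged the clip on.
        _videoPlayer = GetComponent<VideoPlayer>();
    }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space) == false)
            return;

        // Toggle playback with the Space key.
        if (_videoPlayer.isPlaying)
            _videoPlayer.Pause();
        else
            _videoPlayer.Play();
    }
}
```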

Add the VideoPlayerInput component to the Sphere.

 

Press Play

Use the “Space” key to pause and resume your video.

Conclusions

With Unity 5.6, playing video has become simple.  You can play 2D video, 360 video, and even render video on any material you want (we used an inverted sphere, but you could attach it to a fire truck or whatever other model you pick really).

Try out all the new functionality, and if you build something interesting, share it below!


Add Unity Analytics to your Unity3D game today!

By Jason Weimann / April 9, 2017

Should you use Unity Analytics?

Definitely!!!  If you’re building a game and expect to have anyone playing it, there’s really no good excuse not to use the Unity Analytics system.

The Unity Analytics system is free, easy to integrate, and if used properly can provide great insight into what your players are doing and how to make your game better.

With even the most basic Unity Analytics setup, you can track player growth and retention, with nothing more than the flip of a switch.

If you decide to go further though, the possibilities are amazing.

How do I enable Unity Analytics?

To enable the analytics system, you need to open the Services tab.

If you’re not logged in, you’ll see the Sign in button.

Sign into your Unity account.

Next you’ll see a screen to create your project, just click the Create button.

Click the button on Analytics to toggle it from Off to ON

Click the “Enable Analytics” button

If your game is targeted at kids, check the box.

Then click the Continue button

That’s it, basic analytics are enabled.  See how simple that was?

How Can I test it?

The first thing you can do is hit Play.

When you do, you should see data appear in the Validator (still on that analytics/services tab)

Checking out your data on the webpage

You can visit the analytics site here: https://analytics.cloud.unity3d.com

When you do, you’ll see your project but no data.

This is because there’s a delay in the data processing.  In my experience it can range anywhere from a few hours to a day.

So don’t worry, your data is probably getting written, just check back later to view it.

Custom Unity Analytics Data

What we have so far will give you some really useful info like Daily Active Users (DAU), Monthly Active Users (MAU), retention, session count, and more.

But to get the most out of the system, you really want to track actionable items.

Custom data is easy to track too.  It can be as simple as a single call to Analytics.CustomEvent, or you can build a dictionary of data and send it all at once.

Personally, I like to create a new class to handle wrapping all my analytics calls so that they’re not scattered throughout the project.

Let’s take a look at a sample of how I prefer to setup mine.
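Here’s a sketch of how such a wrapper can look.  LevelController and PlayerController (and the shape of their events) are assumptions standing in for whatever classes raise these events in your game; only the Analytics calls are the real Unity API:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Analytics;

public class AnalyticsController : MonoBehaviour
{
    private void Awake()
    {
        // Survive scene loads - this object lives in the opening scene.
        DontDestroyOnLoad(gameObject);

        // Assumed events from your own game code.
        LevelController.OnLevelCompleted += HandleLevelCompleted;
        PlayerController.OnPlayerDied += HandlePlayerDied;
    }

    private void HandleLevelCompleted(string levelName)
    {
        // Simple string events are easy to build funnels on.
        Analytics.CustomEvent("CompletedLevel_" + levelName);
    }

    private void HandlePlayerDied(Vector3 position, string killedBy)
    {
        // Richer events go into a dictionary for later analysis.
        Analytics.CustomEvent("PlayerDied", new Dictionary<string, object>
        {
            { "positionX", position.x },
            { "positionY", position.y },
            { "positionZ", position.z },
            { "killedBy", killedBy }
        });
    }

    private void OnDestroy()
    {
        LevelController.OnLevelCompleted -= HandleLevelCompleted;
        PlayerController.OnPlayerDied -= HandlePlayerDied;

        // Push any queued events up to the servers before we close.
        Analytics.FlushEvents();
    }
}
```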

I start by preventing the object from getting destroyed when we switch scenes (I always include this object in my opening/loading/menu scene).

Then I’ll register for events on things I care about.  In this example, I’m registering with our level controller and player controller events.

Level Events

The LevelController event will happen whenever a player beats a level, and we simply send an event with a string that appends the levelName.

Since this data is pretty simple, I prefer to have it in an easy to use format that I can build funnels on in the Analytics webpage UI.

Player Events

The OnPlayerDied event has a bit of extra info I want though.  In there, I track the position the player died at and what killed them.

I then put that data into a dictionary and pass that dictionary into the CustomEvent for analysis later.

Analytics.FlushEvents

The last thing to notice is in the OnDestroy method.  Here, I force the analytics system to push the data up to the servers.

To minimize networking data, the system determines when to send updates on its own.

If we don’t call FlushEvents on close, we may not get the player’s analytics data until the next time they launch the game.

If they never launch it again, we’ll never get the data, and never have any insight into why they didn’t come back.

 

Conclusion

If you’re not using analytics, just turn it on…  Even if you’re not sure what data you want to capture, or what you’ll do with it…. just turn it on.

And once you have some idea of even the basic things you may care about (where did my players die, how many levels are they beating, etc.), you can start tracking that data easily.

Maybe you’ll find out “NOBODY is beating level 3!”, or “Nobody’s activating this special ability”…  Then you can adjust level 3, change the instructions or controls for the special ability, and compare.

Or alternatively you can make wild guesses about what your players are doing and what they like…. but I think you’ll prefer getting the data 🙂


HTC Vive Tracker Unity3D / SteamVR Setup

Friday, my Vive trackers finally arrived.  When I opened them, I wasn’t sure what to expect… would they be cool, could I do something fun with them, or would they just sit on a shelf?  After just one day, I have the answer… they’re awesome!

So far, I’ve barely played with them, but I’ve already put them on my feet, stomped some civilians and kicked some buildings..

And now I’ve strapped one on to a tennis racket and dropped my wife onto a baseball field to smack tennis balls.

How do you use them?

Since they don’t really require any special code to work, I wanted to give a quick demo of how to make them work in Unity with the SteamVR plugin.

Camera Rig Setup

The default camera rig gives you a left and right controller, and works fine as a starting point for many VR projects.

To add trackers, create new children under the [CameraRig] with the context menu.

I used cubes to start and scaled them to (0.1, 0.1 , 0.1 ).

Once you create the objects, add a SteamVR_TrackedObject script to them.

Now select the [CameraRig] prefab and add the tracker(s) to the Objects array.

Turn on all the controllers & tracker(s).

I’ve noticed that the trackers don’t show up if both controllers aren’t on.  I haven’t yet dug into why, so if you already know, drop a comment at the bottom and share please 🙂

Hit Play.

That’s it… they’ll just work and track from there.

The Tennis Racket

One thing to note when you’re setting these trackers up is the position of the tracker on the object.

Unless you’re holding the tracker, it’s going to be offset a bit, and you need to take that into account when positioning things.

For example, with my tennis racket, the tracker is attached to the strings like this.

In-game, my tracker object has the racket as a child, and the object is lined up so that the racket’s pivot under the tracker matches closely with where the tracker is placed on the actual racket.

 

Conclusions & Other Games

I have to say I really love these things.  They’re pretty light, they track great just like the controllers, and for some reason I feel more comfortable attaching them to things.. (even though they’re almost the same price).

If you’re unsure about getting one, I’d say do it… they’re well worth the cost IMO even if you’re just playing around with them in the editor.

When it comes to in-game support, I think it’ll be a bit before they’re common, but I do expect to see games start adding capabilities over the next few months.  I know I really want to put support into a few games myself.

In conclusion though, I’m excited to see what people come up with, and to experiment and make fun things..  If you happen to have some ideas, please share below or drop me an email.


Contest Winners!

By Jason Weimann / March 7, 2017

Many weeks ago, I sent out an open invitation for members of the site to enter a special contest.

The rules were simple: create a quick game with Unity, submit a video or playable version, and try to use an asset from the contest’s sponsor, BitGem.

The response for this contest was great, we had a ton of awesome submissions come flying in from all around the world.

Today, I’m going to reveal the winners to the public and give you a quick glimpse of some of the projects that were created.

First Place “ImpBall” – $100 Cash

DOWNLOAD (51MB)

The first place winner came from Adrian Higareda.

ImpBall blew me away.  It’s a really neat little game with a lot of character to it.  Even the UI was really well done, and the game is just fun.  It’s even got good audio.  All around, it’s a very awesome submission and worth trying out.

 

 

Second Place “Ad Noctis” – $100 BitGem Voucher

DOWNLOAD (42MB)

Ad Noctis surprised me.  Submitted by Javier Alvear, it starts out with a cool cut scene (done in-game), then drops into a boss fight.

You play as an archer battling a boss monster built from a chest full of weapons.

The boss has special moves, charging and swinging, and is difficult to beat…

Overall, it’s a great experience and a really awesome submission.

Javier also has a Facebook page for his team here – https://www.facebook.com/KiltroGameStudio/?fref=ts

 

Third Place “Skelinvasion”

DOWNLOAD (21MB)

And rounding off an extremely close 3rd place is “Skelinvasion”.  It felt really polished, and one of my fellow reviewers described it as “Like Gandalf meets Legolas”.

Skelinvasion is, of course, a game where you fight off an invasion of skeletons.  Created by Miguel Martorell, it’s a fun 3rd-person experience well worth checking out.

He’s even created a nice YouTube video to show off some of the gameplay.

Honorable Mentions

The top 3 winners were great, but there were so many great submissions that at the very least deserve some mention and applause.

Qutopia

A very cool Augmented Reality experience

BowmanVSZombies

Arrows and waves of zombies, you’re the bowman trying to stop the horde!

Skeletons and Treasure

This was a great Android submission and is even available on the Google Play store.

ARTest

Another augmented reality submission that used multiple devices.  Definitely some potential here.

BenBones

I loved the name of this one, and the game.  An endless runner with tons of levels and a load of fun.  If there was a 4th place, this probably would have taken it.

StopThem

The second Android game in our mentions, a cool endless wave game with nice tap inputs.

 

More Contests?

I really enjoyed this experience; the submissions were amazing.

Since the contest ended, I’ve had a few people ask me if there’d be another contest anytime soon.

If you’re interested in being a part of one of these, or just want to see others’ submissions, maybe even with some public voting, drop a comment at the end to let me know.

The more comments that appear, the sooner I’ll kick off contest #2 🙂

 


Pooled Decals for Bullet Holes

The Bullet Decal Pool System

Today’s article came out of necessity.  As you probably know, I’m wrapping up my long awaited VR course, and one of the last things I needed to create is a decal setup for the game built in it.  To do decals properly, you’d want a full-fledged decal system, but for this course and post, we have a system that does exactly what we need and no more.

What is that?  Well, it’s a system to create a bullet hole decal where you shoot.  And to do it without creating and destroying a bunch of things at runtime.

It’s worth noting that I wrote this system for a VR game, but it is completely applicable to a normal game. This would work in a 3d game, mobile game, or anything else that needs basic decals.

The end result will look something like this

How do we build it?

Let’s take a look at the code
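Here’s a sketch of the pool, reconstructed from the breakdown that follows; field and method names follow the post, but the exact bodies (and the class name) are assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BulletDecalPool : MonoBehaviour
{
    [SerializeField] private GameObject bulletHoleDecalPrefab;
    [SerializeField] private int maxConcurrentDecals = 10;

    private Queue<GameObject> decalsInPool;
    private Queue<GameObject> decalsActiveInWorld;

    private void Awake()
    {
        InitializeDecals();
    }

    private void InitializeDecals()
    {
        decalsInPool = new Queue<GameObject>();
        decalsActiveInWorld = new Queue<GameObject>();

        // Pre-create the pooled decals up front.
        for (int i = 0; i < maxConcurrentDecals; i++)
            InstantiateDecal();
    }

    private void InstantiateDecal()
    {
        var spawned = Instantiate(bulletHoleDecalPrefab, transform);
        spawned.SetActive(false);
        decalsInPool.Enqueue(spawned);
    }

    private GameObject GetNextAvailableDecal()
    {
        if (decalsInPool.Count > 0)
            return decalsInPool.Dequeue();

        // Pool is empty - re-use the oldest decal placed in the world.
        if (decalsActiveInWorld.Count > 0)
            return decalsActiveInWorld.Dequeue();

        return null;
    }

    public void SpawnDecal(RaycastHit hit)
    {
        var decal = GetNextAvailableDecal();
        if (decal == null)
            return;

        // Place the decal at the hit point, facing along the surface normal.
        decal.transform.position = hit.point;
        decal.transform.rotation = Quaternion.LookRotation(hit.normal);
        decal.SetActive(true);

        decalsActiveInWorld.Enqueue(decal);
    }
}
```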

Code Breakdown

Serialized Fields

We open with 2 serialized fields.

bulletHoleDecalPrefab – The first determines which decal prefab we’ll use.  If you’re building a more generic decal system, you may want to rename this.  Because it’s part of a VR course, I left the name as is, but if I were putting this in another game, it’d likely be more generic, or maybe even an array that’s randomly chosen from.

maxConcurrentDecals – This sets the maximum number of decals the system will show.  We do this primarily for performance, but also to avoid visual cluttering.  Having too many decals could cause a hit on rendering, remember each one is a transparent quad.  This number is variable in the editor though, so you can adjust it as you see fit for your game.

 

Private Fields

We have two private fields in this class.  They’re both using the Queue type to keep a first in first out collection of decals.

decalsInPool – This is where we’ll store the decals that are available and ready to be placed.

decalsActiveInWorld – These are the decals that we’ve placed in the world.  As our pool runs empty, we’ll start grabbing decals from here instead.

 

Awake

Awake simply calls our InitializeDecals() method.

 

Private Methods

InitializeDecals() – This is our setup.  Here, we create our queues, then we use a loop to create our initial pooled decals.

InstantiateDecal() – Here we do the actual creation of a single decal.  This is only called by InitializeDecals & a special editor only Update you’ll see soon.

GetNextAvailableDecal() – This method gets the next available decal… useful description, eh?  It actually just looks at the pool; if there’s at least one decal in it, the method returns the first one in the queue.  If there’s no decal in the pool, it returns the oldest decal that’s active in the world.

 

Public Methods

SpawnDecal(RaycastHit hit) – This is our only public method; it’s the one thing this class is responsible for doing.  In the code that calls it, we’re doing a raycast to determine where our bullet hits.  The raycast returns a RaycastHit, and we pass it into this method as the only parameter.

The method uses GetNextAvailableDecal() and, assuming a decal is available, places that decal at hit.point, rotates it to match hit.normal, and sets the decal active.  The method ends by adding the decal to the decalsActiveInWorld queue.

 

#if UNITY_EDITOR?

Everything else in this class is actually wrapped to only run in the editor.

This code has a single purpose, to update our queue size at runtime.

It’s absolutely not necessary for your decal system, but it’s a nice little thing I enjoy having 🙂

I won’t cover each method, but you should play with the queue size at run-time and watch as it keeps everything in sync.

 

 

 


Oculus Haptic Feedback


Why you need it and how to get started quickly

Vive & Oculus haptic feedback systems are extremely important and often overlooked…  All too often, they get skipped or added in at the last second (I’m guilty of that myself).

Adding just a little haptic feedback in the right places gives a huge boost to immersion, and leaving it out gives the feeling that something just isn’t quite right.

Why’s it hard?

SteamVR haptics on Touch controllers don’t work at all… (at least at the time of this writing)

Recently, Oculus changed their haptic system, breaking my old code when I upgraded the SDK….

So I’ve written a wrapper.  It handles haptics for either system and makes them extremely easy to use.

For this post, I won’t dig into the entire cross platform wrapper, but will instead give you the basics to get started with Oculus haptics in a quick and easy way.

Prerequisites (Oculus)

For this to work, you’ll need the Oculus Utilities for Unity in your project.

https://developer3.oculus.com/downloads/game-engines/1.11.0/Oculus_Utilities_for_Unity_5/

Shared Code

In the more complete version of my projects, I have quite a bit of shared code and swap between implementations with #defines.

For this simplified sample though, we still have one piece of shared code.  That code is an enum which specifies how strong the feedback vibration effect should be.

The Code (Oculus)

The Oculus code is designed to bypass the need for custom audioclips.  While the clip system is pretty cool and could do quite a bit, it’s not cross platform, and much harder to get started with in my opinion.

 

In the code, we generate 3 OVRHapticsClips named clipLight, clipMedium, & clipHard.  As you may have guessed, these correspond to our enum values.
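A sketch of both pieces (the shared enum and the Oculus script) in one place.  The enum and class names, sample counts, and amplitude values are assumptions; the OVRHaptics calls assume the Oculus Utilities for Unity are in the project:

```csharp
using UnityEngine;

// The shared piece: how strong the vibration should be.
public enum VibrationForce
{
    Light,
    Medium,
    Hard
}

public class OculusHaptics : MonoBehaviour
{
    [SerializeField] private OVRInput.Controller controllerMask;

    private OVRHapticsClip clipLight;
    private OVRHapticsClip clipMedium;
    private OVRHapticsClip clipHard;

    private void Start()
    {
        // Generate the 3 clips by writing raw amplitude samples,
        // bypassing the need for custom audio clips.
        clipLight = CreateClip(45);
        clipMedium = CreateClip(100);
        clipHard = CreateClip(255);
    }

    private OVRHapticsClip CreateClip(byte amplitude)
    {
        var clip = new OVRHapticsClip();
        for (int i = 0; i < 10; i++)
            clip.WriteSample(amplitude);
        return clip;
    }

    public void Vibrate(VibrationForce force)
    {
        // Pick the channel that matches the assigned controller mask.
        var channel = controllerMask == OVRInput.Controller.LTouch
            ? OVRHaptics.LeftChannel
            : OVRHaptics.RightChannel;

        switch (force)
        {
            case VibrationForce.Light:
                channel.Preempt(clipLight);
                break;
            case VibrationForce.Medium:
                channel.Preempt(clipMedium);
                break;
            case VibrationForce.Hard:
                channel.Preempt(clipHard);
                break;
        }
    }
}
```

Preempt cancels whatever the channel is currently playing and starts the new clip, which is usually what you want for short impact-style feedback.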

Use (Oculus)

To use the haptics code, add the script to your controller object.

Assign the controller mask as L Touch or R Touch (whichever matches the controller it’s attached to).

Then call the Vibrate method when you want it to shake.

Demo (Oculus)

If you’re not quite sure how you’d use the code, here’s a real quick sample.  Ideally, you’d have something per controller, managing events for the controller, like a generic VR Controller script that ties these all together and works cross platform.

But to get you started quickly, here’s a simple sample that will vibrate the controllers when you press space.  (just remember to assign the 2 controllers)
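A sketch of that sample, assuming the haptics script is named OculusHaptics and exposes a Vibrate method taking the strength enum (both names are assumptions):

```csharp
using UnityEngine;

public class HapticsDemo : MonoBehaviour
{
    // Assign the two controller objects in the inspector.
    [SerializeField] private OculusHaptics leftController;
    [SerializeField] private OculusHaptics rightController;

    private void Update()
    {
        // Press Space to buzz both controllers.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            leftController.Vibrate(VibrationForce.Medium);
            rightController.Vibrate(VibrationForce.Medium);
        }
    }
}
```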

What about the Vive?

I’ll cover Vive haptics soon, likely with a more generic implementation that you can use across both platforms.

If you’re really interested in Vive development though, I’m working on a quick weekend long guide that covers everything you need to know to get started, including haptics.  Just sign up (at the top of this post) and I’ll get you the info as soon as it’s ready.


GIT for Unity Developers – Remotes

By Jason Weimann / February 7, 2017

What’s a Remote?

So far, everything we’ve done is on your local computer.

We haven’t added any remote servers, and if your computer gets stolen, your hard drive dies, or some other disaster strikes, you’ll wish your project was backed up somewhere else.

Adding a remote server also allows you to collaborate with others, but even if you don’t plan to do that, the benefits for yourself are more than worthwhile.

Adding a Remote

First, you’ll need to create an account on one of the big remote git hosting sites.

You’re welcome to use whichever you like.  I use BitBucket so I’ll guide you through setting one up on there.

Create an account on https://bitbucket.org or log into your existing one.  (Sourcetree and BitBucket use the same credentials)

Once you’re logged in, you’ll be at your “Dashboard”

From there, click the repositories menu, then “Create Repository”.

Give the repository a proper name and click “Create repository”.

Once the repository is created, you’ll be redirected to the project “overview” page.

On there, you’ll see a URL that looks something like this, but with your username.

Select the text and copy it.  This has the full URI for your repository remote.

For example, mine is: https://jasonweimann@bitbucket.org/jasonweimann/git-branching.git

Back to SourceTree

Now that you have your remote URI, switch back to sourcetree then select Repository->Repository Settings…

Click the “Add” button to add a remote.

Check the “Default Remote” box.

Paste your URI from the clipboard into the “URL / Path” field.

Click OK.

When you’re prompted to log in, use the same credentials you used for the bitbucket account.

Pushing to the Remote

Now it’s time to push.

The Push command tells GIT to send your current branch to the remote repository.

Click the “Push” button.

Click the checkbox for “master”, then click OK.

You’ll see the progress bar for a second or two.

If all goes well, you’ll be back in your “master” branch view.

Take note of 2 new things though.

On the left, under “Remotes” we now have “origin”.

And in the commit history, we can see that the “origin/master” branch is set to the same commit as our local “master” branch.

This means that our branches match, which we’d expect since we just pushed our master branch to the remote “origin”.

Pulling from Remotes

Pushing your work to a remote server is great, but if you don’t know how to pull from the server there’s no value.

Of course like most things in GIT, it’s not too hard of a process, but there are some intricacies that are important to understand.

To try this out, let’s clone the repository into a new folder, from the remote.

Click the “Clone / New” button.

You’ll be presented with a dialog to enter your Source URL & Destination path.

Paste the URL that you used for the remote origin into the Source Path / URL field.

For the destination path, make sure the directory is DIFFERENT from the directory you’re working in.  In my screenshot, you can see I’ve placed it into a folder named “gitpull”.

Click the Clone button.

Wait for the clone progress window to finish.

When it finishes, you’ll be greeted with something that looks very familiar.

Why does this look exactly the same?

What we’ve done here is take our remote repository and clone it to our local system in a brand new folder.

This is the same process that another person would follow to clone your repository and work with you.

I want to be clear though that we’re doing this for learning purposes only, there’s no good reason I can think of for you to clone your own repositories in more than one place on a system.

If you want to work on a desktop and a laptop though, or some other multi computer setup, this is exactly the process you’d follow (though you could have the same local directory / Destination Path since they’d be on different computers).

Let’s Push and Pull again!

Switch to the Git Branching repository by clicking on the tab.

All of your recently opened repositories show up as tabs in sourcetree.  If you don’t see the one you want, they’re also bookmarked on the left.  Double click the one on the left to open it as a tab.

Select the master branch.  It should look like this.

Back to Unity

Let’s jump back over to Unity now.

Right click on the TallBox and create a new Cube as a child.

Adjust the transform values so it looks something like this.

Apply the prefab.

Save the project.

Before we go back to GIT

Let’s open another instance of Unity.

Open the project located in the folder you did the pull to.

Mine was c:\git\gitpull.

It should look just like your project did before we edited the prefab.

Back to SourceTree!

Okay it’s time to commit, push, and pull.

If your working copy looks like this, go back to Unity and make sure you saved.

Hit the Stage All button.

Notice that in my unstaged files, I had the TestScene, but when I hit Stage All, I only have the prefab.

This is because the Scene isn’t actually changed.  The timestamp on it updated when I saved it earlier so it showed up in the Unstaged area.

But the contents didn’t change at all since we only modified the prefab, so when we go to stage, it does a comparison and realizes it doesn’t need to be committed.

Enter a commit message “Added top to the TallBox” and click commit.

Now push the commit to your remote.

You’ll see a dialog like this again, click Push.

Now select the tab of your gitpull repository.

Nothing changed??

It probably looks the same, that’s just because it hasn’t refreshed.

Click the Fetch button to get a refreshed view of the repository and its remotes.

You’ll get this dialog, leave the default and just click ‘OK’.

It’s changed! The remote is ahead

Now when you look at your master branch, you’ll see something that could be confusing.  I know it confused me at first…

What this image is conveying is that the “origin/master” branch on the remote is ahead of my local “master” branch.

The local master branch even says on it that it’s “1 behind”, meaning there’s 1 commit on the remote that we don’t have locally.

 

Don’t Pull yet

We could hit pull and get our local master up to the same commit as the remote, but before we do that, let’s reproduce a VERY common issue that people get caught up on every day.

This needs to be done in the other instance of Unity (the ‘gitpull’ project), not the primary one.

Go back to Unity

Make sure you’re in the ‘gitpull’ project.

To make things easier, I’d recommend opening both projects in separate Unity instances if you can.  If you’re resource limited though, you can just open the projects as needed.

Open the TestScene.

In the Project View, create a new folder and name it “Materials”.

In the Materials folder, create a new material, then name it “Blue”

Select the “Blue” material and change the color in the inspector.

Assign the blue material to the tallbox prefab.

Click Apply to update the prefab

Save your project

 

Save the scene.

 

Back to SourceTree

In the ‘gitpull’ repository, click pull.

Click ‘OK’

 

An ERROR!

You should have received this error message.

It’s telling you that you can’t pull because you have local changes that would be overwritten if you did.

Let’s look at those local changes.

Go to your “Working Copy”

Here, you’ll see we do have changes, and since TallBox.prefab was modified in the most recent commit, it definitely would conflict.

Let’s see how to resolve that!

Stage your changes.

Commit your changes in the working copy.

Switch back to the “master” branch view.

Pull again

Success!!!

But what’s this??  “Uncommitted changes”??

What’s happened here is GIT has automatically created a merge for you.  It knows that there were changes to the TallBox.prefab file from a remote, and that you had changes to it locally.

Now it’s up to you to decide how to resolve it.

Go back to the “Working Copy”

Great, in this instance there’s no conflict!  We’ve done a successful merge, and because the properties we changed were different, everything worked automatically!

It’s worth noting that often you’ll run into a conflict.  In that case, it’s up to you to decide which version to take.  In some instances, you can combine the changes and get something that works, but with prefabs, if there’s a conflict, it’s often more difficult to work around a conflict than to just re-create the change.

Click Commit

Go back to Unity

Check out the merged prefab!

 

Notes

This guide has taken you through a lot, and if you aren’t really comfortable with this process yet, try repeating it 2 or 3 times.  By then, if I haven’t totally messed this up, it should be a bit more apparent how things are merged, how remotes work, and how you can get some real value out of git.

Again, I’d really recommend you practice with this simple project before trying to do it on your real projects.  Here, if something goes wrong, you just start right over, in a real project, if you mess up, there’s the possibility you waste hours trying to fix stuff 🙂

If there’s demand for more on GIT, I’ll continue on with this series.  There are still plenty of topics to cover… branching, more in-depth merging, stashing, etc…

If you’d like more GIT tutorials, just drop a comment and let me know.

Thanks!


Unity3D GIT Tutorial – Getting Started

By Jason Weimann / January 24, 2017

This post is the first in a series of articles covering GIT for Unity Developers.  GIT is a great tool for any developer, but if you’re new to it, there are quite a few concepts to cover, and I’ve found most tutorials only cover a small part. This series will focus on what’s needed to make good use of GIT with as little complication and confusion as possible.  While I know many developers love using the GIT CLI, I find it to be far from user friendly, and frankly a bit of a distraction for anyone who just wants to get work done.  So we won’t cover that, but we will cover the features and concepts I find most teams using day to day.  By the end of the series, you’ll be a Unity3D GIT pro!

Prerequisites

To follow along, you’ll need Unity and SourceTree installed.  Any version of Unity should work fine; I’m doing this in 5.5.

Unity – You should know what this is if you’re reading this.

SourceTree – A free visual Git client for Mac and Windows

 

Project Setup

First, we’ll create a new project and name it “Git Branching”.

If you’re wondering why the project is named “Git Branching”, it’s because we’ll be using this same project from the start, all the way through to branching, merging, and more.

Browse to the project folder and create a new file there named “.gitignore”.  (The filename is blank, with only an extension of gitignore.)

Edit the .gitignore file with your favorite text editor (I use Notepad++ for these things).

Paste in this gist.
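The gist itself isn’t reproduced here, but a minimal Unity .gitignore needs to exclude at least the folders Unity regenerates on its own; a sketch along those lines:

```
# Unity regenerates these, so they should never be committed
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/

# Visual Studio / MonoDevelop files Unity rebuilds on demand
*.csproj
*.sln
*.userprefs
```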

Git Setup

Open SourceTree.

Click on the Clone / New button.

Select the Create New Repository tab.

Enter the destination path that matches your folder structure.  Mine is located in “c:\git\Git Branching”

Click Create

First Commit

The git repository will be created and you should see this.

If you see a bunch of files in the Library folder, your .gitignore file isn’t correct.  Go back and double check that you have it named right and in the correct location.

Click the “Unstaged Files” checkbox to move everything into the Staged files area.

At the bottom of the screen you’ll see the Commit message area.

Type in “Project Settings and .gitignore”, then click the Commit button.


Once the commit completes, switch from viewing the “Working Copy” to viewing the “master branch” by clicking on “master”.

Here, you’ll see our first commit, with the commit message we entered.

I often see people write short commit messages with no information about what changed.  Skipping a descriptive message saves a couple of seconds at commit time, but it generally costs far more later, when you’re trying to figure out when someone committed a broken change.  Commit messages also inform the rest of your team about what you’re doing (or remind you where you left off).

Serialization

When you use GIT as a source control engine, it’s recommended to use the Text serialization mode in Unity.  This is helpful when you need to merge later.

Open the Editor Settings page.

In the inspector, select “Force Text” for the serialization mode.

Now go back to SourceTree.

Switch back to the “Working Copy” view by clicking on “Working Copy”.

Stage all the changed files by clicking the “Unstaged files” checkbox.

Enter a commit message that says “Changed asset serialization mode to force text”.

Click Commit.

The reason all of these .asset files are ‘changed’ and ready to be committed is that they were initially saved in a binary format.  When we switched to ‘force text’ mode, they were re-saved as human readable text files.
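For example, after the switch, a settings file such as ProjectSettings/TimeManager.asset is stored as readable YAML along these lines (values illustrative):

```
%YAML 1.1
%TAG !u! tag:unity3d.com,2011:
--- !u!5 &1
TimeManager:
  m_ObjectHideFlags: 0
  Fixed Timestep: 0.02
  Maximum Allowed Timestep: 0.33333334
  m_TimeScale: 1
```

Because it’s plain text, git can diff it line by line, which is what makes merging possible later.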

If you look at the “master” branch again, you should see two commits now.

Let’s do some work (in Unity)

Okay, you’ve committed 2 things already, but haven’t done anything in the engine yet.

Let’s change that now.

Creating the TallBox

In Unity, create a cube

Name it “TallBox”.

Set the scale to (1, 2, 1).

Making the Prefab

Create a folder for prefabs.

Drag the “TallBox” into the prefabs folder to make it into a prefab.

Save the Scene

Save your scene in a folder named “Scenes” and name it “TestScene”.

Commiting the Prefab

Switch back to SourceTree.

In your “Working Copy” you should see 6 changed files.

Select the 3 that are related to the TallBox prefab.  You can batch select them and hit space or check them individually.

Your “Staged files” area should now contain the 3 TallBox related changes while your “Unstaged files” area still has the Scene changes.

Type in the commit message “Created TallBox prefab”, then click commit.

Why didn’t we do the other files?

The reason we only committed the prefab and not the scene is to keep our changes logically grouped.  Since the scene change isn’t directly related to the prefab, it’s good to give it its own commit.  For this example it may seem like overkill, but as your project grows, you’ll occasionally want to undo a commit.  The less that’s in each commit, the easier it is to deal with.  There are of course limits, but a good rule of thumb is to group all of the files for a single change or feature into one commit.

Commit the Scene

Select the 3 scene-related files and commit them.

Your master branch should now contain 4 commits.

Checking Out

Before we move to more advanced things like branching, let’s cover what the checkout command does.

In the “master” branch, right click on the first commit “Project settings and .gitignore”, then select checkout.

Commits are in order from newest to oldest.  The first commit would be at the bottom of the list.

You’ll be presented with a confirmation dialog that talks about your detached HEAD.  Don’t worry about it, check the “Clean” box and hit OK.

Detached head means you are no longer on a branch, you have checked out a single commit in the history – ralphtheninja

Go back to Unity and you’ll see that everything is gone.  That’s because we’ve checked out a commit from before the prefab was created or the scene was saved.

Not to worry though, our changes are safe in GIT.

To get back to them, select the most recent commit “Created Test Scene”, right click, and Checkout.

Go ahead and try this with the different commits and check Unity, you’ll see that you can get back to any state.
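Under the hood, SourceTree’s Checkout is just running git checkout.  A self-contained toy version of the same round trip (hypothetical repo and file names) looks like this:

```shell
# Hypothetical repo showing what Checkout does: files come and go with history.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "Project Settings and .gitignore"
echo "box" > TallBox.prefab
git add TallBox.prefab
git commit -qm "Created TallBox prefab"
first=$(git rev-list --max-parents=0 HEAD)   # hash of the very first commit
git checkout -q "$first"     # detached HEAD: back before the prefab existed
test ! -f TallBox.prefab && echo "prefab gone"
git checkout -q -            # back to the branch tip
test -f TallBox.prefab && echo "prefab restored"
```

`git checkout -` jumps back to the branch you were on before detaching, just like clicking the most recent commit in SourceTree.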

With GIT, you have fast, clean, bookmarked history.  And that’s just the start.

Remember that how clean and how well bookmarked it is completely depends on you keeping your commits separated and well described.


Unity3D Architecture – Understanding the Single Responsibility Principle

By Jason Weimann / January 10, 2017

Unity3D architecture is something that doesn’t get nearly enough attention. With most other software disciplines, there are standard ways of doing things that have grown and improved over time. The goal of this article is to help bring one of the key principles of software to Unity3D developers and show how it can help improve your projects and make you smile when you look at the code.

The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.

 

Robert C. Martin expresses the principle as follows: "A class should have only one reason to change."

What does this mean, and how does it apply to Unity game development?

To summarize, it means that when you create a class, it should do only what’s required to meet its single responsibility.

This applies to MonoBehaviours and plain old classes.

If you create a component for one of your prefabs, that component shouldn’t be responsible for more than a single thing.

Example: If you have a weapon class, it should know nothing about the UI system.  Inversely, a WeaponAmmoUI class shouldn’t need to know anything about how weapons work, and should instead ONLY work on the UI.

Reading that, you may think “if each class only does one thing, there are gonna be a lot of classes”.

CORRECT!

If you follow SRP, you’ll end up with a large number of very small classes. While that may seem strange at first, it actually gives you a huge benefit.

Consider the alternative. You could have a very small number of giant classes. Or you could even go to an extreme and just have one mega class that runs your entire game (I’ve seen this attempted before, it’s scary).

Skeptical?

Before I go into details of the benefits and how to integrate the SRP into your process, let me point out a very prominent example of SRP in your existing projects.

Take a look at the built in Unity components.  Look at the AudioSource component.  It has one responsibility, to play audio.  Audio isn’t played through a more general ‘entity’, ‘npc’, ‘random other abstract name’.  It plays through an AudioSource.

The same goes for a Renderer component, a Transform, a RigidBody, and any other component.  They each do one thing.  They do that thing well.  And complex behaviors often involve interaction between these components.

This is because the Unity team understands the benefits of SRP.

Benefits

Splitting up your logic into classes specifically responsible for one thing provides many great benefits:

  • Readability – Classes are easy to keep between 20-100 lines when they correctly follow SRP.
  • Extensibility – Small classes are easy to inherit from, modify, or replace.
  • Re-usability – If your class does one thing and does that thing well, it can do that thing for other parts of your game.

Example: (HP Bars)

Imagine your game has the very typical need of HP bars over your NPCs’ heads.

You could have a base NPC class that handles all things NPC including the HP bar.

Fast forward a few weeks and imagine you get a new requirement and need to put HP bars over some buildings that aren’t NPCs.

Now, you’re in the disastrous situation where you need to extract all that HP bar code out into something you can re-use, or even worse you end up copy/pasting the HP bar code from your NPC class to your Building class.

Let’s see how that looks in an actual project and how to fix it.

Here, we have an NPC class that handles taking damage and death, and also does some UI work.

This is a super simple version of an NPC to avoid overwhelming the post with needless extra code.

When you look at this class, take note of the number of things it’s doing.

  1. Managing Health
  2. Handling death
  3. Updating the UI

So this simplified NPC is already doing 3 things.
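The original script isn’t embedded here, but a before-picture of an NPC doing all three jobs might look like this (all names hypothetical):

```csharp
using UnityEngine;
using UnityEngine.UI;

// One class, three responsibilities: managing health, handling death,
// and updating the UI.
public class NPC : MonoBehaviour
{
    [SerializeField] private int _maxHP = 100;
    [SerializeField] private Slider _hpSlider; // UI concern leaking into the NPC

    private int _currentHP;

    private void Awake()
    {
        _currentHP = _maxHP;
    }

    public void TakeDamage(int amount)
    {
        _currentHP -= amount;                          // 1. managing health
        _hpSlider.value = (float)_currentHP / _maxHP;  // 3. updating the UI
        if (_currentHP <= 0)
            Die();                                     // 2. handling death
    }

    private void Die()
    {
        Destroy(gameObject);
    }
}
```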

But we need more stuff, like particles when our npc dies!

Now, we’re doing 4 things…  and it will of course explode into 10 or 20 things as the project continues.  Logic will get more complex.  The file will grow… and soon you’ll be in the soul sucking hell that is a 5000 line class.

I’ve seen plenty of 10-20k line classes as well, and even a 10k line method.

Let’s take it apart!

We need to take this class apart piece by piece.  For no particular reason, let’s start with the UI.

First, we’ll create a new class called HPBar.cs

This class will handle the HP Bar updating.  Right now, if it looks like a bit of overkill, wait until we need to extend it.

To make this work, we also need to update the NPC class.  HPBar.cs is looking for an OnHPPctChanged event to tell it when the UI should change.
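Neither script is shown here, but a sketch of what HPBar.cs might look like, assuming NPC now exposes a `public event Action<float> OnHPPctChanged;` that it raises whenever its HP changes:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sole responsibility: keep a UI slider in sync with the NPC's health percentage.
public class HPBar : MonoBehaviour
{
    [SerializeField] private NPC _npc;
    [SerializeField] private Slider _slider;

    private void OnEnable()
    {
        _npc.OnHPPctChanged += HandleHPPctChanged;
    }

    private void OnDisable()
    {
        _npc.OnHPPctChanged -= HandleHPPctChanged;
    }

    private void HandleHPPctChanged(float pct)
    {
        _slider.value = pct; // slider range assumed 0..1
    }
}
```

Subscribing in OnEnable and unsubscribing in OnDisable keeps the event hookup symmetrical and avoids dangling handlers when the object is disabled or destroyed.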

What have we gained so far?

At this point, we’ve separated a tiny part of a small class off into something else.  We’re doing it for a good reason though.  We know our projects grow, and we know that our UI components for HP are going to be more complex than a slider.  We’ll probably need to add floating HP text, maybe some numbers.  We might need to make the bars flash when stuff gets hit.  What we know for sure is that our HP UI system will grow, and now when we grow it, we don’t have to touch the NPC class at all.  Everything we need to do is nice and isolated.

Keep splitting!

Okay, we cut one part off, it’s time to move onto the next.  Let’s separate out the particle playing into an NPCParticles.cs class.
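A sketch of what NPCParticles.cs might look like, assuming the NPC also raises a `public event Action OnDied;` (names hypothetical):

```csharp
using UnityEngine;

// Sole responsibility: play a particle effect when the NPC dies.
public class NPCParticles : MonoBehaviour
{
    [SerializeField] private NPC _npc;
    [SerializeField] private ParticleSystem _deathParticles;

    private void OnEnable()
    {
        _npc.OnDied += HandleDied;
    }

    private void OnDisable()
    {
        _npc.OnDied -= HandleDied;
    }

    private void HandleDied()
    {
        _deathParticles.transform.parent = null; // survive the NPC being destroyed
        _deathParticles.Play();
    }
}
```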

Our NPC.cs file needs to update as well… take a look though and see if you notice anything.

It’s shrinking!!!!!

Let’s take this even further and see what happens…

Create another file named Health.cs
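The article’s exact script isn’t shown here, but a Health component along the lines it describes might look like this (field names hypothetical):

```csharp
using System;
using UnityEngine;

// Sole responsibility: track hit points and announce changes.
public class Health : MonoBehaviour
{
    [SerializeField] private int _maxHP = 100;

    private int _currentHP;

    public event Action<float> OnHPPctChanged;
    public event Action OnDied;

    private void Awake()
    {
        _currentHP = _maxHP;
    }

    public void TakeDamage(int amount)
    {
        _currentHP = Mathf.Max(0, _currentHP - amount);

        if (OnHPPctChanged != null)
            OnHPPctChanged((float)_currentHP / _maxHP);

        if (_currentHP == 0 && OnDied != null)
            OnDied();
    }
}
```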

Now we’ll update the NPC.cs file again.

We’ll also need to update the HPBar to look at Health instead of NPC.

And our particles also need to reference Health instead of NPC.

Cool it’s all split up… what now?

So far, I’ve shown you how to split the code up, but for this to stick, I want to show you some of the extensibility we’ve just gained.

Extending Health

Let’s imagine our game now has a new NPC type that we need to implement.

This NPC can only be killed by high damage weapons, and it always takes 5 hits to kill them.

They also become invulnerable for 5 seconds after being hit.

The bad option

We could modify our health class, add a bool field in there that we check in the editor for the NPCs that we want to use this behavior.  But we don’t know how many other types of health interaction we’ll need that could cause the Health class to balloon into a mess.

And we wouldn’t be following our single responsibility principle.

What should we do? – The good option

Let’s create a couple new files and modify our existing ones.

First, we’ll want to create an interface for health, named IHealth.cs

If you haven’t used interfaces before, you can get a quick understanding of how they work here – http://unity3d.college/2016/09/04/unity-and-interfaces/

This interface says that our classes implementing it must have a TakeDamage method that has a single integer parameter.  It must also have the two events we need for OnHPPctChanged and OnDied.
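Following that description, IHealth.cs would look something like this:

```csharp
using System;

// The contract: one damage method and two notification events.
public interface IHealth
{
    event Action<float> OnHPPctChanged;
    event Action OnDied;

    void TakeDamage(int amount);
}
```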

StandardHealth.cs

Our initial Health.cs class was pretty standard for a health system.  Because we’ll be adding new ones, let’s rename it from “Health” to “StandardHealth” (remember we have to rename the file as well).

The interface

We’ve also added IHealth after MonoBehaviour on line 4.  This tells the compiler that our StandardHealth class must implement the IHealth interface, and that it can be used for anything requiring an IHealth reference.

It’s Broken!

We haven’t even added the new health type yet, and we’ve already broken the project…

Because we renamed health, our references to the class have probably broken (unless we used the rename tooling in our editor).
Even if we didn’t break them, we still need to change our code to use the interface instead of the StandardHealth.

Let’s update NPC.cs first.  We’ll replace Health (or StandardHealth) with IHealth on line 7.

We’ll do the same thing for HPBar.cs on line 11.

And repeat for NPCParticles.cs on line 9.

Let’s add that new health type finally!

Now we’ll create a new Health type called “NumberOfHitsHealth”.

Like our StandardHealth, this implements the IHealth interface, so it can be plugged in anywhere we use health on our NPC.

Unlike the standard health component though, this one completely ignores the amount of damage done, and dies after a set number of hits.

In addition to that, it adds an invulnerability timer.  This prevents the NPC from taking damage more than once every 5 seconds.
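A sketch of how NumberOfHitsHealth might be implemented, matching that description (the “high damage” threshold field is my assumption about how only high-damage weapons get to count as hits; all names hypothetical):

```csharp
using System;
using UnityEngine;

// Dies after a fixed number of qualifying hits; damage never
// subtracts from a health pool.
public class NumberOfHitsHealth : MonoBehaviour, IHealth
{
    [SerializeField] private int _hitsToKill = 5;
    [SerializeField] private int _minDamageToCount = 50; // "high damage" threshold (assumed)
    [SerializeField] private float _invulnerabilityDuration = 5f;

    private int _hitsTaken;
    private float _lastHitTime = float.NegativeInfinity;

    public event Action<float> OnHPPctChanged;
    public event Action OnDied;

    public void TakeDamage(int amount)
    {
        if (Time.time < _lastHitTime + _invulnerabilityDuration)
            return; // still invulnerable from the previous hit

        if (amount < _minDamageToCount)
            return; // low-damage weapons can't hurt this NPC

        _lastHitTime = Time.time;
        _hitsTaken++;

        if (OnHPPctChanged != null)
            OnHPPctChanged(1f - (float)_hitsTaken / _hitsToKill);

        if (_hitsTaken >= _hitsToKill && OnDied != null)
            OnDied();
    }
}
```

Because it implements IHealth, the HPBar and particle components keep working untouched; the bar simply drops in fifths instead of tracking raw damage.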

Wrap Up

So now we’ve completely swapped out the health mechanics of this NPC, without needing to touch the NPC code at all (other than our initial conversion to use an interface).

If we decide to add more ways to manage health, we can simply create another implementation of IHealth, and drop that component onto the NPC.

Some other possible options might include

  • NPCs that take a single hit and lose HP over time for each hit
  • NPCs that regenerate HP where you need to kill them in a set amount of time
  • NPCs that are unkillable and never have their HP drop
  • NPCs that gain health when you shoot them (you could even swap to a component that heals them when they’re hit instead of damaging them at runtime)
  • Tons of other crazy ideas I haven’t come up with in the last 60 seconds.

Using the Single Responsibility Principle will make your development process much smoother.  It forces you to think about what you’re doing and helps discourage sloppiness.  If used properly, your job will become easier, code will be cleaner, projects will be more maintainable, and you’ll be a happier person!
