All posts in "Unity3D"

Unity3D Architecture – Understanding the Single Responsibility Principle

By Jason Weimann / January 10, 2017

Unity3D architecture is something that doesn't get nearly enough attention. With most other software disciplines, there are standard ways of doing things that have grown and improved over time. The goal of this article is to help bring one of the key principles of software to Unity3D developers and show how it can help improve your projects and make you smile when you look at the code.

The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.

 

Robert C. Martin expresses the principle as follows: "A class should have only one reason to change."

What does this mean, and how does it apply to Unity game development?

To summarize, it means that when you create a class, it should do only what's required to meet its single responsibility.

This applies to MonoBehaviours and plain old classes.

If you create a component for one of your prefabs, that component shouldn’t be responsible for more than a single thing.

Example: If you have a weapon class, it should know nothing about the UI system.  Conversely, a WeaponAmmoUI class shouldn't need to know anything about how weapons work, and should instead ONLY work on the UI.

Reading that, you may think “if each class only does one thing, there are gonna be a lot of classes”.

CORRECT!

If you follow SRP, you’ll end up with a large number of very small classes. While that may seem strange at first, it actually gives you a huge benefit.

Consider the alternative. You could have a very small number of giant classes. Or you could even go to an extreme and just have one mega class that runs your entire game (I’ve seen this attempted before, it’s scary).

Skeptical?

Before I go into details of the benefits and how to integrate the SRP into your process, let me point out a very prominent example of SRP in your existing projects.

Take a look at the built-in Unity components.  Look at the AudioSource component.  It has one responsibility: to play audio.  Audio isn't played through a more general 'entity', 'npc', 'random other abstract name'.  It plays through an AudioSource.

The same goes for a Renderer component, a Transform, a Rigidbody, and any other component.  They each do one thing.  They do that thing well.  And complex behaviors often involve interaction between these components.

This is because the Unity team understands the benefits of SRP.

Benefits

Splitting up your logic into classes specifically responsible for one thing provides many great benefits:

  • Readability – Classes are easy to keep between 20-100 lines when they correctly follow SRP.
  • Extensibility – Small classes are easy to inherit from, modify, or replace.
  • Re-usability – If your class does one thing and does that thing well, it can do that thing for other parts of your game.

Example: (HP Bars)

Imagine your game has the very typical need of HP bars over your NPCs' heads.

You could have a base NPC class that handles all things NPC including the HP bar.

Fast forward a few weeks and imagine you get a new requirement and need to put HP bars over some buildings that aren’t NPCs.

Now, you’re in the disastrous situation where you need to extract all that HP bar code out into something you can re-use, or even worse you end up copy/pasting the HP bar code from your NPC class to your Building class.

Let’s see how that looks in an actual project and how to fix it.

Here, we have an NPC class that handles taking damage and death, and also does some UI work.

This is a super simple version of an NPC to avoid overwhelming the post with needless extra code.
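The original screenshot of the class isn't preserved here, so below is a minimal sketch of what it plausibly looked like (names and numbers are my own assumptions):

using UnityEngine;
using UnityEngine.UI;

public class NPC : MonoBehaviour
{
    [SerializeField] private Slider _hpBar; // assumed UI slider reference

    private int _maxHP = 10;
    private int _currentHP = 10;

    public void TakeDamage(int amount)
    {
        _currentHP -= amount;                      // 1. managing health

        _hpBar.value = (float)_currentHP / _maxHP; // 3. updating the UI

        if (_currentHP <= 0)
            Die();                                 // 2. handling death
    }

    private void Die()
    {
        Destroy(gameObject);
    }
}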

When you look at this class, take note of the number of things it's doing.

  1. Managing Health
  2. Handling death
  3. Updating the UI

So this simplified NPC is already doing 3 things.

But we need more stuff, like particles when our NPC dies!

Now, we’re doing 4 things…  and it will of course explode into 10 or 20 things as the project continues.  Logic will get more complex.  The file will grow… and soon you’ll be in the soul sucking hell that is a 5000 line class.

I've seen plenty of 10-20k line classes as well, and even a 10k line method.

Let’s take it apart!

We need to take this class apart piece by piece.  For no particular reason, let’s start with the UI.

First, we’ll create a new class called HPBar.cs

This class will handle the HP Bar updating.  Right now, if it looks like a bit of overkill, wait until we need to extend it.
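The post's HPBar code isn't preserved here, so this is a sketch of what it plausibly looked like, assuming a UI Slider and an NPC reference wired up in the inspector:

using UnityEngine;
using UnityEngine.UI;

public class HPBar : MonoBehaviour
{
    [SerializeField] private Slider _slider;
    [SerializeField] private NPC _npc;

    private void OnEnable()
    {
        _npc.OnHPPctChanged += HandleHPPctChanged;
    }

    private void OnDisable()
    {
        // Always deregister from events when the object is disabled.
        _npc.OnHPPctChanged -= HandleHPPctChanged;
    }

    private void HandleHPPctChanged(float pct)
    {
        _slider.value = pct;
    }
}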

To make this work, we also need to update the NPC class.  HPBar.cs is looking for an OnHPPctChanged event to tell it when the UI should change.
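A sketch of the updated NPC, now raising an OnHPPctChanged event instead of touching the UI directly (the exact original code isn't shown, so treat the details as assumptions):

using System;
using UnityEngine;

public class NPC : MonoBehaviour
{
    public event Action<float> OnHPPctChanged;

    [SerializeField] private ParticleSystem _deathParticles;

    private int _maxHP = 10;
    private int _currentHP = 10;

    public void TakeDamage(int amount)
    {
        _currentHP -= amount;

        if (OnHPPctChanged != null)
            OnHPPctChanged((float)_currentHP / _maxHP);

        if (_currentHP <= 0)
            Die();
    }

    private void Die()
    {
        _deathParticles.Play(); // particle playing still lives here... for now
        Destroy(gameObject);
    }
}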

What have we gained so far?

At this point, we’ve separated a tiny part of a small class off into something else.  We’re doing it for a good reason though.  We know our projects grow, and we know that our UI components for HP are going to be more complex than a slider.  We’ll probably need to add floating HP text, maybe some numbers.  We might need to make the bars flash when stuff gets hit.  What we know for sure is that our HP UI system will grow, and now when we grow it, we don’t have to touch the NPC class at all.  Everything we need to do is nice and isolated.

Keep splitting!

Okay, we cut one part off, it’s time to move onto the next.  Let’s separate out the particle playing into an NPCParticles.cs class.
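A sketch of NPCParticles, assuming the NPC now also exposes an OnDied event (and drops its own particle-playing code):

using UnityEngine;

public class NPCParticles : MonoBehaviour
{
    [SerializeField] private ParticleSystem _deathParticles;
    [SerializeField] private NPC _npc;

    private void OnEnable()
    {
        _npc.OnDied += HandleDied;
    }

    private void OnDisable()
    {
        _npc.OnDied -= HandleDied;
    }

    private void HandleDied()
    {
        _deathParticles.Play();
    }
}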

Our NPC.cs file needs to update as well… take a look though and see if you notice anything.

It’s shrinking!!!!!

Let’s take this even further and see what happens…

Create another file named Health.cs
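A sketch of what Health.cs plausibly contains once the health logic moves out of NPC, with both events the other components rely on:

using System;
using UnityEngine;

public class Health : MonoBehaviour
{
    public event Action<float> OnHPPctChanged;
    public event Action OnDied;

    [SerializeField] private int _maxHP = 10;
    private int _currentHP;

    private void Awake()
    {
        _currentHP = _maxHP;
    }

    public void TakeDamage(int amount)
    {
        _currentHP -= amount;

        if (OnHPPctChanged != null)
            OnHPPctChanged((float)_currentHP / _maxHP);

        if (_currentHP <= 0 && OnDied != null)
            OnDied();
    }
}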

Now we’ll update the NPC.cs file again.

We’ll also need to update the HPBar to look at Health instead of NPC.

And our particles also need to reference Health instead of NPC.
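With health, UI, and particles each in their own component, there's very little left in NPC.cs itself. Something like this sketch, where only NPC-specific behavior would remain:

using UnityEngine;

public class NPC : MonoBehaviour
{
    // Health, UI, and particle logic now live in their own components.
    // Only behavior unique to the NPC (movement, AI, etc.) belongs here.
}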

Cool it’s all split up… what now?

So far, I’ve shown you how to split the code up, but for this to stick, I want to show you some of the extensibility we’ve just gained.

Extending Health

Let’s imagine our game now has a new NPC type that we need to implement.

This NPC can only be killed by high damage weapons, and it always takes 5 hits to kill them.

They also become invulnerable for 5 seconds after being hit.

The bad option

We could modify our health class, add a bool field in there that we check in the editor for the NPCs that we want to use this behavior.  But we don’t know how many other types of health interaction we’ll need that could cause the Health class to balloon into a mess.

And we wouldn't be following our single responsibility principle.

What should we do? – The good option

Let’s create a couple new files and modify our existing ones.

First, we’ll want to create an interface for health, named IHealth.cs

If you haven’t used interfaces before, you can get a quick understanding of how they work here – http://unity3d.college/2016/09/04/unity-and-interfaces/

This interface says that our classes implementing it must have a TakeDamage method that has a single integer parameter.  It must also have the two events we need for OnHPPctChanged and OnDied.
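Based on that description, the interface plausibly looked like this (the event types are assumed to match the sketches above):

using System;

public interface IHealth
{
    event Action<float> OnHPPctChanged;
    event Action OnDied;

    void TakeDamage(int amount);
}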

StandardHealth.cs

Our initial Health.cs class was pretty standard for a health system.  Because we'll be adding new ones, let's rename it from "Health" to "StandardHealth" (remember we have to rename the file as well).

The interface

We've also added IHealth after MonoBehaviour on line 4.  This tells the compiler that our StandardHealth class must implement the IHealth interface, and that it can be used for anything requiring an IHealth reference.
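A sketch of the renamed class (the body matches the Health sketch from earlier; what matters here is the declaration on line 4):

using System;
using UnityEngine;

public class StandardHealth : MonoBehaviour, IHealth
{
    public event Action<float> OnHPPctChanged;
    public event Action OnDied;

    [SerializeField] private int _maxHP = 10;
    private int _currentHP;

    private void Awake()
    {
        _currentHP = _maxHP;
    }

    public void TakeDamage(int amount)
    {
        _currentHP -= amount;

        if (OnHPPctChanged != null)
            OnHPPctChanged((float)_currentHP / _maxHP);

        if (_currentHP <= 0 && OnDied != null)
            OnDied();
    }
}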

It’s Broken!

We haven’t even added the new health type yet, and we’ve already broken the project…

Because we renamed health, our references to the class have probably broken (unless we used the rename tooling in our editor).
Even if we didn’t break them, we still need to change our code to use the interface instead of the StandardHealth.

Let’s update NPC.cs first.  We’ll replace Health (or StandardHealth) with IHealth on line 7.

We’ll do the same thing for HPBar.cs on line 11.

And repeat for NPCParticles.cs on line 9.
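The change is the same shape in all three files: the field type becomes IHealth, and the reference is fetched with GetComponent, which works with interface types too. A sketch, using HPBar as the example (field names are assumptions):

using UnityEngine;
using UnityEngine.UI;

public class HPBar : MonoBehaviour
{
    [SerializeField] private Slider _slider;

    private IHealth _health; // was: StandardHealth

    private void Awake()
    {
        // GetComponent also finds any component implementing the interface.
        _health = GetComponent<IHealth>();
        _health.OnHPPctChanged += HandleHPPctChanged;
    }

    private void HandleHPPctChanged(float pct)
    {
        _slider.value = pct;
    }
}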

Let’s add that new health type finally!

Now we'll create a new Health type called "NumberOfHitsHealth".

Like our StandardHealth, this implements the IHealth interface, so it can be plugged in anywhere we use health on our NPC.

Unlike the standard health component though, this one completely ignores the amount of damage done, and dies after a set number of hits.

In addition to that, it adds an invulnerability timer.  This prevents the NPC from taking damage more than once every 5 seconds.
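A sketch of what NumberOfHitsHealth might look like. The "only high damage weapons count" rule from the requirement is represented here as a minimum-damage threshold; that detail, and the numbers, are assumptions:

using System;
using UnityEngine;

public class NumberOfHitsHealth : MonoBehaviour, IHealth
{
    public event Action<float> OnHPPctChanged;
    public event Action OnDied;

    [SerializeField] private int _hitsToKill = 5;
    [SerializeField] private int _minimumDamageToCount = 50; // assumed "high damage" threshold
    [SerializeField] private float _invulnerabilityDuration = 5f;

    private int _hitsTaken;
    private float _lastHitTime = float.MinValue;

    public void TakeDamage(int amount)
    {
        if (amount < _minimumDamageToCount)
            return; // low damage weapons can't hurt this NPC

        if (Time.time < _lastHitTime + _invulnerabilityDuration)
            return; // still invulnerable from the previous hit

        _lastHitTime = Time.time;
        _hitsTaken++;

        if (OnHPPctChanged != null)
            OnHPPctChanged(1f - (float)_hitsTaken / _hitsToKill);

        if (_hitsTaken >= _hitsToKill && OnDied != null)
            OnDied();
    }
}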

Wrap Up

So now we’ve completely swapped out the health mechanics of this NPC, without needing to touch the NPC code at all (other than our initial conversion to use an interface).

If we decide to add more ways to manage health, we can simply create another implementation of IHealth, and drop that component onto the NPC.

Some other possible options might include

  • NPCs that take a single hit and lose HP over time for each hit
  • NPCs that regenerate HP where you need to kill them in a set amount of time
  • NPCs that are unkillable and never have their HP drop
  • NPCs that gain health when you shoot them (you could even swap to a component that heals them when they’re hit instead of damaging them at runtime)
  • Tons of other crazy ideas I haven’t come up with in the last 60 seconds.

Using the single responsibility principle will make your development process much smoother.  It forces you to think about what you're doing and helps discourage sloppiness.  If used properly, your job will become easier, code will be cleaner, projects will be more maintainable, and you'll be a happier person!


My favorite Unity Podcasts for Developers

By Jason Weimann / January 4, 2017

When I’m driving, I’m almost always listening to podcasts or audio books. I used to listen to music, but I’ve reached the grumpy old man point where I’ve memorized every song I like, and I don’t like anything new!


What I listen to changes with my interests, but there are a few things that I started and just can’t imagine stopping.

The Debug Log

The Debug Log has been around about a year and a half now. It's a show aimed at game developers and has a big Unity bias. They cover all aspects of the game industry, both technical and soft. If you're looking for something Unity specific to listen to, or really just want to hear more about the industry from a bunch of developers, this is the podcast for you.

Game Design Zen

Game Design Zen is a work of art done by Curtiss Murphy. This podcast won't teach you how to code or about new tricks. Instead it teaches you how to make a great game. Curtiss focuses on the key elements of good design and how to make a game that will stand out in the giant crowd of games released every day. Even if you don't consider yourself a designer, take the time to let him instill some actionable guidance on great design.

Link: http://www.goodgamesbydesign.com/category/podcast/

iTunes: https://itunes.apple.com/us/podcast/game-design-zen/id1028932466?mt=2

.NET Rocks!

.NET Rocks is not a podcast just about game development. Instead it focuses on all aspects of programming and software development, with a small lingering bias toward .NET and C#. If you're looking to learn about new technology, soft skills, C# language changes, or even nuclear power, you've gotta give Carl and Richard a listen.

Link: https://www.dotnetrocks.com/?show=1310

iTunes: https://itunes.apple.com/us/podcast/.net-rocks!/id130068596?mt=2

Everything Vive

I don’t always listen to podcasts about development. Occasionally, I come across one focused solely on gaming, and in this case, VR gaming on the Vive. Listening to people talk about games gives me a good idea of what works and what doesn’t.


Makin' Stuff Look Good

Unlike the others, this isn't an actual podcast. Instead, it's a YouTube channel where Dan Moran guides you through Makin' Stuff Look Good, and I have to mention that "good" doesn't do the channel justice. Dan will show you tricks to re-create great effects from AAA games (sometimes in ways better than what was actually done). He also has a few introductory tutorials for shader development and animations. One of my favorite videos was the Shader Case Study on Hearthstone's Golden cards. He teaches you how to re-create the effect in a clean and easy to use way.

Link: https://www.youtube.com/channel/UCEklP9iLcpExB8vp_fWQseg

Coding Blocks

This podcast is focused on software development as a passion.  The most recent episodes have gone deep into some of the best software books ever written.  If you're looking for open, honest discussion about software development that applies across all industries, you should definitely give it a listen.  I'd recommend one of the "Clean Code" episodes as a good starting point.

They also run their own Slack chat, which is full of developers discussing code, careers, and more.  So if you hear something and want more info, or want to provide your feedback, jump in and join the conversation.

Link: https://www.codingblocks.net/

iTunes: https://itunes.apple.com/us/podcast/coding-blocks-software-web/id769189585?mt=2

Rick Davidson – Career Coach

I’ve just started listening to this guy, but so far I think he’s great. He’s giving advice very similar to what I’ve told many people in the past and recommends Unity to new developers. If you’re brand new to game development, wondering where to start, jump over and devour his videos right away.

Link: https://www.youtube.com/channel/UC7DWn7tAAtT0SVQRqlJRknQ

This list will be a living document, and as I start changing my listening and viewing habits, I'll make sure to share what I come across.

If you have your own favorites that you'd like to share, comment below or send me an email.


How to create a custom Unity Animation in 5.6

By Jason Weimann / December 7, 2016

Overview

Have you ever wished your inanimate objects were animated?  Do you have some object you’d like to wiggle?  Have you ever needed to animate your UI and resorted to using scripts and a Tweening library?  Are you ready to create your own Unity Animations?

In this quick tutorial, I’ll cover how to create your own animations directly in Unity.

We’ll cover animating bones to make a skeleton wave, and we’ll do some renderer animation as well to make his eyes flash an evil red color.

Setup – Getting our Character

To get started, we’ll need something to animate.  Grab this skeleton from the asset store.

Link: http://u3d.as/kJe

After importing, you should see a “Proto_Skeleton” folder like this.

Preparing our Scene

Create a new Scene.

Add a Plane to the scene.

Drag the “proto_skeleton_01” prefab into the scene.

Your Inspector should look like this.

Our Scene View should resemble this.

 

The Animation Window

It’s time to open the Animation window.  You can find the Animation window under the Window menu.

I want to point out that there is also an “Animator” window.  That’s NOT the one we want.

Creating our First Unity Animation

The Animation window looks like this.

Click the Create button.

You’ll be presented with a dialog similar to this.  Give your first animation a name “EyesGlowing”, and click Save.

The Animation window will change now to show our new empty animation.

An empty animation

Select the “skeleton_01” child in your Hierarchy.


This child has the Skinned Mesh Renderer on it.  The first thing we’re going to animate is a property of this renderer.

Look at the emissive property.  It should be empty.

The Emissive Texture

skeleton_texture_emissive

SAVE THIS TEXTURE (RIGHT CLICK, SAVE AS)

Save this texture into your “Proto_Skeleton\Textures” folder as “skeleton_texture_emissive”.


Assign your new texture as the Emission for the renderer by dragging it here.

The emission field allows you to specify a mask for where the renderer should emit light.  The color and float value next to it allow you to adjust that light's color and intensity.

Animation - skeleton-material-assigning-emissive

Now try adjusting the Emission value like this.

Animation - Sliding Emissive

Set the value back to 0.

Take another look at your animation window.  You should notice that it’s changed a bit.  Because you had the skeleton selected and the animation was in “record” mode, the changes you made to the renderer were actually added in as a key-frame in the animation.

If you happened to do other things like move the character, those may show up in here as well.  But don’t worry, you can just select and delete the properties you don’t want in your animation.

Adding Keyframes

Now let’s make this do some actual animation and change some values.

In the animation window, select a point further into the animation.  In my example below, I select right at 0:10.

Then click on the color picker for the Emission property.

Slide it all the way to bright red.

Animation - second-keyframe-red-emissive-eyes

Let’s watch it!

Click the play button to watch your animation.

You should see some quick flashing eyes!  Great work so far!

Add another keyframe

In your animation window, you can control the zoom level with the mouse wheel.

Hover over the animation window and zoom out so you can see 1:00.

Select the 1:00 mark in the Animation window, then change the Emission value to 0.

animation-third-keyframe-eyes

Play your animation again and you’ll see it’s a bit better now.  It lights up fast then fades out.

Time to Smooth it out

Take the keyframe from the 0:10 mark and drag it over to 0:30.

Animation - Moving-keyframe-from-10-to-30

This little change will smooth our eye flashing a bit, making it a more gradual glow.

A Unity Animation showing flashing red eyes

Conclusion

You've already learned how to set up a simple animation for renderers, but don't forget that you can do this for just about anything.

Here are some other examples of things I animate like this all the time:

  • UI Components
  • Particles
  • Alpha channels
  • Transform positions
  • Entire GameObjects

On to Animating Bones!

Okay, enough of the renderer, let's move on to more traditional Unity animation.  It's time to move some bones!

Create a new Unity Animation.

Animation - Create-skeleton-dance

Name it "Skeleton Dance".

Now let’s look in the Hierarchy.

If you expand out the skeleton completely, you’ll see a bunch of children.  These children are the bones of our character.

Most humanoid game characters have a very similar and standardized bone structure to make animating them easier.

Animation - Skeleton-bone-structure

Let’s move some bones!

Select “Character1_LeftArm” in the hierarchy.

In the animation window set the red line to the 0:30 mark.

Now move and rotate the LeftArm so that it’s raised and pointing to his side like you see below.

Animation - Arm-moved

New Keyframes!

You’ll notice that we now have some key-frames in the unity animation window.

Press Play

Don’t forget to play your animations as we go along so you can see how they progress!

Let’s do another

We have a very simple bone animation, let’s expand on it.

Select the "Character1_LeftForeArm". (it's a child of the LeftArm you have selected)

Leave the red line at the 0:30 mark.

Now in your inspector window, adjust the position and rotation of the forearm just a little away from the parent.

Animation - Adding-forearm

This will force key-frames to be added for the forearm.

We could have alternatively added properties using the add property button, but navigating that deep down the tree using the add property dialog is a bit more painful, so we took a shortcut.

In the Unity Animation window, move the red line to the 1:00 mark.

Remember you can use the mouse-wheel to zoom the Animation window scale.

Move the forearm up a bit and rotate it to look like this.

Animation - Forearm-raised

Repeat this 2 times, once at the 1:10 mark and again at 1:20, giving it a little rotation back and forth.


Great, we’ve got him doing a little wave!

Let’s extend that wave a bit though.

To do that, let’s copy and paste the keyframes from the 1:10 and 1:20 marks.

Select the 2 key-frames to be copied, then use your copy shortcut (ctrl-c or cmd-c).

Move the red line over to 1:30 and paste (ctrl-v or cmd-v).

Then move the red line over to 1:50 and paste again.

Animation - Copy-paste-keyframes

The Results!

Hit play and check out the animation.

Your skeleton is waving!!

animation-skeleton-waving

Let’s clean it up!

When you play the animation, you’ll see it abruptly loops with the skeleton’s arm snapping back to the original position.

We need to make him animate out of the wave now.

To do that, we can copy the key-frames from the 0:00 mark then paste them at the 3:00 mark.

 

animation-keyframes-at-300

Now give that a play.

Animation - Wave-broken

OH NO IT’S BROKEN!

The arm is coming down early now and completely breaking our wave.

That’s because we only have keyframes for the LeftArm position and rotation up to the 0:30 mark.
After that, the next key-frame is at 3:00, so it's slowly animating back to its idle position right after the 0:30 keyframes.

We can fix it!

Copy the “LeftArm : Rotation” and “LeftArm : Position” key-frames from the 0:30 mark and paste them at the 2:00 mark.

Animation - Copy-paste-parent-keyframes

The end result should look something like this.

Our completed Unity Animation of a waving skeleton

Going Further

We’ve created a nice simple wave animation, and we’ve animated some eye emission values, but this is really just the start of what you can do.

Play with the animation window some more and see what you can come up with.

For a little inspiration, here’s a short sample of what I put together while writing this post.

A Unity Animation of a skeleton dancing

 


Unity Extension Methods

By Jason Weimann / November 22, 2016

Do you ever find yourself writing a method that you feel could or should be part of an existing Unity component?  Have you ever wished the Vector3 class had a few more methods?  You should be using extension methods!

Getting Started

Direction Example

We’ve all needed to get the direction between two points.  Generally we subtract one from the other, then normalize the result.

Sometimes we mix them up, put them in the wrong order, and get the opposite direction.

Because this behaviour is needed across a bunch of classes, it doesn’t make sense to add a GetDirection() method in each class needing to calculate it.

Without Extensions – Helpers

One option that I've used plenty of times myself is to create a set of "helper" classes that I can call like this.
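That code isn't preserved here; a sketch of the helper approach, with the Helpers class name being an assumption:

using UnityEngine;

public static class Helpers
{
    public static Vector3 GetDirection(Vector3 from, Vector3 to)
    {
        return (to - from).normalized;
    }
}

// Called like this:
// Vector3 direction = Helpers.GetDirection(transform.position, target.position);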

While it's functional, it's in some mythical "helper" class that will inevitably turn into a bit of a mess.  When new people join the team, they may not realize this helper existed, and they may even write their own version of it.

With Extensions – Better

Now I use extension methods instead!

Let’s see how the previous example looks as an extension method.
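A sketch of that extension method, matching the DirectionTo signature described just below:

using UnityEngine;

public static class TransformExtensions
{
    public static Vector3 DirectionTo(this Transform source, Transform destination)
    {
        return (destination.position - source.position).normalized;
    }
}

// Called like this:
// Vector3 direction = transform.DirectionTo(target);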


Here, you see that it looks like the Transform class has a DirectionTo method which takes another transform as a parameter and returns the direction as a Vector3.

Extension methods are usually pretty short and simple, though there’s no technical requirement for them to be so.

The biggest difference between a regular method and an extension is how you call them. Extension methods appear to be methods on an existing class (typically something that's sealed or that you don't want to inherit from).

Benefits

  • Discoverability – You can see them with Intellisense. This makes it easy to see what extra functionality you’ve provided for the classes.
  • Cleanliness – You can avoid a bunch of “SomethingHelper.cs” classes.
  • Reusability – Typically your extension methods are simple and can be re-used across all of your projects.

How do I write an extension method?

To create an extension method, you first need a public static class.

For the first example, we’ll create an extension method class to handle Transform extensions.

Next, add a static method named LookAtY like this.
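The original code isn't shown here; a sketch, keeping the first parameter named "transform" as the next section assumes:

using UnityEngine;

public static class TransformExtensions
{
    public static void LookAtY(this Transform transform, Vector3 point)
    {
        // Flatten the target point onto our own height so the rotation
        // only happens around the Y axis.
        var lookPoint = new Vector3(point.x, transform.position.y, point.z);
        transform.LookAt(lookPoint);
    }
}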

What’s ‘this’ Parameter?

Notice the “this” keyword for our first parameter. The keyword tells our method that the first parameter, named “transform”, is the one that the extension method will be called from.

So when you call the method, you're not passing in the transform; instead you're calling it from the transform, and the method receives it for you to work with.

The LookAtY method performs the equivalent of Transform.LookAt, but only on the Y axis. This means it won't look up or down, only in the flattened direction of the target point. I use this for creatures who walk and turn to face something, to avoid having them tip over and aim at something a bit above or below them.

Another Example – Particle System Emission Rates

Here’s another one of my favorite extension methods.  I use this to adjust a particle system emission rate.  Since I can’t modify the ParticleSystem class and wouldn’t want all particle systems to need to use another class, an extension method is the perfect solution.
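The original code isn't preserved here; a sketch of such an extension, noting that newer Unity versions expose the rate as rateOverTime on the emission module (older versions called it rate):

using UnityEngine;

public static class ParticleSystemExtensions
{
    public static void SetEmissionRate(this ParticleSystem particleSystem, float emissionRate)
    {
        // The emission module is a struct handle into the native system,
        // so modifying a local copy works as expected.
        var emission = particleSystem.emission;
        emission.rateOverTime = emissionRate;
    }
}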

How about Vector3 Extension Method?

To wrap things up and show a little more variety, I give you one of my Vector3 extension methods.  You’ve probably used Vector3.Distance before, but if you haven’t, it’s a static method on the Vector3 class to give the distance between two points.

I found myself needing a distance that ignored the Y axis more times than I can remember, and with this simple extension method, it feels like built-in functionality.
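The original method isn't preserved here; a sketch, with DistanceFlat being my own name for it:

using UnityEngine;

public static class Vector3Extensions
{
    public static float DistanceFlat(this Vector3 from, Vector3 to)
    {
        // Zero out the Y components so only horizontal distance counts.
        from.y = 0f;
        to.y = 0f;
        return Vector3.Distance(from, to);
    }
}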

Conclusion & Resources

If you haven’t used extension methods before, you’re really missing out.  They’re very easy to get started with and will quickly become a habit once you start seeing the benefits.

Unity Tutorial Video

https://unity3d.com/learn/tutorials/topics/scripting/extension-methods

A couple useful extensions on GitHub
https://gist.github.com/omgwtfgames/f917ca28581761b8100f


Getting started with SteamVR Controller Input

Handling SteamVR Controller Input

I’ve talked to quite a few developers recently who weren’t really sure how to get started with input in their VR projects.  While it’s not too hard to get started, there are some things that are important to understand.  In this post, I’ll cover some of the fundamentals of the SteamVR controller inputs and interactions.

Project Setup

Create a new project and import the SteamVR plugin from the Asset Store.

Now find the [CameraRig] prefab in your project view and place it into your scene.

SteamVR - Input - Scene Setup - Add CameraRig and delete camera

Next, delete the “Main Camera” that was in our scene by default.  The [CameraRig] already has a Camera component for us that tracks to our head, having another camera in here will just mess things up.

Expand the [CameraRig] and select the left & right controllers.

Add the "SteamVR Tracked Controller" component.

Play Time

Now save your scene and press play.

With the game in play mode, select just the left controller.

Now grab the controller and start pressing buttons.

TrackedController Input in Inspector

The TrackedController Script

The TrackedController script is great and actually gives us a bunch of events we can register for in our code.

Let’s take a look at those events.
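They aren't preserved here, but from the legacy plugin's SteamVR_TrackedController.cs they look roughly like this (abridged from memory; verify against the version you imported):

public event ClickedEventHandler MenuButtonClicked;
public event ClickedEventHandler MenuButtonUnclicked;
public event ClickedEventHandler TriggerClicked;
public event ClickedEventHandler TriggerUnclicked;
public event ClickedEventHandler PadClicked;
public event ClickedEventHandler PadUnclicked;
public event ClickedEventHandler PadTouched;
public event ClickedEventHandler PadUntouched;
public event ClickedEventHandler Gripped;
public event ClickedEventHandler Ungripped;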

Code Time

We're going to hook into these events with our own class.

Create a new folder named “Code“, then create a new script named “PrimitiveCreator” in that folder.

Paste this code in for the class.
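The code from the original post isn't preserved in this archive; the sketch below matches the behavior described in the rest of this section and assumes the legacy SteamVR plugin's SteamVR_TrackedController component and its ClickedEventArgs type:

using UnityEngine;

[RequireComponent(typeof(SteamVR_TrackedController))]
public class PrimitiveCreator : MonoBehaviour
{
    private SteamVR_TrackedController _controller;
    private PrimitiveType _currentPrimitiveType = PrimitiveType.Sphere;

    private void OnEnable()
    {
        _controller = GetComponent<SteamVR_TrackedController>();
        _controller.TriggerClicked += HandleTriggerClicked;
        _controller.PadClicked += HandlePadClicked;
    }

    private void OnDisable()
    {
        // Deregister so a disabled object stops receiving events.
        _controller.TriggerClicked -= HandleTriggerClicked;
        _controller.PadClicked -= HandlePadClicked;
    }

    private void HandleTriggerClicked(object sender, ClickedEventArgs e)
    {
        SpawnCurrentPrimitiveAtController();
    }

    private void HandlePadClicked(object sender, ClickedEventArgs e)
    {
        if (e.padY < 0)
            ChooseNextPrimitiveType();
        else
            ChoosePreviousPrimitiveType();
    }

    private void SpawnCurrentPrimitiveAtController()
    {
        var primitive = GameObject.CreatePrimitive(_currentPrimitiveType);
        primitive.transform.position = transform.position;
        primitive.transform.rotation = transform.rotation;

        // Shrink to a workable size; planes get extra small so they
        // don't cover the entire view.
        float scale = _currentPrimitiveType == PrimitiveType.Plane ? 0.01f : 0.1f;
        primitive.transform.localScale = Vector3.one * scale;
    }

    private void ChooseNextPrimitiveType()
    {
        _currentPrimitiveType++;
        if (_currentPrimitiveType > PrimitiveType.Quad)
            _currentPrimitiveType = PrimitiveType.Sphere;
    }

    private void ChoosePreviousPrimitiveType()
    {
        _currentPrimitiveType--;
        if (_currentPrimitiveType < PrimitiveType.Sphere)
            _currentPrimitiveType = PrimitiveType.Quad;
    }
}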

Add the Script to your controllers

Before we go over the code, add the script to your controllers and try it out.

Press play and start placing primitives with the trigger.  Use the trackpad to select which type of primitive the controller should place.

How does it work?

Event Registration

We’re using the OnEnable & OnDisable methods of the MonoBehaviour here to cache our TrackedController and register for events.

The events we care about are TriggerClicked & PadClicked.  When the player clicks the trigger, we call into our HandleTriggerClicked() method, and when they click the pad, we call HandlePadClicked().

It’s important to pay attention to the OnDisable method though.  Since we’re registering for events when our gameobject is enabled, we need to be sure to deregister them when it’s disabled.  To deregister, we just call with the same syntax as registration, but use a minus instead of a plus.

Events not being deregistered is one of the few ways you can leak memory in a garbage collected language.  In fact it’s the most common leak I see in c# code.

 

Our Event Handlers – The Trigger

Our first event handler throws away the event arguments and calls into SpawnCurrentPrimitiveAtController().  We do this for a few reasons.  The main one is to have a method named for what it does and taking only the parameters it needs.

If you're comfortable with lambdas, you could re-write the event registration to avoid this method, but for this example I wanted to keep it simple.

The SpawnCurrentPrimitiveAtController() method does what it says.  It uses CreatePrimitive, passing in the _currentPrimitiveType that we've previously set to PrimitiveType.Sphere.

Then it adjusts the position and rotation to match our controller.  Finally, it adjusts the scale to be a size we can work with.  For planes, we need to get even smaller to avoid covering our entire view.

Our Event Handlers – The Trackpad

The other event we registered for was PadClicked.  That event is registered to our HandlePadClicked() method.  In this method, we check the event argument’s padY variable.

padY represents the Y axis on your touchpad.  Its value can range from -1 to +1, with negative values being above the center and 0 being exactly in the middle.

We check the padY value to see if it’s less than 0, which means the player clicked on the upper part of the pad.  If they did, we increment our primitive type.  If not, we decrement our primitive type.  In each of these methods, we do a quick bounds check for wrapping, but the key part is that we’ve updated the _currentPrimitiveType value that was used when we pull the trigger.

 

Where to go from here

img_5721b39ec05c7

If you've already read my post on Getting Started with SteamVR, you may be considering some other options for your trigger click.  There are countless things you could do just to this project.  You could modify the system so that the initial press of the trigger spawns the primitive, but keeps it at the controller's position and rotation (perhaps by making it a child of the controller), then hook into the TriggerUnclicked event to drop it in its position.

Try hooking into a few of the other events and see what you can come up with!

If you really want to jumpstart your VR project though and get a game out quickly, I'm building a VR development course where we go over all the steps to build a full game using proper real-world practices.



Unite LA 2016

Unite LA just ended.  If you didn’t get to attend this year, you should definitely head to one of the upcoming Unite events soon!

It’s a great venue to learn more about Unity, interact with other Unity developers, and even talk to the Unity team.

 

Keynote

The keynote was filled with a ton of exciting announcements.  Starting with this demo of 10k independent fish GameObjects swimming around at 60fps!

Unite - 10k fish

The Pantheon team was also there in force showing off the latest version of their exciting MMO running in Unity.

Unite - Pantheon

Video

We also saw the announcement of the new Unity Video player.

The video player will replace the current hacky system and plugins, and allow high res videos to be shown in-game at high FPS.

It also supports 360 video, which I hope will be the new way to build 360 VR content in the future.

Unite - Video Player

Not my picture – Which is why it looks so much better 🙂

New Navigation System

The new navigation system that was discussed is amazing too.

They've added support for multiple agent sizes, multiple NavMeshes, and walking on walls!

And you can try it out here https://forum.unity3d.com/threads/welcome-new-navmesh-features-experimental-preview.438991/

I've tried it out a bit since then and it really is great… I'm thrilled for this to become part of mainline Unity.

Unite - NavMesh

Connect

And it wrapped up with an introduction to the new Unity Connect platform, designed to help developers find projects and teams to find developers.

I’m excited to start using connect right away to find more talent for some of our upcoming game projects.

 

Other Exciting Things

After the keynote, I was able to spend 3 days learning from speakers and attendees.

 

My Favorite Session

Tools, Tricks and Technologies for reaching stutter free 60 FPS in INSIDE

Kristian Kjems – Playdead
Erik Rodrigues Pedersen – Playdead
Søren Trautner Madsen – Playdead

Their session was full of great actionable information and strategies for keeping a consistent 60fps.

If you haven't played Inside yet, check it out and think about how you'd build a game with hours of seamless content scrolling by at 60fps on an Xbox…

They have an entire strategy they've built and it's amazing. I look forward to sharing some new tricks I learned from their session soon.

Unite - Inside - Time Slicing

A great time slicing technique to keep frame rate high

Some of these low level optimizations were a surprise

 

Another Great Session

Overthrowing the MonoBehaviour Tyranny in a Glorious ScriptableObject Revolution (again)

Richard Fine did a great job in this talk demonstrating some great uses for ScriptableObjects.

He explained how ScriptableObjects allow you to have a singleton that doesn't need to worry about reloading.  It wrapped up with an example of how to swap in ScriptableObjects as brains for the Tanks demo.

If you’re not already using ScriptableObjects heavily, you should check out the video replay when it’s available and start following his advice!

The talk is actually available online here too! https://unity3d.com/learn/tutorials/topics/scripting/overthrowing-monobehaviour-tyranny-glorious-scriptableobject

 

Pokemon GO!

Unite - PokemonGO

The final session I attended was given by Chris Mortonson from Niantic Labs.

This talk was actually about dependency injection and how they use DI + Unit Tests to build and maintain a game like Pokemon GO.

It was very exciting to find out that Pokemon GO is actually using Zenject.  If you're interested in using DI for your projects, I highly recommend trying Zenject out.

 

Image Effects

Unite - Cinematic Image Effects

The Cinematic Image Effects were another big hit at Unite.

From the talks, it sounds like the plan is to replace the standard assets effects with the new ones eventually.  I didn’t get to see the full talk, but I heard it was great, and the bit that was spoiled at the keynote looked worth digging into. (especially the bit about performance)

They’re available here to try out today: https://bitbucket.org/Unity-Technologies/cinematic-image-effects.  I plan on grabbing them soon and going deeper.

 

The People

The best thing about events like Unite is getting to meet other developers.

I met hundreds, and had great conversations with quite a few of them.

I also got to ride on Jurassic Park with Stephan from TextMeshPro, which you may know is my favorite Unity asset!

Unite - TextMeshPro

And met some of the team from The Debug Log.

It’s a great podcast about Unity development btw.

If you haven’t heard it yet, go check it out!

Unite - DebugLog

And this great pair from HyperLuminal who were demoing a really fun game when we weren’t dragging them onto the Transformers ride.

Unite - Hyper-Luminal

 

Should you go?

There are Unite events all over the world, and you may be asking yourself if you should attend one.

If you’re not sure, the answer is probably yes!

Even if you don’t like talking to people or traveling, you’ll gain a lot just from the sessions.  And if you’re open to it, you can make a bunch of great connections, find new jobs, or new employees, and just have a blast doing it.

If you do attend one soon, send me an email jason@unity3d.college or message me on twitter @unity3dcollege.


Unity Events, Actions, and BroadcastMessage

By Jason Weimann / October 5, 2016

Events are a key part of general C# development.  In Unity, they can sometimes be overlooked and less optimal options are used.  If you’ve never used events, you’ll be happy to know that they’re easy to get started with and add a lot of value to your project architecture.

Before I cover the Event system in detail, I’d like to go over some of the common alternatives you’ll see in Unity projects.

BroadcastMessage

The BroadcastMessage method is part of the MonoBehaviour class.  It allows you to send a loosely coupled message to a gameobject and all of its children.

BroadcastMessage is simple to use and accomplishes the task of sending a message from one gameObject to another.
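The post's example isn't preserved here; a hypothetical call looks like this (the method name "OnPlayerHPChanged" and the class are illustrative):

using UnityEngine;

public class Player : MonoBehaviour
{
    private int _hp = 10;

    public void TakeDamage()
    {
        _hp -= 1;

        // Invokes any method named "OnPlayerHPChanged" on this gameobject
        // and its children; matched by string, with no compile-time check.
        BroadcastMessage("OnPlayerHPChanged", _hp, SendMessageOptions.DontRequireReceiver);
    }
}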

The biggest issue with BroadcastMessage is how it refers to the method that will be called.  Because it takes a string as the first parameter, there is no compile time verification that the method actually exists.

It can be prone to typos, and it introduces danger when refactoring / renaming your methods.  If the method name in the string doesn't match the method name in your classes, it will no longer work, and there's no obvious indication of this beyond your game failing to work properly.

The other parameters are also not tightly coupled, so there’s no parameter verification.  If your method required a string and an int, but you call BroadcastMessage with two strings, your call will fail, and again there’s no compile time indication of this issue.

Another big drawback to BroadcastMessage is the fact that it only broadcasts to children.  For the example given, the UI Text would only receive the message if it’s a child of the player.

This Works

This does not work

Update Polling

Another common technique I see in Unity projects is polling properties in the Update() method.

Polling in an Update() method is fine for many things, but generally not the cleanest way to deal with cross gameobject communication.

using UnityEngine;
using UnityEngine.UI;

namespace UpdatePolling
{
    public class PlayerHPBar : MonoBehaviour
	{
		private Text _text;
		private Player _player;
		private void Awake()
		{
			_text = GetComponent<Text>();
			_player = FindObjectOfType<Player>();
		}
		private void Update()
		{
			_text.text = _player.HP.ToString();
		}
	}
}

In this example, we update the text of our UI every frame to match the HP value of the player.  While this works, it’s not very extensible, it can be a bit confusing, and it requires us to make variables public that may not really need to be.

It also gets a lot messier with update polling when we only want to do things in specific situations.  For updating the player HP UI, we may not mind doing it every frame, but imagine we also want to play a sound effect when the player takes damage; suddenly this method becomes much more complicated.

Events

If you’ve never coded an event, you’ve probably at least hooked into one before.

One built in Unity event I’ve written about recently is the SceneManager.sceneLoaded event.

This event fires whenever a new scene is loaded.

You can register for the sceneLoaded event and react to it like this.

using UnityEngine;
using UnityEngine.SceneManagement;

public class SceneLoadedListener : MonoBehaviour
{
    private void Start()
	{
		SceneManager.sceneLoaded += HandleSceneLoaded; 
	}

	private void HandleSceneLoaded(Scene arg0, LoadSceneMode arg1)
	{
		string logMessage = string.Format("Scene {0} loaded in mode {1}", arg0, arg1);
		Debug.Log(logMessage);
	}
}

Each event can have a different signature, meaning the parameters the event will pass to your method can vary.

In the example, we can see that the sceneLoaded event passes two parameters.  The parameters for this event are the Scene and the LoadSceneMode.

Creating your own Events

Now, let’s see how we can build our own events and tie them into the example before.

using UnityEngine;

namespace UsingEvents
{
    public class Player : MonoBehaviour
	{
		public delegate void PlayerTookDamageEvent(int hp);
		public event PlayerTookDamageEvent OnPlayerTookDamage;

		public int HP { get; set; }
        
        private void Start()
        {
            HP = 10;
        }

		public void TakeDamage()
		{
			HP -= 1;
			if (OnPlayerTookDamage != null)
				OnPlayerTookDamage(HP);
		}
	}
}

In this example, we create a new delegate named PlayerTookDamageEvent which takes a single integer for our HP value.

Then we use the delegate to create an event named OnPlayerTookDamage.

Now, when we take damage, our Player class actually fires our new event so all listeners can deal with it how they like.

We have to check our event for null before calling it.  If nothing has registered with our event yet, and we don’t do a null check, we’ll get a null reference exception.

Next, we need to register for this newly created event.  To do that, we’ll modify the PlayerHPBar script like this.

using UnityEngine;
using UnityEngine.UI;

namespace UsingEvents
{
    public class PlayerHPBar : MonoBehaviour
	{
		private Text _text;
		private void Awake()
		{
			_text = GetComponent<Text>();
			Player player = FindObjectOfType<Player>();
			player.OnPlayerTookDamage += HandlePlayerTookDamage;
		}

		private void HandlePlayerTookDamage(int hp)
		{
			_text.text = hp.ToString();
		}
	}
}

To test our event, let’s use this PlayerDamager.cs script.

using UnityEngine;
using System.Collections;

namespace UsingEvents
{
    public class PlayerDamager : MonoBehaviour
	{
		private void Start()
		{
			StartCoroutine(DealDamageEvery5Seconds());
		}

		private IEnumerator DealDamageEvery5Seconds()
		{
			while (true)
			{
				FindObjectOfType<Player>().TakeDamage();
				yield return new WaitForSeconds(5f);
			}
		}
	}
}

This script calls the TakeDamage() method on the Player every 5 seconds.

TakeDamage() then calls the OnPlayerTookDamage event which causes our PlayerHPBar to update the text.

Let’s see how this looks in action.

Events - Custom Events - Game View

Example playing at 10x speed

We can see here that the player's HP is decreasing and the text is updating.

Sidebar – Script Execution Order

You may have noticed something strange though.  The first value shown is -1.  This caught me off guard the first time, but the cause is visible in the code.

Before you continue reading, take a look and see if you can find it.

….

In our Player.cs script, we set the HP to 10 in the Start() method.

Our PlayerDamager.cs script also starts dealing damage in the Start() method.

Because our script execution order isn’t specified, the PlayerDamager script happens to be running first.

Since an int in C# defaults to a value of zero, the first TakeDamage() call drops the HP from 0 to -1.

Fix #1

There are a few ways we can fix this.

We could change the script execution order so that Player always executes before PlayerDamager.

Events - Custom Events - Script Execution Order Menu

In the Script Execution Order screen, you can set the order as a number.  Lower numbered scripts are run before higher numbered scripts.

Events - Custom Events - Script Execution Order

Fix #2 – Better

While this would work, there’s a much simpler and cleaner option we can use.

We can change the Player.cs script to set our HP in the Awake() method instead of Start().

Events - Custom Events - Initialization Order

Awake() is always called before Start(), so script execution order won’t matter.

Back to Events

So now we have our event working, but we haven’t quite seen a benefit yet.

Let’s add a new requirement for our player.  When the player takes damage, let’s play a sound effect that indicates that they were hurt.

PlayerImpactAudio

To do this, we’ll create a new script named PlayerImpactAudio.cs

using UnityEngine;

namespace UsingEvents
{
    [RequireComponent(typeof(AudioSource))]
	public class PlayerImpactAudio : MonoBehaviour
	{
		private AudioSource _audioSource;
		private void Awake()
		{
			_audioSource = GetComponent<AudioSource>();

			FindObjectOfType<Player>().OnPlayerTookDamage += PlayAudioOnPlayerTookDamage;
		}

		private void PlayAudioOnPlayerTookDamage(int hp)
		{
			_audioSource.Play();
		}
	}
}

Notice on line 13, we register for the same OnPlayerTookDamage event that we used in the PlayerHPBar.cs script.

One of the great things about events is that they allow multiple registrations.

Because of this, we don’t need to change the Player.cs script at all.  This means we’re less likely to break something.

If you’re working with others, you’re also less likely to need to do a merge with another developers code.

We're also able to more closely adhere to the single responsibility principle.

The single responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.
Robert C. Martin expresses the principle as follows: "A class should have only one reason to change."

The GameObject & AudioSource

You may have also noticed line 5 which tells the editor that this component requires another component to work.

Here, we’re telling it that we need an AudioSource on the gameobject.  We do this because line 11 looks for an AudioSource to play our sound effect from.

For this example, we've created a new gameobject and attached both our PlayerImpactAudio.cs script and an AudioSource.

Events - Custom Events - PlayerImpactAudio Inspector

Then we need to assign an AudioClip.  As you can see in the example, I’ve recorded my own sound effect and named it “oww”.

Now when we hit play, a sound effect triggers every time the TakeDamage() method is called.

Actions – Use these!

If this is all new to you, don’t worry, we’re almost done, and it gets easier.

Actions were added to C# with .NET 2.0.  They're meant to simplify events by removing some of the ceremony around them.

Let’s take another quick look at how we defined the event in our Player.cs script.

public delegate void PlayerTookDamageEvent(int hp);
public event PlayerTookDamageEvent OnPlayerTookDamage;

First, we define the event signature, declaring that our event will pass one integer named “hp”.

Then we declare the event so that other code can register for it.

With Actions, we can cut that down to one line.

using System;
using UnityEngine;

namespace UsingActions
{
    public class Player : MonoBehaviour
	{
		public Action<int> OnPlayerTookDamage;

		public int HP { get; set; }

		private void Start()
		{
			HP = 10;
		}

		public void TakeDamage()
		{
			HP -= 1;
			if (OnPlayerTookDamage != null)
				OnPlayerTookDamage(HP);
		}
	}
}

That’s all there is to it.  Nothing else needs to change.  All the other scripts work exactly the same.  We’ve simply reduced the amount of code needed for the same effect.

While this is great for the majority of events, there is one reason you may still want to use the occasional classic event: when your event has many parameters that can be easily confused with each other. My recommendation for that situation, however, is to re-think your events and see if the amount of data you're passing is larger than it needs to be.  If you really do need to pass a lot of data to an event, another great option is to create a new class or struct and fill it with your data, then pass that into the event.

Final Tip

Before I go, it’s also worth mentioning that you can have multiple parameters to an Action.  To do this, simply comma separate your parameter types like this.

public Action<int, string, MyCustomClass> OnSomethingWithThreeParameters { get; set; }

If you have questions or comments about Events or using Action, please leave a comment or send me an email.


Using Multiple Scenes

By Jason Weimann / September 26, 2016

Project Example Source Code

The source code and example projects for this post are available.  If you'd like to grab it and follow along, just let me know where to send it.

Using Multiple Scenes in Unity

One of the great things about Unity 5 is the ability to load multiple scenes at the same time effectively.
You may have noticed that your scene name is now visible in the top left of the hierarchy.

Multiple Scenes - Empty Room with no other scenes loaded
This is here to show which scene the game objects are in.

If you load multiple scenes, you’ll see them as separate collapsible groups in the list.

Multiple Scenes - Scenes added in Hierarchy

There are a variety of ways you can use additive level loading in your projects.  In this article, we’ll cover some of the most common uses.

  • Splitting scenes for shared editing
  • Randomly generated game content
  • Seamless loading of large worlds
  • Smooth transitions from a menu scene to a game level

Shared editing / splitting the world

Multi-user scene editing in Unity can be painful.  Merging changes isn’t easy, and even when you successfully do a merge, it can be hard to tell if everything is right.

For many games, the new scene management systems in Unity 5 will allow you to split up parts of your world into separate chunks that are in their own scene files.

This means that multiple designers can each set up part of the world.

Our Starting State

To demonstrate how this would work, I’ve built two scenes.  There’s a purple scene and a yellow scene.

Multiple Scenes - Yellow and Purple Scenes

With both of them loaded at the same time, you can see that their seams line up and they combine to be a larger scene.

The advantage though is we can have a designer working on the yellow scene while another designer makes changes to the purple one.

This example has simple scenes.  In a real game, just imagine the scenes are different quadrants of a city, chunks of a large castle, or a large scene in one of your previous projects.

Mario Changed the Purple Scene

To show the benefit and how it works, we’ve modified the purple scene.  It now has another sphere and an extra word!

Check out the Hierarchy and notice that only the purple scene has been modified, so when we save, we're not affecting the yellow scene at all.

Multiple Scenes - Yellow and Purple Scenes - Purple Changed

Luigi changed the Yellow Scene

It’s a good thing we didn’t touch the yellow scene too, because another designer has made some changes to it while we were modifying the purple one!  They added a cube and more words!

Multiple Scenes - Yellow Scene Changed

Not a problem

Since we only edited the purple scene, nobody’s overwritten someone else’s work.

Multiple Scenes - Yellow and Purple Scenes - Both Changed

Our end result has changes from two separate designers working in parallel.  Depending on your game, this could be split among any number of people, all in charge of their own area, or at least coordinating who's editing each area to avoid stepping on each other's work.

Generating a level at run-time

The first situation we’ll cover today is loading multiple scenes to build a bigger level dynamically.

For this example, I’ve built two rooms.  One is red and the other is blue.

Multiple Scenes - Blue Room

The Blue Room

Multiple Scenes - Red Room

The Red Room

I’ve also created another scene named ‘EmptyRoom‘.

This scene holds a camera, a light, and a gameobject with a RoomLoadController script.

Multiple Scenes - Empty Room

The RoomLoadController is responsible for loading in our red and blue rooms during the game.

For this sample, our RoomLoadController will watch for a keypress of the numpad plus or numpad minus keys.  If the user presses either of them, we'll add another scene to our game.

using UnityEngine;

public class RoomLoadController : MonoBehaviour
{
    private int zPos = 0;

	private void Update()
	{
		if (Input.GetKeyDown(KeyCode.KeypadMinus))
		{
			AddRoom("RedRoom");
		}

		if (Input.GetKeyDown(KeyCode.KeypadPlus))
		{
			AddRoom("BlueRoom");
		}
	}

	private void AddRoom(string roomName)
	{
		zPos += 7;

		var roomLoader = new GameObject("RoomLoader").AddComponent<RoomLoader>();
		roomLoader.transform.position = new Vector3(0f, 0f, zPos);
		roomLoader.Load(roomName);
	}
}

You may have read the script and wondered, where's the scene loading part?  Well, for this project, I wanted to load a bunch of scenes in and I wanted them to always be offset by 7 meters.

To keep the code separated and simple, I spawn a new object called RoomLoader to do the work.  We give the RoomLoader a position and a room name, and it handles the rest.

Let’s take a look at the RoomLoader.

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class RoomLoader : MonoBehaviour
{
    public void Load(string roomName)
	{
		SceneManager.sceneLoaded += SceneManager_sceneLoaded;
		SceneManager.LoadSceneAsync(roomName, LoadSceneMode.Additive);
	}

	private void SceneManager_sceneLoaded(Scene scene, LoadSceneMode mode)
	{
		SceneManager.sceneLoaded -= SceneManager_sceneLoaded;
		StartCoroutine(MoveAfterLoad(scene));
	}

	private IEnumerator MoveAfterLoad(Scene scene)
	{
		while (scene.isLoaded == false)
		{
			yield return new WaitForEndOfFrame();
		}

		Debug.Log("Moving Scene " + transform.position.x);

		var rootGameObjects = scene.GetRootGameObjects();
		foreach (var rootGameObject in rootGameObjects)
			rootGameObject.transform.position += transform.position;
	}
}

Check out the load method.  This is what’s being called from the RoomLoadController. It does two things.

  1. Registers a callback for the SceneManager.sceneLoaded event.
  2. Calls SceneManager.LoadSceneAsync, using the LoadSceneMode.Additive option.

SceneManager.sceneLoaded: "Add a delegate to this to get notifications when a scene has loaded"

After line 10 executes, the scene specified in roomName will start loading. Because of the LoadSceneMode.Additive option, we will keep our current scene open as well, including our camera, light, and RoomLoadController.

Once the scene finishes loading, our SceneManager_sceneLoaded method will be called by the delegate (registered on line 9).  The first thing we do is deregister from the event, so we don't get called for every other scene that loads.  Then we kick off a coroutine to wait for the scene to be completely ready.  Lines 21-24 do the waiting…. and waiting…. until scene.isLoaded is true.

I’m not sure why the scene isn’t completely loaded when the sceneLoaded event fires.  I’m sure there’s a reason for it, but I haven’t found the explanation yet.  If you happen to know, please comment.

On line 28, we get the root gameobjects of the newly loaded scene.  We then move those objects over to be offset by the amount this RoomLoader is.  This is why the RoomLoadController is moving the RoomLoader.

Blue Room Root Objects

Let’s check out the end result.

Multiple Scenes - Loading Red and Blue Rooms

Again, for this example, we’re controlling the loading of scenes, but there’s no reason we couldn’t randomly pick some.

This same technique can be used to randomly generate a dungeon out of pre-built scenes or load new scene parts as a player explores the world.

Part Two

Scene management is a huge subject, and while we've covered some important basics, there's a lot more to learn.

If you're interested in this subject, you can get part two delivered directly to you as soon as it's ready.

Part Two of this post will cover:

  • Seamless loading of large worlds
  • Smooth transitions from a menu scene to a game level

 


Unity OnInspectorGUI – Custom Editors, Gizmos, and Spawning Enemies

By Jason Weimann / September 12, 2016

Creating games can be difficult and time consuming.  You have to code all kinds of systems, add and modify art and sound, and of course design levels.

As a programmer, I often found myself overlooking level design, and forgetting just how time consuming and frustrating it could be.

But I also know that as a programmer, there are things I can do to make it easier for myself (and any designers working on the games).

Today, I’ll show you one very useful technique you can use to drastically reduce the time spent on design work, while making it a much more fun process.

The Example – Spawn Points

Enemies are a very common thing in video games, and in a large number of them, enemies spawn continuously throughout the game.

The GameObject spawning them can be simple: just instantiate an enemy on a set interval, as in the quick sketch below.
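If you’ve never set one up, a spawner like that is only a few lines.  This sketch is mine, not from the project; the prefab and interval fields are placeholders.

using UnityEngine;

public class SimpleSpawner : MonoBehaviour
{
    [SerializeField] private GameObject _enemyPrefab; // whatever enemy you want to spawn
    [SerializeField] private float _interval = 3f;    // seconds between spawns

    private void Start()
    {
        // Spawn one enemy every _interval seconds, starting after one interval.
        InvokeRepeating("Spawn", _interval, _interval);
    }

    private void Spawn()
    {
        Instantiate(_enemyPrefab, transform.position, transform.rotation);
    }
}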

Before I show you my technique, let me show you how I used to create them.

Version 1 – A simple transform (very bad)

When I first started placing spawn points in a game, I did it by simply placing a transform.  The screenshot below is actually a step beyond what I used to do, because here I’ve enabled the Icon so you can see it.

Custom Editors - Spawn Point as Transform

If you haven’t used the Icons before, the selection dialog is just to the left of the Active checkbox in the inspector.

Custom Editors - Icon Selector

I quickly moved on from just placing a transform though because it got really hard to tell exactly where the spawn point was in the world.  If the transform is below the ground, I wouldn’t be able to tell without moving the camera all around.  The same goes for a spawn point that’s in a building, hovering over the ground, etc.

Version 2 – Using a cube (less bad)

The next evolution of my spawn points involved cubes.  Creating spawn points with a cube renderer mostly resolved the issue with not being able to easily see the position in the scene.

To make this work though, I needed my spawn points to disable the renderer in their Awake() call so I didn’t have random boxes showing in the world when the game was being played.  That part is tiny, as the sketch below shows.
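Here’s roughly what that looks like; the class name is a placeholder of mine.

using UnityEngine;

public class SpawnPointMarker : MonoBehaviour
{
    private void Awake()
    {
        // The cube renderer is only a design-time aid; hide it when the game runs.
        GetComponent<Renderer>().enabled = false;
    }
}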

It also didn’t really solve the issue of spawning enemies on the ground, so I’d have to make my spawners do a raycast downward to the ground to get their spawn point before popping out an enemy.

I’d try to place the boxes just a bit over the ground, but found that I wasted a lot of time lining things up right, testing, making minor movements, testing, etc.

In addition to that, it felt ugly, but I used this technique for a very long time….

Custom Editors - Spawn Point as Cube

Version 3 – Custom Editors

After using those methods for way too long, I finally came up with a solution that solved my problems and made building levels much faster.

Custom Editors - Enemy Spawners Scene View

As you can see in the image, Version 3 looks drastically different.  There are colored spheres with lines connecting them.  There’s text over them instead of an Icon, and that text packs in a lot of info.

Before I show you how it’s done, let me explain what it is you’re seeing.

The Green spheres show actual spawn points for this game.  These are points where enemies will be instantiated.

The Blue spheres are waypoints.  Enemies spawn at the green spheres then walk to the blue ones.

The lines between them show which waypoints belong to each spawnpoint.

What’s that Text?

The text over the spawn point shows a few things.  Let’s examine the top left spawn point.

Custom Editors - Spawn Point Up Close

Intro 1 0:25-0:28 Spawn 2 [1/3] after 5(8s)

Intro 1 – This is the name of the wave/area this spawn point belongs to.  In this case, it’s the first introductory wave the player gets when they start the game.

0:25-0:28 – Here you see the time in the wave that this spawn point will be active.  This spawn point is active for a very short time, starting 25 seconds into the wave and ending only 3 seconds later.

Spawn 2 [1/3] – This tells us how many enemies will spawn from this point.  It’s going to spawn 2 zombies, one every three seconds (the [1/3] shows the count and interval).  The first one will spawn immediately, and the second after 3 seconds.

after 5 – This part isn’t visible on all spawn points, only on spawn points that delay their start.  You can see in the Hierarchy that this spawn point is under a gameobject that enables after 20 seconds.  Each spawnpoint under a timer can have an additional delay added to it, to avoid a large list of timers in the hierarchy.  The 5 second delay is what makes this spawner start at 0:25 instead of 0:20.

Custom Editors - Hierarchy

(8s) – The last thing you see shows how long this spawnpoint is enabled.  For this one, after 8 seconds it will auto disable itself.  This is just the time of the last spawn minus the time the spawn point becomes enabled (28 – 20 in this case).

Snapping to the Terrain or Navmesh

One final benefit of this system that I want to show before getting into code is the ability to have your spawn points and waypoints automatically snap to the terrain or navmesh.  In the example below, you can see that when I move this waypoint around it will automatically find its place on the ground as soon as I release it.

This saves a ton of time and resolves that entire issue of lining things up.  Don’t do these things manually, have the editor do it for you.

Custom Editors - Waypoint Snapping
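If you’re curious what that snap could look like in code, here’s a hedged sketch: try the navmesh first, then fall back to a physics raycast for the terrain.  The class name, distances, and the NavMesh.SamplePosition approach are my assumptions, not the project’s actual code.

using UnityEngine;
using UnityEngine.AI;

public static class GroundSnapper
{
    public static void Snap(Transform target)
    {
        // Prefer a nearby point on the navmesh if one exists.
        NavMeshHit navHit;
        if (NavMesh.SamplePosition(target.position, out navHit, 10f, NavMesh.AllAreas))
        {
            target.position = navHit.position;
            return;
        }

        // Otherwise raycast down from above the object and use the hit point.
        RaycastHit hit;
        if (Physics.Raycast(target.position + Vector3.up * 50f, Vector3.down, out hit, 200f))
            target.position = hit.point;
    }
}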

How It Works

To make my custom spawn points work like they do, I take advantage of two great Unity features: Gizmos and Custom Inspectors.

Both parts do about half of the work required to get the full functionality.

Let’s start with the OnDrawGizmos snippet from my EnemySpawner.cs script.  The embedded snippet didn’t survive in this version of the post, so below is a reconstruction based on the description that follows.
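Treat this as a minimal sketch, not the original code: the wave lookup, the Selection-based check, and the sphere radius are my assumptions.

#if UNITY_EDITOR
private void OnDrawGizmos()
{
    // The Wave parent is the GameObject this wave's spawners and timers sit under.
    // (In the real project this may walk further up the hierarchy.)
    var wave = transform.parent;
    if (wave == null)
        return;

    // Prefix the spawner's own name (generated by ToString below) with the wave name.
    string label = string.Format("{0} {1}", wave.name, name);

    // Treat the wave as selected if the current selection is under it.
    bool waveSelected = UnityEditor.Selection.activeTransform != null &&
                        UnityEditor.Selection.activeTransform.IsChildOf(wave);

    // Green for the selected wave, gray for everything else.
    Gizmos.color = waveSelected ? Color.green : Color.gray;
    Gizmos.DrawSphere(transform.position, 0.5f);

    // Only draw the label text for spawners in the selected wave.
    if (waveSelected)
        UnityEditor.Handles.Label(transform.position + Vector3.up, label);
}
#endif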

The first thing we do here is get the Wave parent of this spawner.  This is the GameObject that all spawners and timers will be under for a specific wave or area of the game.

In the example above, you saw the green part “Intro 1“.  That part was just the name of the wave we find right here.

Next we use string.Format to combine the wave name with the current spawner’s name, which is why “Intro 1” appears above the spawning details.

After that, we check whether the wave this gizmo belongs to is currently selected.  We then use that to decide between a green spawner gizmo and a gray one.  I do this so we can easily tell which spawners are related: all spawners in the selected wave are colored at the same time, and the ones from other waves just show up as gray.

Custom Editors - Disabled Spawners

The sphere itself is drawn with Gizmos.DrawSphere, in whichever color we’ve chosen.

Finally, we draw the label text above the sphere if the spawner is in the selected wave.

The OnDrawGizmos code is pretty short, and on its own it does some really useful stuff, but there’s a lot missing.  It shows the spheres, and it places the name above the sphere with the wave name as a prefix, but there’s a lot more we want to happen.

For example, the label has a lot of useful info pulled from the object’s name, but we don’t want to enter that info manually; we want it auto-generated and updated whenever we change things.

Overriding ToString()

To generate the name, with all the useful data, we override the ToString method of our EnemySpawner class.

If you’ve never overridden the ToString method, you may want to check out this description for a simpler sample of how it works: https://msdn.microsoft.com/en-us/library/ms173154.aspx

Every object in c# has a ToString method you can override (the default return value for most types is the name of the class/type).

In this example, we’re building up the rest of the label text.  While I won’t go into the details of each line, the end result of this method looks like this:

"0:25-0:28 Spawn 2 [1/3] after 5(8s)"

The Custom Editor

To tie this all together, we use a custom editor for the EnemySpawner.

Before you see the bigger parts of the script, let’s start with the initial attribute that tells Unity this class is a custom editor.

The CustomEditor attribute allows you to tell the engine which MonoBehaviour you want the editor to be used for.  This is specified by giving it the type of the MonoBehaviour.  In this example it’s typeof(EnemySpawner).

Also remember to add a using UnityEditor statement and make your custom editor derive from the “Editor” base class.
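Put together, the skeleton looks something like this (the class name EnemySpawnerEditor is my placeholder; the attribute and base class are exactly as described above):

using UnityEditor;

[CustomEditor(typeof(EnemySpawner))]
public class EnemySpawnerEditor : Editor
{
    // The OnInspectorGUI override goes here (see the expanded version below).
}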

The Editor class has one important method you need to override.  Check out this expanded version of the script and the OnInspectorGUI method that’s being overridden.
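The expanded script also didn’t make it into this version of the post, so here’s a sketch assembled from the breakdown below.  The helper methods, slider limits, waypoint handling, and the MinMovementSpeed/MaxMovementSpeed fields are assumptions of mine.

using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(EnemySpawner))]
public class EnemySpawnerEditor : Editor
{
    private EnemySpawner _enemySpawner;

    public override void OnInspectorGUI()
    {
        // Cache the component this inspector is currently showing.
        _enemySpawner = (EnemySpawner)target;

        // Let the default inspector draw anything we don't handle below.
        base.OnInspectorGUI();

        // Range slider that keeps min <= max for movement speed.
        float min = _enemySpawner.MinMovementSpeed;
        float max = _enemySpawner.MaxMovementSpeed;
        EditorGUILayout.MinMaxSlider(new GUIContent("Movement Speed"), ref min, ref max, 0f, 20f);
        _enemySpawner.MinMovementSpeed = min;
        _enemySpawner.MaxMovementSpeed = max;

        // Buttons that add child objects used as waypoints.
        if (GUILayout.Button("Add Random Waypoint"))
            AddWaypoint("RandomWaypoint");
        if (GUILayout.Button("Add Static Waypoint"))
            AddWaypoint("StaticWaypoint");

        DisableLeftoverComponents(); // colliders/renderers accidentally left on the spawner
        StickToGround();             // raycast down and move to the hit point
        _enemySpawner.name = _enemySpawner.ToString(); // auto naming via the override above
    }

    private void AddWaypoint(string waypointName)
    {
        var waypoint = new GameObject(waypointName);
        waypoint.transform.SetParent(_enemySpawner.transform, false);
    }

    private void DisableLeftoverComponents()
    {
        foreach (var collider in _enemySpawner.GetComponents<Collider>())
            collider.enabled = false;
        foreach (var renderer in _enemySpawner.GetComponents<Renderer>())
            renderer.enabled = false;
    }

    private void StickToGround()
    {
        RaycastHit hit;
        if (Physics.Raycast(_enemySpawner.transform.position, Vector3.down, out hit))
            _enemySpawner.transform.position = hit.point;
    }
}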

This method is called repeatedly in the editor while the Inspector window is visible and the object is selected.  If the Inspector is not visible, or is showing some other game object, this code won’t be called.

Code Breakdown

The first thing we do in this OnInspectorGUI method is cache the component we’re working with.

We start by casting target to EnemySpawner and assigning it to the _enemySpawner variable.

The variable target is defined by the Editor base class and references the object this editor is currently showing.

Next we call the base Editor version of OnInspectorGUI so it can handle anything that we’re not dealing with.  This is required because we’re overriding the behavior of OnInspectorGUI.

After that, a single method call creates a range slider that sets the min and max movement speed.  I do this to enforce the idea that the max must be greater than the minimum.  As a benefit, it also makes the values a little easier to visualize.

Custom Editors - MovementSpeed Range Slider

The next block adds waypoints to the spawners.  I won’t cover in detail how they work, but these buttons essentially add a child object that will be used as a waypoint.  If it’s a random waypoint, my navigation code will select one at random; if it’s static, the enemies will path through them in order.  These also have their own gizmo and custom editor code to make them show up as blue in the scene view.

After the buttons, we call a method to disable any leftover colliders or renderers on the spawner.  Generally there aren’t any, but sometimes a spawner gets created from a cube or sphere, and I want to make sure those are disabled right away.  I could just remove them here too, but disabling does the same job and feels safer.

Then comes one of the most important parts: the call that sticks the spawner to the ground.  Sticking the spawner down is done by a raycast aimed downward from the spawner’s current position.  We get the hit point and update the spawner’s position to match.

The method wraps it all up by updating the spawner’s name.  It uses the overridden ToString() method we created above to determine the object’s new name.

Auto Naming in Action

Custom Editors - Naming in Action

Important Note

For a custom editor to work, you need to place the script in a sub-folder named “Editor”.  This sub-folder can be anywhere in your project, and you can have multiple Editor folders, but only scripts in an Editor folder will work.

Custom Editors - EditorFolder

Custom Editors - EnemySpawner


Unity Interfaces

By Jason Weimann / September 4, 2016

Unity Interfaces – Getting Started

Lately, I’ve realized that many Unity developers have never programmed outside of Unity projects.
While there’s nothing wrong with that, it does seem to leave some holes in the average Unity developer’s skill set.
There are some great features and techniques that aren’t commonly used in Unity but are staples of typical c# projects.

They can still be completely productive, but some of the things I see missing can really help, and I want to make sure to share those things with you.

Because of this, I’ve decided to write a few articles covering some core c# concepts that can really improve your code if you’re not using them already.

The first in this series will cover c# interfaces.

If you google c# interfaces, you’ll come across the MSDN definition:

An interface contains definitions for a group of related functionalities that a class or a struct can implement.

Personally, I prefer to use an example to explain them though, so here’s one from an actual game.

The ICanBeShot interface

In Armed Against the Undead, you have guns and shoot zombies.
But you can also shoot other things like Ammo pickups, Weapon unlocks, Lights, etc.

Shooting things is done with a standard raycast from the muzzle of the gun.  Any objects on the correct layer and in range can be shot.

If you’ve used Physics.Raycast before, you’ll know that it returns a bool and outputs a RaycastHit object.

The RaycastHit has a .collider property that references the collider your raycast found.

In Armed, the implementation of this raycast looks like this:

private bool TryHitEnvironment(Ray ray)
{
    RaycastHit hitInfo;

    if (Physics.Raycast(ray, out hitInfo, _weaponRange, LayerMask.GetMask("EnvironmentAndGround")) == false)
        return false;

    ICanBeShot shootable = hitInfo.collider.GetComponent<ICanBeShot>();

    if (shootable != null)
        shootable.TakeShot(hitInfo.point);
    else
        PlaceBulletHoleBillboardOnHit(hitInfo);

    return true;
}

Here you can see that we do a raycast on the EnvironmentAndGround layer (where I place things you can shoot that aren’t enemies).

If we find something, we attempt to get an ICanBeShot component.

What we’re asking GetComponent for isn’t a concrete class but an interface, which is implemented by a variety of different components.

It’s also very simple, with a single method named TakeShot defined on it, as you can see here:

public interface ICanBeShot
{
    void TakeShot(Vector3 hitPosition);
}

If you’ve never used an interface before, it may seem a little strange that there’s no actual code or implementation.  In the interface, we only define how the methods look and not the implementation.  We leave that part to the classes implementing our interface.

How the Interface is used

So now that I have my interface, and I have a method that will search for components implementing that interface, let me show you some of the ways I’m using this interface.

Implementation #1 – Ammo Pickups

using UnityEngine;

public class AmmoBox : MonoBehaviour, ICanBeShot
{
    [SerializeField]
    private bool _isSuperWeaponAmmo; // set per-prefab in the inspector (declaration not shown in the original snippet)

    public void TakeShot(Vector3 hitPosition)
    {
        PickupAmmo(); // handles the pickup effects; defined elsewhere in the class

        if (_isSuperWeaponAmmo)
            FindObjectOfType<Inventory>().AddChargeToSuperWeapon();
        else
            FindObjectOfType<Inventory>().AddAmmoToWeapons();
    }
}

This ammo script is placed on an Ammo prefab.

Ammo Scene and Inspector

Notice the box collider that will be found by the raycast in TryHitEnvironment above (the Physics.Raycast call).

Ammo Inspector

In the case of the AmmoBox, the TakeShot method will add ammo to the currently equipped weapon.  But an AmmoBox isn’t the only thing we want the player to shoot at.

Implementation #2 – Weapon Unlocks

using UnityEngine;

public class WeaponUnlocker : MonoBehaviour, ICanBeShot
{
    [SerializeField]
    private Weapon _weaponToUnlock; // assigned in the inspector (declarations not shown in the original snippet)
    [SerializeField]
    private GameObject _particle;

    public void TakeShot(Vector3 hitPosition)
    {
        WeaponUnlocks.UnlockWeapon(_weaponToUnlock);
        PlayerNotificationPanel.Notify(string.Format("<color=red>{0}</color> UNLOCKED", _weaponToUnlock.name));

        if (_particle != null)
            Instantiate(_particle, transform.position, transform.rotation);

        Destroy(this.gameObject);
    }
}

Compare the AmmoBox to the WeaponUnlocker.  Here you see that we have a completely different implementation of TakeShot.  Instead of adding ammo to the player’s guns, we’re unlocking a weapon and notifying the player that they’ve unlocked it.

And remember, our code that deals with shooting things didn’t get any more complicated; it’s still just calling TakeShot.  This is one of the key benefits: we can add countless new implementations without complicating, or even editing, the code that handles shooting.  As long as those components implement the interface, everything just works.

Implementation #3 – Explosive Boxes

These are crates that when shot will explode and kill zombies.
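There’s no snippet for this one in the post, but under the same pattern a crate could look like the sketch below.  Zombie, Die(), and the serialized fields are hypothetical stand-ins of mine.  The destructible lights in Implementation #4 would follow the same pattern, just turning off their Light component in TakeShot.

using UnityEngine;

public class ExplosiveBox : MonoBehaviour, ICanBeShot
{
    [SerializeField] private GameObject _explosionParticle;
    [SerializeField] private float _explosionRadius = 5f;

    public void TakeShot(Vector3 hitPosition)
    {
        if (_explosionParticle != null)
            Instantiate(_explosionParticle, transform.position, transform.rotation);

        // Kill any zombies caught in the blast radius.
        foreach (var collider in Physics.OverlapSphere(transform.position, _explosionRadius))
        {
            var zombie = collider.GetComponent<Zombie>();
            if (zombie != null)
                zombie.Die();
        }

        Destroy(gameObject);
    }
}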

Implementation #4 – Destructible Lights

In addition to everything else, the lights can also take a shot (in which case they explode and turn off the light source component)

Recapping

Again, to make the benefits of Unity interfaces clear, re-examine our code in TryHitEnvironment.

ICanBeShot shootable = hitInfo.collider.GetComponent<ICanBeShot>();

if (shootable != null)
    shootable.TakeShot(hitInfo.point);

We simply look for any collider on the right layer, then search it for the ICanBeShot interface.  We don’t need to worry about which implementation it is.  If it’s an ammo box, the ammo box code will take care of it.  If it’s a weapon unlock, that’s covered as well.  And if we add a new object that implements the interface, we don’t need to touch our existing code.

Other Benefits

While I won’t cover everything that’s great about interfaces in depth here, I feel I should at least point out that there are other benefits you can take advantage of.

  1. Unit Testing – If you ever do any unit testing, interfaces are a key component as they allow you to mock out dependencies when you write your tests.
  2. Better Encapsulation – When you code to interfaces, it becomes much more obvious what should be public, and your code typically becomes much better encapsulated.
  3. Loose Coupling – Your code no longer needs to rely on the implementations of the methods it calls, which usually leads to code that is more versatile and easier to change.
