
Unity Navigation Basics – Click to Move

By Jason Weimann / August 24, 2017

In this article I’ll explain some of the basics of the Unity Navigation System.  We’ll bake a navmesh, set up the walkable area, and show how to use the NavMeshAgent to control the character’s movement.

Video Version

Getting Started

To get started, I imported an asset pack, but you can use anything you want, even a simple plane.

Select the meshes in your scene that you want the character to navigate on and check the Navigation Static box in the inspector.

Checking this box tells the navigation system that we want to have the mesh considered for navigation.

But we also want those buildings to be considered, since they’re ‘blocking‘ the area.

So we need to set Navigation Static on any mesh that’s touching our navigable area.

Baking the Navmesh

Now open the Navigation window under Window->Navigation

On the Bake Tab, select Bake.

Making Areas Not Walkable

If you see areas like this that are blue, but where you don’t want the character navigating..

You’ll need to select them and change their navigation area to “Not Walkable”.

This tells the system that they need to block navigation, but not be walkable themselves.

Controlling a Character

For this sample, I’ve created a capsule to be my character.

Add a NavMeshAgent to the capsule and place it near the ground.

Now we need a custom script to read input and tell the agent where to go.

Create this NavMeshClickController script and add it to the capsule.
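A minimal version of that script looks something like this (a sketch; your exact version may differ):

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class NavMeshClickController : MonoBehaviour
{
    private NavMeshAgent agent;

    private void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    private void Update()
    {
        // "Fire1" maps to the left mouse button by default.
        if (Input.GetButtonDown("Fire1"))
        {
            // Create a ray into the scene from where the player clicked.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                // If the hit point is on the NavMesh, the agent starts walking there.
                agent.SetDestination(hit.point);
            }
        }
    }
}
```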

Let’s take a look at what’s going on here..

In the Update method, we’re checking whether the user clicked the mouse (the “Fire1” button).

If they do, we create a ray into the scene from where the player clicked.

We take any hit point on that ray and tell the NavMeshAgent to set its destination.

If that point is on the NavMesh, the agent will start walking there.  If it’s not, nothing will happen.

Important Note

We must have a main camera for this to work.  Because we’re using Camera.main, if there’s no camera with the MainCamera tag set, we’ll get a NullReferenceException.

For my example, I created a camera, tagged it as MainCamera, and made it a child of the character so it would follow the character around.

Conclusions

The navigation system in Unity is really powerful and can do a whole lot more than I’ve shown here.  If you have characters that need to walk around, you should definitely consider using this system for them.  It’s not perfect for every situation, but it’s usually a great option, and always worth considering.

 


Unity3D Runtime Animating with the GameObjectRecorder

By Jason Weimann / August 18, 2017

Recording custom animations in Unity just got easier with the introduction of the GameObjectRecorder.  It’s in beta at the time of this writing, but I definitely recommend you give it a try.

To use the recorder, you need to instantiate a GameObjectRecorder, set the root, and bind the transform.

You’ll also need to create an empty animation clip for the object you’re recording.

Then every frame, simply call the recorder’s TakeSnapshot method, and when you’re done, save the clip using the recorder’s SaveToClip() method.

The sample code provided on the Unity forums is a great way to get started; using it, you can be recording your animations in just a few minutes: https://forum.unity3d.com/threads/feedbacks-for-experimental-feature-gameobjectrecorder-2017-1.467852/

Here’s roughly what that code looks like:
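This sketch assumes the 2017.1 experimental API (GameObjectRecorder lives in UnityEditor.Experimental.Animations and only works inside the editor); later Unity versions changed the constructor and bind calls, so double-check the signatures against your version:

```csharp
using UnityEngine;
using UnityEditor.Experimental.Animations; // experimental, editor-only API

public class HierarchyRecorder : MonoBehaviour
{
    public AnimationClip clip;   // assign an empty clip in the inspector
    public bool record;          // check this box at runtime to start recording

    private GameObjectRecorder recorder;

    private void Start()
    {
        // Create the recorder, set the root, and bind the transforms under it.
        recorder = new GameObjectRecorder();
        recorder.root = gameObject;
        recorder.BindComponent<Transform>(gameObject, true);
    }

    private void LateUpdate()
    {
        if (clip == null)
            return;

        if (record)
        {
            // Record the state of the bound transforms for this frame.
            recorder.TakeSnapshot(Time.deltaTime);
        }
        else if (recorder.isRecording)
        {
            // Recording was just stopped; write everything into the clip.
            recorder.SaveToClip(clip);
            recorder.ResetRecording();
        }
    }
}
```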

Using the Sample Code

Find the gameobject you want to record, and add the HierarchyRecorder component to it.

Create an empty animation clip for your gameobject.

Assign that new empty clip to the HierarchyRecorder’s “clip” field.

Start playing, and check the record box.

Move your objects around as desired (this can be done with physics, in the scene view, via navigation, or through other code).

Uncheck the record box.

You’re done, play your animation 🙂

Video Version


How to adjust player size & scale in Virtual Reality with Unity3D

Adjusting player scale in VR is easy if you know what to do.  It gives you the ability to create really amazing effects and interesting gameplay.  You can make your player into a Giant.. or shrink them down to the size of a Mouse..  The technique in this article works across all devices, Vive, Oculus, GearVR, Daydream, etc..

Scaling the Player Up and Down

To adjust the player scale, you need an empty gameobject.

In your scene, create an empty gameobject and reset its transform… name it “Player Root”

Move your main camera under the “Player Root” and reset the transform there as well.

Now simply scale the “Player Root”.  As you scale it up, your player will feel like they’re growing.  If you shrink it down small, they’ll in turn feel small.

Automatically Resizing the Player

In some situations, you may have a need to automatically resize the player so their point of view is consistent.  If your game requires a certain height to be playable, this may be a solution that can expand your audience to shorter or taller people.  You can use a script similar to the one below to adjust the player’s scale based on their height at the time it runs.  Ideally, you’d run this while they’re loading a scene or give the player a ‘re-orient’ option to allow them to adjust for their size.
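A sketch of that kind of script (the field names and the 1.7 meter default are placeholders; it assumes the camera sits directly under the player root, so its local Y position is the tracked eye height in meters):

```csharp
using UnityEngine;

public class PlayerSizeAdjuster : MonoBehaviour
{
    [SerializeField] private Transform playerRoot;         // the "Player Root" we scale
    [SerializeField] private Transform headCamera;         // the tracked main camera under the root
    [SerializeField] private float targetEyeHeight = 1.7f; // the eye height the game was designed for

    private void Start()
    {
        AdjustScale();
    }

    // Call this again from a 're-orient' button if you want to let players re-adjust.
    public void AdjustScale()
    {
        // The camera's local Y position is the player's real eye height in meters.
        float currentEyeHeight = headCamera.localPosition.y;
        if (currentEyeHeight < 0.1f)
            return; // tracking hasn't kicked in yet

        float scale = targetEyeHeight / currentEyeHeight;
        playerRoot.localScale = Vector3.one * scale;
    }
}
```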

Video Version

Conclusions

Resizing the player opens up a variety of fun gameplay opportunities.  If you haven’t tried it before, jump in and play around.. new ideas may come up and you could build the next amazing experience using this simple technique.

 


Unity3D .net 4.6 String Interpolation and Null Conditional Operator

By Jason Weimann / August 16, 2017

With the release of Unity 2017.1, the .net framework update to 4.6 has been included and is just about ready for mainstream usage.  The .net framework upgrade is actually pretty big: it contains a lot of changes under the hood, and quite a few new pieces of functionality you can use.  In this post, I’m only going to cover two of them, two of my favorites.  That’s because they’re extremely easy to start using right away and will make your code just a tad cleaner by removing some of the extra ceremony that was required before.  These features are String Interpolation (using the $ operator) and the Null Conditional Operator (aka the Elvis operator, ?. ).

Important Warning

Before you upgrade to .net 4.6, be sure to commit / back up your project.  The last time I did it, I ended up with quite a few prefab references becoming null.  I don’t know if that’s since been fixed, was a one-time occurrence, or something else, but treat it just like you would a major version upgrade in Unity.  Commit to source control and save yourself from possible headaches.

 

To demonstrate these, I plucked a line of code from one of my real projects that happens to use them both, for a very common use.. logging.

Before I show them off though, I want you to take a quick look at this
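It went roughly like this, where previousHandler and newHolderName are stand-ins for the real names in my project:

```csharp
string newHolderName = heldBy != null ? heldBy.gameObject.name : string.Empty;
Debug.Log(string.Format("Puck handler changed from {0} to {1}",
    previousHandler.gameObject.name, newHolderName));
```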

Here, I’m writing to the logs when the character handling the puck has changed (this is a hockey game).  I log the character that was handling the puck at {0} and the character that now holds the puck at {1}.  Sometimes the puck is held by nobody though, so I need to do a null check and pass in an empty string in that case..  so I’m using the ternary ?: operator.  In all, it’s 3 lines (sure, it could be one long one).. and if you’re reading it for the first time, it takes a second.. or a few..  to see what’s going on and make sure that ternary is correct.

Now let’s compare that to the .net 4.6 version.
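Using the same stand-in names:

```csharp
Debug.Log($"Puck handler changed from {previousHandler.gameObject.name} to {heldBy?.gameObject.name}");
```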

The first thing to note is we no longer have string.Format; instead we start the string with a $.  This tells the compiler that we’ll be putting variables in our string using the { }’s.  In fact, right after the quote, we start with the first parameter.. and instead of a {0} that we have to look to the end for, we can put the variable right there.  Behind the scenes, the compiler treats this just like the string.Format from the first version, but we don’t have to mentally match the {0} to the first parameter and so on.

We do the same for the second parameter, but there’s a little something extra.  In the 2nd parameter, we needed to check that heldBy wasn’t null.  With the Null Conditional Operator, we can tell the compiler to do that for us.  If the heldBy variable is null, the whole expression evaluates to null (which prints as an empty string).  If it’s not, it evaluates the gameObject and then the .name of that gameObject.  All without us writing extra null checks.

It’s also worth noting that the Null Conditional Operator works for executing methods… or invoking events.  You could have a method named Shoot() on a character, and call it with character?.Shoot() to have the code shoot if a character is assigned to the variable..  Not the best example, but hopefully you get the idea 🙂

 Video Version


How to make Vive Trackers work without Controllers

A common issue I’ve seen people run into with the HTC Vive Trackers is that they can’t track them without the controllers on.  This is caused by a single line of code in the SteamVR_ControllerManager.cs file and is easy to change.  All you need to do is find the section shown below and comment out the 2nd line there.  As of the time this is written, that line number is 255 (it may change with SteamVR updates).

 

Video Version


Using the HTC Vive Tracker POGO Pins in Unity3D

If you have an HTC Vive Tracker, you may have already hooked it up and started tracking objects in Unity3D.  That part isn’t too hard, and it’s a lot of fun.  But you can also flip that Vive Tracker over and start using the POGO pins on the back.  The pins are pretty easy to use and give you access to the same buttons you’d have on a normal Vive wand and even include an output for haptic feedback.

HTC Vive Tracker Documentation

To get detailed info, HTC has a guide here: https://dl.vive.com/Tracker/Guideline/HTC_Vive_Tracker_Developer_Guidelines_v1.3.pdf

Video Version

Setting up the HTC Vive Tracker in Unity3D

I’ve gone over it already and just wanted to share the steps involved to get going.

First, you need your tracker in-game.  To keep it simple, we’ll use the CameraRig prefab.

Drop a [CameraRig] into an empty scene (and delete the existing Main Camera).

Under the CameraRig, add an empty gameobject and name it “Tracker“.

Add the SteamVR_TrackedObject component to the “Tracker” you’ve just created.

Select the [CameraRig].

Drag the “Tracker” to the Objects array on the SteamVR_ControllerManager.

Turn on both controllers and press play.

Move the tracker around; if you see it move in-game, you’re good to move on to the next part.

Reading Vive Tracker POGO Pins

To show how to read the inputs, I’ve created this example script.  Put it in your project, then add it to the “Tracker” object.

The example script is available as a gist here: https://gist.github.com/unity3dcollege/6b097fb4163abf6e6d36b33ff0d48776
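If you’d rather type it in, a script along those lines looks like this (a sketch against the original SteamVR 1.x plugin API, reading the tracker’s device index from SteamVR_TrackedObject and polling the button states each frame; adjust if your plugin version differs):

```csharp
using UnityEngine;

[RequireComponent(typeof(SteamVR_TrackedObject))]
public class TrackerInputs : MonoBehaviour
{
    // Exposed so you can watch the pins trigger in the inspector.
    public bool Trigger;
    public bool Grip;
    public bool Trackpad;
    public bool MenuButton;

    private SteamVR_TrackedObject trackedObject;

    private void Awake()
    {
        trackedObject = GetComponent<SteamVR_TrackedObject>();
    }

    private void Update()
    {
        // The tracker needs a valid device index before we can read from it.
        if (trackedObject.index == SteamVR_TrackedObject.EIndex.None)
            return;

        var device = SteamVR_Controller.Input((int)trackedObject.index);
        Trigger = device.GetPress(SteamVR_Controller.ButtonMask.Trigger);
        Grip = device.GetPress(SteamVR_Controller.ButtonMask.Grip);
        Trackpad = device.GetPress(SteamVR_Controller.ButtonMask.Touchpad);
        MenuButton = device.GetPress(SteamVR_Controller.ButtonMask.ApplicationMenu);
    }
}
```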

Start playing again and make sure the Tracker is still moving (remember both controllers need to be on too).

Now let’s take a look at the image from the official documentation.

If you’re not familiar with electronics, don’t worry, this one’s pretty simple.

All you need to do is make a connection from GND (Pin 2) to whichever pin you want to trigger.

You can do this with a single wire, or ideally hook it up to a switch that’s attached to your physical device.

To test this, simply touch pin 2 & pin 4 with the same wire, and you’ll see the “Trigger” field set to true.

Pins

  • 2 – Ground
  • 3 – Grip
  • 4 – Trigger
  • 5 – Trackpad
  • 6 – Menu Button

Hooking up Hardware

If you’re not sure what to use, try digging out an old electric nerf gun like this: http://amzn.to/2uDRI9b (or find a broken used one on craigslist for free/cheap)

Rip it apart and hook up the wires coming from the trigger to the tracker’s pogo pins.

There are a few different adapters out there you can order/print, like this: https://www.thingiverse.com/thing:2127180 – It’d probably be a good idea to have some sort of adapter in there to make pin access easier…

Unity UGUI – HorizontalLayoutGroup, VerticalLayoutGroup, & GridLayoutGroup.. and LayoutElement

By Jason Weimann / August 11, 2017

When building a UI with Unity3D’s UGUI system, it’s usually important to make your interface scalable and easy to extend.  There are also many cases where the number of items in your UI is dynamic.

The components I’ll cover here are all designed to make layout easier.

LayoutElement

Before we dive into the different layouts, we need to talk about the LayoutElement.  Without this component on your UI children, these UI elements aren’t going to do what you want.

The LayoutElement has a couple of options, though you don’t have to use any for layouts to work.

The properties are used in the following manner when a layout controller allocates width or height to a layout element:

  • First minimum sizes are allocated.
  • If there is sufficient available space, preferred sizes are allocated.
  • If there is additional available space, flexible size is allocated.

What this means is when the layout group is laying out your UI, it’s going to first allocate enough space for everything to have its minimum size.

After all the elements have met their minimum sizes, it’ll try to expand them out to their preferred sizes.. if there’s not enough space, the remaining space is allocated proportionally.  It calculates how much space each object wants and gives every object the same percentage of its preferred size.

Example: We have two buttons in a group, each with no minimum set.

Button 1 has a preferred height of 100.

Button 2 has a preferred height of 300.

The panel they’re in has a size of 100.

Button one will be scaled to 25 and button two will be scaled to 75, since they want 100 and 300 out of a total preferred height of 400, and the panel’s 100 is split in that same ratio.

Example: With Minimum Heights

If the layout elements also have a minimum height, this calculation will be done AFTER the minimum height is assigned.

Button 1 has minimum height of 20, preferred height of 100.

Button 2 has minimum height of 20, preferred height of 300.

Panel size is still 100.

Button one will be 33 and button two will be 67: each keeps its minimum of 20, and the remaining 60 is split in proportion to the leftover preferred heights (80 vs 280).

Flexible Width & Height

The flexible width and height boxes accept a numeric value, which acts as a relative weight when leftover space is divided up, but in practice that value is almost always just 0 or 1.

When it’s set to 1, the element can expand beyond the preferred setting if the layout controller requests it.

An element set to 0 (or unchecked) will not be used to fill the layout.

LayoutGroups

Now that we’ve covered the LayoutElement, it’s time to look at the different layout group components that can control these elements.  There are three types available: HorizontalLayoutGroup, VerticalLayoutGroup, and GridLayoutGroup.  The names should give away their layout strategies..

HorizontalLayoutGroup

The Horizontal Layout Group can automatically relocate and resize your UI components on a horizontal plane (left to right).  The first child will be on the left and the last child will be on the right.

By default, the layout groups are set to force children to expand and fill the group’s area.  Sometimes that’s the behavior you want, but if you’re using layout elements you may want to un-check those boxes (or at least the box relevant to the layout type).

For example, on a horizontal layout group, I’ll often uncheck Child Force Expand – Width, and leave Height checked.  This will make the elements expand out to get taller, but let me have finer grained control over the width using my layoutelement components.

It’s also important to check Child Controls Size on the width here, so the layout elements actually gain control over their horizontal size (width).

Here you can see the difference.  When I check the width box, the children expand out to fill the entire panel.  With it unchecked, they’re using the width specified in the LayoutElement components.

Child Alignment

This setting allows you to specify where in the panel the objects will start their layout from.  The position you choose depends on your specific UI part, but I tend to default horizontal layout groups to middle left or middle center most of the time.

Padding & Spacing

The padding settings allow you to add a margin between the elements and the edge of the panel the layout group is on.  It’s not always needed, but I usually choose a standard padding value for the UI controls.  It’s typically between 10-20, but again it totally depends on your layout.  My only recommendation is that you choose a value and standardize it as best you can.

Spacing is used between elements.  It’s almost always set to something other than 0 in my projects to give a little separation between controls.  If you find yourself over-sizing elements in your UI to add a little space.. try switching to using the spacing variable instead.

VerticalLayoutGroup

The vertical layoutgroup is almost exactly the same as the horizontal.. the only difference is the orientation.  Because of that, I won’t go into detail as I’d just be repeating myself.  Just imagine everything above, with height and width swapped and you’re done 🙂

GridLayoutGroup

Grid Layouts are a bit different.  The GridLayoutGroup is used to create a standard grid of rows and columns.  It’s a bit like a table in HTML or a grid in many other systems (WPF comes to mind).

Instead of relying on the layout elements to determine size, we specify a cell size on the GridLayoutGroup itself.

In fact, you don’t need LayoutElements at all to use the GridLayoutGroup.

For the grid to work, you need to choose both the X & Y size of the cells (width and height).

Constraint – Flexible

By default, the GridLayoutGroup is set to Flexible.  This means the grid will automatically determine the # of rows and columns based on the height and width of the panel.

Example

Your cell size is 100 x 100 and your panel is 300 x 300 wide.

You have 5 children of the grid.

If your Start Axis is Horizontal, you’ll have 3 columns and 2 rows.

If Start Axis is set to Vertical, you’ll have 3 rows and 2 columns.

Constraint – Fixed

You can also choose to specify the column count using the “Fixed Column Count” or “Fixed Row Count” settings for Constraint.

These do what you’d expect and set hard values for the number of columns or rows.

Conclusions

With these layout objects, you can quickly build a UI that’s easy to adjust & automatically fits to your resolution.  You can also stack these layout groups inside each other.  For example when I build a standard window with a header and body, I’ll start with a VerticalLayoutGroup, then have children under it that have HorizontalLayoutGroups on them.  If you’re building an interface with UGUI, spend a little time with these components, get comfortable with them, and your experience will definitely be a lot better.


How to use the Unity 2017 SpriteMask component & create a seamless scrolling background

By Jason Weimann / August 10, 2017

Unity3D SpriteMask & Infinite Scrolling Background

The SpriteMask component in Unity3D 2017 makes it easy to show a sprite through another’s alpha channel.  In this video, I go over the steps required in two minutes.

Once you have the SpriteMask setup, we can make the window scroll infinitely along.. giving the feeling that you’re riding on a high class train.

Here’s the video showing the steps required.

Resources

I found the background image on OpenGameArt.org here: https://opengameart.org/content/seamless-hd-landscape-in-parts

The train window came from a google image search here: https://www.google.com/search?q=train+window&tbm=isch&source=lnt&tbs=ic:trans&sa=X&ved=0ahUKEwjy_6nY0c3VAhVG8WMKHR82B2IQpwUIHw&biw=2048&bih=1137&dpr=1.25


Opinion: Where to put Unity3D components on your gameobjects

By Jason Weimann / August 8, 2017

Last night, I caught myself doing something bad while prototyping..  I was adding a script component on a child.. deep in the hierarchy.  In fact, I added the script, played around a bit, then stepped away for a minute to play some Overwatch with my 9yr old.

When I came back, I went looking for the component and my hand instinctively floated toward my forehead… (I’ve been re-watching TNG to fall asleep this week)..

What’s wrong with that?

You may already realize the problem.. or you may be wondering why it matters.  Why is this bad?

In this case, it may have been ‘okay’ since it’s just me prototyping something out, but it’s a terrible habit to get into.

The reason for this is a complete lack of visibility.  Having components this low down makes them practically hidden.  Of course you can find them, if you know where to look, and for that matter know that you need to look for them..  But the odds that you’re going to remember which child of a child of a child the component for ice skating marks is on are pretty damn low.  Especially if you do this as a common practice and have components spread all throughout your hierarchy.

I’ve worked on dozens of Unity projects, and I’ve seen this mistake enough times to realize how painful it can be.  And of course, I’ve done it myself countless times..

It starts out simple..

“I just need to add this component here…”

“It needs to know where the hand is, so I’ll put it on the hand..”

But quickly the hand starts getting more components.. next thing you know, each finger has a component, on each character, and if one of them is missing, something’s gonna break.

Or I’ll need to change the timing on some character’s attack, but that’s buried 4 layers deep on an AI object, or under that AI object on an AI-AttackSettings object..

And once you have to share with someone else.. game over..  they’re gonna struggle, you’re gonna struggle, and everyone will wish these components were easier to find.

Where should the scripts be?

I like to keep them at level 1 or 2.  Depending on the object setup, there are situations where level 2 makes more sense, but I try to default to keeping them at the root of the gameobject/prefab.  Often these gameobjects are getting nested under some global root (like my [Players] above), so keeping them on the first level of the prefab keeps them easy to find.

Are there any exceptions?

As with anything, there are of course some exceptions.  The main one that comes to mind is with colliders.. if you want to register for an OnCollision… method, you’ll need the component where the collider is.  In that case, I’d just make sure that there aren’t any serialized fields on the component and instead try to keep those edited fields further up the hierarchy.

What if I need to know where the foot, hand, other random thing is?

You may be thinking.. I need the transform of this object.  I need the gun to fire from this position.. or this hand needs to hold the objects..

Not a problem.  With the component on the root object, add a serialized field for the transform you need, and assign the child there.  Now when you look at the root, you can see everything that’s going on with your gameobject in one place.  You’ve reduced the cognitive load required to use that gameobject, made your project a bit cleaner, and your work a bit easier.
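As a quick illustration (the class and field names here are made up), a gun script that lives on the root can still fire from a deeply nested muzzle transform:

```csharp
using UnityEngine;

public class Gun : MonoBehaviour
{
    // Drag the deeply nested muzzle child here in the inspector.
    [SerializeField] private Transform muzzle;

    [SerializeField] private GameObject bulletPrefab;

    public void Fire()
    {
        Instantiate(bulletPrefab, muzzle.position, muzzle.rotation);
    }
}
```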

Agree / Disagree?

Like I said above, this is all my personal opinion, based on my experiences.. and of course there are always exceptions..  but as a general rule, I think it’s an ideal practice.  If you have thoughts on this though, please drop a comment and share with everyone.


How to hook up & use Unity UGUI UI Buttons in code or the inspector

By Jason Weimann / August 8, 2017

Most games need a UI, and most UI’s need buttons!  In this article, I’ll show you how to use the Unity3D UGUI Button OnClick events.  We’ll do it in the inspector and through code with onClick AddEventListener.  We’ll change the text and color of a button, and talk a bit about when you should do it in code vs the inspector.

Video Version

If you prefer video, you can watch everything in this post in video format here: https://www.youtube.com/watch?v=-bc7Ut8ijd4

Hooking up the Unity3D UGUI Button

 

To get started, we’ll need a button.  In an empty project, add a Button through the GameObject->UI->Button menu.

With the button selected, you’ll see the OnClick event section in the inspector.

To add an event, click the plus button.

Drag the Text child of the button onto the object field.

For the Function, select the Text.text option.  This will allow the button click to change the text shown by the text object.

The field below the function will become editable.. type in the word “CLICKED”.

Press play and watch your text change as you click it.

Calling Custom Code on the UGUI Button

The built in functions can be useful, but often you’ll want to call your own code on a button click.

Create a new script named ChooseRandomColor.cs and add it to the button.

This class exposes one method on it named ChangeImageColor.  The method will get our Image component (on the button) and set the color to a randomly chosen one.
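A minimal sketch of that class (the exact color choice is just an example):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ChooseRandomColor : MonoBehaviour
{
    // Hooked up to the button's OnClick event in the inspector.
    public void ChangeImageColor()
    {
        var image = GetComponent<Image>();
        image.color = new Color(Random.value, Random.value, Random.value);
    }
}
```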

With this component on the button, we can assign it to an OnClick event and have the button change colors.

Add another event to the button by clicking the plus button.

Drag the ChooseRandomColor component onto it (you can drag from the bold component name in the inspector, or alternatively drag the button from the hierarchy)

For the function, select ChooseRandomColor->ChangeImageColor()  – This is selecting the ChangeImageColor method of our ChooseRandomColor component.

Press play again and watch your button change colors as you click.

Hooking up the button in code instead – AddListener

There’s another way to hook up our events.. we can do it in code.  Sometimes it makes sense to assign events in the inspector: when we want to hook into other gameobjects, like the child text object, or when we want to do something simple like toggle the GameObject active/inactive…

But other times, it’s better to do the assignment in code.  This makes it easier to figure out what’s calling the method.. making “Find References” work, for example.  When all your events are hooked up in the editor, it can be hard to tell what’s actually used, and what it’s used by.  I’ve seen plenty of projects where it took hours of digging through the hierarchy to figure out what’s calling all of the code.. and to know what’s unused and safe to delete.

Luckily, hooking up these events in code is easy to do as well.

Edit the ChooseRandomColor.cs file like this.
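Something along these lines (the same class, now registering itself in Start):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ChooseRandomColor : MonoBehaviour
{
    private void Start()
    {
        // Register for the button's onClick event in code instead of the inspector.
        var button = GetComponent<Button>();
        button.onClick.AddListener(ChangeImageColor);
    }

    private void ChangeImageColor()
    {
        var image = GetComponent<Image>();
        image.color = new Color(Random.value, Random.value, Random.value);
    }
}
```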


Here, we’re using the Start() method to register for the Button’s onClick event.  We do this with the AddListener method, where we pass in our own ChangeImageColor method.  Notice that we don’t put parentheses after ChangeImageColor in the AddListener call; if you accidentally do, you’ll get a compiler error (it happens to all of us 🙂).

I’ve also changed the ChangeImageColor method to be private.  We don’t need anything outside the class calling this method, so it should be marked private to keep it properly encapsulated.

Back in the editor, remove the 2nd event.  To do this, left click it and it’ll highlight, then click the minus button.

Press play again and everything will work the same.. but now you have references in code for your method calls.

 
