All posts in "C#"

VR Dash movement / locomotion with SteamVR and Unity3D

Ready to add dash locomotion to your game?  It’s pretty simple to set up, but there are some important things to consider before you start. In this guide, I’ll show you how to set it up, what to watch out for, and a trick to minimize motion sickness.  All you need is SteamVR and Unity3D to get started.

Before you get started, you’ll want to have some sort of environment set up.  Ideally you want something where it’s obvious that you’re moving.  The locomotion is FAST, and if you just have a flat white area or a single repeated texture, it’s gonna be a whole lot harder to tell what’s going on.

Here’s what my test scene looks like.

SteamVR / Oculus SDK

Once you have an environment set up, you’ll need the SteamVR plugin.  The code here is written for SteamVR but could definitely be ported to Oculus with minimal effort; all we really care about in code is the button click.

After importing SteamVR, drop the [CameraRig] prefab into the middle of your scene.

Select the right controller and add the SteamVR_LaserPointer component to it.

We’re only using the laser pointer to visualize where we’ll dash.  It isn’t required for the dash to work, so use it if you like or choose any other visualization you prefer.

Set the color of the pointer to Green.

DashController

It’s time to add a little code.  This is the only code you’ll need for dashing to work, so take a good look at it.

The Code
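
The original post embeds the full DashController script here.  Since it isn’t reproduced above, here’s a minimal sketch assembled from the breakdown that follows.  The field names, default values, and the way the [CameraRig] root is found are assumptions, so treat it as a starting point rather than the exact original.

using System.Collections;
using UnityEngine;

[RequireComponent(typeof(SteamVR_TrackedController))]
public class DashController : MonoBehaviour
{
    [SerializeField] private float minDashRange = 1f;
    [SerializeField] private float maxDashRange = 10f;
    [SerializeField] private float dashTime = 0.25f;
    [SerializeField] private Animator maskAnimator;

    private SteamVR_TrackedController trackedController;
    private Transform cameraRigRoot;

    private void Start()
    {
        // Cache references and register for the trackpad click.
        trackedController = GetComponent<SteamVR_TrackedController>();
        cameraRigRoot = transform.parent; // assumes the controller sits directly under [CameraRig]
        trackedController.PadClicked += (sender, e) => TryDash();
    }

    private void TryDash()
    {
        // Raycast from the controller, the same way the laser pointer does.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit) &&
            hit.distance >= minDashRange && hit.distance <= maxDashRange)
        {
            StartCoroutine(DoDash(hit.point));
        }
    }

    private IEnumerator DoDash(Vector3 endPoint)
    {
        maskAnimator.SetBool("Mask", true);
        yield return new WaitForSeconds(0.1f); // give the mask a moment to fade in

        Vector3 startPoint = cameraRigRoot.position;
        float elapsed = 0f;

        while (elapsed < dashTime)
        {
            elapsed += Time.deltaTime;
            cameraRigRoot.position = Vector3.Lerp(startPoint, endPoint, elapsed / dashTime);
            yield return null;
        }

        maskAnimator.SetBool("Mask", false);
    }
}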

RequireComponent – The first thing to notice is that we’re requiring a SteamVR_TrackedController component to be on the GameObject.  We’ll be attaching this component to the “Controller (right)”, and the tracked controller will get added automatically.

Serialized Fields – We have 4 fields that are editable in the inspector.

  • MinDashRange & MaxDashRange – We have a  minimum and maximum dash range so we can prevent people from dashing across the entire map or ‘dashing in place’.
  • DashTime – This is the amount of time it will take the player to move from their position to the end point.  In this setup it’s always the same time, no matter how far you go.
  • MaskAnimator – Here we’re storing a reference to an animator; we’ll be creating this animator soon.  It’s used to add an effect that will minimize motion sickness.

Private Fields

  • trackedController – We’ll be using the tracked controller reference in a few places, so we’re caching it here to avoid calling GetComponent more than once.
  • cameraRigRoot – The same situation applies here, we’ll use the cameraRigRoot to move the player around, and we don’t want to call GetComponent multiple times.

Start() – Here we do the caching of our private fields, and we register for the PadClicked event on the controllers.  If you wanted to use a different button, you’d register for some other event, like GripClicked or TriggerClicked.

TryDash() – This is what we call when the player clicks the trackpad.  The method performs a raycast from the controller’s position aimed in its forward direction.  This is exactly what the laser pointer we added earlier does, so if the laser pointer hits something, we should hit the same thing.  And our RaycastHit named “hit” will have a reference to the thing they both hit.  If it does hit something, and the distance of that hit is within our allowed ranges, we start a coroutine called DoDash().

DoDash() – The DoDash method does the actual work here.  First, we set a bool on our maskAnimator named “Mask” to true.  Then we give it 1/10th of a second for the animation to play.  Next, we take note of the cameraRig’s current position and save it off in startPoint.  We then go into a while loop.  This loop will execute for the amount of time specified in our ‘dashTime’ field above (editable in the inspector).  In the loop, we use Vector3.Lerp to move the cameraRig from its startPoint to the endPoint (where the user aimed the laser).

Vector3.Lerp returns a vector at a point between two other vector3’s.  The 3rd value (elapsedPct) determines how far between them it should be.  A value of 0 is right at the start point, a value of 1 is at the end, and a value of 0.5 is right in the middle.

Once we leave the loop, we set the “Mask” parameter on our animator back to false and we’re done.

Back to the Inspector

Attach the DashController to the Controller (right)

Play & Test

You should be able to play now and dash around.  Put on your headset, aim the controller and click the trackpad!

The Mask – Reducing motion sickness

Dashing around usually feels okay for most people, but motion sickness is still possible.

To help reduce that, we can add a cool masking effect.  And if you have some artistic skill, you can make it look much better.

Download this DashMask.png image

Select the Camera (eye).  Right click and add a Quad.

Drag the DashMask.png image onto the quad, a new material will be created automatically.

Select the quad and change the material’s render mode to “Fade”

Move and resize the quad to match this.  We need it close to our face as it’s going to be used to mask out the world.

Animating the Mask

If you play now, you’ll have a mask sitting in your face (or not, depending on where the alpha of that material’s albedo color is set).

What we want though is for the code above to be able to toggle the mask, with a smooth animated fade that we can customize later.

To do this, let’s create a simple animation for the alpha channel of our quad.

The Animation

Select the Quad

Open the Animation window.

Click create animation, name the animation FadeMask

Click record.

Open the Color picker for the Albedo of the material.

Slide the alpha around and settle it at 0.

Drag the Animator red line to the 1:00 mark.

Open the color picker again.

Slide the alpha to the far right.

The Animator

The animator will control our animations.  It handles the state of our animations and does the transitions between them.

Create a new animator for the quad.

The Mask Parameter

In the parameters section, add a new Boolean parameter and name it “Mask”.

Adding the Animations

Drag the FadeMask animation you just made onto it.

Drag it on again to create a second state, and rename that one “FadeMask Reverse”.

Add transitions between the two and use the “Mask” parameter to control them.

Set the FadeMask speed to 5.

Select the FadeMask Reverse state and change its speed to -1.

We’re playing the animation in reverse here to fade the mask back out.  We could have a separate animation for this, but we don’t need it when we can just set the speed to -1.

Attaching the Animator

Add an animator to the Quad.

Assign the new Animator Controller to your animator.

The last thing we need to do is assign the MaskAnimator field of the Dash Controller.

Select the controller, drag the quad onto it, and you’re done.

 

The Result


Build a Unity multiplayer drawing game using UNET

In this article you’ll learn how to build the foundation for a multiplayer drawing game.  We’ll start with drawing on a texture and add some Unity UNET network components to make it multiplayer.  By the end, you’ll see other players drawing in real time and have a good foundation to build your own drawing game in Unity3D.

Requirements

To start, we’ll need a couple pieces of art.

The Canvas

First is our default/blank canvas.

For this example, I’m using a solid white texture that I put a little border around.

Download this canvas or create your own

Color Wheel

The other thing we need is a color wheel.

Download this wheel or find your own

Add both of those textures (or alternate ones you choose) to your project in a Textures folder.

Setup the Canvas

Create a new Quad.

Resize it so it looks similar to this.

Drag the canvas texture you’ve chosen onto the Quad.

Change the material shader on the quad to Unlit/Texture.

Make sure you select a resolution and don’t use free aspect while you’re setting this up.  I’m using 1920×1080.

Name it “Paint Canvas”.

Create a script named PaintCanvas.cs and replace the contents with this.

PaintCanvas Code

This script is used for two things.  First, it makes a copy of the texture in memory so we can easily read and write to it.  This texture is created in PrepareTemporaryTexture and assigned to the static property “Texture”.

The second thing it does is set the texture data from a byte array.  This is so we can ‘load’ a texture from the host when we join the game as a client.  You’ll see this later when we get into the networking code.
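
The PaintCanvas script itself isn’t shown above, so here’s a rough sketch of what it could look like based on that description.  The GetAllTextureData helper and the use of GetRawTextureData/LoadRawTextureData are assumptions on my part, not necessarily what the original does.

using UnityEngine;

public class PaintCanvas : MonoBehaviour
{
    // The in-memory copy of the canvas texture that everything else reads and writes.
    public static Texture2D Texture { get; private set; }

    private void Awake()
    {
        PrepareTemporaryTexture();
    }

    private void PrepareTemporaryTexture()
    {
        var canvasRenderer = GetComponent<Renderer>();
        // Copy the assigned texture so we never modify the asset on disk.
        Texture = Instantiate((Texture2D)canvasRenderer.material.mainTexture);
        canvasRenderer.material.mainTexture = Texture;
    }

    public static byte[] GetAllTextureData()
    {
        return Texture.GetRawTextureData();
    }

    public void SetAllTextureData(byte[] textureData)
    {
        // Used to 'load' the host's canvas when we join the game as a client.
        Texture.LoadRawTextureData(textureData);
        Texture.Apply();
    }
}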

Add the new PaintCanvas component to  the “Paint Canvas” Quad.

Back to the Inspector

Your PaintCanvas should look similar to this.

Color Picker

It’s time to setup our color picker so the player can choose different colors easily.

Create a Quad

Name it “Color Picker”

Drag the ColorWheel texture onto the Quad.

Change the Shader to “Unlit/Transparent Cutout”

Create a new script named “ColorPicker.cs” and replace the contents with this.

Add the ColorPicker component to your “Color Picker” object in the inspector.

ColorPicker Code

This script starts off with a static “SelectedColor” property.  This is where we’ll store the user’s color selection when they click on the color picker.

But the work here is done in Update().  We check to see if the user has clicked this frame, and if so, we do a raycast from the click point into the scene.

If the ray intersects with something (line 17) and that something happens to be the color picker (line 20), then we pull the texture coordinate from the RaycastHit, multiply it by the width and height of the texture, and get the actual pixel on the texture that we clicked.

With that pixel, we use the Texture2D method GetPixel to get its color and store it in that “SelectedColor” property (line 32).

Finally, we set the material color of our “preview” object to the selected color (line 34).  This is where the user will see the color they’ve picked and have “selected”.
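
The ColorPicker script isn’t reproduced above either.  A sketch along these lines matches the description (the line numbers mentioned refer to the original script, not this version, and the selectedColorPreview field name is an assumption):

using UnityEngine;

public class ColorPicker : MonoBehaviour
{
    public static Color SelectedColor { get; private set; }

    [SerializeField] private Renderer selectedColorPreview;

    private void Update()
    {
        if (!Input.GetMouseButtonDown(0))
            return;

        // Raycast from the click point into the scene.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (!Physics.Raycast(ray, out hit))
            return;

        // Only react if the thing we clicked was this color picker.
        if (hit.collider.gameObject != gameObject)
            return;

        var texture = (Texture2D)GetComponent<Renderer>().material.mainTexture;
        int x = (int)(hit.textureCoord.x * texture.width);
        int y = (int)(hit.textureCoord.y * texture.height);

        SelectedColor = texture.GetPixel(x, y);
        selectedColorPreview.material.color = SelectedColor;
    }
}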

Back to the Inspector

Your color picker should look like this.

Move the wheel so it looks like this.

Selected Color Preview

If you look at the Color Picker code or inspector, you may notice that we have a “Selected Color Preview” field.

Create another Quad and position it just below the wheel like this.

Create a new Material and name it “SelectedColor”.

Assign the material to the “Selected Color Preview” Renderer

Change the Shader to “Unlit/Color”.

We don’t want light having any impact on how our drawing appears.  Unlit/Color makes sure it doesn’t.

It should look like this in the inspector.

Brush Size Slider

Let’s also add a brush size slider.

Create a new Slider from the GameObject->UI menu.

Move it so it looks like this.  (I also added a text object at the top, feel free to do that too if you want)

Create a new script named “BrushSizeSlider.cs” and replace the contents with this.

BrushSizeSlider Code

This class is really just wrapping the slider value so we don’t have to reference it anywhere in the editor.  Because our player is network instantiated, and we don’t have any objects in the scene that would make sense referencing this slider, we’re just putting the BrushSize into a static global int.  This isn’t the best architecture for something like this, but we’re not building out a full game or editor so it’s a quick enough way to get things working.
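
Here’s a minimal version of what that wrapper could look like.  The static BrushSize field and the listener wiring are the whole class; the default value is an assumption.

using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(Slider))]
public class BrushSizeSlider : MonoBehaviour
{
    // Static so the network-spawned player can read it without a scene reference.
    public static int BrushSize = 1;

    private void Awake()
    {
        var slider = GetComponent<Slider>();
        BrushSize = (int)slider.value;
        slider.onValueChanged.AddListener(value => BrushSize = (int)value);
    }
}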

Add the BrushSizeSlider component to your slider GameObject.

Your slider should look like this.

Save your work to a scene!

Player & Brush Time

Now that we have the board setup, it’s time to create the player & brush.

Create a new script and name it “PlayerBrush.cs“.

Replace the contents with this.

PlayerBrush Code

This is the most important part of the project.  Here, we’re doing the drawing and networking.  (ideally we’d split this into 2 classes soon)

The Start method is only called on the server/host because of the [Server] tag.  That’s because the Start method is getting the texture data and sending it to the client.  This may be a little confusing, but remember that the “Player” will be spawned on the server, and this method is being called on the player gameobject but only on the host/server.

RpcSendFullTexture is the opposite.  It uses the [ClientRpc] tag because we only want that method called on the clients (the attribute is also what routes the call to them).  This method calls the SetAllTextureData method we covered earlier, setting the data of the client to match the server.

.Compress() & .Decompress() – These methods are extensions that you’ll see in a while.  They’re used to compress the texture data into something much smaller for network transmission.  The compression here is extremely important; without it, this wouldn’t work.

When you look at the Update method, it should feel familiar.  Like with the color picker, we’re checking for left mouse down, doing a raycast, and checking to see if the click was on what we wanted.  In this case, we’re looking to see if they have the mouse down on the PaintCanvas.

If they do, we get the texture coordinate (like before), get the pixel x & y, then we send a command to the server using CmdBrushAreaWithColorOnServer.  We also call BrushAreaWithColor on the client (if we don’t do this, the pixel won’t change until the server gets the message, handles it, and sends one back.  That would feel laggy and bad, so we need to call the method on the client immediately).

CmdBrushAreaWithColorOnServer doesn’t really do any work itself.  It’s a [Command] so clients can call it on the server, but all it really does is tell the clients to brush the area using an Rpc.  It also tells the server/host to draw using BrushAreaWithColor.

BrushAreaWithColor is responsible for changing the pixels.  It loops over the brush size to get all the pixels surrounding the clicked one.  It does this in a simple square pattern.  If you wanted to change the brush to be circular or some other shape, this is the method you’d modify.
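
Since the PlayerBrush script isn’t shown above, here’s a rough reconstruction of the flow just described.  It leans on the PaintCanvas, ColorPicker, and BrushSizeSlider sketches from earlier, plus the assumed GetAllTextureData helper, so treat it as a sketch rather than the original code.

using UnityEngine;
using UnityEngine.Networking;

public class PlayerBrush : NetworkBehaviour
{
    [Server]
    private void Start()
    {
        // Runs on the host only: push the current canvas down to the clients.
        RpcSendFullTexture(PaintCanvas.GetAllTextureData().Compress());
    }

    [ClientRpc]
    private void RpcSendFullTexture(byte[] compressedTexture)
    {
        FindObjectOfType<PaintCanvas>().SetAllTextureData(compressedTexture.Decompress());
    }

    private void Update()
    {
        if (!isLocalPlayer || !Input.GetMouseButton(0))
            return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (!Physics.Raycast(ray, out hit) || hit.collider.GetComponent<PaintCanvas>() == null)
            return;

        int x = (int)(hit.textureCoord.x * PaintCanvas.Texture.width);
        int y = (int)(hit.textureCoord.y * PaintCanvas.Texture.height);

        // Paint locally right away so it doesn't feel laggy...
        BrushAreaWithColor(x, y, ColorPicker.SelectedColor, BrushSizeSlider.BrushSize);
        // ...then tell the server so everyone else gets the same pixels.
        CmdBrushAreaWithColorOnServer(x, y, ColorPicker.SelectedColor, BrushSizeSlider.BrushSize);
    }

    [Command]
    private void CmdBrushAreaWithColorOnServer(int x, int y, Color color, int size)
    {
        BrushAreaWithColor(x, y, color, size);
        RpcBrushAreaWithColor(x, y, color, size);
    }

    [ClientRpc]
    private void RpcBrushAreaWithColor(int x, int y, Color color, int size)
    {
        BrushAreaWithColor(x, y, color, size);
    }

    private void BrushAreaWithColor(int x, int y, Color color, int size)
    {
        // Simple square brush - swap this loop out for a circular or custom shape.
        for (int i = -size; i <= size; i++)
            for (int j = -size; j <= size; j++)
                PaintCanvas.Texture.SetPixel(x + i, y + j, color);

        PaintCanvas.Texture.Apply();
    }
}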

Back to the Inspector

Create a new “Empty GameObject“.

Name it “Player”.

Add a NetworkIdentity component to it & check the “Local Player Authority” box.

Add the “PlayerBrush” component.

Your Player object should look like this.

Drag the Player object into a folder named Prefabs.  This will make your player a prefab so we can instantiate it for network play.

Error! – An Extension Method!

The PlayerBrush script references some compression methods that we’ll use to sync data to the clients when they first connect.

To add these compression methods, let’s create a new script for our extension methods.  If you don’t know what extension methods are, you can read more about them here: Unity Extension Methods

Create a new script named “ByteArrayExtensions.cs” & replace the contents with this.

ByteArrayExtensions Code

I won’t go into the details on how this compression works, but this is some standard C# code for doing a quick in-memory compression of data.  These are extension methods that will ‘extend’ the byte arrays.  It’s essentially just a cleaner way to write someArray.Compress() instead of ByteArrayExtensions.Compress(someArray).
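
If you’re not downloading the project, an equivalent pair of extensions using GZipStream would look something like this (a sketch; the original may use a different stream or compression class):

using System.IO;
using System.IO.Compression;

public static class ByteArrayExtensions
{
    // Compress a byte array in memory so it's small enough to send over the network.
    public static byte[] Compress(this byte[] data)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(data, 0, data.Length);

            return output.ToArray();
        }
    }

    // Reverse of Compress() - returns the original bytes.
    public static byte[] Decompress(this byte[] data)
    {
        using (var input = new MemoryStream(data))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);

            return output.ToArray();
        }
    }
}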

The NetworkManager

The last thing we need is a NetworkManager.

Create a new gameobject and name it “[NetworkManager]“.

Add the NetworkManager component to it.

Add the NetworkManagerHUD component to it.

Expand the NetworkManager component.

Drag the “Player” prefab from your Prefabs folder (not the one in the scene view) onto the “Player Prefab” field of the component.

The work is done.  Save your scene!

Testing the game

Now that everything’s set up, it’s time to test.

To do this, create an executable build (make sure to add your scene to the build settings).

In one instance, start a LAN host, then the other can join.

Download

If you want to download the full working project sample, it’s available here: https://unity3dcollege.blob.core.windows.net/site/Downloads/MPPainting.zip

Online Multiplayer

If you want to go beyond LAN mode, you’ll need to enable Multiplayer in your Services window.  It will take you to the Unity webpage where you’ll need to set a maximum room size.  For my test I went with 8 players.

Conclusion & Notes

This project is meant only to be an example for UNET & drawing.  To keep things simple, I used quite a few static fields, which I’d highly recommend against in a real project of any size.

 

 


VR Movement – Ironman / Jetpack Flying with SteamVR

There are a lot of interesting movement systems for VR.  Arm swinging, dashing, teleportation, trackpad, and more.  In this guide, I’ll show you another one I experimented with previously and really enjoyed.  We’ll go over how to make a player fly like Ironman, or at least like a person holding some jetpacks.

Prerequisites

This article assumes you have SteamVR in your project, though converting it to OVR native should be a simple task.

Rigidbody on [CameraRig]

Start by adding the [CameraRig] prefab from the SteamVR package to your scene.

Add a Rigidbody component to the [CameraRig].

Freeze the rotation values.

Locking rotation is very important; if you don’t, your player will end up spinning around, flipping over, and getting sick.

Now, add a box collider to the rig so you can land (this could be a capsule or any other collider that works well for your game)

Controller Setup

Expand the CameraRig and select both controllers.

Add the SteamVR_TrackedController component to them both.

JetpackBasic Script

Create a new script, name it “JetpackBasic”

Replace the contents with this.
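
The script isn’t shown above, but the idea is simple: while the trigger is held, push the rig’s Rigidbody in the direction the controller points.  A sketch (the force value and field names are assumptions):

using UnityEngine;

[RequireComponent(typeof(SteamVR_TrackedController))]
public class JetpackBasic : MonoBehaviour
{
    [SerializeField] private float thrustForce = 8f;

    private SteamVR_TrackedController controller;
    private Rigidbody rigidbodyToPush;

    private void Start()
    {
        controller = GetComponent<SteamVR_TrackedController>();
        rigidbodyToPush = GetComponentInParent<Rigidbody>(); // the Rigidbody on [CameraRig]
    }

    private void FixedUpdate()
    {
        // While the trigger is held, thrust in the direction the controller is aimed.
        if (controller.triggerPressed)
            rigidbodyToPush.AddForce(transform.forward * thrustForce, ForceMode.Acceleration);
    }
}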

Your controller should look similar to this: (the sphere collider and audio source are optional and beyond the scope of this article)

Press Play

Aim the controllers where you want to go.

Pull the trigger.

Fly!


Using Vector3.Reflect to cheat ball bouncing physics in Unity3D

Games based on balls can be frustrating to build.  I’ve done a number of them myself and used a variety of different techniques.  The hardest part tends to be getting the physics right.  Some games require a ton of tweaking, adjusting rigidbodies, colliders, physics materials and scripts.  But in some other games, cheating is the way to go: cheating by not using the physics system for bouncing.

Cheating How?

Instead of allowing the physics system to control our ball bounces, we’ll create a simple script to do it instead.  Instead of adjusting the mass, bounciness, & friction, our script will have a single float for the minimum bounce velocity.  This will keep our ball bouncing forever, and always at about the same speed.

What’s it look like?

The Code

Review

There are a couple things to cover here, we’ll go over them in order.

To start we have a Vector3 for an initial velocity.  Like the tooltip says, this is just for the example / debugging.  Our OnEnable method will set the initial velocity to this value so the ball starts moving.

The minVelocity field is used to control how slow the ball can go.  Every bounce will be at this velocity (or higher).

OnEnable() – Here, we’re caching the rigidbody reference and setting that initial velocity.

Update() – This one’s important.  Because we’re going to override the collision behavior, we need to keep track of the objects velocity each frame.  When the collisions happen, this velocity is going to change, but for our calculations, we want the velocity before the collision.  Saving it off here in Update is an easy solution.

OnCollisionEnter() / Bounce() – This is where we do the work, even though it’s not much work.  Calculating direction is done using Vector3.Reflect.  We pass in the last frame velocity (that we cached in Update), along with the collision’s normal.  Then we multiply the result by the current speed OR the minVelocity, whichever is greater.
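
Putting that together, the script could look something like this (a sketch; the class name and default values are assumptions):

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class BouncingBall : MonoBehaviour
{
    [Tooltip("Just for the example / debugging - the velocity the ball starts with.")]
    [SerializeField] private Vector3 initialVelocity = new Vector3(4f, 0f, 6f);

    [SerializeField] private float minVelocity = 5f;

    private Rigidbody ballRigidbody;
    private Vector3 lastFrameVelocity;

    private void OnEnable()
    {
        ballRigidbody = GetComponent<Rigidbody>();
        ballRigidbody.velocity = initialVelocity;
    }

    private void Update()
    {
        // Cache the velocity before any collision this frame changes it.
        lastFrameVelocity = ballRigidbody.velocity;
    }

    private void OnCollisionEnter(Collision collision)
    {
        Bounce(collision.contacts[0].normal);
    }

    private void Bounce(Vector3 collisionNormal)
    {
        float speed = lastFrameVelocity.magnitude;
        Vector3 direction = Vector3.Reflect(lastFrameVelocity.normalized, collisionNormal);

        // Reflected direction, at the incoming speed or the minimum, whichever is greater.
        ballRigidbody.velocity = direction * Mathf.Max(speed, minVelocity);
    }
}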

Bouncing toward the player

In games where you have the player hitting a ball against a surface, you’ll often want to send the ball back toward the player.  This might seem ‘not realistic’, but it’s usually way more fun.  And it’s pretty easy to do.

The Code

The only real difference is that when setting the return velocity, we lerp between the direction to the player and the reflected value.

In the editor, we expose a bias field that allows easy tuning of just how ‘toward the player’ the ball goes.  The right value for the bias will depend on your game and the play area size.
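
A sketch of that change, building on the Bounce method above: here player is a Transform to aim back at, and bias is the new field (0 behaves like a pure reflection, 1 always heads straight back at the player). Both names are my own.

[SerializeField] private Transform player;
[Range(0f, 1f)]
[SerializeField] private float bias = 0.3f;

private void Bounce(Vector3 collisionNormal)
{
    float speed = lastFrameVelocity.magnitude;

    Vector3 reflectDirection = Vector3.Reflect(lastFrameVelocity.normalized, collisionNormal);
    Vector3 towardPlayer = (player.position - transform.position).normalized;

    // Blend the realistic bounce with a bounce aimed back at the player.
    Vector3 direction = Vector3.Lerp(reflectDirection, towardPlayer, bias).normalized;

    ballRigidbody.velocity = direction * Mathf.Max(speed, minVelocity);
}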

Example Download

If you want to try this out and don’t want to setup a couple cubes, you can download the example project here: https://unity3dcollege.blob.core.windows.net/site/Downloads/Ball%20Bounce.zip

Conclusions

This functionality isn’t appropriate for every game, or every interaction in every game, but there are plenty where it works better than using the default physics setup.  Especially the bouncing toward a player; I’ve used that more than once and found it always made the games more fun and engaging.  It’s also worth noting that you can use Vector3.Reflect for all kinds of other functionality, reflecting bullets, beams, or anything else, so keep it in mind when coming up with new game mechanics 🙂


LINQ for Unity Developers

LINQ (Language Integrated Query) is a great feature available to C# Unity developers.  Many developers don’t know it exists or how to use it though, and lose out on the great time & code savings it can provide.  LINQ in Unity has a variety of great uses, and a couple pitfalls you’ll want to avoid.

Getting Started

To use LINQ in a C# script, you need to add a using statement for the namespace like this.
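
That namespace is System.Linq, so the top of your script needs:

using System.Linq;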

What Can I do with LINQ?

The most common uses for LINQ statements tend to be sorting, searching, and filtering (though there’s plenty more you can do).

For example, let’s take a scenario where we want to find the closest game object to the player.
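
Without LINQ, that usually means looping and tracking the best candidate by hand, something like this (variable and method names here are mine, not from the original post):

using System.Collections.Generic;
using UnityEngine;

public static class ClosestFinder
{
    public static GameObject FindClosest(List<GameObject> objects, Transform player)
    {
        GameObject closest = null;
        float closestDistance = float.MaxValue;

        foreach (var obj in objects)
        {
            float distance = Vector3.Distance(player.position, obj.transform.position);
            if (distance < closestDistance)
            {
                closestDistance = distance;
                closest = obj;
            }
        }

        return closest;
    }
}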

Compare that to the LINQ version
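
With LINQ, the same search collapses to an OrderBy and a FirstOrDefault (again, a sketch rather than the original snippet):

using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class ClosestFinderLinq
{
    public static GameObject FindClosest(List<GameObject> objects, Transform player)
    {
        return objects
            .OrderBy(t => Vector3.Distance(player.position, t.transform.position))
            .FirstOrDefault();
    }
}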

As you can see, LINQ has an OrderBy extension method for collections.  Because we have a List of gameobjects, and List implements IEnumerable, we can use OrderBy to easily sort objects by distance, then take the first from the collection (since they’re in ascending order).

What the hell is that t => ????

You’ll see this a lot in LINQ statements; the => is called the lambda operator, and the whole t => … expression is a lambda.

In the case of this OrderBy statement, think of the “t” as a reference to each object in the collection.  When it orders them by distance, it’s passing in our local objects position, and the position of “t”.

Does it have to be “t”?

That “t” is just a variable name.  It could just as easily be named “a”, “thingToCompare”, or any other variable name.  “t” is just a common standard in lambda examples.  Often, I’ll use a variable name that’s more descriptive, especially when building more complex LINQ statements.

What’s FirstOrDefault?

The FirstOrDefault() call makes our LINQ statement return the first object from the collection… OR whatever the object’s default value is.  Default is just the value you’d have if you never assigned a value; for objects, that’s null.

There’s also a First() method, but that will throw an exception if there’s nothing in the collection.  Sometimes you want this, but I find myself almost never using it.

What about Performance?

As with anything, you’ll need to consider the performance of operations like this.  In the majority of cases using LINQ won’t hurt you at all.  It may add a couple nanoseconds here or there, but it can also shave some time off if your custom code isn’t completely optimized.

The one thing you definitely do need to look out for though is Garbage Collection.  LINQ statements will generate a little garbage, so avoid using them in something that’s going to be called every frame (don’t put them in your Update() calls).

For other events though, LINQ can be a huge time saver, make your code easier to read, and having less code always reduces the chance for bugs.

Multiple Lines or One Line?

When you look at LINQ statements, sometimes they’re written as a single long line.  Sometimes there’s a line per method.

Functionally, it doesn’t make any difference.  Personally, I prefer a line per method because it makes the call easier to read at a quick glance.  So while there’s no set rule on it, I recommend you split your LINQ statements with a new line before each method call (with the period on the newline).

Take()

The Take method can be used to “Take” a subset of a collection and put them into another collection.

As an example, imagine a case where you want to find the 4 lowest health enemies.  The Take method makes that easy.
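
For example, assuming an enemies collection where each enemy has a Health property, grabbing the 4 lowest-health enemies could look like this:

var fourWeakestEnemies = enemies
    .OrderBy(enemy => enemy.Health)
    .Take(4)
    .ToList();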

Sorting by multiple things

You saw the OrderBy() method above, which is great for sorting, but sometimes you’ll need to sort by more than one thing..  For example imagine you have a scene full of Dogs… and you want to sort the dogs by Color and Size.
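
Secondary sorts use ThenBy() after the first OrderBy().  Assuming each dog has Color and Size properties, it might look like this:

var sortedDogs = dogs
    .OrderBy(dog => dog.Color)
    .ThenBy(dog => dog.Size)
    .ToList();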

Switching The Order

If you want to sort in the opposite direction, you can use OrderByDescending() to reverse the order.
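
For example, biggest dogs first (same assumed Dog properties as above):

var dogsBiggestFirst = dogs
    .OrderByDescending(dog => dog.Size)
    .ToList();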

Deferred Execution

Often when you see a LINQ statement acting on a collection, you’ll see it end with ToList();

There’s a very important reason this is done, and that reason is called deferred execution.

When you use a LINQ statement, the execution of the statement doesn’t happen until it’s needed.

Take a look at this example:
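
(The original example isn’t reproduced here; this sketch recreates the idea it describes.  The coins array and the one-second delay are assumptions.)

using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class CoinChecker : MonoBehaviour
{
    [SerializeField] private Transform[] coins;

    private void Update()
    {
        // Building the query doesn't run it - it's only a description of the work to do.
        IEnumerable<Transform> coinsByDistance = coins
            .OrderBy(coin => Vector3.Distance(transform.position, coin.position));

        if (Input.GetKeyDown(KeyCode.A))
            StartCoroutine(LogCoins(coinsByDistance));
    }

    private IEnumerator LogCoins(IEnumerable<Transform> coinsByDistance)
    {
        yield return new WaitForSeconds(1f);

        // The ordering and distance checks actually execute here, when the foreach enumerates the query.
        foreach (var coin in coinsByDistance)
            Debug.Log(coin.name);
    }
}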

The ordering and distance checking of coins is only done if the player presses ‘A’.  And even then, it’s not until the delay has passed and we’ve reached the foreach statement.  If that’s never reached, the deferred call isn’t needed so it’s never run.

The downside to this is we have a bit less control over execution time.  Sometimes that’s fine, other times we want to enforce execution immediately.

And to force that execution, we can call ToList() or ToArray().

What about the other syntax?

It’s important to note that there are two different types of LINQ syntax.  There’s the one I’ve shown you so far, and another that looks a bit more like SQL.  For Unity developers, I’d recommend staying with the fluent syntax you see here and avoiding the SQL one.  I’ve found developers who don’t do much SQL work tend to get a bit more confused by the other syntax, and confusion causes bugs.

What other operators are there?

There are a TON of them.  I’ve covered a couple of the most common ones, but I recommend you view the larger list here just to know what’s available:
https://www.tutorialspoint.com/linq/linq_query_operators.htm

Here are some of the ones I find myself using more often (a few of them are shown in the snippet after this list):

  • GroupBy() – groups things as you’d expect; I often end up using this with ToDictionary()
  • ToDictionary() – builds a dictionary with the keys/values you want from any other collection(s)
  • Any() – tells you if any object in the collection meets a condition (returns true or false)
  • Skip() – great for paging, often used with Take()
  • Contains() – easy way to check if a collection contains a specific object
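
A few of these in one place, assuming an enemies list where each enemy has Health and Team fields (the names are placeholders):

bool anyoneLowOnHealth = enemies.Any(enemy => enemy.Health < 10);

var enemiesByTeam = enemies
    .GroupBy(enemy => enemy.Team)
    .ToDictionary(group => group.Key, group => group.ToList());

var secondPageOfEnemies = enemies
    .Skip(10)
    .Take(10)
    .ToList();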

Conclusions

LINQ is amazingly powerful, and with just a little time learning the syntax, it can be a huge time saver.  It’s also important to be able to read it in other people’s code; outside gaming, in other C# projects, you’ll see LINQ everywhere.  As I mentioned above though, garbage collection and performance are extremely important for games, so you still need to think and profile when using it in your projects.  That’s true for everything though, so don’t let it discourage you from taking advantage of this amazing language feature.

For more examples and LINQ statements, check out this site: http://linqsamples.com/


Unity Slider Label Text

Need a slider in your game or app and want text on the handle?

The setup for this is pretty simple: you’ll need a slider, a label, and a script.

The Slider

Create a slider, then set a minimum and maximum value.  For the slider above, I’ve also checked “whole numbers”.

The Label

Once you’ve created the slider, find the Handle child.

Create a new Text (or ideally a TextMeshPro – Text) object under the Handle.

Center the text object, and set the color to something that works well with your handle.

And finally, add this script to the text (or TextMeshPro text) object.

Code
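
The script isn’t reproduced here, but a TextMeshPro version along these lines matches the description below (the class name and default format are assumptions):

using TMPro;
using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(TextMeshProUGUI))]
public class SliderLabelText : MonoBehaviour
{
    [Tooltip("string.Format pattern - {0} is replaced with the slider's value.")]
    [SerializeField] private string formatText = "{0}°";

    private TextMeshProUGUI label;
    private Slider slider;

    private void Awake()
    {
        label = GetComponent<TextMeshProUGUI>();
        slider = GetComponentInParent<Slider>(); // the label sits under the slider's Handle

        slider.onValueChanged.AddListener(UpdateLabel);
        UpdateLabel(slider.value);
    }

    private void UpdateLabel(float value)
    {
        label.text = string.Format(formatText, value);
    }
}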

Note: If you don’t use TextMeshPro (which you should), you’ll need to change the references to just be “Text” & remove the top using statement.

Formatting

The default format for the script above shows a degrees symbol.  You can modify that by editing the FormatText field.  The code uses string.Format, and the {0} is replaced by the actual value.

 


SteamVR Laser Pointer Menus – Updated for SteamVR 1.2.2

If you build a VR game or experience, there’s a good chance you’ll end up needing some menus.  There are a lot of great ways to build VR menus, ranging from basic laser pointers to some amazing interaction based systems.  Since laser pointers are one of the simplest and most common systems, this guide will focus on how to create them.  We’ll discuss how to use the SteamVR Laser Pointer system ( SteamVR_Laserpointer.cs ).  And we’ll make your standard Unity UGUI (4.6 UI) interface work with the laser pointers.

SteamVR Laser Pointer (steamvr_laserpointer.cs)

The SteamVR Laserpointer is included in the SteamVR asset pack.  Once you’ve imported the asset pack, you can see the script located in the SteamVR/Extras folder.

CameraRig & Setup

For this example, we’ll use the included [CameraRig] prefab and make a few minor modifications.

Create a new scene.

Delete the “MainCamera” from the scene.

Add the [CameraRig] prefab to the scene.

The CameraRig prefab is located in the SteamVR/Prefabs folder.

Select both the Controller (left) and Controller (right) children of the [CameraRig]

Remove the SteamVR_TrackedObject component.

Add the SteamVR_TrackedController component

Add the SteamVR_LaserPointer component

Select a color for your pointers.  I’ve chosen RED for mine…

VRUIInput.cs

Because the laser pointer script doesn’t handle input itself, we’ll need to add a new script to tell our UI when we want to interact with it.

Create a new c# script.

Name it VRUIInput

Replace the contents with this.
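
The original VRUIInput script isn’t shown above.  The sketch below does the same job, assuming it can lean on the laser pointer’s PointerIn/PointerOut events and the tracked controller’s TriggerClicked event (names from the SteamVR 1.2.x scripts); the real script may work differently.

using UnityEngine;
using UnityEngine.UI;

[RequireComponent(typeof(SteamVR_LaserPointer))]
[RequireComponent(typeof(SteamVR_TrackedController))]
public class VRUIInput : MonoBehaviour
{
    private SteamVR_LaserPointer laserPointer;
    private SteamVR_TrackedController trackedController;
    private Button hoveredButton;

    private void OnEnable()
    {
        laserPointer = GetComponent<SteamVR_LaserPointer>();
        trackedController = GetComponent<SteamVR_TrackedController>();

        laserPointer.PointerIn += OnPointerIn;
        laserPointer.PointerOut += OnPointerOut;
        trackedController.TriggerClicked += OnTriggerClicked;
    }

    private void OnPointerIn(object sender, PointerEventArgs e)
    {
        // Remember which button the laser is currently over.
        hoveredButton = e.target.GetComponent<Button>();
    }

    private void OnPointerOut(object sender, PointerEventArgs e)
    {
        if (hoveredButton != null && e.target == hoveredButton.transform)
            hoveredButton = null;
    }

    private void OnTriggerClicked(object sender, ClickedEventArgs e)
    {
        // Fire the button's normal UGUI OnClick events.
        if (hoveredButton != null)
            hoveredButton.onClick.Invoke();
    }
}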

Attach the VRUIInput component to both the Controller (left) and Controller (right).

UpdatePoses

Update: as of SteamVR 1.2.2, this fix is no longer needed.  Upgrade to 1.2.2 and skip this section! 🙂

Before your controllers will track, you’ll need to add the SteamVR_Update poses script to the camera.  This is a known bug in the latest SteamVR asset pack.

Select the Camera (eye) child of the [CameraRig]

Add the SteamVR_UpdatePoses component to it.

Halfway there!

If you press play now, you’ll see laser pointers beaming out of your controllers.  They won’t do much yet, but go ahead and check them out to make sure they’re visible.

The UI

It’s time to create a UI that we can interact with.

Canvas

Create a new Canvas.

Set the RenderMode to “World Space”.

Set the transform values to match these.

Scale x = 0.01
Scale y = 0.01
Scale z = 0.01
Width = 500
Height = 500
Position x = 0.0
Position y = 0.0
Position z = 8.0

Panel & Button

Under the Canvas, create a Panel.

Under the Panel, create a Button.

VRUIItem.cs

For our button to interact with the laser pointer, we need it to have a collider.

That collider’s size and shape need to match our button.  We could do this manually, but to avoid having to resize the collider whenever the button changes, you can use this simple script.

Create a new c# Script.

Name it “VRUIItem”

Replace the contents with this.
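
A minimal version of that script, sized off the RectTransform (a sketch; the original may also handle highlight colors):

using UnityEngine;

[RequireComponent(typeof(RectTransform))]
[RequireComponent(typeof(BoxCollider))]
public class VRUIItem : MonoBehaviour
{
    private void OnEnable()
    {
        ValidateCollider();
    }

    private void OnValidate()
    {
        ValidateCollider();
    }

    // Keep the collider the same size as the UI element so the laser pointer can hit it.
    private void ValidateCollider()
    {
        var rectTransform = GetComponent<RectTransform>();
        var boxCollider = GetComponent<BoxCollider>();
        boxCollider.size = rectTransform.sizeDelta;
    }
}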

Attach the VRUIItem component to the button.

You should see a BoxCollider added automatically and scaled to the proper size.

Select the Button Component.

Change the Highlight color to something more obvious, like green.

Add an event handler that changes the Text component to say something different (so we can tell if the click worked).

Conclusions

Here, I’ve duplicated the button 12 times and resized it a bit to show a bit more action.  Go ahead and try that yourself, or build a real menu for your game now. 🙂

The SteamVR Laser Pointer component, combined with a couple simple scripts, can get you up and running in minutes.  From here, you can simply replace the OnClick events with any normal Unity UI click events you’d use in a non-vr game.

While I’m a big fan of unique and interesting menu systems for VR, laser pointers are definitely an easy to use and intuitive method for input.  And for some games or apps, they’re definitely the preferred choice.

VRTK

It’s worth noting that another great way to setup UI interactions is via VRTK (VR Tool Kit).  VRTK is something I’ve used in the past and love.  It’s pretty easy to get started with and adds a ton of functionality beyond just laser pointers.  You can read more about VRTK here.

 


Pooled Decals for Bullet Holes

The Bullet Decal Pool System

Today’s article came out of necessity.  As you probably know, I’m wrapping up my long awaited VR Course, and one of the last things I needed to create was a decal setup for the game built in it.  To do decals properly, you’d want a full fledged decal system, but for this course and post, we have a system that does exactly what we need and no more.

What is that?  Well it’s a system to create a bullet hole decal where you shoot.  And to do it without creating and destroying a bunch of things at runtime.

It’s worth noting that I wrote this system for a VR game, but it is completely applicable to a normal game. This would work in a 3d game, mobile game, or anything else that needs basic decals.

The end result will look something like this

How do we build it?

Let’s take a look at the code

Code Breakdown

Serialized Fields

We open with 2 serialized fields.

bulletHoleDecalPrefab – The first determines which decal prefab we’ll use.  If you’re building a more generic decal system, you may want to re-name this.  Because it’s part of a VR course, I left the name as is, but if I were putting this in another game, it’d likely be more generic or maybe even an array that’s randomly chosen from.

maxConcurrentDecals – This sets the maximum number of decals the system will show.  We do this primarily for performance, but also to avoid visual cluttering.  Having too many decals could cause a hit on rendering; remember, each one is a transparent quad.  This number is variable in the editor though, so you can adjust it as you see fit for your game.

 

Private Fields

We have two private fields in this class.  They’re both using the Queue type to keep a first in first out collection of decals.

decalsInPool – This is where we’ll store the decals that are available and ready to be placed.

decalsActiveInWorld – These are the decals that we’ve placed in the world.  As our pool runs empty, we’ll start grabbing decals from here instead.

 

Awake

Calls our InitializeDecals() method.

 

Private Methods

InitializeDecals() – This is our setup.  Here, we create our queues, then we use a loop to create our initial pooled decals.

InstantiateDecal() – Here we do the actual creation of a single decal.  This is only called by InitializeDecals & a special editor only Update you’ll see soon.

GetNextAvailableDecal() – This method gets the next available decal… a useful description, eh?  It actually just looks at the pool; if there’s at least one decal in it, the method returns the first one in the queue.  If there’s no decal in the pool, it returns the oldest decal that’s active in the world.

 

Public Methods

SpawnDecal(RaycastHit hit) – This is our only public method, it’s the one thing this class is responsible for doing.  In the code that calls it, we’re doing a raycast to determine where our bullet hits.  The raycast returns a raycasthit and we pass it into this method as the only parameter.

The method uses GetNextAvailableDecal() and assuming a decal is available, it places that decal at the raycasthit.point, adjusts the rotation to the raycasthit.normal, and sets the decal to active.  The method ends by adding the decal to the decalsActiveInWorld queue.
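
Putting the pieces above together, the pool could look roughly like this (the class name and defaults are assumptions, and the editor-only resizing code described next is omitted):

using System.Collections.Generic;
using UnityEngine;

public class BulletDecalPool : MonoBehaviour
{
    [SerializeField] private GameObject bulletHoleDecalPrefab;
    [SerializeField] private int maxConcurrentDecals = 10;

    private Queue<GameObject> decalsInPool;
    private Queue<GameObject> decalsActiveInWorld;

    private void Awake()
    {
        InitializeDecals();
    }

    private void InitializeDecals()
    {
        decalsInPool = new Queue<GameObject>();
        decalsActiveInWorld = new Queue<GameObject>();

        for (int i = 0; i < maxConcurrentDecals; i++)
            InstantiateDecal();
    }

    private void InstantiateDecal()
    {
        var decal = Instantiate(bulletHoleDecalPrefab, transform);
        decal.SetActive(false);
        decalsInPool.Enqueue(decal);
    }

    private GameObject GetNextAvailableDecal()
    {
        if (decalsInPool.Count > 0)
            return decalsInPool.Dequeue();

        // Pool is empty - recycle the oldest decal that's already placed in the world.
        return decalsActiveInWorld.Dequeue();
    }

    public void SpawnDecal(RaycastHit hit)
    {
        var decal = GetNextAvailableDecal();
        if (decal == null)
            return;

        decal.transform.position = hit.point;
        decal.transform.rotation = Quaternion.LookRotation(hit.normal);
        decal.SetActive(true);

        decalsActiveInWorld.Enqueue(decal);
    }
}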

 

#if UNITY_EDITOR ????

Everything else in this class is actually wrapped to only run in the editor.

This code has a single purpose: to update our queue size at runtime.

It’s absolutely not necessary for your decal system, but it’s a nice little thing I enjoy having 🙂

I won’t cover each method, but you should play with the queue size at run-time and watch as it keeps everything in sync.

 

 

 


Unity OnInspectorGUI – Custom Editors, Gizmos, and Spawning Enemies

By Jason Weimann / September 12, 2016

Creating games can be difficult and time consuming.  You have to code all kinds of systems, add and modify art and sound, and of course design levels.

As a programmer, I often found myself overlooking level design, and forgetting just how time consuming and frustrating it could be.

But I also know that as a programmer, there are things I can do to make it easier for myself (and any designers working on the games).

Today, I’ll show  you one very useful technique you can use to drastically reduce the time spent on design work, while making it a much more fun process.

The Example – Spawn Points

Enemies are a very common thing in video games, and in a large number of them, enemies are created/spawn throughout the game.

The GameObject spawning them could be simple, instantiating an enemy on a set interval.

Before I show you my technique, let me show you how I used to create them.

Version 1 – A simple transform (very bad)

When I first started placing spawn points in a game, I did it by simply placing a transform.  The screenshot below is actually a step beyond what I used to do, because in this one I’ve actually enabled the Icon so you can see it.

Custom Editors - Spawn Point as Transform

If you haven’t used the Icons before, the selection dialog is just to the left of the Active checkbox in the inspector.

Custom Editors - Icon Selector

I quickly moved on from just placing a transform though because it got really hard to tell exactly where the spawn point was in the world.  If the transform is below the ground, I wouldn’t be able to tell without moving the camera all around.  The same goes for a spawn point that’s in a building, hovering over the ground, etc.

Version 2 – Using a cube (less bad)

The next evolution of my spawn points involved cubes.  Creating spawn points with a cube renderer mostly resolved the issue with not being able to easily see the position in the scene.

To make this work though, I needed my spawn points to disable the renderer in their Awake() call so I didn’t have random boxes showing in the world when the game was being played.

It also didn’t really solve the issue of spawning enemies on the ground, so I’d have to make my spawners do a raycast downward to the ground to get their spawn point before popping out an enemy.

I’d try to place the boxes just a bit over the ground, but found that I wasted a lot of time lining things up right, testing, making minor movements, testing, etc.

In addition to that, it felt ugly, but I used this technique for a very long time….

Custom Editors - Spawn Point as Cube

Version 3 – Custom Editors

After using the previous methods for way too long, I finally came up with a solution that solved my previous problems and made building levels much faster.

Custom Editors - Enemy Spawners Scene View

As you can see in the image, Version 3 looks drastically different.  There are colored spheres with lines attaching them.  There’s text over them instead of in an Icon, and that text has a lot of info to it.

Before I show you how it’s done, let me explain what it is you’re seeing.

The Green spheres show actual spawn points for this game.  These are points where enemies will be instantiated.

The Blue spheres are waypoints.  Enemies spawn at the green spheres then walk to the blue ones.

The lines between them show which waypoints belong to each spawnpoint.

What’s that Text?

The text over the spawn point shows a few things.  Let’s examine the top left spawn point.

Custom Editors - Spawn Point Up Close

Intro 1 0:25-0:28 Spawn 2 [1/3] after 5(8s)

Intro 1 – This is the name of the wave/area this spawn point belongs to.  In this case, it’s the first introductory wave the player gets when they start the game.

0:25-0:28 – Here you see the time in the wave that this spawn point will be active.  This spawn point is active for a very short time, starting 25 seconds into the wave and ending only 3 seconds later.

Spawn 2 [1/3] – This tells us how many enemies will spawn from this point.  It’s going to spawn 2 zombies, one every three seconds (the [1/3] shows the count and interval).  The first one will spawn immediately, and the second after 3 seconds.

after 5 – This part isn’t visible on all spawn points, only on spawn points that delay their start.  You can see that in the Hierarchy, this spawn point is under a gameobject that enables after 20 seconds.  Each spawnpoint in a timer can have an additional delay added to it to avoid a large list of timers in the hierarchy.  The 5 second delay is what makes this spawner start at 0:25 instead of 0:20.

Custom Editors - Hierarchy

(8s) – The last thing you see just shows how long this spawnpoint is enabled.  For this one, after 8 seconds it will auto disable itself.  This is just the time of the last spawn minus the time the spawn point becomes enabled (28 – 20 in this case).

Snapping to the Terrain or Navmesh

One final benefit of this system that I want to show before getting into code is the ability to have your spawn points and waypoints automatically snap to the terrain or navmesh.  In the example below, you can see that when I move this waypoint around it will automatically find its place on the ground as soon as I release it.

This saves a ton of time and resolves that entire issue of lining things up.  Don’t do these things manually, have the editor do it for you.

Custom Editors - Waypoint Snapping

How It Works

To make my custom spawn points work like they do, I take advantage of two great features in Unity, Gizmos and Custom Inspectors.

Both parts do about half of the work required to get the full functionality.

Let’s start with this snippet from my EnemySpawner.cs script
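
The snippet itself isn’t reproduced here; the sketch below is a rough reconstruction from the description, so the line numbers mentioned in the breakdown refer to the original script rather than to this version.

#if UNITY_EDITOR
private void OnDrawGizmos()
{
    // The wave/area this spawner belongs to is its parent in the hierarchy.
    Transform wave = transform.parent;
    string label = string.Format("{0}\n{1}", wave.name, name);

    bool waveIsSelected = UnityEditor.Selection.activeTransform != null &&
                          UnityEditor.Selection.activeTransform.IsChildOf(wave);

    Gizmos.color = waveIsSelected ? Color.green : Color.gray;
    Gizmos.DrawSphere(transform.position, 0.5f);

    // Only draw the full label text for spawners in the selected wave.
    if (waveIsSelected)
        UnityEditor.Handles.Label(transform.position + Vector3.up, label);
}
#endif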

The first thing we do here is get the Wave parent of this spawner.  This is the GameObject that all spawners and timers will be under for a specific wave or area of the game.

In the example above, you saw the green part “Intro 1“.  That part was just the name of the wave we find right here.

Line 6 takes this wave name and uses string.Format to combine it with the current spawner’s name, which is why “Intro 1” is above the spawning details.

On Line 8, we check to see if the wave this gizmo is for is currently selected.  We then use that to determine if we want a green spawner gizmo or a gray one.  I do this so we can easily tell which spawners are related.  All spawners in a wave will be colored at the same time, and all the ones from other waves will just show up as gray.

Custom Editors - Disabled Spawners

Line 12 draws the sphere using Gizmos.DrawSphere, in whichever color we’ve chosen.

Lines 14-15 will draw the final text above the sphere if the spawner is in the selected wave.

The OnDrawGizmos code is pretty short, and on its own it does a bit of really useful stuff, but there’s a lot missing.  It does show the spheres, and it places the name above the sphere with the wave name as a prefix, but there’s a lot more we want to happen.

For example, the label from line 15 has a lot of useful info, and we pull that from the name.  But we don’t want to manually enter that info; we want it auto generated and updated whenever we change things.

Overriding ToString()

To generate the name, with all the useful data, we override the ToString method of our EnemySpawner class.

If you’ve never overridden the ToString method, you may want to check out this description for a simpler sample of how it works https://msdn.microsoft.com/en-us/library/ms173154.aspx

Every object in C# implements the ToString method that you can override (the default return value for most objects is the name of the class/type).

In this example, we’re building up the rest of the label text.  While I won’t go into the details of each line, the end result of this method looks like this:

"0:25-0:28 Spawn 2 [1/3] after 5(8s)"

The Custom Editor

To tie this all together, we use a custom editor for the EnemySpawner.

Before you see the bigger parts of the script, let’s start with the initial attribute that tells Unity this class is a custom editor.

The CustomEditor attribute allows you to tell the engine which MonoBehaviour you want the editor to be used for.  This is specified by giving it the type of the MonoBehaviour.  In this example it’s typeof(EnemySpawner).

Also remember to add the using UnityEditor statement and make the base class for your custom editor be of type “Editor”.
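
Put together, the opening of the editor class looks something like this (the class name EnemySpawnerEditor is my own):

using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(EnemySpawner))]
public class EnemySpawnerEditor : Editor
{
    // The OnInspectorGUI() override goes here - see the expanded version below.
}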

The Editor class has one important method you need to override.  Check out this expanded version of the script and the OnInspectorGUI method that’s being overridden.

This method is called every frame in the editor while the Inspector window is visible and the object is selected.  If the Inspector is not visible, or is showing some other game object, this code won’t be called.

Code Breakdown

The first thing we do in this OnInspectorGUI method is cache the component we’re working with.

On line 12, we assign the target gameobject to the _enemySpawner variable.

The variable target is defined by the editor class and specifies the gameobject this editor is showing currently

Line 13 calls the base editor class version of OnInspectorGUI so it can handle anything that we’re not dealing with.  This is required because we’re overriding the behavior of OnInspectorGUI.

Lines 14-19 are a single method call to create a range slider that will fill the min and max movement speed.  I do this just to enforce the idea that the max must be greater than the minimum.  As a benefit, it also makes the value a little easier to visualize.

custom-editors-movementspeed-range-slider

Lines 21-24 are there to add waypoints to the spawners.  I won’t cover in detail how they work, but these buttons essentially add a child object that will be used as a waypoint.  If it’s a random waypoint, my navigation code will select one at random, if it’s static, the enemies will path around them in order.  These also have their own gizmo and custom editor code to make them show up as blue in the scene view.

Line 28 just calls a method to disable any left over colliders or renderers on the spawner.  Generally there aren’t any, but sometimes one gets created with a cube or sphere and I want to make sure that’s disabled right away.  I could just remove them here too, but disabling does the same job and feels safer.

Line 30 does one of the most important parts.  It calls the method to stick the spawner to the ground.  Sticking the spawner down is done by a raycast from the spawner’s current position aimed downward.  We get the hit point and update the spawner’s position.

Line 33 wraps it all up by updating the spawner’s name.  It uses the overridden ToString() method we created above to determine the object’s new name.
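
Again, the original script isn’t reproduced here, so the line numbers above refer to it rather than to this rough reconstruction.  The field and method names on EnemySpawner (minMovementSpeed, AddRandomWaypoint, and so on) are stand-ins.

using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(EnemySpawner))]
public class EnemySpawnerEditor : Editor
{
    private EnemySpawner _enemySpawner;

    public override void OnInspectorGUI()
    {
        _enemySpawner = (EnemySpawner)target;
        base.OnInspectorGUI();

        // Min & max movement speed as a single range slider.
        EditorGUILayout.MinMaxSlider(new GUIContent("Movement Speed"),
            ref _enemySpawner.minMovementSpeed, ref _enemySpawner.maxMovementSpeed, 0f, 10f);

        // Buttons that add waypoint children to this spawner.
        if (GUILayout.Button("Add Random Waypoint"))
            _enemySpawner.AddRandomWaypoint();
        if (GUILayout.Button("Add Static Waypoint"))
            _enemySpawner.AddStaticWaypoint();

        DisableLeftoverCollidersAndRenderers();
        StickSpawnerToGround();

        // Rename the gameobject using the overridden ToString().
        _enemySpawner.name = _enemySpawner.ToString();
    }

    private void DisableLeftoverCollidersAndRenderers()
    {
        foreach (var leftoverRenderer in _enemySpawner.GetComponents<Renderer>())
            leftoverRenderer.enabled = false;
        foreach (var leftoverCollider in _enemySpawner.GetComponents<Collider>())
            leftoverCollider.enabled = false;
    }

    private void StickSpawnerToGround()
    {
        // Raycast straight down from the spawner and snap it to whatever it hits.
        RaycastHit hit;
        if (Physics.Raycast(_enemySpawner.transform.position + Vector3.up, Vector3.down, out hit))
            _enemySpawner.transform.position = hit.point;
    }
}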

Auto Naming in Action

custom-editors-naming-in-action

Important Note

For a custom editor to work, you need to place the script in a sub-folder named “Editor“.  This sub-folder can be anywhere in your project, and you can have multiple Editor folders, but only scripts in an Editor folder will work.

Custom Editors - EditorFolder

Custom Editors - EnemySpawner


Unity Interfaces

By Jason Weimann / September 4, 2016

Unity Interfaces – Getting Started

Lately, I’ve realized that many Unity developers have never programmed outside of Unity projects.
While there’s nothing wrong with that, it does seem to leave some holes in the average Unity developer’s skill set.
There are some great features and techniques that aren’t commonly used in Unity but are staples for typical c# projects.

That’s all fine, and they can be completely productive, but some of the things I see missing can really help, and I want to make sure to share those things with you.

Because of this, I’ve decided to write a few articles covering some core c# concepts that can really improve your code if you’re not using them already.

The first in this series will cover c# interfaces.

If you google C# interfaces, you’ll come across the MSDN definition:

An interface contains definitions for a group of related functionalities that a class or a struct can implement.

Personally, I prefer to use an example to explain them though, so here’s one from an actual game.

The ICanBeShot interface

In Armed Against the Undead, you have guns and shoot zombies.
But you can also shoot other things like Ammo pickups, Weapon unlocks, Lights, etc.

Shooting things is done with a standard raycast from the muzzle of the gun.  Any objects on the correct layer and in range can be shot.

If you’ve used Physics.Raycast before, you’ll know that it returns a bool and outputs a RayCastHit object.

The RaycastHit has a .collider property that points to the collider your raycast found.

In Armed, the implementation of this raycast looks like this:

private bool TryHitEnvironment(Ray ray)
{
	RaycastHit hitInfo;

    if (Physics.Raycast(ray, out hitInfo, _weaponRange, LayerMask.GetMask("EnvironmentAndGround")) == false)
        return false;

    ICanBeShot shootable = hitInfo.collider.GetComponent<ICanBeShot>();

    if (shootable != null)
		shootable.TakeShot(hitInfo.point);
    else
        PlaceBulletHoleBillboardOnHit(hitInfo);

    return true;
}

Here you can see that we do a raycast on the EnvironmentAndGround layer (where I place things you can shoot that aren’t enemies).

If we find something, we attempt to get an ICanBeShot component.

That component is not an actual implementation, but rather an interface that a variety of components implement.

It’s also very simple with a single method named TakeShot defined on it as you can see here:

public interface ICanBeShot
{
    void TakeShot(Vector3 hitPosition);
}

If you’ve never used an interface before, it may seem a little strange that there’s no actual code or implementation.  In the interface, we only define how the methods look and not the implementation.  We leave that part to the classes implementing our interface.

How the Interface is used

So now that I have my interface, and I have a method that will search for components implementing that interface, let me show you some of the ways I’m using this interface.

Implementation #1 – Ammo Pickups

public class AmmoBox : MonoBehaviour, ICanBeShot
{
    public void TakeShot(Vector3 hitPosition)
    {
		PickupAmmo();

		if (_isSuperWeaponAmmo)
			FindObjectOfType<Inventory>().AddChargeToSuperWeapon();
		else
			FindObjectOfType<Inventory>().AddAmmoToWeapons();
	}
}

This ammo script is placed on an Ammo prefab.

Ammo Scene and Inspector

Notice the box collider that will be found by the raycast in TryHitEnvironment above (line 5).

 

Ammo Inspector

In the case of the AmmoBox, the TakeShot method will add ammo to the currently equipped weapon.  But an AmmoBox isn’t the only thing we want the player to shoot at.

Implementation #2 – Weapon Unlocks

public class WeaponUnlocker : MonoBehaviour, ICanBeShot
{
    public void TakeShot(Vector3 hitPosition)
    {
        WeaponUnlocks.UnlockWeapon(_weaponToUnlock);
        PlayerNotificationPanel.Notify(string.Format("<color=red>{0}</color> UNLOCKED", _weaponToUnlock.name));

        if (_particle != null)
            Instantiate(_particle, transform.position, transform.rotation);

        Destroy(this.gameObject);
    }
}

Compare the AmmoBox to the WeaponUnlocker.  Here you see that we have a completely different implementation of TakeShot.  Instead of adding ammo to the players guns, we’re unlocking a weapon and notifying the player that they’ve unlocked it.

And remember, our code to deal with shooting things didn’t get any more complicated; it’s still just calling TakeShot.  This is one of the key benefits: we can add countless new implementations without complicating or even editing the code that handles shooting.  As long as those components implement the interface, everything just works.

Implementation #3 – Explosive Boxes

These are crates that when shot will explode and kill zombies.

Implementation #4 – Destructible Lights

In addition to everything else, the lights can also take a shot (in which case they explode and turn off the light source component)

Recapping

Again to make the benefits of Unity interfaces clear, re-examine our code in TryHitEnvironment.

ICanBeShot shootable = hitInfo.collider.GetComponent<ICanBeShot>();

if (shootable != null)
	shootable.TakeShot(hitInfo.point);

We simply look for any collider on the right layer then search for the ICanBeShot interface.  We don’t need to worry about which implementation it is.  If it’s an ammo box, the ammo box code will take care of it.  If it’s a weapon unlock, that’s covered as well.  If we add a new object that implements the interface, we don’t need to touch our current code.

Other Benefits

While I won’t cover everything that’s great about interfaces in depth here, I feel I should at least point out that there are other benefits you can take advantage of.

  1. Unit Testing – If you ever do any unit testing, interfaces are a key component as they allow you to mock out dependencies when you write your tests.
  2. Better Encapsulation – When you code to interfaces, it becomes much more obvious what should be public, and your code typically becomes much better encapsulated.
  3. Loose Coupling – Your code no longer needs to rely on the implementations of methods it calls, which usually leads to code that is more versatile and changeable.

 

 
