
Using Multiple Scenes

By Jason Weimann / September 26, 2016

Project Example Source Code

The source code and example projects for this post are available.  If you'd like to grab it and follow along, just let me know where to send it.

Using Multiple Scenes in Unity

One of the great things about Unity 5 is the ability to effectively load multiple scenes at the same time.
You may have noticed that your scene name is now visible in the top left of the Hierarchy.

Multiple Scenes - Empty Room with no other scenes loaded
This is here to show which scene the game objects are in.

If you load multiple scenes, you’ll see them as separate collapsible groups in the list.

Multiple Scenes - Scenes added in Hierarchy

There are a variety of ways you can use additive level loading in your projects.  In this article, we’ll cover some of the most common uses.

  • Splitting scenes for shared editing
  • Randomly generated game content
  • Seamless loading of large worlds
  • Smooth transitions from a menu scene to a game level

Shared editing / splitting the world

Multi-user scene editing in Unity can be painful.  Merging changes isn’t easy, and even when you successfully do a merge, it can be hard to tell if everything is right.

For many games, the new scene management systems in Unity 5 will allow you to split up parts of your world into separate chunks that are in their own scene files.

This means that multiple designers can each set up part of the world.

Our Starting State

To demonstrate how this would work, I’ve built two scenes.  There’s a purple scene and a yellow scene.

Multiple Scenes - Yellow and Purple Scenes

With both of them loaded at the same time, you can see that their seams line up and they combine to be a larger scene.

The advantage though is we can have a designer working on the yellow scene while another designer makes changes to the purple one.

This example has simple scenes.  In a real game, just imagine the scenes are different quadrants of a city, chunks of a large castle, or a large scene in one of your previous projects.

Mario Changed the Purple Scene

To show the benefit and how it works, we’ve modified the purple scene.  It now has another sphere and an extra word!

Check out the Hierarchy and notice that only the purple scene has been modified, so when we save, we're not affecting the yellow scene at all.

Multiple Scenes - Yellow and Purple Scenes - Purple Changed

Luigi changed the Yellow Scene

It’s a good thing we didn’t touch the yellow scene too, because another designer has made some changes to it while we were modifying the purple one!  They added a cube and more words!

Multiple Scenes - Yellow Scene Changed

Not a problem

Since we only edited the purple scene, nobody’s overwritten someone else’s work.

Multiple Scenes - Yellow and Purple Scenes - Both Changed

Our end result has changes from two separate designers working in parallel.  Depending on your game, this could be split among any number of people, all in charge of their own area, or at least coordinating who's editing each area to avoid stepping on each other's work.

Generating a level at run-time

The next situation we'll cover is loading multiple scenes to build a bigger level dynamically.

For this example, I’ve built two rooms.  One is red and the other is blue.

Multiple Scenes - Blue Room

The Blue Room

Multiple Scenes - Red Room

The Red Room

I've also created another scene named 'EmptyRoom'.

This scene holds a camera, a light, and a gameobject with a RoomLoadController script.

Multiple Scenes - Empty Room

The RoomLoadController is responsible for loading in our red and blue rooms during the game.

For this sample, our RoomLoadController will watch for keypresses of the numpad plus and numpad minus keys.  If the user presses either of them, we'll additively load another scene into our game.

using UnityEngine;

public class RoomLoadController : MonoBehaviour
{
    private int zPos = 0;

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.KeypadMinus))
        {
            AddRoom("RedRoom");
        }

        if (Input.GetKeyDown(KeyCode.KeypadPlus))
        {
            AddRoom("BlueRoom");
        }
    }

    private void AddRoom(string roomName)
    {
        zPos += 7; // each new room is offset another 7 meters down the Z axis

        var roomLoader = new GameObject("RoomLoader").AddComponent<RoomLoader>();
        roomLoader.transform.position = new Vector3(0f, 0f, zPos);
        roomLoader.Load(roomName);
    }
}

You may have read the script and wondered: where's the scene-loading part?  For this project, I wanted to load a bunch of scenes, each offset from the last by 7 meters.

To keep the code separated and simple, I spawn a new object called RoomLoader to do the work.  We give the RoomLoader a position and a room name, and it handles the rest.

Let’s take a look at the RoomLoader.

using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class RoomLoader : MonoBehaviour
{
    public void Load(string roomName)
    {
        SceneManager.sceneLoaded += SceneManager_sceneLoaded;
        SceneManager.LoadSceneAsync(roomName, LoadSceneMode.Additive);
    }

    private void SceneManager_sceneLoaded(Scene scene, LoadSceneMode mode)
    {
        SceneManager.sceneLoaded -= SceneManager_sceneLoaded;
        StartCoroutine(MoveAfterLoad(scene));
    }

    private IEnumerator MoveAfterLoad(Scene scene)
    {
        while (scene.isLoaded == false)
        {
            yield return new WaitForEndOfFrame();
        }

        // Rooms are offset along Z, so log the Z coordinate
        Debug.Log("Moving Scene " + transform.position.z);

        var rootGameObjects = scene.GetRootGameObjects();
        foreach (var rootGameObject in rootGameObjects)
            rootGameObject.transform.position += transform.position;
    }
}

Check out the Load method.  This is what's being called from the RoomLoadController, and it does two things.

  1. Registers a callback for the SceneManager.sceneLoaded event.
  2. Calls SceneManager.LoadSceneAsync, using the LoadSceneMode.Additive option.

The Unity documentation describes SceneManager.sceneLoaded as: "Add a delegate to this to get notifications when a scene has loaded."

Once LoadSceneAsync is called, the scene specified in roomName will start loading. Because of the LoadSceneMode.Additive option, we keep our current scene open as well, including our camera, light, and RoomLoadController.

Once the scene finishes loading, our SceneManager_sceneLoaded method will be called by the delegate we registered in Load.  The first thing we do is deregister from the event, so we don't get called for every other scene that loads.  Then we kick off a coroutine to wait for the scene to be completely ready.  The while loop in MoveAfterLoad does the waiting…. and waiting…. until scene.isLoaded is true.

I’m not sure why the scene isn’t completely loaded when the sceneLoaded event fires.  I’m sure there’s a reason for it, but I haven’t found the explanation yet.  If you happen to know, please comment.

In MoveAfterLoad, we get the root GameObjects of the newly loaded scene.  We then move those objects over so they're offset by this RoomLoader's position.  This is why the RoomLoadController sets the RoomLoader's position before calling Load.

Blue Room Root Objects

Blue Room Root Objects

Let’s check out the end result.

Multiple Scenes - Loading Red and Blue Rooms

Again, for this example, we’re controlling the loading of scenes, but there’s no reason we couldn’t randomly pick some.

This same technique can be used to randomly generate a dungeon out of pre-built scenes or load new scene parts as a player explores the world.
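For instance, random selection could be a small variation on the controller above. This is only a sketch; it assumes it lives alongside the AddRoom method shown earlier, and the room-name list is illustrative:

```csharp
// Illustrative sketch: pick a random pre-built room scene each time.
// Assumes this sits in RoomLoadController next to the AddRoom method above.
private static readonly string[] RoomNames = { "RedRoom", "BlueRoom" };

private void AddRandomRoom()
{
    var roomName = RoomNames[Random.Range(0, RoomNames.Length)]; // max is exclusive
    AddRoom(roomName); // reuses the offset-and-load logic from AddRoom
}
```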

Part Two

Scene management is a huge subject, and while we've covered some important basics, there's a lot more to learn.

If you're interested in this subject, you can get part two delivered directly to you as soon as it's ready.

Part Two of this post will cover:

  • Seamless loading of large worlds
  • Smooth transitions from a menu scene to a game level

 


Unity OnInspectorGUI – Custom Editors, Gizmos, and Spawning Enemies

By Jason Weimann / September 12, 2016

Creating games can be difficult and time consuming.  You have to code all kinds of systems, add and modify art and sound, and of course design levels.

As a programmer, I often found myself overlooking level design, and forgetting just how time consuming and frustrating it could be.

But I also know that as a programmer, there are things I can do to make it easier for myself (and any designers working on the games).

Today, I'll show you one very useful technique you can use to drastically reduce the time spent on design work, while making it a much more fun process.

The Example – Spawn Points

Enemies are a very common thing in video games, and in a large number of them, enemies are created/spawned throughout the game.

The GameObject spawning them could be simple, instantiating an enemy on a set interval.

Before I show you my technique, let me show you how I used to create them.

Version 1 – A simple transform (very bad)

When I first started placing spawn points in a game, I did it by simply placing a transform.  The screenshot below is actually a step beyond what I used to do, because in this one I’ve actually enabled the Icon so you can see it.

Custom Editors - Spawn Point as Transform

If you haven’t used the Icons before, the selection dialog is just to the left of the Active checkbox in the inspector.

Custom Editors - Icon Selector

I quickly moved on from just placing a transform though because it got really hard to tell exactly where the spawn point was in the world.  If the transform is below the ground, I wouldn’t be able to tell without moving the camera all around.  The same goes for a spawn point that’s in a building, hovering over the ground, etc.

Version 2 – Using a cube (less bad)

The next evolution of my spawn points involved cubes.  Creating spawn points with a cube renderer mostly resolved the issue with not being able to easily see the position in the scene.

To make this work though, I needed my spawn points to disable the renderer in their Awake() call so I didn’t have random boxes showing in the world when the game was being played.

It also didn’t really solve the issue of spawning enemies on the ground, so I’d have to make my spawners do a raycast downward to the ground to get their spawn point before popping out an enemy.

I’d try to place the boxes just a bit over the ground, but found that I wasted a lot of time lining things up right, testing, making minor movements, testing, etc.

In addition to that, it felt ugly, but I used this technique for a very long time….

Custom Editors - Spawn Point as Cube

Version 3 – Custom Editors

After using the previous methods for way too long, I finally came up with a solution that solved my previous problems and made building levels much faster.

Custom Editors - Enemy Spawners Scene View

As you can see in the image, Version 3 looks drastically different.  There are colored spheres with lines attaching them.  There’s text over them instead of in an Icon, and that text has a lot of info to it.

Before I show you how it’s done, let me explain what it is you’re seeing.

The Green spheres show actual spawn points for this game.  These are points where enemies will be instantiated.

The Blue spheres are waypoints.  Enemies spawn at the green spheres then walk to the blue ones.

The lines between them show which waypoints belong to each spawnpoint.

What’s that Text?

The text over the spawn point shows a few things.  Let’s examine the top left spawn point.

Custom Editors - Spawn Point Up Close

Intro 1 0:25-0:28 Spawn 2 [1/3] after 5(8s)

Intro 1 – This is the name of the wave/area this spawn point belongs to.  In this case, it’s the first introductory wave the player gets when they start the game.

0:25-0:28 – Here you see the time in the wave that this spawn point will be active.  This spawn point is active for a very short time, starting 25 seconds into the wave and ending only 3 seconds later.

Spawn 2 [1/3] – This tells us how many enemies will spawn from this point.  It’s going to spawn 2 zombies, one every three seconds (the [1/3] shows the count and interval).  The first one will spawn immediately, and the second after 3 seconds.

after 5 – This part isn't visible on all spawn points, only on spawn points that delay their start.  You can see that in the Hierarchy, this spawn point is under a gameobject that enables after 20 seconds.  Each spawn point in a timer can have an additional delay added to it to avoid a large list of timers in the hierarchy.  The 5 second delay is what makes this spawner start at 0:25 instead of 0:20.

Custom Editors - Hierarchy

(8s) – The last thing you see just shows how long this spawnpoint is enabled.  For this one, after 8 seconds it will auto disable itself.  This is just time of the last spawn minus the time the spawn point becomes enabled (28 – 20 in this case). 

Snapping to the Terrain or Navmesh

One final benefit of this system that I want to show before getting into code is the ability to have your spawn points and waypoints automatically snap to the terrain or navmesh.  In the example below, you can see that when I move this waypoint around it will automatically find its place on the ground as soon as I release it.

This saves a ton of time and resolves that entire issue of lining things up.  Don’t do these things manually, have the editor do it for you.

Custom Editors - Waypoint Snapping

How It Works

To make my custom spawn points work like they do, I take advantage of two great features in Unity, Gizmos and Custom Inspectors.

Both parts do about half of the work required to get the full functionality.

Let’s start with this snippet from my EnemySpawner.cs script
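In simplified form, the gizmo code looks something like the following; the Wave type, the selection check, and the exact label layout are illustrative rather than the verbatim source:

```csharp
// Simplified sketch of the OnDrawGizmos code described below.
// The Wave type and helper details are assumptions, not the original script.
#if UNITY_EDITOR
private void OnDrawGizmos()
{
    var wave = GetComponentInParent<Wave>(); // the Wave parent this spawner belongs to

    // Prefix the spawner's own name with the wave name, e.g. "Intro 1"
    string label = string.Format("{0}\n{1}", wave.name, name);

    // Is anything in this wave currently selected?
    bool waveSelected = UnityEditor.Selection.activeTransform != null
        && UnityEditor.Selection.activeTransform.IsChildOf(wave.transform);

    Gizmos.color = waveSelected ? Color.green : Color.gray;
    Gizmos.DrawSphere(transform.position, 0.5f);

    if (waveSelected)
        UnityEditor.Handles.Label(transform.position + Vector3.up, label);
}
#endif
```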

The first thing we do here is get the Wave parent of this spawner.  This is the GameObject that all spawners and timers will be under for a specific wave or area of the game.

In the example above, you saw the green part "Intro 1".  That part is just the name of the wave, which we find right here.

Next we use string.Format to combine that wave name with the current spawner's name, which is why "Intro 1" appears above the spawning details.

We then check to see if the wave this gizmo is for is currently selected, and use that to determine whether we want a green spawner gizmo or a gray one.  I do this so we can easily tell which spawners are related.  All spawners in a wave will be colored at the same time, and all the ones from other waves will just show up as gray.

Custom Editors - Disabled Spawners

The sphere itself is drawn using Gizmos.DrawSphere, in whichever color we've chosen.

Finally, if the spawner is in the selected wave, we draw the label text above the sphere.

The OnDrawGizmos code is pretty short, and on its own it does a bit of really useful stuff, but there's a lot missing.  It does show the spheres, and it places the name above the sphere with the wave name as a prefix, but there's a lot more we want to happen.

For example, the label has a lot of useful info, and we pull that from the name, but we don't want to manually enter that info.  We want it auto-generated and updated whenever we change things.

Overriding ToString()

To generate the name, with all the useful data, we override the ToString method of our EnemySpawner class.

If you've never overridden the ToString method, you may want to check out this simpler sample of how it works: https://msdn.microsoft.com/en-us/library/ms173154.aspx

Every object in C# has a ToString method that you can override (the default return value for most types is the name of the class/type).

In this example, we’re building up the rest of the label text.  While I won’t go into the details of each line, the end result of this method looks like this:

"0:25-0:28 Spawn 2 [1/3] after 5(8s)"

The Custom Editor

To tie this all together, we use a custom editor for the EnemySpawner.

Before you see the bigger parts of the script, let’s start with the initial attribute that tells Unity this class is a custom editor.

The CustomEditor attribute allows you to tell the engine which MonoBehaviour you want the editor to be used for.  This is specified by giving it the type of the MonoBehaviour.  In this example it’s typeof(EnemySpawner).

Also remember to add a using UnityEditor statement and make your custom editor derive from the Editor base class.
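Put together, the class declaration looks roughly like this; the editor class name here is my own choice, not something the post specifies:

```csharp
using UnityEditor;

// Only the attribute and the Editor base class are required;
// the class name is up to you.
[CustomEditor(typeof(EnemySpawner))]
public class EnemySpawnerEditor : Editor
{
    // the OnInspectorGUI() override goes here
}
```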

The Editor class has one important method you need to override.  Check out this expanded version of the script and the OnInspectorGUI method that’s being overridden.

This method is called every frame in the editor while the Inspector window is visible and the object is selected.  If the Inspector is not visible, or is showing some other game object, this code won’t be called.
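In outline, the method looks something like this sketch. The field and helper names (MinMovementSpeed, AddWaypoint, StickToGround, the slider limits) are assumptions, not the verbatim script:

```csharp
private EnemySpawner _enemySpawner; // cached reference to the component being inspected

public override void OnInspectorGUI()
{
    _enemySpawner = (EnemySpawner)target;   // cache the component we're working with
    base.OnInspectorGUI();                  // let Unity draw the default inspector first

    // Range slider that enforces min <= max movement speed (limits are illustrative)
    EditorGUILayout.MinMaxSlider(new GUIContent("Movement Speed"),
        ref _enemySpawner.MinMovementSpeed, ref _enemySpawner.MaxMovementSpeed, 0f, 10f);

    // Buttons that add waypoint child objects to the spawner
    if (GUILayout.Button("Add Static Waypoint"))
        AddWaypoint(random: false);
    if (GUILayout.Button("Add Random Waypoint"))
        AddWaypoint(random: true);

    DisableLeftoverCollidersAndRenderers(); // hide any stray cube/sphere visuals
    StickToGround();                        // raycast down and snap to the hit point
    _enemySpawner.name = _enemySpawner.ToString(); // auto-name via the ToString override
}
```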

Code Breakdown

The first thing we do in this OnInspectorGUI method is cache the component we’re working with.

We do that by assigning target to the _enemySpawner variable.

The variable target is defined by the Editor base class and specifies the object this editor is currently showing.

Next we call the base Editor class's version of OnInspectorGUI so it can handle anything that we're not dealing with.  This is required because we're overriding the behavior of OnInspectorGUI.

After that, a single method call creates a range slider that fills in the min and max movement speed.  I do this just to enforce the idea that the max must be greater than the minimum.  As a benefit, it also makes the value a little easier to visualize.

custom-editors-movementspeed-range-slider

The waypoint buttons come next.  I won't cover in detail how they work, but these buttons essentially add a child object that will be used as a waypoint.  If it's a random waypoint, my navigation code will select one at random; if it's static, the enemies will path around them in order.  These also have their own gizmo and custom editor code to make them show up as blue in the scene view.

After that, we call a method to disable any leftover colliders or renderers on the spawner.  Generally there aren't any, but sometimes one gets created with a cube or sphere, and I want to make sure that's disabled right away.  I could just remove them here too, but disabling does the same job and feels safer.

Then comes one of the most important parts: the call that sticks the spawner to the ground.  Sticking the spawner down is done with a raycast aimed downward from the spawner's current position.  We get the hit point and update the spawner's position.
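The snap step can be sketched roughly as follows; this assumes a ground collider sits below the spawner and that the method name and ray distance are illustrative:

```csharp
// Rough sketch of the snap-to-ground step; names and distances are assumptions.
private void StickToGround()
{
    RaycastHit hitInfo;

    // Cast straight down from the spawner and snap it onto whatever we hit.
    if (Physics.Raycast(_enemySpawner.transform.position, Vector3.down, out hitInfo, 100f))
        _enemySpawner.transform.position = hitInfo.point;
}
```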

The method wraps up by updating the spawner's name.  It uses the overridden ToString() method we created above to determine the object's new name.

Auto Naming in Action

custom-editors-naming-in-action

Important Note

For a custom editor to work, you need to place the script in a sub-folder named "Editor".  This sub-folder can be anywhere in your project, and you can have multiple Editor folders, but only scripts in an Editor folder will work.

Custom Editors - EditorFolder

Custom Editors - EnemySpawner


Unity Interfaces

By Jason Weimann / September 4, 2016

Unity Interfaces – Getting Started

Lately, I’ve realized that many Unity developers have never programmed outside of Unity projects.
While there's nothing wrong with that, it does seem to leave some holes in the average Unity developer's skill set.
There are some great features and techniques that aren’t commonly used in Unity but are staples for typical c# projects.

That’s all fine, and they can be completely productive, but some of the things I see missing can really help, and I want to make sure to share those things with you.

Because of this, I’ve decided to write a few articles covering some core c# concepts that can really improve your code if you’re not using them already.

The first in this series will cover c# interfaces.

If you google C# interfaces, you'll come across the MSDN definition:

An interface contains definitions for a group of related functionalities that a class or a struct can implement.

Personally, I prefer to use an example to explain them though, so here’s one from an actual game.

The ICanBeShot interface

In Armed Against the Undead, you have guns and shoot zombies.

Armed Against the Undead

But you can also shoot other things, like ammo pickups, weapon unlocks, lights, etc.

Shooting things is done with a standard raycast from the muzzle of the gun.  Any objects on the correct layer and in range can be shot.

If you've used Physics.Raycast before, you'll know that it returns a bool and outputs a RaycastHit object.

The RaycastHit has a .collider property that points to the collider your raycast found.

In Armed, the implementation of this raycast looks like this:

private bool TryHitEnvironment(Ray ray)
{
    RaycastHit hitInfo;

    if (Physics.Raycast(ray, out hitInfo, _weaponRange, LayerMask.GetMask("EnvironmentAndGround")) == false)
        return false;

    ICanBeShot shootable = hitInfo.collider.GetComponent<ICanBeShot>();

    if (shootable != null)
        shootable.TakeShot(hitInfo.point);
    else
        PlaceBulletHoleBillboardOnHit(hitInfo);

    return true;
}

Here you can see that we do a raycast on the EnvironmentAndGround layer (where I place things you can shoot that aren’t enemies).

If we find something, we attempt to get an ICanBeShot component.

That component is not a concrete implementation but rather an interface, which is implemented by a variety of components.

It’s also very simple with a single method named TakeShot defined on it as you can see here:

public interface ICanBeShot
{
    void TakeShot(Vector3 hitPosition);
}

If you’ve never used an interface before, it may seem a little strange that there’s no actual code or implementation.  In the interface, we only define how the methods look and not the implementation.  We leave that part to the classes implementing our interface.

How the Interface is used

So now that I have my interface, and I have a method that will search for components implementing that interface, let me show you some of the ways I’m using this interface.

Implementation #1 – Ammo Pickups

public class AmmoBox : MonoBehaviour, ICanBeShot
{
    public void TakeShot(Vector3 hitPosition)
    {
        PickupAmmo();

        if (_isSuperWeaponAmmo)
            FindObjectOfType<Inventory>().AddChargeToSuperWeapon();
        else
            FindObjectOfType<Inventory>().AddAmmoToWeapons();
    }
}

This ammo script is placed on an Ammo prefab.

Ammo Scene and Inspector

Ammo Scene and Inspector

Notice the box collider that will be found by the raycast in TryHitEnvironment above (the Physics.Raycast call).

 

Ammo Inspector

Ammo Inspector

In the case of the AmmoBox, the TakeShot method will add ammo to the currently equipped weapon.  But an AmmoBox isn’t the only thing we want the player to shoot at.

Implementation #2 – Weapon Unlocks

public class WeaponUnlocker : MonoBehaviour, ICanBeShot
{
    public void TakeShot(Vector3 hitPosition)
    {
        WeaponUnlocks.UnlockWeapon(_weaponToUnlock);
        PlayerNotificationPanel.Notify(string.Format("<color=red>{0}</color> UNLOCKED", _weaponToUnlock.name));

        if (_particle != null)
            Instantiate(_particle, transform.position, transform.rotation);

        Destroy(this.gameObject);
    }
}

Compare the AmmoBox to the WeaponUnlocker.  Here you see that we have a completely different implementation of TakeShot.  Instead of adding ammo to the players guns, we’re unlocking a weapon and notifying the player that they’ve unlocked it.

And remember, our code to deal with shooting things didn't get any more complicated; it's still just calling TakeShot.  This is one of the key benefits: we can add countless new implementations without complicating or even editing the code that handles shooting.  As long as those components implement the interface, everything just works.

Implementation #3 – Explosive Boxes

These are crates that when shot will explode and kill zombies.
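The post doesn't show this script, but a minimal sketch of what it might look like follows; the Zombie type, its Die() method, and the field names are all assumptions:

```csharp
// A guess at the crate's shape; Zombie, Die(), and the fields are assumptions.
public class ExplosiveBox : MonoBehaviour, ICanBeShot
{
    [SerializeField] private float _blastRadius = 3f;
    [SerializeField] private GameObject _explosionParticle;

    public void TakeShot(Vector3 hitPosition)
    {
        if (_explosionParticle != null)
            Instantiate(_explosionParticle, transform.position, transform.rotation);

        // Kill any zombies caught in the blast radius
        foreach (var nearby in Physics.OverlapSphere(transform.position, _blastRadius))
        {
            var zombie = nearby.GetComponent<Zombie>();
            if (zombie != null)
                zombie.Die();
        }

        Destroy(gameObject);
    }
}
```

Because the shooting code only ever calls TakeShot through the interface, nothing else needs to change to support a crate like this.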

Implementation #4 – Destructible Lights

In addition to everything else, the lights can also take a shot, in which case they explode and the light source component turns off.

Recapping

Again to make the benefits of Unity interfaces clear, re-examine our code in TryHitEnvironment.

ICanBeShot shootable = hitInfo.collider.GetComponent<ICanBeShot>();

if (shootable != null)
	shootable.TakeShot(hitInfo.point);

We simply look for any collider on the right layer then search for the ICanBeShot interface.  We don’t need to worry about which implementation it is.  If it’s an ammo box, the ammo box code will take care of it.  If it’s a weapon unlock, that’s covered as well.  If we add a new object that implements the interface, we don’t need to touch our current code.

Other Benefits

While I won’t cover everything that’s great about interfaces in depth here, I feel I should at least point out that there are other benefits you can take advantage of.

  1. Unit Testing – If you ever do any unit testing, interfaces are a key component as they allow you to mock out dependencies when you write your tests.
  2. Better Encapsulation – When you code to interfaces, it becomes much more obvious what should be public, and your code typically becomes much better encapsulated.
  3. Loose Coupling – Your code no longer needs to rely on the implementations of methods it calls, which usually leads to code that is more versatile and changeable.

 

 


Using the Valves – Lab Renderer for VR

If you’ve done any VR development, one of the key things you should know is that FRAMERATE is crucial!  Get ready to learn how the Lab Renderer can help you keep that framerate high.

Dipping under 90 FPS will quickly make your game nauseating and ruin the fun.

The Lab Renderer - Low FPS Stats

Even if you have the most entertaining, exciting, innovative, other adjective game, if you can’t stay over the target frame rate consistently, your game will suck.

If you’re working in Unity, my original recommendation would have been to keep your vertex count low, bake as much lighting as possible, and limit yourself to 1 or 2 real-time lights where they really give a big payoff.

A few weeks ago though, Valve did something amazing and released their custom renderer for VR as a Unity plugin.

As the name implies, this is what they used to build the VR title "The Lab", and I'd say it was integral in making it look as nice as it does.

The Lab Renderer

The Lab Renderer - Asset Store Screenshot

The description on the asset store page doesn’t do it justice though, so I want to cover the performance gains I’ve seen by switching to this plugin for VR projects.

Benefits

Real Time Lighting

In a typical Unity VR game, most lighting is baked.  In fact, most games in general use quite a bit of light baking.

If you don’t know the difference yet between baked and real time lighting, I’d recommend you give this article a read:

UNITY 5 – LIGHTING AND RENDERING

It’s long but worth your time.

The reason most lighting is baked though is purely for performance.  Ideally, given unlimited performance, real time lighting for everything would be great.  With the lab renderer, you get much closer to this.

In my previous VR games and experiments, I’d come to accept a limit of 1-3 real time lights being active at a time.

The Lab Renderer, however, supports up to 18 real-time lights with little to no performance hit.

In my most recent VR game Armed Against the Undead, prior to switching to the Lab Renderer, I couldn’t have more than 1 real-time light enabled at a time without my FPS dipping below 90.

When using baked lighting, I was stuck waiting for light baking every time there was a change, in some cases waiting for 15-60 minutes, which as you can imagine can completely break the flow of work.

Once I made the switch, I was able to immediately turn off all baking, enable many more lights, and even make the lights all destructible!

Anti Aliasing

This comes straight from the Lab Renderer page, because I don't think I could state it better.  Essentially, you get fast MSAA:

   Single-Pass Forward Rendering and MSAA

Forward rendering is critical for VR, because it enables applications to use MSAA which is the best known anti-aliasing method for VR. Deferred renderers suffer from aliasing issues due to the current state of the art in image-space anti-aliasing algorithms.

Unity’s default forward renderer is multi-pass where geometry is rendered an additional time for each runtime spotlight or point light that lights each object. The Lab’s renderer supports up to 18 dynamic, shadowing lights in a single forward pass.

Framerate

The key reason you should look into using the Lab Renderer is FPS.

In my experience, the FPS gain from the lab renderer is around 50%.  This could of course vary drastically from project to project, but the difference I’ve seen is huge.

For Armed Against the Undead, the FPS gain is enough to take the game well over 100fps on an NVidia 970.

To demonstrate the difference, I’ve copied the existing project and built a side by side comparison of the game using both the standard and lab renderers.

I’ve added screenshots of both, so you can see there’s no real difference visually, but the FPS gain is huge.  I’ll let the screen shots speak for themselves..

The Standard Shot

Using the Standard Renderer

Using the Standard Renderer (79FPS)

The Lab Shot

Lab Renderer - Using Lab 110fps

Using the Lab Renderer (110FPS)

Negatives

While the Lab Renderer offers some great performance gains, there are a few downsides that you need to take into account before making the switch.  Some things just aren’t supported yet, and I don’t know when / if that will change.

Shaders

To take advantage of the Lab Renderer, you need to use the "Valve/VR_Standard" shader.  If you have your own custom shaders, this may be an issue.  If your project is all set up with legacy shaders, you'll need to manually convert them and make your assets look right again.  While I haven't had a hard time doing this, I have to admit I'm not an artist, and 'close enough' works fine for me.  But if you're very picky about the art, this may be an extra time sink or, in some cases, a breaking change.  That said, if you're using the Standard shader for everything, the conversion is practically automatic and works great.

In addition to this, your general full screen shaders won’t work either.  If you have special full screen shaders you really need, you may be able to figure out a way to get them working, but in my experience none have worked so far.

Terrain

The Unity terrain uses its own shaders, not the Standard shader.  From what I've seen so far, there isn't an easy way to make the Lab Renderer work right with terrains.  There are of course tools that convert terrains to meshes, and if you already have one that you like, that may be a good solution.  If your project makes heavy use of the Unity terrain system, though, and you can't easily swap that out, the Lab Renderer may not be right for it.

Getting Started

So now that you know the benefits, and can accept the drawbacks, it’s time to start implementing.

Luckily, doing this is easy and only takes a minute.

 

Commit your current project to source control!

If something goes wrong, or you find a drawback/issue you didn’t expect, you’ll want an easy way to revert.

GIT – You should be using it

If you’re not using any source control, read my article on Git and set it up before you do the conversion.  It will only take you 15 minutes and will prevent a ton of pain going forward.

Find your camera and add the “Valve Camera” script to it.

The Lab Renderer - Valve Camera

If you're using the [CameraRig] prefab from the SteamVR Plugin, add this to the (eye), not the (head).

The Lab Renderer - CameraRig - Eye

Disable Shadows

Shadows are handled by the lab renderer, so you need to disable the built in system.  When you want to adjust shadow settings, look at the “Valve Camera” script you added above.

In your project settings, you need to turn off shadows.  To do this, open Edit->Project Settings->Quality

Set Shadows to “Disable Shadows”
Disable Shadows
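If you’d rather enforce this from code (handy if teammates keep re-enabling shadows in the quality settings), Unity’s QualitySettings API can do the same thing for every quality level.  This is a sketch of my own, not part of the Lab Renderer:

```csharp
using UnityEngine;

public static class ShadowDisabler
{
	// Turn off built-in shadows for every quality level, since the
	// Lab Renderer handles shadows itself via the Valve Camera script.
	public static void DisableAllShadows()
	{
		int originalLevel = QualitySettings.GetQualityLevel();
		for (int i = 0; i < QualitySettings.names.Length; i++)
		{
			QualitySettings.SetQualityLevel(i, applyExpensiveChanges: false);
			QualitySettings.shadows = ShadowQuality.Disable;
		}
		// Restore whatever level was active before we looped.
		QualitySettings.SetQualityLevel(originalLevel, applyExpensiveChanges: false);
	}
}
```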

 

 Update your materials to use the new shader

Use the Menu item “Valve->Convert Active Shaders to Valve”

The Lab Renderer - Update Materials to Valve Shader

Wait…

This will convert all of your materials using the Standard shader over to the “Valve/VR_Standard” shader, and could take a while if you have a large number of them.

Optionally, you can use “Convert All Materials to Valve shaders”, which will find everything in your project, not just your open scene.  I recommend converting the active ones first because it’s usually a lot faster and will give you a good idea of how things are going to run.  If your system is fast though, just select the All option from the start so you don’t need to do it later.

Setup your Lights

Find all of your lights that you want to be realtime (In my case that was all of them), and add the “Valve Realtime Light” script to them.

When you add this, take a look at the options available.  You may not need to modify anything right now, but it’s good to know what you can do with them.

Lab Renderer - Valve Realtime Light Script
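If you have a lot of lights, adding the script by hand gets tedious.  Here’s a quick sketch that does it for every light in the open scene.  I’m assuming the component class is named ValveRealtimeLight, as it is in the plugin source; check your copy of the plugin if it differs:

```csharp
using UnityEngine;

public static class LightConverter
{
	// Add the Lab Renderer's light script to every light in the scene
	// that doesn't already have one. ValveRealtimeLight is the class
	// name from the plugin source (verify against your version).
	public static void ConvertSceneLights()
	{
		foreach (Light light in Object.FindObjectsOfType<Light>())
		{
			if (light.GetComponent<ValveRealtimeLight>() == null)
				light.gameObject.AddComponent<ValveRealtimeLight>();
		}
	}
}
```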

Debugging

 

After you’re setup and running, you may find your performance isn’t as high as you expect.  This could be caused by some materials not being correct.

The automatic update of materials works great to catch things using the standard shader, but if you had some legacy ones you missed/forgot, they could be causing less than optimal performance.

Luckily, Valve already thought of this and added some helper options to the “Valve Camera” script.

With the game running, enable “Hide All Valve Materials” on your “Valve Camera“.

The Lab Renderer - Hide All Valve Materials

Everything using the correct shader will disappear and only things that still need to be updated will be visible.

Swap those over to the “Valve/VR_Standard” shader and check your performance again.
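If you’d rather get a text list than a visual check, a small editor script can log the stragglers for you.  This is my own sketch, not part of the plugin; drop it in an Editor folder, and note the menu path is just a name I picked:

```csharp
using UnityEngine;
using UnityEditor;

public static class ShaderAudit
{
	// Editor helper: log every material in the project that is NOT
	// using the Valve/VR_Standard shader, so you can hunt down the
	// ones the "Hide All Valve Materials" check reveals.
	[MenuItem("Tools/List Non-Valve Materials")]
	private static void ListNonValveMaterials()
	{
		foreach (string guid in AssetDatabase.FindAssets("t:Material"))
		{
			string path = AssetDatabase.GUIDToAssetPath(guid);
			var material = AssetDatabase.LoadAssetAtPath<Material>(path);
			if (material != null && material.shader.name != "Valve/VR_Standard")
				Debug.Log(material.name + " uses " + material.shader.name + " (" + path + ")");
		}
	}
}
```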

Start Now

My final recommendation is to start using the Lab Renderer today.  Don’t wait until your project is complete, as the task will only become more difficult.

If you start early, you can tune your game for the performance available, and avoid the pitfalls of using something that’s incompatible.

If you want to learn more about the Lab Renderer, you can check out the steam forum here: http://steamcommunity.com/app/358720/discussions/0/

And if you have any feedback or tips you’d like to share, please comment or email me.

Continue reading >
Share

VR discussion on .Net Rocks

If you haven’t heard it already, check out this discussion on Unity3D & VR/Vive development.

1310 Building Virtual Reality Apps for Vive VR in Unity3D with Jason Weimann

 

The hosts of the show, Richard and Carl, have a wide variety of experience and brought some interesting points to the chat.

If you haven’t heard them before, the show is about programming in general with a slight leaning toward C# (aka the best Unity language).

 


Recommended Unity3D Assets you should be using!

By Jason Weimann / June 9, 2016

Preface: None of these asset creators are funding this, and I get nothing for sharing these outside the joy of spreading some of my favorite assets.

In this post, I wanted to cover some of my favorite assets that I’ve been using recently. There are a ton of posts on the best free assets out there.  I've noticed though that people tend to shy away from recommending some of the better paid assets, so I wanted to go over the premium ones that I constantly find myself recommending to other developers.

While this is not a complete list of every asset I'd give 5 stars to, it is the list of ones I've found myself recommending more than a few times in the last month.  Everything on this list is an asset I've purchased myself and lead my friends to buy as well.

I've also been taking feedback on other readers' recommendations and was pleased to see some overlap with my own.  If you have your own recommendations for assets you couldn't work without, please let me know!


Font Styling

Update: TextMeshPro is FREE now and will eventually be integrated into the engine (that's how good it is)

If you’re doing any UI work at all in your game, TextMeshPro is something you should be using. When I first saw the videos and pictures, I was pretty skeptical.

But I gave it a try and was blown away. Within 5 minutes, I’d turned my crappy looking UI into something professional looking just by adjusting a few sliders.

Previously, I’d have to jump into Photoshop, write out some text, add some effects, import it, and see how it worked out. Now I just use the TextMeshPro component instead of the Text component and I’m done.

Insert Icons into your text

In addition to making the text look great, it also supports inline icons, bolding, italics, color codes, and more. My description can’t do it justice, so definitely jump over to the videos and look for yourself.

Again, if you’re using text in your game, get this one.


Cartoon Environments: BitGem Dungeon

If you’ve tried some of my demos, you’ll see that I use this one quite a bit.  I really like the way this one looks and how well it performs.  It was easy to get over the required 90fps for a VR game and runs great on mobile too. If you need a dungeon (and some amazing characters to go along with it), I definitely recommend the BitGem ones.

Dungeon Starter Set


Guns & Weapons: Weapon Pack

This pack is amazing. It’s advertised as a weapons pack, but once you open it up, you have the start of a full on FPS. The demo scene lets you run around and shoot things, swap weapons, kill stuff, etc. It has the sounds, particles, and animations to really tie everything together. This is my new go-to weapon pack. While the price is near the highest for the asset store, I think it’s still an amazing deal given the crazy amount of really high quality work you get.

FPS Weapons

Just to reiterate why I think this pack is so great, here’s a list of the stuff it includes!

  • Over 20 weapons
  • Fire & Reload Animations
  • Bullet models (for some of them, but I reused them across all the weapons)
  • Impact particles (bullet holes)
  • Muzzle flashes
  • Fire & Reload sound effects
  • A fully playable demo implementation
  • A flame thrower!

CartoonFX Easy Editor
The CartoonFX packs have been around for a while, and for the longest time, they were the only cartoon particles worth mentioning. Now there are a bunch of good ones, and CartoonFX still stands out as one of my favorites. The thing I like the most about it though is the EasyEditor script that’s included.

If you’ve ever needed to scale a particle effect, you probably know how tedious it can be. With the CartoonFX Easy Editor, it’s ultra simple. Pick a new scale, click a button, and it’s done. It covers the scale of all the children (including scale over time).

Even if you don’t want, like, or need cool cartoony particle effects, get one of these packs just for the editor.  It’s a huge time saver and something I recommend using.


Armed Against the Undead Alpha


Today I’m happy to announce an early alpha build of Armed Against the Undead.  The game is available now to anyone who owns the HTC Vive, but only for the next 48 hours.  I’d like to get 100 people in and some feedback so the project can be moved to the next phase and become what the players want.  If you have the HTC Vive and want to play, sign up here:

The alpha signup has expired.  If you really want to play though, sign up to be notified when the next alpha test starts so you don’t miss out.

Note: When you close the game, it will pop up a survey to get feedback.  I know it may be annoying, but please fill it out so I can work to make the game as good as possible.

Game Description

The game takes place in the depths of a dead city.  Your goal is to get out alive.  There are lots of weapons and even more zombies.

Game Modes

The game has 2 distinct modes and more are planned for later.

 

Survival

Armed - Survival Score

Player Health: 1 (it’s survival)

This mode is exactly what you’d expect.  Stand in one place, shoot zombies, and try not to die.  The zombies (and other monsters) come from all directions.  Occasionally, you’ll get new weapons to fend off the beasts.  One hit and you die!  Survive as long as you can and beat the top score.

 

Story

Armed - Save the Infected

Player Health: 20 (no, you can’t heal)

Story mode takes you through the city, fighting your way out.  You’ll progress through different areas collecting new weapons along the way.  You can choose your path, save the humans, search for an easy way, or go find some better weapons down a back alley.

In this alpha build, only one path is available for testing.

Protect any humans you come across, they’ll bring you nice new weapons.  Make it to the subway and escape alive!

 

Weapons

Armed - Grip To Reload

Armed Against the Undead has Armed in the name!  There are over 20 weapons available to destroy the undead horde with.  For the alpha, the following weapons are unlocked and available.

Pistol

Damage: 1

Rate: Slow

Shotgun

Damage: 10

Rate: Very Slow

Armed - Pistol and Shotgun

Chainsaw

Damage: 5 per second

Armed - Chainsaw

SMG

Damage: 1

Rate: Medium

Armed - SMG

Flamethrower

Damage: 1

Rate: Fast

Minigun

Damage: 1

Rate: Very Fast

 

Tips

  • Aim for the head.  Zombies die from 1 shot to the head every time.
  • Shooting zombies in the legs can slow them down (they’ll start crawling if you hit them enough).
  • Use the shotgun for up-close encounters.  Its range is small, but it kills just about everything in a single shot.
  • Listen for the growls.  If a zombie comes within 9 feet of you, it will growl, and you’ll have a second or two to shoot it.
  • The chainsaw needs to be IN the zombies to cut them, but kills stuff quickly and doesn’t use ammo.
  • Don’t forget to use the Grip to reload.

Armed - Save the Infected

Get more ammo for your weapons by shooting at the ammo box (with the weapon you want ammo in).


Unity Coding Standards

Today, we’ll talk about Unity Coding Standards.  We’ll cover things to do, things to avoid, and general tips to keep your projects clean, maintainable, and standardized.

Things to avoid

I want to preface this by saying that the thoughts in this post are guidelines and not meant to be a criticism of anyone.  These are personal preferences and things I’ve picked up from experience across a variety of different projects.

If you find that you commonly do and use some of these things, don’t be offended, just try to be conscious of the issues that can arise.  With that little disclaimer, here are some of the key things I always fight to avoid and recommend you do your best to limit.

Public Fields

I won’t go deep into this as I think I’ve already covered it here.  Just know that public fields are generally a bad idea.  They often tend to be a precursor to code that’s difficult to read and maintain.

If you need to access something publicly, make it a property with a public getter.  If you really need to set it from another class, make the setter public too, otherwise use a property that looks like this:

public string MyStringThatNeedsPublicReading { get; private set; }

Large Classes

I’ve seen far too many Unity projects with class sizes that are out of control.  Now I want to clarify that this is not something specific to Unity; I’ve seen classes over 40k lines long in some AAA game projects.  I’ve seen .cs & .js files in web apps over 20k lines long.

That of course does not make them right or acceptable.

Large classes are hard to maintain, hard to read, and a nightmare to improve or extend.  They also always violate one of the most important principles in Object Oriented Programming: the principle of Single Responsibility.

As a general rule I try to keep an average class under 100 lines long.  Some need to be a bit longer; there are always exceptions to the rules.  Once they start approaching 300 lines though, it’s generally time to refactor.  That may at first seem a bit crazy, but it’s a whole lot easier to clean up your classes when they’re 300 lines long than when they reach 1000 or more.  So if you hit this point, start thinking about what your class is doing.

Is it handling character movement?  Is it also handling audio?  Is it dealing with collisions or physics?

Can you split these things into smaller components?  If so, you should do it right away, while it’s easy.
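As a rough sketch of what that split can look like (the class and field names here are made up for illustration, and each class would live in its own file), every responsibility becomes its own small component:

```csharp
using UnityEngine;

// Instead of one giant Player class that moves, plays audio, handles
// collisions, and more, each concern gets its own focused component.
public class PlayerMovement : MonoBehaviour
{
	[SerializeField] private float _speed = 5f;

	private void Update()
	{
		var input = new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
		transform.Translate(input * _speed * Time.deltaTime);
	}
}

public class PlayerAudio : MonoBehaviour
{
	[SerializeField] private AudioSource _footsteps;

	// Other components call this instead of touching the AudioSource.
	public void PlayFootstep()
	{
		if (_footsteps != null)
			_footsteps.Play();
	}
}
```

Each component now has a single reason to change, and either can be tested, replaced, or reused without touching the other.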

Large Methods

Coding Standards - too long list

Large classes are bad.  Large methods are the kiss of death.

A simple rule of thumb: if your method can’t fit on your screen, it’s too long.  An ideal method length for me is 6-10 lines.  In that size it’s generally doing one thing.  If the method grows far beyond that, it’s probably doing too much.

Sometimes, as in the example below, that one thing is executing other methods that complete the one bigger thing.  Make use of the Extract Method refactoring: if your method grows too long, extract the parts that are doing different things into separate methods.

Example

Take this Fire() method for example.  Without following any standards, it could easily have grown to this:

Original

protected virtual void Fire()
{
	if (_animation != null && _animation.GetClip("Fire") != null)
		_animation.Play("Fire");

	var muzzlePoint = NextMuzzlePoint();
	if (_muzzleFlashes.Length > 0)
	{
		var muzzleFlash = _muzzleFlashes[UnityEngine.Random.Range(0, _muzzleFlashes.Length)];

		if (_muzzleFlashOverridePoint != null)
			muzzlePoint = _muzzleFlashOverridePoint;

		GameObject spawnedFlash = Instantiate(muzzleFlash, muzzlePoint.position, muzzlePoint.rotation) as GameObject;
	}

	if (_fireAudioSource != null)
		_fireAudioSource.Play();

	StartCoroutine(EjectShell(0f));

	if (OnFired != null) OnFired();

	if (OnReady != null)
		OnReady();

	var clip = _animation.GetClip("Ready");
	if (clip != null)
	{
		_animation.Play("Ready");
		_isReady = false;
		StartCoroutine(BecomeReadyAfterSeconds(clip.length));
	}

	_currentAmmoInClip--;
	if (OnAmmoChanged != null)
		OnAmmoChanged(_currentAmmoInClip, _currentAmmoNotInClip);

	RaycastHit hitInfo;

	Ray ray = new Ray(muzzlePoint.position, muzzlePoint.forward);
	Debug.DrawRay(muzzlePoint.position, muzzlePoint.forward);

	if (TryHitCharacterHeads(ray))
		return;

	if (TryHitCharacterBodies(ray))
		return;

	if (OnMiss != null) OnMiss();

	if (_bulletPrefab != null)
	{
		if (_muzzleFlashOverridePoint != null)
			muzzlePoint = _muzzleFlashOverridePoint;
		Instantiate(_bulletPrefab, muzzlePoint.position, muzzlePoint.rotation);
	}
}

This method handles the firing of weapons for an actual game.  If you read over it, you’ll see it’s doing a large number of things to make weapon firing work.  You’ll also notice that it’s not the easiest thing to follow.  As far as long methods go, this one is far from the worst, but I didn’t want to go overboard with the example.

Even so, it can be vastly improved with a few simple refactorings.  By pulling out the key components into separate methods, and naming those methods well, we can make the Fire() functionality a whole lot easier to read and maintain.

Refactored

protected virtual void Fire()
{
	PlayAnimation();

	var muzzlePoint = NextMuzzlePoint();
	SpawnMuzzleFlash(muzzlePoint);

	PlayFireAudioClip();
	StartCoroutine(EjectShell(0f));

	if (OnFired != null) OnFired();
	HandleWeaponReady();

	RemoveAmmo();

	if (TryHitCharacters(muzzlePoint))
		return;

	if (OnMiss != null) OnMiss();

	LaunchBulletAndTrail();
}

With the refactored example, a new programmer just looking at the code should be able to quickly determine what’s going on.  Each part calls a method named for what it does, and each of those methods is under 5 lines long, so it’s easy to tell how they work.  Given the choice between the two examples, I’d recommend the second every time, and I hope you’d agree.
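As an illustration, the RemoveAmmo() helper is nothing more than the ammo-handling lines lifted straight out of the original method:

```csharp
private void RemoveAmmo()
{
	// The same two statements from the original Fire(), now with a
	// name that says exactly what they do.
	_currentAmmoInClip--;
	if (OnAmmoChanged != null)
		OnAmmoChanged(_currentAmmoInClip, _currentAmmoNotInClip);
}
```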

Casing

The last thing I want to cover in this post is casing.  I’ve noticed in many projects I come across, casing is a mess.  Occasionally, a project I see has some kind of standard it’s picked and stuck to.  Much of the time though, it’s all over the place with no consistency.

The most important part here is to be consistent.  If you go with some non-standard casing selection, at least be consistent with your non-standard choice.

What I’m going to recommend here though is a typical set of C# standards that you’ll see across most professional projects in gaming, business, and web development.

Classes

Casing: Pascal Case

public class MyClass : MonoBehaviour { }

Methods

Casing: Pascal Case (No Underscores unless it’s a Unit Test)

private void HandleWeaponReady()

Private Fields

Coding Standards - Private Field

Casing: camelCase – with optional underscore prefix

// Either
private int maxAmmo;
// OR my preferred
private int _maxAmmo;

This is one of the few areas where I feel some flexibility.  There are differing camps on the exact naming convention to be used here.

Personally, I prefer the underscore since it provides an obvious distinction between class level fields and variables defined in the scope of a method.

Either is completely acceptable though.  But when you pick one for a project, stick with it.

Public Fields

It’s a trick, there shouldn’t be any! 😉

Public Properties

Casing: Pascal Case

public int RemainingAmmoInClip { get; private set; }

These should also be Automatic Properties whenever possible.  There’s no need for a backing field like some other languages use.

Again, you should also mark the setter as private unless there’s a really good reason to set it from outside the class.
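Here’s a small sketch of how that plays out in practice (Weapon and its members are invented for the example): the property can be read from anywhere, but only the class itself can change it.

```csharp
public class Weapon
{
	// Readable everywhere, writable only inside this class.
	public int RemainingAmmoInClip { get; private set; }

	public Weapon(int clipSize)
	{
		RemainingAmmoInClip = clipSize;
	}

	// All mutation goes through methods, so the class controls
	// how and when the value changes.
	public void Fire()
	{
		if (RemainingAmmoInClip > 0)
			RemainingAmmoInClip--;
	}
}
```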

 

Wrap Up

Again, this is just a short list of a few things that I think are really important and beneficial for your projects.  If you find this info useful, drop in a comment and I’ll work to expand out the list.  If you have your own recommendations and guidelines, add those as well so everyone can learn and grow.
Thanks, and happy coding!


Getting Started with SteamVR and Unity 5.6 [Updated]

Do you have your Vive?  Are you looking at SteamVR?

Are you ready to start building fun games and experiences in Unity?

Read along for some basic setup steps and a couple useful tips!

The SteamVR Plugin

If you’ve created a new project, the first thing you’ll need is the SteamVR Plugin.

This plugin has everything you need to get up and running, including some sample scenes.

If you’re looking for info on SteamVR with previous versions of Unity, it’s been archived here: http://unity3d.college/steam-vr-unity-5-4-beta/

The [CameraRig] Prefab

The SteamVR team has done a great job at making it easy to start out with the Vive.

Once you’ve imported the SteamVR plugin, you can find the [CameraRig] prefab located in the SteamVR\Prefabs folder.

Create a new Scene

Delete the Default Camera

In a new scene, drag the [CameraRig] prefab into your hierarchy.

Now hit play and enjoy the boring blue skybox.

If you don’t see anything, check your error log.

You may have a message saying:

VR: OpenVR Error! OpenVR failed initialization with error code VRInitError_IPC_ConnectFailed: "Connect to VR Server Failed (301)"!

If so, you need to launch SteamVR.

To do that, open Steam and click the SteamVR icon in the top right corner.

Once it’s started, go back to Unity and click play again.

If it still fails to start, post any error message you see in the console into the comments below so I can address your problem.

An Empty World

Looking around and replacing the controllers

 

If all is working well now, you’ll see the skybox and nothing else until you turn on your controllers.

Turn them on and you should immediately notice they appear in-game.  The triggers should adjust as you press them, and the trackpad should light up as you touch it (just like in the SteamVR Tutorial).

The controllers are available because of the [CameraRig] prefab.  If you expand it out in the Hierarchy, you’ll see the “Controller (left)” and “Controller (right)” children.

In the image shown here, I’ve turned on only the right controller, so the left is still deactivated (dark grey).

When you’re in play mode, the “Model” child of the controller creates children for the different components.

No Controllers? – Important fix for Unity 5.6 and SteamVR

Update: as of SteamVR 1.2.2 this is fixed and no longer needed.  Upgrade to 1.2.2 and skip this section! 🙂

If you’re using Unity 5.6 and the current version of the SteamVR plugin, you’ll notice that the controllers don’t actually turn on.

Until the SteamVR plugin is updated, you’ll need to implement this quick fix to get the controllers updating properly.

Select the Camera (eye)

Add the “SteamVR Update Poses” Component to it.

And done.  Now the controllers will track again.
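If you’d rather not rely on remembering the manual step, a tiny component on the eye camera can add it at startup.  SteamVR_UpdatePoses is the class name from the SteamVR plugin source; again, skip all of this on SteamVR 1.2.2 or newer:

```csharp
using UnityEngine;

// Safeguard for the Unity 5.6 controller-tracking fix: make sure the
// eye camera has the SteamVR_UpdatePoses component attached.
public class EnsureUpdatePoses : MonoBehaviour
{
	private void Awake()
	{
		if (GetComponent<SteamVR_UpdatePoses>() == null)
			gameObject.AddComponent<SteamVR_UpdatePoses>();
	}
}
```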

Replacing the Controllers

One of the questions I get quite often is “how do I replace the controllers with a [sword/gun/hand/random other thing]?”

As you may already expect, you can simply add the thing you’d like to replace the controller with as a child of the “Controller (right)” or “Controller (left)” GameObjects.

For this example, we’ll replace the controller with a shotgun (like I did in the Zombie shooter game)

First, I drag the shotgun model under the “Controller (right)” GameObject

This ‘works’, but there’s a bit of a problem.  While the shotgun will move around with the controller, it won’t be aligned correctly.

Now there are a variety of ways you can fix this, but the simplest one and the one I recommend you use is to make a new GameObject for the Shotgun and have the model be a child of it.

With the “Controller (right)” GameObject selected, click GameObject->Create Empty Child

You should see this

Rename the new “GameObject” to “Shotgun”

Move the “Shotgun” model (in my case named DBS) to be a child of the “Shotgun” GameObject

Fixing alignment and position

Press play and go to your Scene view.

Because we can’t see the controller without hitting play, these changes must be done in Play mode.  Follow along to see how to keep those changes once you’ve left play mode.

In the Scene view, adjust your weapon model to be aligned with the controller how you want it to be (some guns for example hold at a different angle than the shotgun pictured below)

While still playing, look to the Inspector to copy the transform values

Stop playing, and the gun will reset.

Now go back to the model for the gun (child of “Shotgun” in this example) and use the Paste Component Values menu option.

When you play again, your gun (or other object) should be properly aligned and move with your controller.

Once it looks right, disable the “Model” child of the “Controller (right)” GameObject.

You can delete it, but disabling it gives the same effect and allows you to easily re-enable it if you decide to make adjustments to your handheld items.
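If you prefer to wire this up at runtime instead of in the editor, a helper like the following (my own sketch, attached to the “Controller (right)” GameObject) parents the weapon and applies the offset values you copied in play mode:

```csharp
using UnityEngine;

// Runtime alternative to the editor steps above: spawn a weapon prefab
// as a child of this controller and apply a saved local offset.
// The position/rotation values are placeholders - paste in the ones
// you copied from the Inspector while aligning in play mode.
public class AttachToController : MonoBehaviour
{
	[SerializeField] private GameObject _weaponPrefab;
	[SerializeField] private Vector3 _localPosition;
	[SerializeField] private Vector3 _localEulerAngles;

	private void Start()
	{
		GameObject weapon = Instantiate(_weaponPrefab, transform);
		weapon.transform.localPosition = _localPosition;
		weapon.transform.localEulerAngles = _localEulerAngles;
	}
}
```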

That’s all you need to get started!  You should be able to look around, place objects where your hands are, and get to building stuff!

Serious about VR?

If you really want to get started with SteamVR and build your own game today, my Professional VR Game Development course can jumpstart your project.

Start Today!

 

Using Valve’s Lab Renderer for VR

Getting started with SteamVR Controller Input
