
VR Interaction – SteamVR Lab Interaction System – Throwables

Today, I’ll introduce you to VR interaction using The Lab Interaction System.  If you’ve played The Lab, you already know it includes a variety of interactions spread across a flurry of mini-games.  With a 98% positive review rating, it’s one of the most popular VR experiences, and the interaction system is one of my favorite parts.

Getting started with The Lab Interaction System only takes a couple of quick steps.  In this guide, I’ll show you the setup and how to create your first grabbable & throwable object.

Set Up The Lab Interaction System

Create an empty project

Import the SteamVR asset from the Asset Store

In your empty scene, delete the “Main Camera”

Add the [CameraRig] prefab from the SteamVR/Prefabs folder into your scene.

Create a new empty gameobject and name it Player

Reset the position of the Player in the Inspector

 

Right click on the “Player” GameObject in the Hierarchy and create 2 empty children

Hands Setup

Name them LeftHand & RightHand

Add the Hand component to both the LeftHand & RightHand

Select the LeftHand

In the Inspector, drag the RightHand to the “Other Hand” field.

Select the RightHand

Assign the LeftHand to the “Other Hand” field

Change the “Starting Hand Type” to Right

Controller Prefab

Because we’re not assigning any actual hand models here, we’ll want to fill in the Controller Prefab field.  This will allow the hands to automatically create controller models much the same way the default [CameraRig] controllers do.

Select the LeftHand and assign the “BlankController” prefab to the “Controller Prefab” field.

Repeat this for the RightHand

The BlankController prefab can be found in the SteamVR/InteractionSystem/Core/Prefabs folder
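If you prefer wiring this up from code, the same hand setup can be sketched in a small script.  Treat this as a sketch: the field names (otherHand, startingHandType, controllerPrefab) match the 2017-era Interaction System’s Hand component, but verify them against your SteamVR version.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Hedged sketch of the hand wiring described above, done at startup.
public class HandSetup : MonoBehaviour
{
    [SerializeField] private Hand leftHand;
    [SerializeField] private Hand rightHand;
    [SerializeField] private GameObject blankControllerPrefab; // SteamVR's BlankController

    private void Awake()
    {
        // Each hand needs a reference to the other.
        leftHand.otherHand = rightHand;
        rightHand.otherHand = leftHand;

        // The default hand type is Left, so only the right hand needs changing.
        rightHand.startingHandType = Hand.HandType.Right;

        // With no hand models assigned, the controller prefab lets the hands
        // spawn controller models automatically.
        leftHand.controllerPrefab = blankControllerPrefab;
        rightHand.controllerPrefab = blankControllerPrefab;
    }
}
```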

Player Setup

Select the “Player” in the Hierarchy.

Add the Player Component to the “Player” GameObject

Drag the hands onto the “Hands” array.

Expand the [CameraRig] and select the “Camera (eye)”

Right click on the Camera (eye) and add a new 3D Object->Sphere as a child.

In the Inspector, change the Sphere Scale to 0.1, 0.1, 0.1

Uncheck (or remove) the Mesh Renderer component of the Sphere

Select the Player

Assign the newly created Sphere to the “Head Collider” field of the Player component

Drag the [CameraRig] to the “Rig Steam VR” field

Environment Setup

So far, we have the player all setup with hands and ready to go.  It’s time to create a little environment where we can grab and throw things!

Create a Plane

Reset the position

Create a Cube

Set the transform values to match the image above – position = (0, 0, 2.5) | scale = (1, 1, 4)

Your scene view should look similar to the image here.

Throwables

OK, it’s time to make something throwable.  Luckily, this is the easiest part.

Create a new Sphere

Rename it “Ball”

Set the Scale to (0.1, 0.1, 0.1)

Set the Position to (0, 0.6, 0.7)

Your ball should now be just above the Cube like this

Add the “Throwable” component to the “Ball”

You’ll immediately see a few other components get added to the ball automatically.
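The same setup can also be done from code.  Here’s a minimal sketch, assuming the Interaction System is imported; the auto-added components come from Throwable’s RequireComponent attributes (in my version: Interactable, Rigidbody, and VelocityEstimator, but check your own package).

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Sketch: make the object this sits on grabbable & throwable at runtime.
public class MakeBallThrowable : MonoBehaviour
{
    private void Awake()
    {
        // Adding Throwable pulls in its required components automatically,
        // just like you see happen in the Inspector.
        if (GetComponent<Throwable>() == null)
            gameObject.AddComponent<Throwable>();
    }
}
```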

That’s it..

Press Play

Pick up your ball (with the Trigger) and throw it!

Conclusions

The SteamVR Lab Interaction System is actually very feature-rich, and this only covers the first of many possible interactions available.  The player and hand setup you have here will work as a great starting point for exploring the other interactions.  For a full demo of all the interactions, make sure you check out the “Interactions_Example” scene in the SteamVR/InteractionSystem/Samples/Scenes folder.  Also, if there’s a specific interaction type you’d like more info on, please comment below and let me know.

 

 


SteamVR Locomotion and Teleportation Movement

SteamVR Teleport

SteamVR ships with an easy-to-use teleport system that requires very little setup.  If you’ve played The Lab, you’ve seen the system in action.  It allows for quick teleportation to specified areas.

To setup the SteamVR teleportation system, there are a couple requirements.

Basic Setup

If you want to work along with the steps below, you’ll need the SteamVR plugin installed.  If you just want to read through, skip ahead to “Hands”.

Next, you’ll want to create a cube and scale it to (20, 1, 20).

Add a material to the floor.  In my example below, you’ll see I went with a wood plank material I found somewhere..

Finally, add a [CameraRig] prefab from the SteamVR plugin prefabs folder.

Hands

The SteamVR teleport system is coupled with the lab’s interaction system.  As such, it requires the use of the Hand and player objects.

The way it’s done in the samples, and the way I prefer to do it, is to create two new empty children under the [CameraRig].

Re-name them “Left Hand” & “Right Hand”

Add the “Hand” component to each of them.

Set the “Other Hand” field on both of them by dragging the other ‘hand’ into the field for each of them.

For the Right hand, make sure to set the “Starting Hand Type” to right (the default value is left).

Player

For teleport interactions to work, you’ll need to add the “Player” component to the [CameraRig].

Drag & add the Camera(eye) to the “Hmd Transforms” array.

Add the Left & Right hands to the “Hands” array.

Teleport Prefab

Next, you’ll need to add the “Teleporting” prefab to the scene.

You can find the “Teleporting” prefab in SteamVR/InteractionSystem/Teleport/Prefabs

Drag & Add the Teleporting prefab to the scene.

There are plenty of options you can adjust on the Teleport prefab, but for now, we’ll leave it at the defaults.

Teleport Points

Before we can teleport, we’ll need to tell the system where a valid teleport point is.  Again, the SteamVR system comes with a great prefab to get started.

Take the TeleportPoint prefab and place a couple instances in the scene above the ground.

Teleport Points – Testing

Press Play

Now use the trackpad to teleport to the different teleport points.

Teleport Points – Scene Loading

Though we won’t use it here, I also wanted to point out that the Teleport Point has a “Teleport Type” option on it.

This allows you to make the point load a different scene when the player moves there.  If you want to use this functionality, just switch the type and enter the scene name.
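If you’d rather configure that from a script, a hedged sketch might look like this.  The field names (teleportType, switchToScene) come from the Interaction System’s TeleportPoint component; double-check them against your SteamVR version, and the scene name here is just a placeholder.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Sketch: switch a teleport point from "move to location" to "load a scene".
public class SceneSwitchPoint : MonoBehaviour
{
    public string sceneToLoad = "NextLevel"; // hypothetical scene name

    private void Awake()
    {
        var point = GetComponent<TeleportPoint>();
        point.teleportType = TeleportPoint.TeleportPointType.SwitchToNewScene;
        point.switchToScene = sceneToLoad; // scene must be in Build Settings
    }
}
```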

Teleport Area

The final step before we can teleport is to add a “Teleport Area”.

A teleport area works a lot like the teleport point, but allows you to specify a bigger ‘area’ where you can land.

One way to set this up would be to just add the “Teleport Area” component to the floor.  If you try this though, the ground material will change, and when you’re not teleporting, the ground will be disabled completely.

Instead, what we’ll do is clone the ground object, move it a bit, and make that our teleport area.

Duplicate the floor.

Make the duplicate a child of the floor

Re-name it to “Floor – Teleport Area”

Set the Y position to 0.01

Add the “Teleport Area” component to it

As I mentioned above, you’ll notice that the material has automatically changed to “TeleportAreaVisible”.  Again, this is controlled by the “Teleporting” prefab in the scene and one of the reasons we can’t just add the component to the ground.

Teleport Area – Testing

With this in place, you should be able to hit play and teleport all around the area.  Also notice that the teleport points will work fine alongside the area.

Conclusions

The SteamVR interaction system used in The Lab is a great starting point for many projects.  If you’re looking to get the basics down, definitely check this system out.  It doesn’t cover every form of movement though, and is only a small sampling of the possible locomotion types you can add.  But if you just need good teleport movement, it’s easy to set up and works well.  If you’re interested in other locomotion systems, drop a comment below and let me know what you’d like to learn more about.


Building an Interactive Mobile 360 Video Player for GearVR

 

Building an interactive 360 Video player for Mobile / GearVR

This post will run you through the basics of creating a 360 video player for the Oculus GearVR.  The same techniques will work for Google Cardboard, Daydream, HTC Vive, and Oculus Rift with minor changes.  You’ll learn how to play a video, bring up a quick menu, and switch to another video.

Note on 3D 360

This post covers 360 video but not stereo 3D video.

If you’re looking to play 3D 360 Video check out this post: https://unity3d.college/2017/07/31/how-to-play-stereoscopic-3d-360-video-in-vr-with-unity3d/

Sphere Setup

Before we can play 360 video, we need an inverted sphere.

In a previous post, I showed how to use an inverted sphere mesh.

Today, I’ll show you how to create one in Unity using a single editor script – original source here

Create a new folder named “Editor”

Create a script named InvertedSphere.cs in the “Editor” folder.

Replace the script contents with this.
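The embedded script isn’t reproduced in this excerpt, so here’s a minimal sketch of what such an editor script can look like: an EditorWindow with a size field and a “Create Inverted Sphere” button that clones Unity’s built-in sphere mesh and flips its winding order and normals.  Treat the details as my assumptions, not the original source.

```csharp
using UnityEngine;
using UnityEditor;

// Sketch of an inverted-sphere creator (must live in an "Editor" folder).
public class InvertedSphere : EditorWindow
{
    private float size = 1f;

    [MenuItem("GameObject/Create Other/Inverted Sphere")]
    public static void ShowWindow()
    {
        GetWindow<InvertedSphere>("Inverted Sphere");
    }

    private void OnGUI()
    {
        size = EditorGUILayout.FloatField("Size", size);
        if (GUILayout.Button("Create Inverted Sphere"))
            CreateInvertedSphere();
    }

    private void CreateInvertedSphere()
    {
        var go = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        go.name = "Inverted Sphere";
        go.transform.localScale = Vector3.one * size;
        DestroyImmediate(go.GetComponent<SphereCollider>()); // not needed for video

        // Clone the shared mesh so we don't modify Unity's built-in sphere.
        var filter = go.GetComponent<MeshFilter>();
        var mesh = Instantiate(filter.sharedMesh);

        // Reversing the index buffer flips each triangle's winding order,
        // so the inside of the sphere renders instead of the outside.
        var tris = mesh.triangles;
        System.Array.Reverse(tris);
        mesh.triangles = tris;

        // Flip the normals to match.
        var normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        filter.sharedMesh = mesh;
    }
}
```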

You may need to hit play before continuing just to force the editor to compile your new editor script

In a new scene, right click the Hierarchy and select “Create Other”->”Inverted Sphere”

Set the size to 100.

Click “Create Inverted Sphere”

Select your “Inverted Sphere” in the Hierarchy

Add a “Video Player” component to the sphere.

Importing Videos

Create a “Videos” folder.

Download some sample 360 videos that you enjoy from YouTube or your favorite source.

Place them in the “Videos” folder.

Transcoding

Some videos will play fine on GearVR without transcoding them, but many won’t.  Luckily, transcoding is as easy as checking a box and waiting.

Select your videos.

In the inspector, check the “Transcode” box.

Click Apply

Wait (this took 15 minutes for me)

Video Selection Script

To manage our basic menu system, we’ll need two scripts.

Create a “Code” folder.

In the folder create these two scripts.

VideoSelection.cs

GazeInput.cs
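The contents of those two scripts aren’t shown in this excerpt, so here’s a hedged reconstruction based on how they’re used later in the post: each menu item holds a video clip, and a gaze/tap handler on the camera toggles the menu and plays the selected clip.  Names and behavior are assumptions.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Holds the clip this menu item represents.
public class VideoSelection : MonoBehaviour
{
    public VideoClip videoClip;
}

// Sits on the camera; GearVR touchpad taps come through as mouse button 0.
public class GazeInput : MonoBehaviour
{
    [SerializeField] private GameObject menu;          // the "Menu" object with the cubes
    [SerializeField] private VideoPlayer videoPlayer;  // the player on the inverted sphere

    private void Update()
    {
        if (!Input.GetMouseButtonDown(0))
            return;

        if (!menu.activeSelf)
        {
            menu.SetActive(true); // first tap: bring the menu up
            return;
        }

        // Menu is up: raycast along the gaze for a menu item.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            var selection = hit.collider.GetComponent<VideoSelection>();
            if (selection != null)
            {
                videoPlayer.clip = selection.videoClip;
                videoPlayer.Play();
                menu.SetActive(false);
            }
        }
    }
}
```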

Menu Setup

For this sample we’ll create a basic menu out of cubes.  Once it’s working, feel free to pretty them up however you like or replace the cubes with something that fits better to your style.

Create an empty GameObject in the Hierarchy

Name it “Menu”.

Reset the transform so it’s in the center of your scene.

Right click on the “Menu” object and create a cube as a child.

Rename your cube “Menu Item – yourvideoname” (for example, I used a bunny video, so mine looks like this)

Adjust the transform values so that X & Z are 1.

Add the VideoSelection component to the new Menu Item

Drag your first video clip onto the Video Clip field of the component.

The second Menu Item

To create another menu item, duplicate the existing one (ctrl-d).

Re-name it

Change the X to -1

Swap out the video clip with a different one.

Camera Setup

The last thing we need to do is setup the camera.

Select the “Main Camera” object in your Hierarchy.

Reset the position.

Set the Position Y value to positive 1.

Add the GazeInput component to the “Main Camera”

 

Building to the phone

The project should be good and running now.  The final step is to put it onto a GearVR (or whatever platform you’re using).

If you already have an OSIG file, place that in the Assets/Plugins/Android/assets folder of your project (create this folder structure if it doesn’t exist).

If you don’t know what I’m talking about, go here and follow the instructions (only takes a minute) – https://dashboard.oculus.com/tools/osig-generator/

With the OSIG in place we’re good to go.

Save the scene and open the Build dialog

Add the saved scene.

Click build and run with the phone plugged in and if all goes right, you’ll have your video player up and running.

**** IMPORTANT – Make sure your phone is plugged in via USB and UNLOCKED or the deployment will fail ****

Controls

The controls for this video player are very simple.

Tap the GearVR touchpad to bring up the 2 cubes.

Look at the cube you want to play from and tap again.  The video will start playing.

Just tap again to bring the menu (cubes) up again.

 

Conclusions & Extensions

This is not the prettiest video player, but should give you a good idea how to get started.  Once you get it working, the first thing I’d recommend doing is making the ‘menu’ a whole lot prettier, adding some text to show what the video names are, or some video preview on the items.

Overall though, the new video player functionality, along with the included Oculus SDK makes it really easy to get going with your own video player now.  I’m excited to see all the variations people build, and may end up building a more full featured one soon myself.


Unity3D Object Pooling

By Jason Weimann / May 11, 2017

In Unity3D, object pooling is a technique to pre-instantiate GameObjects before they’re needed and minimize instantiate and destroy calls. We use object pooling to minimize slow frames or stutters, but it’s also a key technique to remove unnecessary garbage collection. If you search for Unity object pooling, there are dozens of samples, all with their own opinions and nuances.. and most of them are perfectly good options. Today though, I wanted to share one of my recent pooling systems.

What’s different about this Unity3D Object Pooling system?

The main thing I wanted to do here was keep the pooling system very light and low on ceremony while still enforcing some basic functionality. The way this pooling system works, my objects get a reference to the pool or pools they need at startup by passing in the poolable prefab that’s assigned.  Of course, keeping it simple does mean that it’s not ultra-configurable and feature-rich; it does basic, clean pooling, nothing more.

How do I use it?

Before I go into the details of how the system works internally, let’s get a quick look at how you use it.

Take a look at this sample usage from my ProjectileLauncher class. [ LINE 21 ]

You can see in the Start method that I get a reference to the pool that’s needed by simply passing in the prefab.
Then in the Fire method, I request a projectile from my pool and launch it.

How does it work?

One important thing to note is that for this system to work, the prefabs need to implement a very small interface IPoolable

The IPoolable interface only has a single Action to track when the object is ‘destroyed’.

Now, since we want our GameObjects to be reused, not destroyed and re-created, this event is actually invoked in the OnDisable call of the prefabs.

When the poolable objects get disabled, the pool automatically re-adds them to a collection of available objects, then when a pooled object is requested, it pulls the next one out of the collection, moves it to the desired location and re-enables it.

For example, take a look at the OnDisable method in this projectile class.


You can view the entire Projectile.cs class here: https://gist.github.com/unity3dcollege/a241c5677dbf8cd8ba772c7c70224f0a
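Put together, the pattern described above looks roughly like this (a sketch; the linked gists are the real implementation, and the exact event signature there may differ):

```csharp
using System;
using UnityEngine;

// The small interface described above: a single event fired when the
// object is 'destroyed' (really just disabled).
public interface IPoolable
{
    event Action<IPoolable> OnDestroyEvent;
}

// How a pooled prefab implements it: raising the event in OnDisable is
// what hands the object back to the pool's available collection.
public class Projectile : MonoBehaviour, IPoolable
{
    public event Action<IPoolable> OnDestroyEvent = delegate { };

    private void OnDisable()
    {
        OnDestroyEvent(this);
    }
}
```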

The guts of the Object Pooling System

The majority of the pooling system is really handled in a single small class, appropriately named “Pool”.

You can download or view the entire Pool.cs class here: https://gist.github.com/unity3dcollege/21c082ca4caf94fddc75fc188441e0ee

Pool.cs Class Overview

The class starts out with some static methods for getting or pre-warming a pool. If you look back at the ProjectileLauncher, you can see that it’s calling the GetPool method to get a pool for the desired projectile.

GetPool / Prewarm


GetPool & Prewarm both do very similar stuff (and should get refactored).

  1. First, we check the static dictionary of pools to see if a pool already exists for the specified prefab.
  2. We then return the pool if it already exists in the dictionary and isn’t null (from being destroyed by a scene load or something else).
  3. If it doesn’t exist, or was destroyed and went null, we create a new pool and initialize it.
  4. If the initialization happens via the Get() call, we just use a standardized default pool size.
  5. When we need tighter control over the pool size, the Prewarm method allows that size to be passed in as a parameter.
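Those steps can be sketched like this (member names are illustrative; the real code is in the linked Pool.cs gist, where GetPool and Prewarm duplicate this logic rather than sharing a helper):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the static lookup described in the steps above.
public class Pool : MonoBehaviour
{
    private const int DefaultPoolSize = 10; // assumed default

    private static readonly Dictionary<GameObject, Pool> pools =
        new Dictionary<GameObject, Pool>();

    public static Pool GetPool(GameObject prefab)
    {
        return GetOrCreate(prefab, DefaultPoolSize); // standardized default size
    }

    public static Pool Prewarm(GameObject prefab, int size)
    {
        return GetOrCreate(prefab, size); // caller controls the size here
    }

    private static Pool GetOrCreate(GameObject prefab, int size)
    {
        Pool pool;
        // Unity's overloaded == makes a destroyed Pool compare equal to null,
        // so this also catches pools wiped out by a scene load.
        if (pools.TryGetValue(prefab, out pool) && pool != null)
            return pool;

        pool = new GameObject("Pool-" + prefab.name).AddComponent<Pool>();
        pool.Initialize(prefab, size);
        pools[prefab] = pool;
        return pool;
    }

    private void Initialize(GameObject prefab, int size)
    {
        // Instantiate `size` inactive copies and queue them as available.
    }
}
```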

Initialize

The Initialize method is responsible for actually creating the gameobjects we need pooled.

On line 55 they’re instantiated.
Next the OnDestroyEvent is registered to call AddObjectToAvailable()
And on line 60, we set the instantiated object to inactive, triggering the OnDestroyEvent and adding the object to the available pool.

Get

The Get method grabs the next available pooled object from the queue and returns it. If no more objects are available, it will grow the pool size by 10% (or by 1 if 10% rounds down to zero).

The public Get method also takes in a position and rotation so that the pooled object can be placed before it’s activated.
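A sketch of that Get behavior (again with assumed names, not the gist’s exact code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of Get: grow when empty, then place and activate.
public class PooledSpawner : MonoBehaviour
{
    private readonly Queue<GameObject> available = new Queue<GameObject>();
    private int totalCount; // everything this pool has ever created

    public GameObject Get(Vector3 position, Quaternion rotation)
    {
        if (available.Count == 0)
            Grow(Mathf.Max(1, totalCount / 10)); // grow by 10%, or 1 if 10% is zero

        var pooled = available.Dequeue();

        // Place the object before activating it, so OnEnable sees the right pose.
        pooled.transform.position = position;
        pooled.transform.rotation = rotation;
        pooled.SetActive(true);
        return pooled;
    }

    private void Grow(int count)
    {
        // Instantiate `count` more inactive copies and enqueue them.
        totalCount += count;
    }
}
```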

Update

The last thing we do is move any recently deactivated pooled objects to become children of the pool.

We do this because when the pooled object is active, we want freedom to control its parenting. This allows us to do things like make its position relative to the thing that spawned it. But when it’s ready to go back into the pool, I really prefer to have it as a child of the pool so that the scene is clean and it’s easy to avoid mistakes.

But how do I pre-warm it??

You may have noticed that the pooling system’s Prewarm wasn’t called by any of the code so far.

While having a pooling system that lazy initializes is better than no pooling system, pre-warming is almost always a good practice.

For that, I use the PoolPreparer class on a GameObject in my scene.

The pool preparer script gets added to a gameobject in my scene that needs the pools.

In it, I assign the prefabs I want pre-warmed, and it does the work during awake.

Most of the Awake method is just validation to prevent pre-warming multiple times, the actual initialization is on line 31.

The OnValidate() method is there just to make sure the system isn’t used incorrectly by accident.

Limitations, Thoughts, and Conclusions

Like I mentioned before, this object pool system is very basic, and is meant to be very lightweight. In the past, I’ve used and even built my own pooling systems that had all kinds of different requirements from a BasePoolable class to editor heavy tooling. For this one though, I wanted to be able to accomplish similar results with almost no special code, just implementing a single small interface.

That said, the system is not extremely configurable out of the gate; if you wanted to initialize different pool sizes, you’d have to have multiple PoolPreparer components (which you could do all on a single GameObject), or you’d have to modify something.
On top of that, the pool growing setup is super simplistic and just increases the pool size by 10% (all in one frame as well). So if you aren’t initializing the pool with the size you actually need, you could still run into a hiccup on occasion.
And finally, the setup is not intended to work across scenes. If you need to pool objects across scenes, the Pool class would need to be updated with some DontDestroyOnLoad calls for itself and the pooled objects it creates, though that wouldn’t be much work.

But it does do the job it’s intended to do, and does it with very little ceremony, and it just works..

If you have another favorite pooling system that you really like though, please share below. And if there’s enough interest, I may do an update where I show a couple of the previous pooling systems I’ve used and built over the years.


Avoid the VR waiting room with SteamVR Skybox

Valve’s SteamVR plugin makes getting started with VR in Unity quite a bit easier.  The [CameraRig] prefab is a perfect starting point for most projects, and the controller and input tracking is amazing.  But there’s quite a bit more to the package than just that.  The SteamVR_Skybox is one of the features in the SteamVR plugin that you should definitely know about and use.

What’s the SteamVR_Skybox do?

The SteamVR Skybox component will allow you to change what a player in the HMD sees when your game drops frames or is ‘not responding’.

Of course removing frame drops is the best solution, but sometimes you have cases where that’s just not an option.

In many games the most common time this happens is during scene loading.  Even using asynchronous methods to load your scene won’t resolve it for every case.

How do I use the SteamVR Skybox?

What I’ll usually do is create a child under the [CameraRig] and name it SteamVRSkybox.

I set the Y position of it to 1.5 and leave the X & Z alone.

This is to simulate a player of about 1.5 meters in height

Then I’ll add the SteamVR_Skybox component to it and click Take Snapshot

What’s this do?

With this component in place, when the game loses a frame and would go back to the loading area, it instead shows what was visible when I took the snapshot.

Sometimes this is enough and the game will just work, but other times you may have a special ‘loading area’ or some intermission setup that you use.

In that case, just place your SteamVRSkybox in the proper place for your snapshot instead of under the CameraRig.

Any Downsides?

While this is a great solution for most cases, depending on your implementation, you may see some less than perfect stuff.

First is that the skybox only shows what’s visible when you take the snapshot, so any gameobjects you’ve spawned at runtime won’t be there…  so if the frame drops are during gameplay, while it’s a tiny bit better, it will still feel weird.

And when you use it to load levels, I recommend you put the player into a loading area ideally.  This would be some room where stuff doesn’t change at runtime and your snapshot can match up correctly.

Also, because this is part of the SteamVR plugin, this method won’t work for games targeting the Oculus SDKs.  For Oculus, there’s another route you can follow that I’ll share some time in the future.

Conclusions

The SteamVR Skybox is only a tiny part of the awesome SteamVR package.  It’s easy to set up in just a couple minutes and on its own can make your game or experience feel a little bit more polished.  Again, it’s not a fix-all for bad performance, which should be addressed on its own, but it is a useful component and worth trying out.

 


Ceiling and Wall Navigation in Unity3D

By Jason Weimann / May 9, 2017

If you’ve ever wanted to build a game with spiders crawling on the walls, or a world where your player can walk along the ceiling, you’re in luck. In the past, it’s been a bit of a pain to get wall navigation or ceiling navigation to work in Unity. There are some asset packs out there and a couple guides on how to make it work, but now, it’s getting much easier. With the new Unity 5.6 navigation system, you can have spiders climbing your walls in just a few minutes.

Unity 5.6 Navigation System

First, it’s important to note that the navigation system updates talked about in this article are still in development and are not included in the 5.6 installer. I don’t know when they’ll be completely bundled in, but I really hope it’s with 2017.1.

Getting the Components

To download the navigation system, visit the git repo here and click Clone or Download: https://github.com/Unity-Technologies/NavMeshComponents/

The download option gives you a .unitypackage that you can import like any other package.

The NavMeshSurface

This new script is the key to walking on walls and ceilings (or any other non-standard navigation surface).

Setting it up is quick but can be a bit confusing because options on the component change how it works pretty drastically.

Using a Single / Global NavMeshSurface

You can add a NavMeshSurface object using the GameObject->AI menu (once you’ve imported the .unitypackage).

Take a look at the Collect Objects option.

If you’re using a single NavMeshSurface on an empty gameobject, the “All” option makes sense.

What it does is bake for every object in the scene..

For wall climbing though, it doesn’t really help, because the orientation of the empty gameobject determines the orientation of the navmesh..

You could create 4 of them, rotate them in the 4 directions of the walls, and bake (and 2 more for the floor and ceiling), but I didn’t find that to be too useful personally.

Using multiple NavMeshSurfaces

Where I had more success is by adding the NavMeshSurface component to the specific parts I want the player/npc to walk on.

In this example image, I’ve created 2 cubes.

They both have a NavMeshSurface script on them with Collect Objects set to “Children”

This makes the NavMeshSurface only bake the meshes on that object and its children.

To get the side wall to generate the navmesh where I wanted though, I needed to rotate the wall -90 degrees.

This is because currently the “Bake” option generates the navmesh based off the navmeshsurface’s orientation.

Linking the Navmeshes

With the current version of the system, automatic creation of links isn’t supported.

OffMeshLinks can still auto generate for the objects using the old baking system, but the new NavMeshLink used by NavMeshSurface has to be setup manually.

The documentation hints that this will be automated in the future.  For now it’s a relatively tedious process, but it’s one that works..

To set it up, I added a NavMeshLink component to the wall.

When you add the NavMeshLink, you’ll see two little orange squares that represent the “Start Point” & “End Point” variables in the component.

If you click on them in the Scene view, you can move them around to line them up with where they should link.

Making it wide

The old OffMeshLink system required you to create a bunch of links (or auto-generated them) so that agents could get across navmeshes at different points.  Without multiple links, you’d end up with the agents walking to the center point to cross.

The new NavMeshLink component has a width variable you can adjust.
Make the link wide enough to cover the connection and the agents will walk right over to the other navmesh without going out of their way.
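From code, making a wide link might look like this sketch, using the NavMeshLink component from the NavMeshComponents package (the local-space endpoints here are placeholders for your own seam):

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch: add a wide link across the seam between two NavMeshSurfaces.
public class WallLinkSetup : MonoBehaviour
{
    private void Awake()
    {
        var link = gameObject.AddComponent<NavMeshLink>();
        link.startPoint = new Vector3(0f, 0f, -0.5f); // local-space endpoints
        link.endPoint = new Vector3(0f, 0f, 0.5f);
        link.width = 4f;           // cover the whole seam so agents cross anywhere
        link.bidirectional = true; // allow travel both ways
        link.UpdateLink();
    }
}
```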

What’s it look like?

The end result of this simple setup is a spider that walks around on walls, jumps back and forth between the wall and floor, and pretty much ‘just works’ with minimal effort.

Of course in a full game, you’d probably want to polish up that transition, maybe with a jump or some animation and IK work, but it’s also possible that the action is so fast and your game is so active that you don’t care and can use it as is.

Making the agent walk – Testing

If you want to play with this yourself, there’s a good script in the navigation package for allowing you to click move.

For my spider I just added the “Click To Move” component.  You should be able to add that to your character and see them walking on the walls or ceilings in no time.

Conclusions

While the new navigation system is awesome, it’s still under development.  Some of the functionality that isn’t quite ready yet could make setting this up very time consuming.  If you want to play around with wall walking, or have a small project where you can setup your navmeshes in a short amount of time, I’d say go for it.  If your project is big, your navmeshes are huge, or you really can’t afford to re-work everything when stuff changes (and I’m sure stuff will change), then you may want to hold off.

Whatever you do though, it’s a ton of fun watching your characters climb walls…  Now I just need to make them jump down from the wall at the player for extra fun 🙂


Unity3D Spectator Mode in VR

If you’ve seen many VR games recently, you’ve probably come across ‘spectator mode’ in at least a few of them.  Games going back to Holopoint and big hits like Rick & Morty both implement their own spectator modes.  Luckily, creating a spectator mode is actually very simple, though there are some things to consider when you do it.

The Spectator Camera

The first thing you’ll need to do is create a new camera in your scene.

Next, take a look at the “Target Eye” field.

Set it to “None (Main Display)”

Now position the camera where you want the spectator view to be.

You can make it a child of the play area, set it in a static position, or create a script that allows the user to move the camera freely using mouse and keyboard.

Adding a generic mouse look/move script from the asset store would give you a free move camera.

What do I show?

Most VR games without a spectator mode or multiplayer functionality don’t show the local player’s body.

The best thing to show, however, isn’t a full body.  Instead, I recommend you show the head and hands only.

As a sample, I created a basic head out of 3 spheres and attached it as a child of the CameraRig’s camera.

When I play now, the spectator camera looks like this.

 

A Problem – The Inside of the Head…

If you copy what I’ve done so far, you’ll quickly realize an issue… right when you put on the headset.

Two big black circles… it’s the eyes!

We have an easy fix though.  Set the head (or body if you use a body), to be on a different layer.

Now select your main VR camera and adjust the culling mask to deselect the PlayerHead layer.

Problem solved!

Now the head and eyes won’t be rendered by the VR camera.
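Both pieces of this setup can also be applied from a script.  A sketch, assuming you’ve created a layer named “PlayerHead” yourself:

```csharp
using UnityEngine;

// Sketch: spectator camera renders to the main display, and the VR camera's
// culling mask skips the "PlayerHead" layer so you never see inside the head.
public class SpectatorSetup : MonoBehaviour
{
    [SerializeField] private Camera spectatorCamera;
    [SerializeField] private Camera vrCamera;

    private void Awake()
    {
        // Equivalent to setting Target Eye to "None (Main Display)" in the Inspector.
        spectatorCamera.stereoTargetEye = StereoTargetEyeMask.None;

        // Hide the head geometry from the VR camera only.
        vrCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("PlayerHead"));
    }
}
```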

 

Other Considerations

It’s very important to note here that adding this spectator view comes at a performance cost.

You’re now rendering for another camera, and depending on your game, this could be a big performance hit.

In general, I’d recommend having spectator mode be OFF by default and allow it for people who want it, but give them a warning that it can impact performance. (Rick & Morty does a great job of this)

Also, if you do setup a spectator mode, go wild on what you allow the player to do..  let them view the game from the sky, behind the player, maybe even as an ant or some select NPCs in your game.. and just make it fun to watch 🙂


Using Static Events in Unity

By Jason Weimann / May 5, 2017

If you read my previous post on events, actions, and delegates in Unity, you may know that I love proper usage of the C# event system for class and object interactions.  Using events/actions gives you a clean, maintainable framework for building your games, and allows for easy extension and expansion of your projects without touching the same files over and over (and breaking things).

Since I wrote that post though, I’ve made two small but important changes to how I’ve been doing things personally.

Events with empty delegates

First, I’ve been defining my events in a slightly different way.

Because Unity doesn’t yet have access to the ?. operator in C#, I started to become frustrated with having to do null checks before invoking every event.  Luckily, a buddy of mine named Kyle shared a simple fix.

Instead of having the event be null, I just assign it an empty delegate in the declaration.

Take a look at this ScoreController class for an example.

On line 6 you’ll see the OnScoreChanged event.

Notice that it’s initialized as an empty delegate that just does… nothing.

So now, when I want to call OnScoreChanged (on line #13), I no longer need to do the null check.
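The pattern looks like this (a sketch mirroring the described ScoreController; the exact member names in the original are assumptions):

```csharp
using System;
using UnityEngine;

// The event is initialized with an empty delegate, so invoking it never
// requires a null check - even with zero subscribers, it just does nothing.
public class ScoreController : MonoBehaviour
{
    public event Action<int> OnScoreChanged = delegate { };

    private int score;

    public void AddScore(int points)
    {
        score += points;
        OnScoreChanged(score); // safe without a null check
    }
}
```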

What about performance??

If you’re freaking out and worried about performance, calm down.  The overhead of making this call to do nothing is so tiny it’s not going to impact your app performance at all.

Static Events

The other thing I’ve started using is static events..  If you’d asked me a year ago, I’d have advised against them heavily.  Part of that is just a dislike for static methods due to my life of dependency injection and unit testing.  But if you’re not unit testing and using a DI framework, static events can give you a great, clean way to know when something happens that you want to react to.

What’s this look like?

On line 6 you can see that I have a static event named OnAnyTookShot.

The invocation of this event actually passes in the current instance of the Shootable that took the shot.

Why??

If you take a look at my DecalController script, you can see that I use the static event to have the decal controller get notified whenever ANY Shootable takes a hit.

This way, my decal controller doesn’t need to know what Shootables exist and doesn’t need to register for multiple events; it still gets the shot event and lets me spawn a decal where it needs to be.
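The embedded scripts aren’t shown here either, so this is a sketch of how the static event and its subscriber might look (TakeShot, HandleAnyTookShot, and the decal logic are illustrative placeholders):

```csharp
using System;
using UnityEngine;

public class Shootable : MonoBehaviour
{
    // Static event: fires when ANY Shootable takes a shot,
    // passing in the instance that was hit.
    public static event Action<Shootable> OnAnyTookShot = delegate { };

    public void TakeShot()
    {
        OnAnyTookShot(this);
    }
}

public class DecalController : MonoBehaviour
{
    private void OnEnable()
    {
        Shootable.OnAnyTookShot += HandleAnyTookShot;
    }

    private void OnDisable()
    {
        Shootable.OnAnyTookShot -= HandleAnyTookShot;
    }

    private void HandleAnyTookShot(Shootable shootable)
    {
        // Spawn a decal at the hit Shootable's position - the controller
        // never needs a direct reference to any individual Shootable.
        Debug.Log("Spawning decal at " + shootable.transform.position);
    }
}
```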

If you want to learn more about the decal controller, I’ve written a bigger post focused on it here.

Warning

One thing you want to be sure to do is unsubscribe from these events on destroy.

If you don’t, the instance won’t be able to be garbage collected because of that lingering event registration.
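Assuming a Shootable class with a static OnAnyTookShot event as described above, the unsubscription is one line in OnDestroy (ScoreDisplay here is a hypothetical listener):

```csharp
using UnityEngine;

public class ScoreDisplay : MonoBehaviour
{
    private void Start()
    {
        Shootable.OnAnyTookShot += HandleAnyTookShot;
    }

    private void OnDestroy()
    {
        // Without this, the static event holds a reference to this
        // instance, preventing it from ever being garbage collected.
        Shootable.OnAnyTookShot -= HandleAnyTookShot;
    }

    private void HandleAnyTookShot(Shootable shootable)
    {
        // React to the hit here.
    }
}
```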

Wrap Up

These are just some minor points I wanted to share.  If you haven’t been using events & actions in your code, I highly recommend you take a look at my previous post on the subject here: https://unity3d.college/2016/10/05/unity-events-actions-delegates/

Using these two additional techniques and this slightly different format will take your code to another level of simplicity and make your projects a bit easier to maintain and grow.

Also, if you have some great suggestions (or some questions) about events in Unity, please drop a comment below.


Vision Summit Roundup

By Jason Weimann / May 4, 2017

This week was Vision Summit 2017.  If you’ve never been to a Vision Summit, it’s a great VR/AR event held in Hollywood every year, hosted by Unity.  The event is one of my favorites, with only Unite outshining it.

Today, I want to share some of the things I found most valuable at the summit.

Keynote

The keynote was semi-interesting, but didn’t reveal a whole lot of new stuff.  I think that’s mostly because the Unity team is already so public about what they’re doing that there really aren’t too many secrets to let out at these events.

There does seem to be a push by Microsoft, though, with the new budget VR headsets.  In fact, they even decided to follow Valve’s lead and give out free Acer VR headsets to all attendees.

I’m not sure where these headsets will settle in the market.  We’ll have to wait and see how much demand there is for budget headsets.

Sessions

Sessions at Vision Summit are typically really high quality.  They’re also very short, mostly ranging from 30 to 45 minutes, which isn’t a lot of time to cram in all the great knowledge to be shared.

However, even with the tight time constraints, there were some really great ones that everyone should look for replays of.

NVIDIA – VRWorks/Ansel

This talk was great and covered some of the work the teams at NVIDIA are doing to make it easier to get great performance in your VR games.  I even liked the Ansel part enough to write a separate post just about that asset.

Lessons From Oculus

Oculus is of course a great leader in VR, and the information presented in this talk was as good as you’d expect.  The main focus was on getting a real sense of presence in your VR games.  With simple tricks like the hand test, it was easy to take away quite a bit of value.

The hand test involves putting on the headset and touching your virtual hand with your other real hand, and seeing if the part you expect to touch is where you actually hit.

Vive Trackers

The Vive tracker talk was labeled as a ‘best practices’ session but really didn’t cover much of that.  It did reveal a bit more about HTC’s plans for the trackers though, and those plans sound great.  While there weren’t too many details spilled, it sounds like retail versions of the trackers, and some accessories, will be available sometime ‘soonish’.

I really love the trackers, so even that little bit of info was enough to make me leave happy.

Expo Hall

The expo hall was surprisingly empty.  At the previous Vision Summits, this room was packed, so maybe that’s why they scaled it back a bit.  Microsoft had another cool HoloLens demo in there, and Oculus was in full force showing off the new GearVR controllers.  While I’m excited for the new GearVR, I really wish the controller had positional tracking like the Daydream one.

OTOY also had a really interesting demo of their new light field tech utilizing the new Facebook cameras.  I’m not sure when or how I’ll use it, but I feel like the tech is cool enough that I should find a way.

 

Demos

The demo section of the Summit is always full of small to mid size studios with their recently released or in-development titles.  Normally, I go through them all and have a blast talking to all the developers.  This time though, the lines and crowds were LONG.  I don’t know if it was because they let too many people into the demo area at once, or if my timing was just terrible, or perhaps attendance was just a lot higher than before, but I didn’t get to actually try much.  The games I did try were pretty fun though, and the chats with the developers were still the best part.

 

Location

The Summit has been held at the Loews Hollywood every time, and the location is amazing.  It’s also really expensive, but there are a ton of food options and the hotel is one of the nicer ones I’ve been to.  As you can see in the cover photo, the reception party is held at the pool, and my room just happened to be positioned with the perfect view.

 

Conclusions

Overall, I had a great time at the Vision Summit, though I did think the previous ones were a bit better (hell, they gave out free Vives at the 2016 summit).  I’m still very happy to have attended, got to meet a lot of new developers, and caught up with many others I’d met in past years.  If you find yourself in a position to attend next year, I’d definitely give it serious consideration (though again, I’d take Unite over it if I had to pick only one).

Big Thumbs up!


Taking high resolution and 360 screenshots in Unity with Ansel

By Jason Weimann / May 3, 2017

Taking HD Screenshots in Unity

One of my favorite sessions from Vision Summit 2017 was presented by NVIDIA about VRWorks and Ansel.  I was really impressed with the ability to create high-definition screenshots in Unity using Ansel.  It seemed amazingly simple to set up, and the example images I saw were breathtaking.  The ability to also take 360-degree screenshots AND 360-degree 3D screenshots for VR games set it over the top though.  So when I got home, I started playing with it and wanted to share the results.

Requirements & Setup

Ansel is an NVIDIA tool, so it only works for people with an NVIDIA card.

If you have one though, setup is simple.

Drivers

First, make sure you have drivers that support it.

If your driver version is 368.81 or higher, you should be good.

You can get the most recent NVIDIA drivers here: http://www.nvidia.com/Download/index.aspx

Asset Pack

Once you have the drivers installed, you’ll need to add the plugin from the Asset Store.

 

The Script

Once you have the asset pack imported, you simply need to add the Ansel script to your camera.

If you’re doing this for VR using the SteamVR [CameraRig] prefab, make sure to attach it to the child camera (Camera (eye)), not the parent.

The Command Line to Enable It

Before you can start taking screenshots though, you’ll need to run the following command to ‘whitelist’ everything (unless your app is already whitelisted).

NvCameraEnable.exe whitelisting-everything

You can find the executable in your NVIDIA install folder.

Mine is in C:\Program Files\NVIDIA Corporation\ansel\Tools

Activating & Taking Screenshots

To activate Ansel, create a build, run your build, then press Alt-F2.

You’ll have a nice dialog appear with a variety of options for taking and adjusting your screenshots.

Movement

While you’re taking screenshots, you can move the camera around!

Use W,A,S,D & X,Z to move, and hold left mouse to rotate.

Results

I wanted to share an image of the super sized screenshots.

Click on the image to view the full size version.

VR Issues

My main use for Ansel is to take great screenshots of VR games.

Unfortunately, it’s a bit more complex than I’d expected, with a few issues that popped up for me.

Luckily, those can be overcome pretty easily.

  1. Single Pass Rendering – This is not supported at the time of this writing.  It does appear to be coming soon, but from what I’ve seen, screenshots with SPS enabled do not appear to work properly.
  2. VR Support & Camera Movement – If you have “VR Supported” checked in player settings, you won’t be able to move the camera around.  In my case, I was able to un-check the option and still take my screenshots, though I’d really prefer being able to move it without breaking VR mode.

Overall Conclusion

I really like Ansel and what they’re doing to make 360 & VR media easier to share.  Right now, it’s a bit rough, but I fully anticipate that to be resolved with the 2017.1 version.  NVIDIA has some great engineers and Ansel is just a small part of what they’re providing on top of all the awesome hardware (VRWorks looks amazing).
