
Jason Weimann

Using the HTC Vive Tracker POGO Pins in Unity3D

If you have an HTC Vive Tracker, you may have already hooked it up and started tracking objects in Unity3D.  That part isn’t too hard, and it’s a lot of fun.  But you can also flip that Vive Tracker over and start using the POGO pins on the back.  The pins are pretty easy to use and give you access to the same buttons you’d have on a normal Vive wand and even include an output for haptic feedback.

HTC Vive Tracker Documentation

To get detailed info, HTC has a guide here: https://dl.vive.com/Tracker/Guideline/HTC_Vive_Tracker_Developer_Guidelines_v1.3.pdf

Video Version

Setting up the HTC Vive Tracker in Unity3D

I’ve gone over the tracking part before, so here I just want to share the steps involved to get going.

First, you need your tracker in-game.  To keep it simple, we’ll use the CameraRig prefab.

Drop a [CameraRig] into an empty scene (and delete the existing Main Camera).

Under the CameraRig, add an empty gameobject and name it “Tracker”.

Add the SteamVR_TrackedObject component to the “Tracker” you’ve just created.

Select the [CameraRig].

Drag the “Tracker” to the Objects array on the SteamVR_ControllerManager.

Turn on both controllers and press play.

Move the tracker around, if you see it move in-game, you’re good to move on to the next part.

Reading Vive Tracker POGO Pins

To show how to read the inputs, I’ve created this example script.  Put it in your project, then add it to the “Tracker” object.

You can grab the example script from this gist: https://gist.github.com/unity3dcollege/6b097fb4163abf6e6d36b33ff0d48776
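In case the embed doesn’t load, here’s a sketch of what such a script can look like, assuming the legacy SteamVR 1.x Unity plugin (this is illustrative, not the exact gist contents):

```csharp
using UnityEngine;
using Valve.VR;

// Reads the pogo pin inputs through the legacy SteamVR 1.x plugin.
// Attach this next to SteamVR_TrackedObject on the "Tracker" gameobject.
[RequireComponent(typeof(SteamVR_TrackedObject))]
public class ViveTrackerInput : MonoBehaviour
{
    private SteamVR_TrackedObject trackedObject;

    public bool Trigger { get; private set; }
    public bool Grip { get; private set; }
    public bool Trackpad { get; private set; }
    public bool MenuButton { get; private set; }

    private void Awake()
    {
        trackedObject = GetComponent<SteamVR_TrackedObject>();
    }

    private void Update()
    {
        // No device assigned yet - nothing to read.
        if (trackedObject.index == SteamVR_TrackedObject.EIndex.None)
            return;

        var device = SteamVR_Controller.Input((int)trackedObject.index);

        // Each pogo pin reports the same button id a Vive wand would.
        Trigger = device.GetPress(EVRButtonId.k_EButton_SteamVR_Trigger);
        Grip = device.GetPress(EVRButtonId.k_EButton_Grip);
        Trackpad = device.GetPress(EVRButtonId.k_EButton_SteamVR_Touchpad);
        MenuButton = device.GetPress(EVRButtonId.k_EButton_ApplicationMenu);
    }
}
```

With this running, shorting a pogo pin to ground should flip the matching field to true in the inspector.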

Start playing again and make sure the Tracker is still moving (remember both controllers need to be on too).

Now let’s take a look at the image from the official documentation.

If you’re not familiar with electronics, don’t worry, this one’s pretty simple.

All you need to do is make a connection from GND (Pin 2) to whichever pin you want to trigger.

You can do this with a single wire, or ideally hook it up to a switch that’s attached to your physical device.

To test this, simply touch pin 2 & pin 4 with the same wire, and you’ll see the “Trigger” field set to true.

Pins

  • 2 – Ground
  • 3 – Grip
  • 4 – Trigger
  • 5 – Trackpad
  • 6 – Menu Button

Hooking up Hardware

If you’re not sure what to use, try digging out an old electric nerf gun like this: http://amzn.to/2uDRI9b (or find a broken used one on craigslist for free/cheap)

Rip it apart and hook up the wires coming from the trigger to the tracker’s pogo pins.

There are a few different adapters out there you can order/print, like this: https://www.thingiverse.com/thing:2127180 – It’d probably be a good idea to have some sort of adapter in there to make pin access easier…


Unity UGUI – HorizontalLayoutGroup, VerticalLayoutGroup, & GridLayoutGroup.. and LayoutElement

By Jason Weimann / August 11, 2017

When building a UI with Unity3D’s UGUI system, it’s usually important to make your interface scalable and easy to extend.  There are also many cases where the number of items in your UI is dynamic.

The components I’ll cover here are all designed to make layout easier.

LayoutElement

Before we dive into the different layouts, we need to talk about the LayoutElement.  Without this component on your UI children, these UI elements aren’t going to do what you want.

The LayoutElement has a couple of options, though you don’t have to use any for layouts to work.

The properties are used in the following manner when a layout controller allocates width or height to a layout element:

  • First minimum sizes are allocated.
  • If there is sufficient available space, preferred sizes are allocated.
  • If there is additional available space, flexible size is allocated.

What this means is that when the layout group is laying out your UI, it’s going to first allocate enough space for every element to have its minimum size.

After all the elements have met their minimum sizes, it’ll try to expand them out to their preferred sizes.. if there’s not enough space, the remaining space is allocated proportionally.  It calculates how much extra space each object wants and gives every object the same percentage of what it asked for.

Example: We have two buttons in a group, each with no minimum set.

Button 1 has a preferred height of 100.

Button 2 has a preferred height of 300.

The panel they’re in has a size of 100.

Button one will be scaled to 25 and button two will be scaled to 75.

Example: With Minimum Heights

If the layout elements also have a minimum height, this calculation will be done AFTER the minimum height is assigned.

Button 1 has minimum height of 20, preferred height of 100.

Button 2 has minimum height of 20, preferred height of 300.

Panel size is still 100.

Button one will be 33 and button two will be 67.
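To make the allocation rule concrete, here’s a small illustrative sketch of the math (not Unity’s actual implementation, just the same idea in plain C#):

```csharp
using System;

public static class LayoutMath
{
    // Divides "available" space: minimums first, then the remainder is
    // handed out as an equal fraction of each element's remaining
    // "want" (preferred - minimum).
    public static float[] Allocate(float available, float[] min, float[] preferred)
    {
        int n = min.Length;
        var result = new float[n];
        float remaining = available;
        float totalWanted = 0f;

        for (int i = 0; i < n; i++)
        {
            result[i] = min[i];                   // step 1: minimum sizes
            remaining -= min[i];
            totalWanted += preferred[i] - min[i];
        }

        if (totalWanted <= 0f)
            return result;

        // step 2: same percentage of each element's want
        float fraction = Math.Max(0f, Math.Min(1f, remaining / totalWanted));
        for (int i = 0; i < n; i++)
            result[i] += (preferred[i] - min[i]) * fraction;

        return result;
    }
}

// Allocate(100, new[] { 0f, 0f },   new[] { 100f, 300f }) → { 25, 75 }
// Allocate(100, new[] { 20f, 20f }, new[] { 100f, 300f }) → ≈ { 33.3, 66.7 }
```

Both examples above fall out of the same rule: with no minimums the split is 25/75, and with minimums of 20 the split becomes roughly 33/67.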

Flexible Width & Height

The flexible width and height boxes look like toggles… they accept a numeric value, but the editor checkbox only switches between 0 and 1.  (Values above 1 are valid too, and act as relative weights when multiple flexible elements share the leftover space.)

When it’s set to 1, the element can expand beyond the preferred setting if the layout controller requests it.

An element set to 0 (or unchecked) will not be used to fill the layout.

LayoutGroups

Now that we’ve covered the LayoutElement, it’s time to look at the different layout group components that can control these elements.  There are three types available: HorizontalLayoutGroup, VerticalLayoutGroup, and GridLayoutGroup.  The names should give away their layout strategies..

HorizontalLayoutGroup

The Horizontal Layout Group can automatically relocate and resize your UI components on a horizontal plane (left to right).  The first child will be on the left and the last child will be on the right.

By default, the layout groups are set to force children to expand and fill in the group’s area.  Sometimes that’s the behavior you want, but if you’re using layout elements you may want to un-check those boxes (or at least the box relevant to the layout’s direction).

For example, on a horizontal layout group, I’ll often uncheck Child Force Expand – Width, and leave Height checked.  This will make the elements expand to fill the height, but lets me keep finer-grained control over the width using my LayoutElement components.

It’s also important to check Child Controls Size on the width here, so the layout elements actually gain control over their horizontal size (width).

Here you can see the difference.  When I check the width box, the children expand out to fill the entire panel.  With it unchecked, they’re using the width specified in the LayoutElement components.

Child Alignment

This setting allows you to specify where in the panel the objects will start their layout from.  The position you choose depends on your specific UI part, but I tend to default horizontal layout groups to middle left or middle center most of the time.

Padding & Spacing

The padding settings allow you to add a margin between the elements and the edge of the panel the layout group is on.  It’s not always needed, but I usually choose a standard padding value for the UI controls.  It’s typically between 10-20, but again it totally depends on your layout.  My only recommendation is that you choose a value and standardize it as best you can.

Spacing is used between elements.  It’s almost always set to something other than 0 in my projects to give a little separation between controls.  If you find yourself over-sizing elements in your UI to add a little space.. try switching to using the spacing variable instead.

VerticalLayoutGroup

The vertical layout group is almost exactly the same as the horizontal.. the only difference is the orientation.  Because of that, I won’t go into detail as I’d just be repeating myself.  Just imagine everything above with height and width swapped, and you’re done 🙂

GridLayoutGroup

Grid Layouts are a bit different.  The GridLayoutGroup is used to create a standard grid of rows and columns.  It’s a bit like a table in HTML or a grid in many other systems (WPF comes to mind).

Instead of relying on the layout elements to determine size, we specify a cell size on the GridLayoutGroup itself.

In fact, you don’t need LayoutElements at all to use the GridLayoutGroup.

For the grid to work, you need to choose both the X & Y size of the cells (width and height).

Constraint – Flexible

By default, the GridLayoutGroup is set to Flexible.  This means the grid will automatically determine the # of rows and columns based on the height and width of the panel.

Example

Your cell size is 100 x 100 and your panel is 300 x 300.

You have 5 children of the grid.

If your Start Axis is Horizontal, you’ll have 3 columns and 2 rows.

If Start Axis is set to Vertical, you’ll have 3 rows and 2 columns.

Constraint – Fixed

You can also choose to specify the column count using the “Fixed Column Count” or “Fixed Row Count” settings for Constraint.

These do what you’d expect and set hard values for the number of columns or rows.
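If you need to configure the grid from code, the same settings are exposed on the component.  A minimal sketch using the UnityEngine.UI API (the cell size and column count here are just example values):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Configures a GridLayoutGroup from script instead of the inspector.
[RequireComponent(typeof(GridLayoutGroup))]
public class GridSetup : MonoBehaviour
{
    private void Awake()
    {
        var grid = GetComponent<GridLayoutGroup>();
        grid.cellSize = new Vector2(100, 100);
        grid.spacing = new Vector2(10, 10);

        // Fixed constraint: always 3 columns, rows grow as children are added.
        grid.constraint = GridLayoutGroup.Constraint.FixedColumnCount;
        grid.constraintCount = 3;
    }
}
```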

Conclusions

With these layout objects, you can quickly build a UI that’s easy to adjust & automatically fits to your resolution.  You can also stack these layout groups inside each other.  For example when I build a standard window with a header and body, I’ll start with a VerticalLayoutGroup, then have children under it that have HorizontalLayoutGroups on them.  If you’re building an interface with UGUI, spend a little time with these components, get comfortable with them, and your experience will definitely be a lot better.


How to use the Unity 2017 SpriteMask component & create a seamless scrolling background

By Jason Weimann / August 10, 2017

Unity3D SpriteMask & Infinite Scrolling Background

The SpriteMask component in Unity3D 2017 makes it easy to show a sprite through another’s alpha channel.  In this video, I go over the steps required in two minutes.

Once you have the SpriteMask setup, we can make the window scroll infinitely along.. giving the feeling that you’re riding on a high class train.
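The video doesn’t include a written listing, but a minimal scrolling script might look like this (one possible approach; the speed and tile width values are illustrative):

```csharp
using UnityEngine;

// Scrolls a seamless background sprite and wraps it around once a full
// tile width has passed, so it appears to scroll forever.
public class ScrollingBackground : MonoBehaviour
{
    [SerializeField] private float scrollSpeed = 2f;
    [SerializeField] private float tileWidth = 20f; // world-space width of one seamless tile

    private Vector3 startPosition;

    private void Start()
    {
        startPosition = transform.position;
    }

    private void Update()
    {
        // Mathf.Repeat keeps the offset in [0, tileWidth), so the sprite
        // snaps back seamlessly instead of drifting away forever.
        float offset = Mathf.Repeat(Time.time * scrollSpeed, tileWidth);
        transform.position = startPosition + Vector3.left * offset;
    }
}
```

Because the tile is seamless, the snap back to the start position is invisible to the player.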

Here’s the video showing the steps required.

Resources

I found the background image on OpenGameArt.org here: https://opengameart.org/content/seamless-hd-landscape-in-parts

The train window came from a google image search here: https://www.google.com/search?q=train+window&tbm=isch&source=lnt&tbs=ic:trans&sa=X&ved=0ahUKEwjy_6nY0c3VAhVG8WMKHR82B2IQpwUIHw&biw=2048&bih=1137&dpr=1.25


Opinion: Where to put Unity3D components on your gameobjects

By Jason Weimann / August 8, 2017

Last night, I caught myself doing something bad while prototyping..  I was adding a script component on a child.. deep in the hierarchy.  In fact, I added the script, played around a bit, then stepped away for a minute to play some Overwatch with my 9yr old.

When I came back, I went looking for the component and my hand instinctively floated toward my forehead… (I’ve been re-watching TNG to fall asleep this week)..

What’s wrong with that?

You may already realize the problem.. or you may be wondering why it matters?  Why is this bad?

In this case, it may have been ‘okay’ since it’s just me prototyping something out, but it’s a terrible habit to get into.

The reason for this is a complete lack of visibility.  Having components this low down makes them practically hidden.  Of course you can find them, if you know where to look, and if you even remember that you need to look..  But the odds that you’re going to remember which child of a child of a child the component for ice skating marks is on are pretty damn low.  Especially if you do this as a common practice and have components spread all throughout your hierarchy.

I’ve worked on dozens of Unity projects, and I’ve seen this mistake enough times to realize how painful it can be.  And of course, I’ve done it myself countless times..

It starts out simple..

“I just need to add this component here…”

“It needs to know where the hand is, so I’ll put it on the hand..”

But quickly the hand starts getting more components.. next thing you know, each finger has a component, on each character, and if one of them is missing, something’s gonna break.

Or I’ll need to change the timing on some character’s attack, but that’s buried 4 layers deep on an AI object, or under that AI object on an AI-AttackSettings object..

And once you have to share with someone else.. game over..  they’re gonna struggle, you’re gonna struggle, and everyone will wish these components were easier to find.

Where should the scripts be?

I like to keep them at level 1 or 2.  Depending on the object setup, there are situations where level 2 makes more sense, but I try to default to keeping them at the root of the gameobject/prefab.  Often these gameobjects are getting nested under some global root (like my [Players] above), so keeping them on the first level of the prefab keeps them easy to find.

Are there any exceptions?

As with anything, there are of course some exceptions.  The main one that comes to mind is with colliders.. if you want to register for an OnCollision… method, you’ll need the component where the collider is.  In that case, I’d just make sure that there aren’t any serialized fields on the component and instead try to keep those edited fields further up the hierarchy.

What if I need to know where the foot, hand, other random thing is?

You may be thinking.. I need the transform of this object.  I need the gun to fire from this position.. or this hand needs to hold the objects..

Not a problem.  With the component on the root object, add a serialized field for the transform you need, and assign the child there.  Now when you look at the root, you can see everything that’s going on with your gameobject in one place.  You’ve reduced the cognitive load required to use that gameobject, made your project a bit cleaner, and your work a bit easier.
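As an illustration, the pattern looks like this (the Gun and muzzle names here are hypothetical examples, not from a specific project):

```csharp
using UnityEngine;

// Lives on the ROOT of the prefab. The deep child it needs (the muzzle
// transform) is exposed as a serialized field and assigned in the inspector,
// so everything the gameobject does is visible in one place.
public class Gun : MonoBehaviour
{
    [SerializeField] private Transform muzzle;       // a child several levels down
    [SerializeField] private GameObject bulletPrefab;

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
            Fire();
    }

    private void Fire()
    {
        // The script never has to live on the muzzle itself.
        Instantiate(bulletPrefab, muzzle.position, muzzle.rotation);
    }
}
```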

Agree / Disagree?

Like I said above, this is all my personal opinion, based on my experiences.. and of course there are always exceptions..  but as a general rule, I think it’s an ideal practice.  If you have thoughts on this though, please drop a comment and share with everyone.


How to hook up & use Unity UGUI UI Buttons in code or the inspector

By Jason Weimann / August 8, 2017

Most games need a UI, and most UI’s need buttons!  In this article, I’ll show you how to use the Unity3D UGUI Button OnClick events.  We’ll do it in the inspector and through code with onClick AddEventListener.  We’ll change the text and color of a button, and talk a bit about when you should do it in code vs the inspector.

Video Version

If you prefer video, you can watch everything in this post in video format here: https://www.youtube.com/watch?v=-bc7Ut8ijd4

Hooking up the Unity3D UGUI Button

To get started, we’ll need a button.  In an empty project, add a Button through the GameObject->UI->Button menu.

With the button selected, you’ll see the OnClick event section in the inspector.

To add an event, click the plus button.

Drag the Text child of the button onto the object field.

For the Function, select the Text.text option.  This will allow the button click to change the text shown by the text object.

The field below the function will become editable.. type in the word “CLICKED”.

Press play and watch your text change as you click it.

Calling Custom Code on the UGUI Button

The built in functions can be useful, but often you’ll want to call your own code on a button click.

Create and add a new script and name it ChooseRandomColor.cs

This class exposes one method on it named ChangeImageColor.  The method will get our Image component (on the button) and set the color to a randomly chosen one.
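The embedded code isn’t shown here, but based on that description the class looks something like this (a sketch; the original may differ slightly):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Attached to the button. Exposes one public method for the OnClick
// event to call from the inspector.
public class ChooseRandomColor : MonoBehaviour
{
    public void ChangeImageColor()
    {
        // Grab the button's Image component and pick a random color.
        var image = GetComponent<Image>();
        image.color = new Color(Random.value, Random.value, Random.value);
    }
}
```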

With this component on the button, we can assign it to an OnClick event and have the button change colors.

Add another event to the button by clicking the plus button.

Drag the ChooseRandomColor component onto it (you can drag from the bold component name in the inspector, or alternatively drag the button from the hierarchy)

For the function, select ChooseRandomColor->ChangeImageColor()  – This is selecting the ChangeImageColor method of our ChooseRandomColor component.

Press play again and watch your button change colors as you click.

Hooking up the button in code instead – AddListener

There’s another way to hook up our events.. we can do it in code.  Sometimes it makes sense to assign events in the inspector.  When we want to hook into other gameobjects, like the child text object, or we want to do something simple like toggle the GameObject Active/Inactive…

But other times, it’s better to do the assignment in code.  This makes it easier to figure out what’s calling the method.. making “Find References” work, for example.  When all your events are hooked up in the editor, it can be hard to tell what’s actually used, and what it’s used by.  I’ve seen plenty of projects where it took hours of digging through the hierarchy to figure out what’s calling all of the code.. and to know what’s unused and safe to delete.

Luckily, hooking up these events in code is easy to do as well.

Edit the ChooseRandomColor.cs file like this.
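The code embed is missing here; the edited file looks roughly like this (a sketch consistent with the explanation that follows):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ChooseRandomColor : MonoBehaviour
{
    private void Start()
    {
        // Register for the click event in code instead of the inspector.
        // Note: no parentheses after ChangeImageColor - we pass the method itself.
        GetComponent<Button>().onClick.AddListener(ChangeImageColor);
    }

    // Private now - nothing outside this class needs to call it.
    private void ChangeImageColor()
    {
        GetComponent<Image>().color = new Color(Random.value, Random.value, Random.value);
    }
}
```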


Here, we’re using the Start() method to register for the Button’s onClick event.  We do this with the AddListener method, where we assign our own ChangeImageColor method.  Notice that we don’t put parentheses after ChangeImageColor when passing it in; if you accidentally do, you’ll get a compiler error (it happens to all of us 🙂)

I’ve also changed the ChangeImageColor method to be private.  We don’t need anything outside the class calling this method, so it should be marked private to keep it properly encapsulated.

Back in the editor, remove the 2nd event.  To do this, left click it and it’ll highlight, then click the minus button.

Press play again and everything will work the same.. but now you have references in code for your method calls.


Build a Responsive Unity3D UGUI Expandable Window with HorizontalLayoutGroup and ContentSizeFitter

By Jason Weimann / August 6, 2017

If you’ve never used the Unity3D ContentSizeFitter and HorizontalLayoutGroup components, they can be a little frustrating to get under control the first time.  Even after using them a few times, I still occasionally forget a step or checkbox here and there.  In this article I’ll show you how to use the Unity3D ContentSizeFitter and HorizontalLayoutGroup components to build a responsive scalable UI window.  We’ll add the ability for it to expand and collapse at runtime.  And the window will scale correctly across different resolutions with no extra work.

Video Version

Building The Window

To start, you’ll need a Canvas.  Your canvas needs a CanvasScaler component as well so it can automatically adjust the size based on your resolution.

The root of our window is created from a rect transform with a ContentSizeFitter added to it.

The HorizontalFit and VerticalFit should be set to “Preferred Size”.

Name the window “Collapsable Window”

Window Contents

The Collapsable Window will have a single child named “Window Contents”.

This object has the following components.

  • VerticalLayoutGroup – With ChildControlsSize checked and Child Force Expand Checked.
  • ContentSizeFitter – With Vertical Fit set to “Preferred Size”

Adjust the width of the “Window Contents” to 500.

Make sure the Anchor is set to top left.

Window Content Sections

Under window Contents, we’ll create 3 children.  These will represent different parts of the window.  The first two are mandatory for the window to work right, while the third is an optional footer.

Header

The first child is our Header.  The header will have a bar to look like a typical window would, along with an expand/collapse button, and finally some header text.

Create the header by adding an Panel as a child of the “Window Contents”.  Select an image you want to use for your background or keep the default and adjust the color.

Next, add a LayoutElement and set the minimum and preferred height both to 80.

Check the FlexibleHeight box and enter a value of 0.  This will prevent the header from resizing to fill in extra space.

FlexibleHeight is typically set to a value of 0 or 1, with 0 meaning do not resize, and 1 meaning resize to fill extra space as needed

Buttons (Expand and Collapse)

Our header is going to have two buttons, one to expand, and another to collapse.  The user will only ever see one of these buttons at a time, based on the state of the window.

Create a button and name it “Expand Button”.

Delete the text object child, and change the image to be a downward facing arrow (or whatever other icon you want for expanding).

Anchor the button to the left and set the Pos X to 20.

Duplicate the button and name the copy “Collapse Button”.  Flip or replace the image.

You can flip the image by changing the Y scale to -1.

On the expand button, add an OnClick event, assign the “Expand Button” to it and select GameObject.SetActive with the checkbox Unchecked.

Add another event, but this time drag the “Collapse Button” to it and select GameObject.SetActive with the checkbox Checked.

Collapse Button

Select the Collapse Button and create the same events, but reverse the check state.

We’ve made a toggle!  Uncheck the Expand Button’s active box (top left of the Inspector), and press play.  You should be able to toggle between the two buttons.

Header Text – (optional)

Before we finish the header, we’ll want to add some text..

Create a text object and anchor it to the right.  Set the “Right” value to 20 so it has a little margin from the edge.

On the Text component, check the “Best Fit” box and set the color / font as you like them.

You could also use TextMeshPro for your text.  If you haven’t tried it out yet, check out my post here.. it’s great..

The Body

Now that we have the header and our expand button, we need something for it to actually show.

Create another Panel child on the “Window Contents”.  It should be a sibling of the Header.

Name it “Body”

Look at the Image component – This is going to be the background image for our window.  The ImageType must be set to Sliced.

If your ImageType is set wrong, you may see the window expand to a giant size; this happens because it’s attempting to match the sprite’s resolution.

Add a ContentSizeFitter, with HorizontalFit left at “Unconstrained” and VerticalFit set to “Preferred Size”.

You’ll notice a warning on the ContentSizeFitter.  This is expected and normal behavior so don’t worry about it.

Add a VerticalLayoutGroup.  Set the Left, Right, Top, and Bottom to 10.

Check all 4 checkboxes for Width & Height.

Body Contents

Under the “Body” add a button, a slider, some text, and whatever else you’d like in the window.

What you add is totally up to you, but make sure you add something or your window body won’t have anything to show and won’t be visible.

For each child of the “Body”, add a LayoutElement and set the Min Height and Preferred Height to values that make sense for your component.  In my test project, I set them all to values ranging from 40-60.

Leave the width boxes & flexible boxes unchecked.

The Footer

It’s time to add the footer.  This is optional but I like to have it on the default window and disable it when we don’t want to use it.

Create another Panel under the “Window Contents” object.

Add a LayoutElement and set the Min Height & Preferred Height values to 30.

Set FlexibleHeight to 0.

Select an image for your footer background and add some text if you like.

More Events

It’s time to jump back to the buttons.  Right now, they toggle each other, but we want them to also toggle our “Body” & “Footer” objects.

Add two onClick events to the “Expand Button”.  Drag the Body to one, and the Footer to the other.

Select GameObject.SetActive and check the box.

Collapse Button Events

Repeat this process for the “Collapse Button”, but this time with the box unchecked.
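If you’d rather wire the same behavior in code instead of stacking inspector events, a sketch like this works too (the field names match the objects used in this article, and are assigned in the inspector):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Alternative to the inspector-driven OnClick events: one component on the
// window that toggles the buttons, body, and footer together.
public class WindowToggle : MonoBehaviour
{
    [SerializeField] private Button expandButton;
    [SerializeField] private Button collapseButton;
    [SerializeField] private GameObject body;
    [SerializeField] private GameObject footer;

    private void Start()
    {
        expandButton.onClick.AddListener(() => SetExpanded(true));
        collapseButton.onClick.AddListener(() => SetExpanded(false));
        SetExpanded(true); // start expanded
    }

    private void SetExpanded(bool expanded)
    {
        body.SetActive(expanded);
        footer.SetActive(expanded);

        // Only one of the two buttons is ever visible at a time.
        expandButton.gameObject.SetActive(!expanded);
        collapseButton.gameObject.SetActive(expanded);
    }
}
```

Keeping all four references on one component also makes the window’s behavior visible in a single place, rather than spread across several OnClick lists.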

Project Files

You can download the project example here as an asset pack.  It includes a single scene with the window setup similar to what’s shown here (without any purchased assets included)

https://unity3dcollegedownloads.blob.core.windows.net/vrgames/CollapsableWindow.unitypackage


Create & Customize your Unity3D Splash Screen, Icons, and application config dialog

By Jason Weimann / August 3, 2017

If you’re building a Unity game, or even a business application, eventually you’ll end up doing a build.. and when you do, you want to make it look as polished as possible.  One way to add a little extra touch of shiny goodness is with custom splash screens and icons.  Adding custom Unity3D splash screens and icons only takes a few minutes.  So before you release your game to itch.io or Steam.. or send your completed app over to your client, make sure you spend a few minutes and make it look professional.  In this article, I’ll show you the options available, what images you need, at what sizes, and where to put them.

Splash Screens

The new Unity3d splash screen system is really flexible and easy to use.  You can easily add more images to the intro than you’d ever want.  I added 10 before I quit clicking the button.. who knows if there’s a limit.

To customize your splash screen, you need to open the “player settings” window.

Once it’s open, expand out the “Splash Image” section.

If you’re using the “Personal Edition” of Unity, the “Show Splash Screen” option cannot be turned off.  This is because Unity forces the “Made with Unity” branding on games built with the personal edition.

Take a look at the Logos section.  It starts off empty.

To add a screen, click the + icon.

Next you can select an image.  But the image must be a Sprite.

Image Settings

If your image is not set to a sprite, you can change that by selecting the image in your project view and changing the “Texture Type” to “Sprite (2D and UI)”.

You’ll also want to change the mesh type to “Full Rect“.. if you don’t and your image has transparency, it’ll be clipped on import and look stretched out when it shows in the splash screen.

What resolution should my Unity3d splash screen image be?

This depends on your target devices.  If you’re aiming at devices displaying 2560×1440 (most new phones and monitors), you can of course have your splash image match that resolution.  If you’re worried about install size though, you can always shrink it down to 1920×1080 and it’ll look just fine.  If you do go with 2560×1440 though, make sure you adjust the “Max Size” on the import settings to 4096 or your image will be re-scaled down and you’ll lose that extra resolution.

Your image can be transparent or opaque, the choice is up to you.  If you go with an opaque image, you’ll probably want to have your splash screen settings set to sequential, or it’ll look a bit weird… (we’ll get to that setting in a minute)

Draw Modes

You’re given two options for draw mode.  You can either choose “Unity Logo Below” or “All Sequential”.

In Unity Logo Below mode, your icon will take up the upper 50% of the screen and the Unity logo will be below it.  Each additional logo you add will also be shown this way.

Unity Logo Below

Sequential mode will show your logos full screen.

For the animation settings, you can choose “Static”, “Dolly”, or “Custom”

The animation that plays is your logo growing..  in Dolly mode, it’ll start off a little smaller and grow.

Static mode keeps it from animating and just sets it to its actual size.

Custom lets you adjust the zoom speed of the logo and the background image.

Dolly Mode

Background

You can also set a background image.  This is a full screen image that gets stretched to fit.

A low res background image being scaled up to fit

The Icon

You can set your icon in the player settings.

The icon you set here is shown for the application file and in the top of your app if it’s windowed… (and on the toolbar)

App Config Dialog

The last thing you’ll want to configure is an application configuration dialog image.

If you’re forcing the game to a specific resolution, you don’t need this. But if you allow the user to select their resolution before launching, you’ll want to put something here.

The image here needs to be exactly 432 x 163.

With this set right, your app dialog will look like this.

Conclusions

Adding Unity3D splash screens and icons is an easy task.  But it’s also something a lot of people skip right over.  I can’t even count the # of games and apps I’ve come across with the default icons.  Now that you know how to change them, go in and update your apps, and add a little extra polish for just a couple minutes of work.


How to: Create a Unity3D building placement system for RTS or City builders – Let your player place a 3D object in the world

By Jason Weimann / August 2, 2017

Many ‘sandbox’ type games are built around the idea of letting players place and build things in the world.  Your players might be placing turrets to defend from crazy hordes of orcs, building houses and structures for their virtual families, or just putting up barriers to keep other players and zombies away.  Whatever the game, it’s usually done by putting the object on the mouse cursor and letting the player move it around until it’s in place, then place it by clicking the mouse.  Today, I’ll show you how to get started with your own placement system in Unity3D.  We’ll build a Unity3D Building Placement system that allows you to place and rotate objects with the mouse cursor.

Video Version

Environment & Art

To start, you’ll need some sort of environment to place objects on.  You can grab just about anything from the free environments section of the asset store.  I’ll be using this Free Autumn Mountain.

The mountain does not have a mesh collider though, so I need to add one first.

The Camera

In the mountain scene, the camera that’s placed is a bit high, and not tagged as “MainCamera”.  I moved it a bit and set the “MainCamera” tag on it.

Turret

We’ll also need something to place.  We could use a cube or some other primitive, but that’s a little boring, so I’m downloading a free turret from the asset store.

The turret came with some old standard assets stuff that won’t compile, and we don’t need it.. so I deleted those standard assets from the project view.

The Code – Ground Placement Controller

To make our placement work, we’ll need a new C# script.  I’m going to name it GroundPlacementController.cs

Let’s take a look at this code and see what’s going on..

We start off with two serialized fields.  One for the prefab we’d like to place and another for the hotkey we’ll use to create a new object.  We keep these in private serialized fields so we can modify them in the editor later.

There are also two private fields, one to hold the object we’re currently placing, and another to deal with rotation.

In our Update method, we call a few things.  I prefer to call a couple well named methods in Update whenever possible, instead of doing actual work.  This makes it easier to see what’s going on at a quick glance and makes the code much easier to read.

Our first method checks to see if the player has pressed our new object hotkey (defaulted to A).  If they do, we either create a new object to place by using Instantiate, or we destroy the current one that’s on the cursor.  This lets our player cancel the placement easily if they change their minds.

Next, we check to see if there’s an object being placed – aka assigned to currentPlaceableObject.

MoveCurrentObjectToMouse

If there is, we jump into our movement method.

In here, we do a raycast from the mouse position into the world.  The Camera’s ScreenPointToRay method will generate a ray beaming straight into the scene from our viewport.  This means we’ll get a physics ray that goes exactly where we click.

If we click on something that has a collider, the raycast will return that object in the hitInfo object we declare on line 46, and it will return true, running the code to place our current object at that hitInfo.point.

We’ll also set the rotation of the object using FromToRotation, aligning its up axis with the surface normal.  This way, if we place it on a slope, it’ll sit flush against the ground and not just clip through sideways.
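A sketch of that raycast logic, again assuming the names from the walkthrough:

```csharp
private void MoveCurrentObjectToMouse()
{
    // Generate a ray from the mouse position straight into the scene.
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hitInfo;

    // If the ray hits a collider, snap the object to the hit point and
    // align its up axis with the surface normal so it sits flush on slopes.
    if (Physics.Raycast(ray, out hitInfo))
    {
        currentPlaceableObject.transform.position = hitInfo.point;
        currentPlaceableObject.transform.rotation =
            Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
    }
}
```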

RotateFromMouseWheel

Next, we go into the method to rotate based on our mouse wheel.  This assumes we want the player to be able to adjust the orientation of their placed objects..

In here, we adjust the mouseWheelRotation by the mouseScrollDelta Y value.  Then we rotate the current object using the Up vector and the current value of the mouseWheelRotation.  We also multiply it by 10 to make it a bit faster / more sensitive.
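Sketched out, that might look like this — note that because MoveCurrentObjectToMouse resets the rotation every frame, the accumulated value gets re-applied each frame:

```csharp
private void RotateFromMouseWheel()
{
    // Accumulate scroll input; multiply by 10 for faster,
    // more sensitive rotation around the object's up axis.
    mouseWheelRotation += Input.mouseScrollDelta.y;
    currentPlaceableObject.transform.Rotate(Vector3.up, mouseWheelRotation * 10f);
}
```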

ReleaseIfClicked

The final method is there to ‘place’ or release our object.  All we do here is check for a click, and if we get one, we clear out the currentPlaceableObject.
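The release step is the simplest of the bunch — a sketch:

```csharp
private void ReleaseIfClicked()
{
    // A left click "places" the object: clearing the reference means
    // Update stops moving it, leaving it where it was last positioned.
    if (Input.GetMouseButtonDown(0))
    {
        currentPlaceableObject = null;
    }
}
```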

Adding the Manager

Create an empty gameobject in the scene.

Add the GroundPlacementController component.

Assign the turret prefab (or your own)

Press play!

Conclusion / Extensions

This covers the basics of placing an object with your mouse cursor.  In a bigger game, you may want to extend the object selection, maybe have an array of placeable objects and a hotkey to cycle through them, or even a full UI to select one.  You could also fire some event when the object is released, play an animation.. set it to start firing?  You could also swap the material when the object is being placed and give it a holographic look.


How to make games – Making the transition from business apps and web development into gaming – Part 2

Last week, I shared a bit of advice for web and business developers who are looking to make the change to video game development.  It’s a big subject though, so today I’m going to follow up with a few more important things I’ve learned that you may want to know before taking the dive.

First, I want to address a couple of objections I came across..  There are plenty of game developers who’ve had less than great experiences.  I’ll start off with the worst stereotypes since they seem to be the most prevalent.

If you haven’t read Part 1, I recommend you check it out as well, though the order doesn’t really matter – How to make games; Making the transition from business apps and web development into gaming – Part 1

Do I have to work 80hrs a week?

There are horror stories all over the internet about the work conditions of your typical game developer.  I’ll never forget when I heard about the EA Spouse website.. a site dedicated to the spouses of overworked and underpaid EA employees who were exploited, working 60-80hrs a week with no overtime pay.

It’s true that there are places like this.  I know a few developers who work at one big unnamed AAA game company that I’m told still acts like this.  But I also know a few web developers who work in similar conditions.. on call all night, taking midnight meetings with teams across the world, working from 9am to 9pm..

Most places however are not like this..  It may have been the case that “everyone did it” decades ago, but today most game shops realize this is a terrible idea.  And even more importantly, most game developers won’t put up with it.

Now it is the case that some game developers are kinda ‘workaholics’..  One of my best friends is like this.  I’ve seen him work 12hrs/day building games for weeks at a time.  But this isn’t something that’s usually pushed on them, they’re people who love what they do, love their project, and want to make it the best it can be.  You’ll find people like this all across the professional spectrum, and they’re the exception, not the rule.

What about “Crunch” mode?

Crunch mode is what they call working way too many hours to finish a project that’s poorly managed and not on schedule.  I’ve heard horror stories from friends in the past who lost their weekends, canceled vacations, and spent the night at the office on regular occasions.  But most game developers I know have never experienced anything so crazy.  There will always be times when you need to stay a bit late, maybe even work a really long day…  If you’re launching the game you’ve been building for months or years, it’s probably a good idea to hang out and keep an eye on things.  For the first few days, there may be bugs, crashes, etc that need to be fixed right away.

But in general, if you’re interviewing somewhere that says “crunching” is a normal thing, it’s probably not a great place to work.  Any place that needs you to work long stints of overtime has a serious management failure.  They’re failing to properly estimate and scope their projects.  They could be overshooting, under-staffing to save money, or more often than not, just bad at time management and management in general.  A well run project has attainable goals that are tracked throughout the process, without requiring the employees to sacrifice their entire life to get it done.

What about Pay?

This is another theme that comes up quite a bit.  “I want to build games, but I also don’t want to be poor”.

There is a grain of truth to this concern.  If you’re starting out in game development, as a Jr. Programmer, there are plenty of places happy to take advantage of you.  There are countless people who ‘want to make games’, and are willing to work for peanuts.  When I started game development, I was just happy to get into the industry.  Getting paid 50k/yr to do something I loved seemed great.  And I saw plenty of other developers come in at numbers as low as 30k.

If you compare that to web development, it is a bit lower.. but it’s not that much lower.  And it depends a whole lot on where you’re working.  While some game shops pay their entry level developers near minimum wage, there are plenty of others that start people out at or above web developer salaries.  In my experience it seems to be a typical representation of the supply & demand system.  For popular games, there are hundreds of people applying, and many of them are willing to work for the bare minimum.. so those jobs are more likely to pay less because they can.  But that willingness to work for low wages is much less prevalent for games that aren’t so well known.

It’s also important to note that on the high end, you can make just as much or more than in typical business development.  Experienced Sr level game developers are a highly sought after commodity and can demand pay rates over 200k/yr.  I’ve known a few experts making over 300k/yr.

Commissions & Bonuses

Another thing I’ve only seen in game development are launch bonuses & commissions for the entire team.  Not every place does this, but plenty do, and they can on occasion be huge.  My first launch bonus was a check for 10% of my salary, and at one company, I was getting these about every 9 months.  Sometimes these bonuses are based on sales.  If you get in on a game early and it turns out to be a huge hit, these checks can be more than your salary.  Now not everyone gets these, if you get an offer somewhere, you may want to ask if they have a commission or launch bonus program.

Testing / QA / Where are my unit tests?

This is a tough subject.  In web & line of business development, you can get a good idea of the code quality by checking out Unit test coverage.  In most places I’ve worked, we had unit tests for anything important.  Builds are run through continuous integration services, unit and integration tests are automatically run, and we know early if there’s a problem.

When you make the jump to games, be ready to see 0 automated tests.  Of course a few select places have unit tests and automation, but they’re the exception to the rule.

This is something you’ll have to learn to accept, at least for the time being.  I’ve seen a few friends really struggle with this.  The feeling of comfort and correctness they get from unit tests in their day job is something they don’t want to lose on their ‘fun projects’.  Most of the time though, after a lot of discussion, and burning hundreds of hours trying, they eventually give up on the tests.

As much as I love unit tests, there’s a good reason they’re not popular for game developers.  Setting up tests for games is HARD.  Much harder than it is for a website or your typical C.R.U.D. app with a few fancy screens.  On a website, you have pretty tight control over what the user can do.  They can fill out a couple fields, maybe leave some blank, click a button or two, and run through a few code paths to complete some operation.  In most video games, your player can do all kinds of things, sometimes it’s stuff you’d expect but more often than not, they’re doing things you’d never think of.

Combine that with the countless interactions going on in your game, objects being destroyed, picked up, thrown in seemingly random directions..  The physics systems, rendering pipeline, the art pipeline which is out of your control.. design data put in by non-engineers, input systems, and everything else…  It can quickly become a mess, impossible to test even a tiny % of possible scenarios.

On top of that, you don’t have the luxury of dependency injection.  You can’t easily mock out substitutes for your player, or your enemies, or that remote player in Antarctica with a 5000 ping.

But there’s even more… for many games, you have to deal with devices nobody else does.  Your game may need to run on Xbox, PlayStation, PC, & iOS.  If you do get tests running, they’re probably not running on these actual devices.  There’s no Selenium-like solution to automate your gameplay across devices.  And there’s no way to peek into the game and see how it’s responding to interactions like you can with a webpage.

So what do game developers use?

Most game development shops have abnormally large QA / Test teams.  In line of business development, I’ve experienced many setups where we had 1 tester per 10 developers.  In games, I’ve rarely seen fewer than 1 tester per 3 developers.  In many situations I’ve seen it reversed, where there are more people testing the game than building it.  This may seem counterproductive or expensive, but it’s done because it’s usually cheaper and more accurate.  To get a game fully under automated tests would cost a whole lot more in expensive developer time than it does to hire a good QA team.  It wouldn’t catch as many issues, and given how drastically games change in short periods of time, it’s not nearly as flexible.

For the time being, big test teams are something you’ll want to get used to.  And it’s important to communicate well with them.  If you release a new feature and don’t explain what it’s supposed to do, you’re going to get bad bug reports or missed issues.

Of course you’ll want to manually test your own changes first… don’t forget that step.  And when you do, make sure to think.. “is this needlessly tedious?”.. “is there a way I can make this easier for QA?”.  For example, if you know your QA team is going to need to summon 2000 items one by one, consider making a command that will let them pass in a text file of the items to load instead of wasting days of their time doing it item by item.
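As a hypothetical example of that kind of helper (nothing like this ships with Unity — SpawnItemByName here stands in for whatever your game’s real spawn call is):

```csharp
using System.IO;
using UnityEngine;

// Debug-only helper: instead of QA summoning 2000 items one by one,
// let them point the game at a text file with one item name per line.
public class BulkItemSpawner : MonoBehaviour
{
    public void SpawnFromFile(string path)
    {
        foreach (string line in File.ReadAllLines(path))
        {
            string itemName = line.Trim();
            if (itemName.Length > 0)
                SpawnItemByName(itemName);
        }
    }

    private void SpawnItemByName(string itemName)
    {
        // Your game's actual spawn logic would go here.
        Debug.Log("Spawning " + itemName);
    }
}
```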

Back to technical stuff – The Over-engineering

So far, I’ve focused on some of the less technical issues.  This subject is a bit of a mix.  It’s part engineering and part mindset.

One of the biggest mistakes I see people make as they transition is a tendency to over engineer things like crazy.  People often start pulling in their web development patterns, attempting to re-create systems from their favorite framework, so they can ‘do game dev like they do web dev’.

This is a classic example of fighting against the engine.  Instead of using what’s available or right, we attempt to use what’s comfortable.  Many of us jump in and start trying to implement MVC everywhere.. our player’s visuals are the view right?  The model has the characters data or something?  The controller.. um maybe it’s the NPC?  Or is it the NPC controller?  “let me just build up a bunch of base classes and framework, THEN I’ll make my game”.

Now there’s nothing wrong with good design patterns.. where it makes sense.  For your first few games, it almost never makes sense.  Until you have a good level of experience in game development, with the engine you’re using, and have built a few games, you’re really just guessing what’s good.

When you start out in game development, especially if you start out on your own.  Don’t start by building big systems.  Build a game, learn, and only implement existing patterns when they really make sense and aren’t fighting against your engine.

And if you’re interested in learning some game specific patterns, this book comes highly recommended by a few developers: Game Programming Patterns

Early Optimization (especially without profiling)

Some programmers love to optimize.  They love tricky algorithms designed to shave a few cpu cycles.  They want to use arrays instead of lists because “they’re faster!”.

Optimization is a great thing.  If you never optimize your game, you’ll probably run into serious performance problems.

But early / pre optimization is the bane of development.  It’s often misguided.. attempting to make things better while accomplishing nothing (or even making things worse).

If you haven’t been developing games for a while, and profiling them, you’re not going to know what’s actually slow.  If you think you need a “really fast lookup” for some objects, or you “don’t want to use Coroutines because you heard they’re slower”.. take a step back.

Remember that most of the optimizations you know from high scale web apps probably don’t apply to a game running on someone’s iPhone.  In fact, your optimizations probably won’t help at all, but knowing to check the GPU Instancing checkbox on the right materials may make a huge difference.

And before you optimize anything, make sure you profile.  For Unity developers, the profiler is built right in, and can usually show you exactly what’s slow.  Often, you’ll be surprised by what it shows.  For example, your Debug.Log statements are a huge performance hit.  Each call generates a stack trace to display and will quickly kill your FPS if it’s called at a high rate.
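One common way to keep your logs during development without paying for them in release (a sketch, not from the article): route them through a wrapper whose calls the compiler strips entirely when the symbol isn’t defined.

```csharp
using System.Diagnostics;

public static class DevLog
{
    // Calls to this method are removed by the compiler unless UNITY_EDITOR
    // or DEVELOPMENT_BUILD is defined, so release builds never pay for the
    // stack trace Debug.Log generates.
    [Conditional("UNITY_EDITOR"), Conditional("DEVELOPMENT_BUILD")]
    public static void Log(string message)
    {
        UnityEngine.Debug.Log(message);
    }
}
```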

Also make sure to test your release builds on occasion.  The framerate difference from the editor to a release build can be big.  That’s not to say you shouldn’t have a good level of performance in the editor too, but it’s something important to remember.

Source Control & Merging

The last thing I want to touch on today is source control.  Most of the non-game world has moved to Git now.  It’s a great source control system, and it’s the one I use for everything.

There are some downsides to Git with games though.  Specifically, art is LARGE, and Git keeps all revisions locally.  Having a bunch of art revisions in your main project can become problematic, especially if your artists are using it for their art source files.

Many game development studios use alternative systems like Perforce, Plastic SCM, etc. for this reason (and because Perforce has a strong hold in AAA game development).

Personally, I prefer Git and use it for everything.  But I always keep artist source work in another repository and only have them put the working or final product into the main repository.

I also recommend using Git LFS for projects that are going to be art heavy.  If you haven’t heard of LFS, it’s worth looking into here: https://git-lfs.github.com/

Merging

Merging your code in a game project isn’t really any different from merging anything else.  But merging other assets can be a pain.  In fact, it’s often an impossible task.

What other assets am I talking about?  Things like your scenes (aka levels) & your prefabs.  These are stored in big serialized files, either in binary or text format.  For minor changes, if you’re using text serialization, merging can be manageable.

But most of the time, merging these types of things is more trouble than it’s worth.

So how do you handle multiple people editing a level?

You have a couple options.  The first and most common is to break up your level into smaller chunks.  You can create multiple scenes and have people work on them independently (designer 1 is working on the north east corner of this level in the LVL1_NE scene and designer 2 is working on the scene for the south east corner).

Or a more common thing is to make everything in your scene into a prefab.  That way people can work on individual prefabs that are placed in the scene, and as long as they’re not working on the same one, everything is fine.  If they’re granular enough and you have some decent management and communication, this won’t be too much of a problem.

But if you keep everything in the scene and don’t use prefabs, be prepared for a struggle.

There’s also an alternative in the Unity world called SceneFusion.  I’ve only lightly played with it myself, but it does seem like an interesting fix for this kind of problem.

So should you make the transition and become a game developer?

This is a question you’ll have to answer for yourself.. but before you do, think about what you’re doing now.

Do you love your job, find it challenging and fun?  Or do you hate your job and sit around thinking about how cool it’d be to make your own game?

Are you ready to accept that you won’t start off being the best game developer ever and will need to practice and build up your skills?

Do you only want to work on games if you can be on the team building Call of Duty, or only want to do it so you can be on stage at E3?

Or are you just into video games and like the idea of making something that puts a smile on other people’s faces.. something you can share with almost anyone and have them really enjoy what you’ve made?  (I’ve never once shown my friends or kids a webpage I built that they cared about even a bit, but they’re always begging to help ‘test my new games’)

The final decision is up to you.. but I definitely recommend at least giving it a try.

How can I get started?

Of course there are dozens of ways to start learning game development.  My favorite way to learn is by just doing it.  Pick a game you want to build, find a tutorial, and build the game.  Try a few different ones, then start expanding on them with your own changes.  Once you feel comfortable, start building a game of your own from scratch.

I always like to recommend my basic 2D tutorial that takes you through the steps to build a Flappy Bird clone in an hour or two.  If you don’t like reading though, I also love Brackeys’ YouTube videos.  And if you’re really interested in VR, of course I’d point you at my VR Course.

Whatever you choose to learn with, if you get stuck, have some questions, or just wanna chat about game development, feel free to toss me an email (jason @ this website).. and make sure you have fun 🙂

 


 


How to play Stereoscopic 3D 360 Video in VR with Unity3D

If you’ve played 360 video in VR, you know it’s kinda cool… But lately, 3D video is starting to take off.  With off the shelf cameras like the Vuze line, it’s gotten easy to record your own without spending the cost of a new car to get started..  Before today, playing 360 3D video in VR with Unity3D was a bit complicated.. but now, thanks to an open source project put out by Unity Technologies, it’s getting easier.  Earlier today, I stumbled on a post and github project they’ve put together to make 3D 360 video simple to implement.

Video Version

Prefer to watch video?  The entire process is available on youtube here.

Project Setup

To get going, you’ll need a couple things..  First, you need a 3D video.  For this article, I’m using an Over-Under video you can download from here: http://www.panocam3d.com/video3d360.html#!portfolio/project-13.html

Once you’ve downloaded your video, you’ll need to grab the script and shader from this github project: https://github.com/Unity-Technologies/SkyboxPanoramicShader

You can download it or clone the repository, whatever you feel most comfortable doing.

Place the shader and script into your project along with the video file you want to play.

You’ll also need to visit your player settings and make sure the Virtual Reality Enabled box is checked.

Render Texture

To use this shader, we need a render texture.  Create a new render texture and name it “Panoramic Render Texture”

Select the RenderTexture and change the size to 2304 x 2304.

The render texture resolution should match your video resolution.

Change the depth buffer to “No depth buffer”.

Render textures are textures that can be rendered to. They can be used to implement image based rendering effects, dynamic shadows, projectors, reflections or surveillance cameras.

The Video Player

To create a video player, drag the video from the project view into the scene view.  A player will automatically be created with the video assigned to it.

Select the video player and look to the inspector.

Change the render mode to “Render Texture”.

Drag the render texture from the project view into the target texture field.

The Material

Next, we need to create a material for the shader.

Create a new material, name it “Skybox”

Drag the render texture onto it.

Set the mapping type to “Latitude Longitude Layout”

Change the image type to 360 Degrees.

Set the 3D layout to “Over Under”

Skybox Setup

The last step is to assign our material to the skybox.

Open the Lighting window.

Drag the Skybox into the Skybox Material field.

All Done

That’s it, save your scene…

Then put on the headset and press play, the video should start playing in 3D.
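If you’d rather wire those steps up from code, a rough sketch might look like this — it assumes you’ve already created the render texture and the skybox material (using the panoramic shader) as described above, and just assigns them at runtime:

```csharp
using UnityEngine;
using UnityEngine.Video;

public class PanoramicVideoSetup : MonoBehaviour
{
    [SerializeField] private VideoClip videoClip;
    [SerializeField] private RenderTexture panoramicRenderTexture;
    [SerializeField] private Material skyboxMaterial;

    private void Start()
    {
        // Route the video into the render texture...
        var player = gameObject.AddComponent<VideoPlayer>();
        player.clip = videoClip;
        player.renderMode = VideoRenderMode.RenderTexture;
        player.targetTexture = panoramicRenderTexture;
        player.isLooping = true;

        // ...and use the material built on the panoramic shader as the skybox.
        RenderSettings.skybox = skyboxMaterial;
        player.Play();
    }
}
```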

What about other video types?

This shader appears to have support for a few different video formats.  In this article, we covered a simple 360 degree over under video, but you may have noticed the options for 180 & side by side.  I haven’t tried those yet, but if you’re interested in them, I’d recommend you check out the full documentation they’ve provided here: https://docs.google.com/document/d/1JjOQ0dXTYPFwg6eSOlIAdqyPo6QMLqh-PETwxf8ZVD8/edit#
