
Jason Weimann

7 VR / VIVE Games – Day 3 – Zombie Shooter

HTC VIVE Game Challenge

This week, I’ve decided to start a new VIVE challenge.  Each day, I’ll be creating a new Unity game developed specifically for the HTC VIVE.

I’ll spend a single day on each game (really about 4-6h of a day), creating the base gameplay and polishing up the experience a little.

After each game is complete, I’ll post the results here with a gameplay video and some thoughts on how it worked out.  I’ll also provide a download link for any current Vive owners to try it out and see how the experiences turn out.

 

Game 3 is ready now!

VIVE – Zombie Shooter

Zombie Shooter was my 7-year-old son’s first choice.  When I got the Vive, his favorite game quickly became Space Pirate Trainer.  He’s always been a big FPS fan (mainly playing TF2), and wanted me to make one too.  Where he got the zombie idea, I’m not sure, but it seemed like a good fit and a reasonable one to put together.

In the Zombie Shooter, you have one goal: stay alive.
The zombies keep coming until you’re dead.  They spawn in waves, with a 10 second delay between each wave.  Their spawn rates are slightly randomized, as are their positions, but they come from all sides.  In addition to that, some of them like to run!

 

 

 

Result:

I really liked the result here.  After a bit of iteration, my wife seemed to really get into it as well.  If you watch the video, she went for almost 6 minutes in that session alone.  The mixing of slow zombies with fast ones adds quite a bit of intensity to the game, even though only 12% of them actually run.  My kids haven’t yet had a chance to play it though, so I won’t know the full result until after this post goes live.  I’d love to hear from others though on how you like it.

Zombie - Spawner

Implementation:

This game has a few interesting implementation details that aren’t present in the previous ones.  It takes advantage of the Mecanim animation system and has some basic wave spawning.

 

Guns


Zombie – Gun

The guns were a little difficult to calibrate.  For them, I’ve added a mesh from a gun pack and a “Gun” script.  The script checks for the trigger pull of a controller and, when it’s pulled, does a raycast using Physics.Raycast.  The raycast actually comes from an empty transform placed at the tip named “muzzlePoint”.  I also use this to determine where the muzzle flash should appear.
If any zombie is in front of the gun (detected by the raycast), I call a TakeHit method on the zombie.  Other than that, the guns have a reload timer, which is adjustable in the editor and currently set to 1 second.  If the timer hasn’t elapsed, I just ignore trigger pulls.

Zombie – CameraRig
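Pieced together from the description above, the Gun script might look roughly like this.  This is a sketch, not the game’s actual code: the input check stands in for the real SteamVR trigger query, and the Zombie component with its TakeHit method is the one described in the Zombie section below.

```csharp
using UnityEngine;

// Rough sketch of the "Gun" script described above.  The muzzlePoint transform,
// reload timer, and TakeHit call come from the post; the input check is a
// stand-in for the real SteamVR trigger query.
public class Gun : MonoBehaviour
{
    [SerializeField]
    private Transform _muzzlePoint;      // empty transform placed at the barrel tip
    [SerializeField]
    private float _reloadTime = 1f;      // adjustable in the editor

    private float _reloadTimer;

    private void Update()
    {
        _reloadTimer -= Time.deltaTime;

        // Stand-in for the SteamVR controller trigger check.
        if (!Input.GetButtonDown("Fire1"))
            return;

        if (_reloadTimer > 0f)
            return;                      // still reloading; ignore the trigger pull

        _reloadTimer = _reloadTime;
        Fire();
    }

    private void Fire()
    {
        RaycastHit hit;
        if (Physics.Raycast(_muzzlePoint.position, _muzzlePoint.forward, out hit))
        {
            var zombie = hit.collider.GetComponent<Zombie>();
            if (zombie != null)
                zombie.TakeHit();
        }
        // The muzzle flash would also be spawned at _muzzlePoint here.
    }
}
```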

They also have a laser scope, which is actually a copy of the one from the Dungeon Defense game.  This is toggled by the grip buttons and helped my wife get used to aiming.  Once she had it down though, she was able to play fine without them.

 

Zombie

The zombies took a little longer to set up due to an issue I had with animations.  For some reason, my walk animation wouldn’t properly move the zombie using RootMotion.  In the end, it turned out that just re-importing the walk animation (deleting and re-adding it) fixed the problem.  I got the zombie and his animations from Mixamo.com.  I believe they’re free, but it’s possible I paid for them and didn’t notice.  There are also a variety of other cool zombie animations on there, but I didn’t want to go overboard on prettying this game up right now.

When they spawn, they have a 12% chance to run.  If they fall into that 12%, I just use a trigger to switch animation states from Walk to Run.  Because I’m using Unity’s RootMotion, the zombie starts moving faster without any other changes.


Zombie – Animation Controller

Aside from the animations, there’s a basic call in the Update() method that just finds the player by a tag and does a transform.LookAt().  While this wouldn’t be the best way to go in a real game, because you’d want to use proper navigation, it was fine for this quick project.  If the zombie is in “attack range” of the player, he’ll make a noise from his audio source, play an attack animation, then kill the player (restart the level) if he doesn’t die within 3 seconds.

I also have the TakeHit method mentioned above in the Zombie.  This method reduces his health by 1 (initially I was going to require 2 shots, but ended up doing away with that).  When the health reaches 0 (on the first hit), I switch his animation with an animation trigger and make him schedule a GameObject.Destroy after a few seconds via a coroutine.
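Put together, the zombie logic described above might look roughly like this.  It’s a sketch, not the game’s actual code: the animator trigger names, player tag, and health value are assumptions, and the attack/kill handling is omitted.

```csharp
using System.Collections;
using UnityEngine;

// Rough sketch of the zombie behaviour described above; names and values are guesses.
public class Zombie : MonoBehaviour
{
    private Animator _animator;
    private int _health = 1;   // one shot kills (the 2-shot idea was dropped)

    private void Start()
    {
        _animator = GetComponent<Animator>();

        // 12% of zombies run; root motion makes them move faster automatically.
        if (Random.value < 0.12f)
            _animator.SetTrigger("Run");
    }

    private void Update()
    {
        // Simple steering: find the player by tag and face him each frame.
        // No real navigation, which is fine for a throwaway project.
        var player = GameObject.FindGameObjectWithTag("Player");
        if (player != null)
            transform.LookAt(player.transform);
    }

    public void TakeHit()
    {
        _health--;
        if (_health <= 0)
        {
            _animator.SetTrigger("Die");
            StartCoroutine(DestroyAfter(3f));
        }
    }

    private IEnumerator DestroyAfter(float seconds)
    {
        yield return new WaitForSeconds(seconds);
        Destroy(gameObject);
    }
}
```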

Spawners

This is actually the longest script of the game.  Because of that, and because there’s been some interest in code samples, here it is:

using UnityEngine;
using UnityEngine.AI; // NavMeshAgent lives in UnityEngine.AI in Unity 5.5 and later

public class Spawner : MonoBehaviour
{
    // Note I don't use the NavMeshAgent.  I was planning to but decided to ditch it for the simpler method of .LookAt
	// I do still use this _prefab though, so the zombies have a NavMeshAgent on them.  To cleanup, I should remove this
	// but I wanted to show what I initially planned vs what actually happened.
	[SerializeField]
	private NavMeshAgent _prefab;
	[SerializeField]
	private float _respawnTimerMax = 5f;
	[SerializeField]
	private float _respawnTimerMin = 2f;

	[SerializeField]
	private float _waveDuration = 20f;
	[SerializeField]
	private float _waveDelay = 10f;

	private float _countDownTimer;
	private float _waveTimer;
	private float _waveDelayTimer;
	private bool _spawning = true;

	private void Update()
	{
		_waveTimer += Time.deltaTime;
		if (_waveTimer >= _waveDuration)
		{
			PauseSpawning();
		}
		else
		{
			_countDownTimer -= Time.deltaTime;
			if (_countDownTimer <= 0)
				Spawn();
		}
	}

	
	private void PauseSpawning()
	{
		_spawning = false;
		_waveDelayTimer += Time.deltaTime;
		if (_waveDelayTimer >= _waveDelay)
		{
			_waveDelayTimer = 0f;
			_waveTimer = 0f;
			_spawning = true;
		}
	}

	private void Spawn()
	{
		// Use the float overload of Random.Range; the int overload with (-1, 1) only ever returns -1 or 0.
		var randomOffset = new Vector3(UnityEngine.Random.Range(-1f, 1f), 0, UnityEngine.Random.Range(-1f, 1f));
		Instantiate(_prefab, transform.position + randomOffset, transform.rotation);
		_countDownTimer = UnityEngine.Random.Range(_respawnTimerMin, _respawnTimerMax);
	}
}

I want to note that this is not to be taken as an example of a good / clean script.  It is in fact pretty terrible and messy.  In a real game, we’d have a state system that would handle different game & spawn states.  But as a throwaway, it gets the job done fine.  Also, if you’ve used Unity much before, please take note of and start using [SerializeField].  If you don’t know what it’s for, or why you should be using it, please read my article on the subject here.

 

Up Tomorrow: Catch Something

 


7 VR / VIVE Games – Day 2 – Archery

HTC VIVE Game Challenge

This week, I’ve decided to start a new VIVE challenge.  Each day, I’ll be creating a new Unity game developed specifically for the HTC VIVE.

I’ll spend a single day on each game (really about 4-6h of a day), creating the base gameplay and polishing up the experience a little.

After each game is complete, I’ll post the results here with a gameplay video and some thoughts on how it worked out.  I’ll also provide a download link for any current Vive owners to try it out and see how the experiences turn out.

 

Game 2 is ready now!

VIVE – Archery

The goal for this game was to build a semi-realistic archery simulator.  It didn’t quite turn out that way, but it’s still an interesting experiment.

Each round, you receive 10 arrows.  Your goal is to score as many points as you can by hitting both stationary and moving targets.

 

 

 

Result:

In my head, this was a lot more fun than it turned out to be.  I’ve learned that holding something in front of your head in VR doesn’t feel great.  I think the lack of peripheral vision may be the reason for this.  While it was a neat experiment, I think in the future, I’ll avoid games that have mechanics like this.

Bow & arrow pulling also turned out to be a little more time consuming than I initially expected.  While I could get the arrow to stick to the string pretty easily, making that look right and have proper limits wasn’t feasible in such a short time.  In the end, I went with making the bow play its normal pull animation when you touch the string and removing the restriction of holding the arrow in place.  So while in the video it may look like it’s being pulled back and released, you can just as easily tap the string and pull the arrow controller away.  I think the lack of a proper physical feedback mechanism (like a bow built for VR) makes this interaction just not quite work.

All that said, my wife somehow still enjoyed the game.  It was one of her recommendations, so perhaps that had some influence.  Or maybe I’m just not great at judging these, I’m not sure yet.

Implementation:

There are only a few components to this game.  The more advanced ones (like bow pulling) got pulled due to not working out.

Scene View

As a side note, I do have a Giveaway going on right now for a $25 Asset Store voucher.  If you want to win, you can enter here.

Bow

This was an asset from the Asset Store for $5.  There’s also a free one available, but in the end I wanted the animation so I paid up.

 

It works by having a trigger on a box collider attached to the string.  When the trigger is entered by the right hand controller, I reload the arrow and play the animation.

Archery – Hierarchy
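A minimal sketch of that string trigger, assuming a tag on the right hand controller object and the bow’s legacy Animation component (both are guesses, not the actual code):

```csharp
using UnityEngine;

// Rough sketch of the bow string trigger described above.
public class BowString : MonoBehaviour
{
    [SerializeField]
    private Animation _bowAnimation;   // the pull animation that came with the asset

    private void OnTriggerEnter(Collider other)
    {
        // Assumed tag on the right hand controller.
        if (!other.CompareTag("RightController"))
            return;

        _bowAnimation.Play();
        ReloadArrow();
    }

    private void ReloadArrow()
    {
        // Re-parent and reposition the arrow on the bow here.
    }
}
```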

Arrow

The arrow was pulled off the bow asset, and the pivot point had to be fixed.  For some reason, many art assets in the store have strange offsets and pivots that make no sense, and this followed that trend.  Once it was fixed though, I placed a simple arrow script and an audio source on the arrow.  The script has a single public Fire method that sets the arrow’s parent to the root (initially the parent is the bow so it will track), then adds some force in its forward direction.  Because the bow’s forward was a bit messed up, I threw in a quick hack of an extra gameObject named “DirectionalTransform” that I used to get the correct forward direction (this could have been done in Max/Maya too, but a lot of the time these little hacks are quicker for people who haven’t used them much before).


Archery – Bow Inspector

The arrows also check for collisions, and if they hit the ground or a target/box, I set the isKinematic property to true on the arrow’s rigidbody so it appears to stick into the target.  You may notice in the video that there’s a bug when she hits the moving targets.  I didn’t think to make the arrows become a child of the target, so they stay floating wherever they hit it.  I considered fixing this, but thought it was more interesting to discuss the issue than resolve it.
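Combining the Fire method and the sticking behaviour described above, the arrow script might look something like this.  The DirectionalTransform hack and the isKinematic trick come from the post; the launch force and ForceMode are guesses.

```csharp
using UnityEngine;

// Rough sketch of the arrow script described above.
[RequireComponent(typeof(Rigidbody))]
public class Arrow : MonoBehaviour
{
    [SerializeField]
    private Transform _directionalTransform;  // gives a correct "forward", since the bow's is off
    [SerializeField]
    private float _launchForce = 30f;         // assumed value

    private Rigidbody _rigidbody;

    private void Awake()
    {
        _rigidbody = GetComponent<Rigidbody>();
    }

    public void Fire()
    {
        // Detach from the bow so the arrow stops tracking the controller...
        transform.SetParent(null);
        // ...then launch it along the corrected forward direction.
        _rigidbody.AddForce(_directionalTransform.forward * _launchForce, ForceMode.Impulse);
    }

    private void OnCollisionEnter(Collision collision)
    {
        // Freezing the rigidbody makes the arrow appear to stick where it landed.
        // (Parenting it to the hit object here would also fix the moving-target bug.)
        _rigidbody.isKinematic = true;
    }
}
```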

Boxes & Targets


Moving Target

These just have a trigger collider and a “Target” script.  The target script just watches for trigger enter events.  When they fire, it increments the score and plays a sound.
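That Target script could be sketched like this; the point value, static score storage, and the check for the arrow component are assumptions, not the actual code.

```csharp
using UnityEngine;

// Rough sketch of the "Target" script described above.
public class Target : MonoBehaviour
{
    [SerializeField]
    private int _points = 1;        // assumed point value
    [SerializeField]
    private AudioSource _hitSound;

    public static int Score;        // assumed score storage

    private void OnTriggerEnter(Collider other)
    {
        // Only arrows should score; assumes the Arrow component from the arrow script.
        if (other.GetComponent<Arrow>() == null)
            return;

        Score += _points;
        _hitSound.Play();
    }
}
```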

 

 

 

 

Up Tomorrow: Zombie Shooter

 


7 VR / VIVE Games in 7 Days

HTC VIVE Game Challenge

This week, I’ve decided to start a new VIVE challenge.  Each day, I’ll be creating a new Unity game developed specifically for the HTC VIVE.

I’ll spend a single day on each game (really about 4-6h of a day), creating the base gameplay and polishing up the experience a little.

After each game is complete, I’ll post the results here with a gameplay video and some thoughts on how it worked out.  I’ll also provide a download link for any current Vive owners to try it out and see how the experiences turn out.

 

The first game in the list is ready now!

VIVE – Dungeon Defense

In this game, you play as a magic sky wizard presiding over a dungeon full of treasure.

Your goal is simple, fight off the invading goblins with your magic wands for as long as you can.

They’ll keep invading until they steal all your gold.

 

 

Result:

I really like the view in this game.  Standing over a world full of miniatures is a really interesting feeling and I think is one of my favorite ways to experience VR.
I noticed this at Oculus Connect too when they had the long CV1 demos.  One of the key demos that stood out to me was a tiny city that you could see but not interact with.

The mechanic of shooting things is pretty basic, but that helps new players get in and understand what to do quickly.

I found myself getting really low a few times to get a close-up view of the little goblins I was destroying.  I think with some polish, this could be a fun little game, but I’m certain it’s lacking something key gameplay-wise to make it great.

Implementation:

This game has some pretty basic systems that run it.

Goblins

The goblins spawn from hidden spawn points (you can see them if you walk over to them).  Each of them is assigned a target destination that they walk toward (the chests).  When they reach the chest, they fire a trigger event that plays a sound, poofs the goblin, and reduces your remaining gold by 1.
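The chest trigger described above could be sketched like this; the tag, the effect fields, and the starting gold amount are all assumptions rather than the game’s actual code.

```csharp
using UnityEngine;

// Rough sketch of the chest trigger described above.
public class TreasureChest : MonoBehaviour
{
    [SerializeField]
    private AudioSource _stolenSound;
    [SerializeField]
    private ParticleSystem _poofPrefab;

    public static int RemainingGold = 20;   // assumed starting amount

    private void OnTriggerEnter(Collider other)
    {
        // Assumed tag on the goblin prefab.
        if (!other.CompareTag("Goblin"))
            return;

        _stolenSound.Play();
        Instantiate(_poofPrefab, other.transform.position, Quaternion.identity);
        Destroy(other.gameObject);          // "poof" the goblin
        RemainingGold--;                    // he made off with 1 gold
    }
}
```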


Dungeon Defense – Spawner

Wands

Dungeon Defense - Camera Rig

The wands are from a pack on the Asset Store I grabbed for a few dollars.  I put a sphere on them with some random particle material that grows to full size based on the amount of time left in their refresh timers.  The refresh on them is controlled by a simple float that I tweaked in the editor until it felt right.  Each wand has a projectile prefab assigned that they spawn and launch forward using the physics system on the Rigidbody.  When the projectiles collide with anything other than the wands, they destroy themselves, spawn an impact particle, and do a quick Physics.OverlapSphere to find any goblins within range to destroy.

Dungeon Defense – Wand Script
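A sketch of that projectile logic: the post’s overlap check maps to Unity’s Physics.OverlapSphere, while the blast radius and tag names here are guesses.

```csharp
using UnityEngine;

// Rough sketch of the wand projectile described above.
public class WandProjectile : MonoBehaviour
{
    [SerializeField]
    private float _blastRadius = 1f;              // assumed value
    [SerializeField]
    private GameObject _impactParticlePrefab;

    private void OnCollisionEnter(Collision collision)
    {
        // Ignore collisions with the wands themselves (assumed tag).
        if (collision.gameObject.CompareTag("Wand"))
            return;

        Instantiate(_impactParticlePrefab, transform.position, Quaternion.identity);

        // Find every goblin collider within the blast radius and destroy it.
        foreach (var hit in Physics.OverlapSphere(transform.position, _blastRadius))
        {
            if (hit.CompareTag("Goblin"))
                Destroy(hit.gameObject);
        }

        Destroy(gameObject);
    }
}
```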

 

Scale

The last thing I want to explain is how I scaled the world.  In this demo, the world was all built at normal scale.  Getting the miniaturized effect was as simple as scaling up the [CameraRig] 10x.  Amazingly, that worked fine with no other alterations to the game.
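The scaling trick above, written out as a tiny script (you could just as easily set the scale on the [CameraRig] directly in the inspector; the component name here is made up):

```csharp
using UnityEngine;

// Attach to the SteamVR [CameraRig] root to get the miniature-world effect.
public class MiniatureView : MonoBehaviour
{
    [SerializeField]
    private float _playerScale = 10f;   // 10x makes a normal-scale world look miniature

    private void Start()
    {
        // Scaling the rig also scales head height and controller offsets,
        // which is what sells the giant-player effect.
        transform.localScale = Vector3.one * _playerScale;
    }
}
```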

Baseball Home Run Derby

Also interested in a Baseball / Home Run Derby Game?
Check out my Baseball game on Steam now
<iframe src="http://store.steampowered.com/widget/458370/" width="646" height="190" frameborder="0"></iframe>

Up Tomorrow: Archery

 


Vive Baseball – Home Run Derby – On Greenlight

If you’ve visited this site before, you know that about a week and a half ago Valve sent out Vive headsets to attendees of the Vision Summit.

Since then, I’ve been hacking away to make some demos & tutorials for all of you.

In that process, I came across a prototype that appeared to be a lot of fun, so I polished it up and turned it into a real game.

It’s up on Steam Greenlight now for you to check out. (And please vote for it if you like it)

http://steamcommunity.com/sharedfiles/filedetails/?id=640736956

 

 

 

If you happen to have a Vive Pre and want to try it out now, send me an email using the Contact/Question form below.


Description (Pulled from my steam page)

Vive Baseball – Home Run Derby is an exciting active baseball simulator for the Vive.
Challenge your friends and family to see who can get the most home runs before the timer runs out.
There are 3 difficulty levels, from easy (for chickens) to pro.
Do well and the crowd will cheer. Try to bunt and get booed out of the stadium!
There are a variety of bats to choose from, both wooden and aluminum.
You can play at night, during the day, or even on another planet!

It’s a lot of fun, and can be a great workout as well.
Grab it now and see how many Home Runs you can hit!

 

 


Vive – First Impressions & Reviews

It’s Here

If you’ve been following VR news, you know that Valve’s Gabe Newell announced at the Vision Summit that they’d be giving all attendees a free Vive Pre.
Today, it arrived, and I have to say it’s pretty amazing.


It came in a giant box that contained many more boxes

 


 


The packaging was great and definitely kept the Vive safe.


Once it was all opened, you can see there is a whole lot included.

One key differentiator from the Oculus Rift is the inclusion of hand controllers.  They’re extremely accurate, even showing the exact point on the touchpad that you’re touching.

During the tutorial, you can get a great feel of how well they’re tracked, while you inflate colored balloons and smack them around.

The Games

Some of the games/experiences were not playable with my current video card.  This isn’t the fault of the games, as I’m using an AMD 280X, which is at the low end for compatibility.  I plan to upgrade to a GTX 970 soon and retry those.  If you plan on getting a Vive, make sure to also budget for a high-end video card to keep the experiences enjoyable.

Valve also included a Steam key for a set of games to try out.

This is a list of the games included free with the Vive.

I’ve included a quick description of each game and a little feedback for each.

 

8i – Gladiator / Message To the Future / The Climb / Wasteland

http://8i.com/

These unfortunately all ran too slow and choppy for anyone to enjoy.  This is probably because the video card (Radeon 280x) is a bit too slow.  It’s time to upgrade and try again next week.  It does seem like they have a streamlined video player though and could be promising.

Abbot’s Book Demo

I only tried this once, but the control scheme (warping around in the direction you’re facing) just didn’t feel right to me.  I understand what they’re trying to do, it just didn’t feel too natural in VR after so many other experiences where I walk around freely.

Aperture Robot Repair

[Warning: Complete spoilers] http://www.roadtovr.com/this-full-video-of-valves-aperture-science-vr-demo-is-wonderful-and-spoilerific/

This was made by Valve and quite a bit of fun.  It’s a fun/funny experience where you “attempt” to fix a robot and fail miserably.  It requires a bit of space or some awkward controls if you don’t have the space, but overall everyone who tried it thought it was enjoyable.

Cloudlands: VR Minigolf

http://futuretown.io/portfolio/cloudlands-vr-minigolf/

This game also appeared to suffer from my weak video card.  Nobody was able to even hit the ball.  It seems like a neat idea, but not until I upgrade hardware.

Fantastic Contraption


Fantastic Contraption


http://fantasticcontraption.com/

This was my wife’s #2 game.  In it, you build contraptions in 3D as the name implies.  One of the first things she built was a car, which was fun to watch.

Felt Tip Circus

http://store.steampowered.com/app/427890/?snr=1_7_7_230_150_1

This was an interesting set of mini-games where you start above a play-sized circus in a room, then dive down into it and experience parts from inside.  The mini-games were pretty fun, but the interface for moving between them could be simplified / less confusing.

Final Approach


Final Approach

http://www.phaserlock.com/#!finalapproach/c3ng

This is a VR take on the Flight Control game that was popular on mobile/Steam.  It also mixed in some mini-games (putting out fires, removing seagulls) that made it interesting.  There was a bit of humor, and it was pretty forgiving on failures.  It’s definitely one of the most polished experiences available with the release.  Overall it was a hit with everyone who tried it.  The menu system, however, isn’t very intuitive (hold the trigger before touching buttons if you want to click them).

Jeeboman

http://futuretown.io/portfolio/jeeboman/

This is a scifi shooting game.  It’s got decent graphics and some interesting mechanics, but performance with my system was too slow to be enjoyable.  Once I’ve upgraded, I think this may be a great game.

Job Simulator Demo


Job Simulator

http://jobsimulatorgame.com/

I saw the guys who made this do a presentation at the Vision Summit.  They’ve done a great job really polishing the experience and making it fun / hilarious.  Everyone who played it wanted to do a repeat, so I’d say it was a hit.

Ninja Trainer Vive Demo


Ninja Trainer

https://www.youtube.com/watch?v=Fg7xjkhqpjQ

It’s Fruit Ninja in VR.  This has so far been the most popular and most requested game.  We’ve had competitions for high scores (110 is the current record, btw).  I’d rate this as the #1 game/experience included.

Sisters – Scary

http://otherworldinteractive.com/project-view/sisters/

This was a horror/jump experience.  It’s really polished and scared two teenagers out before the halfway point.  My wife is the only one who made it through the entire thing so far, and she even let out a scream & jump at the end.  Just be sure to actually look around at stuff as it seems progression is based on interaction not time.

Space Pirate Trainer VR – The boys loved it


Space Pirate

http://www.i-illusions.com/home/space-pirate-trainer/

You shoot at space pirates with guns.  With shields and time-scale adjustment, there’s enough to keep the two boys playing over and over.  Definitely worth a shot if you want to aim and shoot at stuff.

theBlu

http://thebluvr.com/

An underwater experience where you see beautiful imagery of sea life swimming around you.  It seems to be a relaxing passive experience with really nice art.

Tilt Brush


Tilt Brush

http://www.tiltbrush.com/

This was a pretty amazing art experience.  It turns your controllers into a 3D paintbrush and a palette.  You can draw whatever you want in 3D.  While the creations we made weren’t anything impressive outside VR, when you see them in the headset it’s mind-blowing.

 

And the Reviews

I let my family go through the experiences and gathered some feedback.
Here are their ages and favorites:

Boy #1 – 7 years old

Favorite – Space Pirates

Runner Up – Ninja Trainer

Boy #2 – 13 years old

Favorite – Space Pirates

Runner Up – Job Simulator

Girl #1 – 15 years old

Favorite – “I don’t know stop asking me”  (I think this means Final Approach)

Runner Up – “OMG I don’t know stop asking me”  (Could not translate this one)

Wife

Favorite – Ninja Trainer

Runner Up – Fantastic Contraption

Me

Favorite – Ninja Trainer

Runner Up – Tilt Brush

 

Unity Integration

This post won’t cover Unity integration with the Vive, but an upcoming one will.  If you’re interested in learning how to create your own Vive games, sign up now to be notified when the first sample is available.

 


Upcoming Presentation – Intro to Unity – March 8th – Ontario, CA

If you’re in the area, come check out my talk on March 8th!

Intro to Unity 2D, 3D, VR, & AR with Jason Weimann

Tuesday, Mar 8, 2016, 6:30 PM

CoStar Group
901 via Piemonte Suite 450 Ontario, CA


Intro to Unity 2D, 3D, VR, & AR  ————————————————————-Unity3D is a state of the art rendering and game engine that allows you to write your code in C#.  While the most common use is for game development, it’s also great for a variety of business applications from highly interactive cross platform GUIs to aug…



Automatic Deployment, Registration, and Deregistration with Octopus Deploy on AWS EC2

By Jason Weimann / February 23, 2016

Do you want to use Octopus Deploy to manage deployments on AWS EC2 instances in an Auto Scaling Group?  We did, and ran into some issues, so here’s how we solved them.

Octopus Deploy is a great tool for handling deployment across all your environments, but setting it up with AWS EC2 requires a bit of work.  This post assumes you have at least a brief understanding of how Octopus works and a bit of familiarity with Amazon Web Services.

What we’ll cover

  • Automatic Installation of Tentacles on new EC2 instances
  • Automatic Subscribing to projects & roles when an EC2 instance first starts up
  • Automatic Deployment of the correct release to the new EC2 instances
  • Automatic Deregistration when EC2 instances come down (terminated by an ASG or manually)

Bootstrapping our Setup

The first thing we need to do is trigger installation of the Octopus tentacle when a new server is brought up.

To do this, we use the EC2Config service along with a PowerShell script in the userdata for the instance.

What are the Script Variables?

Some of the scripts have Script Variables before them.

Pay close attention to those and be sure to set the proper values for any Mandatory variables.

 

Bootstrapper.ps1 – The Bootstrapper Script

The first script you’ll need is the bootstrapper.  It’s designed to be small and unchanging, so you can assign it once in your launch configuration and not need to change it later.

The bootstrapper only has two tasks: download your primary setup script “serverSetup.ps1” and execute it.

 

Here’s what the script looks like.

Script Variables

$sourceBucketName – Mandatory – Set this to your S3 bucket name

 

<powershell>
Write-Output "Downloading Server Script from S3"

$sourceBucketName = "YOUR_S3_BUCKET_HERE"
$serverSetupFileName = "serverSetup.ps1"
$localSetupFileName = "c:\temp\serverSetup.ps1"
$localPath = "C:\temp\"

Read-s3object -bucketname $sourceBucketName -Key $serverSetupFileName -File $localSetupFileName

invoke-expression -Command $localPath\ServerSetup.ps1
</powershell>

Something to note here is the set of <powershell> tags around the code.  Those exist because we’ll be placing this script in the EC2 userdata, which needs them to understand which scripting language we’re using.  If you execute the script manually from a powershell command line, you’ll see errors for those lines, but you can ignore them.

 

 


 

 

ServerSetup.ps1 – The Server Setup Script

Much like the bootstrap script, the ServerSetup.ps1 script doesn’t contain much logic.  It’s intended to download and execute other scripts that perform specific actions.

 

Script Variables

$sourceBucketName – Mandatory – Set this to your S3 bucket name
$scriptFolderName – Optional – Set this to the subfolder holding your scripts

 

# If for whatever reason this doesn't work, check this file:
Start-Transcript -path "D:\ServerSetupLog.txt" -append

Write-Output "####################################"
Write-Output "Starting ServerSetup.ps1"

### Custom Variables ###
$scriptFolderName = "Scripts" 

# Variables that must be set here AND in SetVariables.ps1
$sourceBucketName = "YOUR_S3_BUCKET_HERE"
$localPath = "C:\temp\"
$instanceId = Invoke-RestMethod -Method Get -Uri http://169.254.169.254/latest/meta-data/instance-id
### Custom Variables ###


# Download All files from S3
Set-Location $localPath

Write-Output "Downloading scripts from $sourceBucketName\$scriptFolderName to local path $localPath"
$objects = get-s3object -bucketname $sourceBucketName -KeyPrefix $scriptFolderName
foreach($object in $objects) 
{
    $localFileName = $object.Key -replace $scriptFolderName, ''
    if ($localFileName -ne '' -and $localFileName -ne '/') 
	{
		$localFilePath = Join-Path $localPath $localFileName
		Write-Output "Copying File " $localFileName " to " $localFilePath
		Copy-S3Object -BucketName $sourceBucketName -Key $object.Key -LocalFile $localFilePath
	}
}

# Import any needed modules here
Import-Module AWSPowerShell

& .\setInstanceNameTag.ps1
& .\installCertificates.ps1
Set-Location $localPath
& .\installTentacle.ps1
& .\addOctopusMachineIdTag.ps1
& .\autoDeploy.ps1
Write-Output "Deployment complete."

# Write the tentacle installation log to S3
Write-S3Object -bucketname $sourceBucketName -File D:\ServerSetupLog.txt -key ServerSetupLogs/$instanceId/ServerSetupLog.txt

Stop-Transcript
Note: If you decide to just copy/paste this script, make sure to replace the instances of "YOUR_S3_BUCKET_HERE" with your actual bucket name, and use a log path that exists on your EC2 instances (this one uses the D: drive).

ServerSetup.ps1 – Script Explanation

Let’s break this script down and see what’s going on.

Starting the Transcript

# If for whatever reason this doesn't work, check this file:
Start-Transcript -path "D:\ServerSetupLog.txt" -append

Write-Output "####################################"
Write-Output "Starting ServerSetup.ps1"

Here, we’re just starting a transcript of everything that executes so we can upload it to S3 later. This helps when you have issues with a server or your scripts and want to find out exactly what happened. The example here uses the D: drive, so if you don’t have a D: drive on your EC2 images, switch this to another location.

Get variables we need

### Custom Variables ###
$scriptFolderName = "Scripts" 

# Variables that must be set here AND in SetVariables.ps1
$sourceBucketName = "YOUR_S3_BUCKET_HERE"
$localPath = "C:\temp\"
### Custom Variables ###


# Download All files from S3
Set-Location $localPath

Write-Output "Downloading scripts from $sourceBucketName\$scriptFolderName to local path $localPath"
$objects = get-s3object -bucketname $sourceBucketName -KeyPrefix $scriptFolderName
foreach($object in $objects) 
{
    $localFileName = $object.Key -replace $scriptFolderName, ''
    if ($localFileName -ne '' -and $localFileName -ne '/') 
    {
		$localFilePath = Join-Path $localPath $localFileName
		Write-Output "Copying File " $localFileName " to " $localFilePath
		Copy-S3Object -BucketName $sourceBucketName -Key $object.Key -LocalFile $localFilePath
	}
}

Here, we’re downloading all of the files in the Scripts subfolder of our bucket (unless you changed the $scriptFolderName). If you look at the screenshot of S3 below, you’ll see that one of the folders in the root is named “Scripts“.  The scripts are downloaded to the C:\temp folder (from $localPath), and will be executed in the next step.

 

Running our scripts that do work

# Import any needed modules here
Import-Module AWSPowerShell

& .\setInstanceNameTag.ps1
& .\installCertificates.ps1
Set-Location $localPath
& .\installTentacle.ps1
& .\addOctopusMachineIdTag.ps1
& .\autoDeploy.ps1
Write-Output "Deployment complete."

The first thing that happens in this chunk is that we import the AWS PowerShell module.

Next, we execute our set of powershell scripts that complete different tasks. Again, all of these powershell scripts are hosted in the “Scripts” subfolder of the S3 bucket.

If you decide to add another step, just create a new script and add the call to it here.

Uploading the logs

# Write the tentacle installation log to S3
Write-S3Object -bucketname $sourceBucketName -File D:\ServerSetupLog.txt -key ServerSetupLogs/$instanceId/ServerSetupLog.txt

Stop-Transcript

The final step is to upload the transcript to our “ServerSetupLogs” subfolder of the S3 bucket. Again, if you rename anything, just make sure to duplicate the renaming here.


 

The Deployment Scripts

So far, we’ve only seen scripts intended to download and execute other scripts.  These next scripts are where we keep the logic for registering new tentacles, installing certificates, and tagging our instances.  I’m going to cover the scripts in order of execution.  You may not be interested in them all, but I recommend you at least take a quick look at each to see what it’s doing.  All of the scripts other than “InstallCertificates.ps1” are mandatory for the entire system to work properly (as defined in this post).

 

SetVariables.ps1 – Setting the variables we’ll use in other scripts.

This script is where you set all of your custom variables (other than the 2 that were mandatory above).

It also calculates many variables the other scripts need, like the current AWS region of the EC2 instance.

Script Variables

  • $sourceBucketName (Mandatory) - Set this to your S3 bucket name
  • $octopusApiKey (Mandatory) - Set your Octopus Server API Key here
  • $octopusServerUrl (Mandatory) - The IP & Port of your Octopus Server
  • $octopusServerThumbprint (Mandatory) - Set your Octopus Server thumbprint here
  • $octopusInstallerName (Mandatory) - Set this to the Tentacle installer filename that you stored in your S3 bucket

 

#### Customizable Variables ####
    $sourceBucketName = "YOUR_S3_BUCKET_HERE"
    $localPath = "C:\temp\"

    $octopusApiKey = "YOUR_OCTOPUS_API_KEY" #API-XXXXXXXXXXXXXXXXXXXXXXXXXXX
	$octopusServerUrl = "http://YOUR_OCTOPUS_SERVER_IP_AND_PORT/" #ex. http://192.168.1.1:81/
	$octopusServerThumbprint = "YOUR_OCTOPUS_SERVER_THUMBPRINT"
	$tentacleListenPort = 10933
	$tentacleHomeDirectory = "D:\Octopus"
	$tentacleAppDirectory = "D:\Octopus\Applications"
	$tentacleConfigFile = "D:\Octopus\Tentacle\Tentacle.config"
    $octopusInstallerName = "Octopus.Tentacle.3.2.13-x64.msi"
#### Customizable Variables ####

## Get Variables we need ##
	# Derive the region from the availability zone by trimming the trailing zone letter
	# ex. "us-west-2a" -> "us-west-2"
	$availabilityZone = Invoke-WebRequest http://169.254.169.254/latest/meta-data/placement/availability-zone -UseBasicParsing 
	$region = $availabilityZone.Content.Trim("a","b","c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m")

    # Get our public IP address.  
    # This is used for registration with the Octopus server, to give the server an endpoint to contact this tentacle on.
    $ipAddress = (Invoke-RestMethod -Method Get -Uri http://169.254.169.254/latest/meta-data/public-ipv4)
    $ipAddress = $ipAddress.Trim()
## Get Variables we need ##

### Get-RolesAndEnvironment ###
	$instanceId = (Invoke-RestMethod -Method Get -Uri http://169.254.169.254/latest/meta-data/instance-id)
	$instance = ((Get-EC2Instance -region $region -Instance $instanceId).RunningInstance)
	$myInstance = $instance | Where-Object { $_.InstanceId -eq $instanceId }
	$roles = ($myInstance.Tags | Where-Object { $_.Key -eq "Roles" }).Value
	$environment = ($myInstance.Tags | Where-Object { $_.Key -eq "Environment" }).Value
### Get-RolesAndEnvironment ###

if (!$variablesShown)
{
    Write-Output "Variables: Used"
    Write-Output "Source Bucket - $sourceBucketName"
    Write-Output "Source Bucket Key (folder) - $keyPrefix"
    Write-Output "Local Script Path - $localPath"

    Write-Output "Octopus Settings"
    Write-Output "================"
    Write-Output "API Key - $octopusApiKey"
    Write-Output "Octopus Endpoint - $octopusServerUrl"
    Write-Output "Octopus Thumbprint - $octopusServerThumbprint"
    Write-Output "Tentacle ListenPort - $tentacleListenPort"
    Write-Output "Tentacle HomeDirectory - $tentacleHomeDirectory"
    Write-Output "Tentacle App Directory - $tentacleAppDirectory"
    Write-Output "Tentacle ConfigFile - $tentacleConfigFile"
    Write-Output "Tentacle Installer - $octopusInstallerName"

    Write-Output "EC2 Settings"
    Write-Output "============"
    Write-Output "Region - $region"
    Write-Output "Ip Address - $ipAddress"
    Write-Output "InstanceId - $instanceId"
    Write-Output "Roles - $roles"
    Write-Output "Environment - $environment"
    $global:variablesShown = 1;
}
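One detail worth calling out from the listing above is the region derivation: it trims the trailing availability-zone letter off a string like “us-west-2a”. Here’s the same idea sketched in Python (a one-sided strip is actually the safer equivalent, since PowerShell’s Trim() removes matching characters from both ends):

```python
def region_from_az(availability_zone: str) -> str:
    # "us-west-2a" -> "us-west-2": drop the trailing zone letter,
    # mirroring the Trim("a".."m") call in SetVariables.ps1.
    return availability_zone.rstrip("abcdefghijklm")

print(region_from_az("us-west-2a"))     # us-west-2
print(region_from_az("eu-central-1b"))  # eu-central-1
```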

 

SetInstanceNameTag.ps1 – Tagging your instance with a good name

The purpose of this script is to set any tags you want on the instance.  The script as it stands only sets the “Name” tag so that it shows our Octopus Environment and Octopus Role.

The end result will be something like “Development – www.jasonweimann.com” or “Staging – api.jasonweimann.com|www.jasonweimann.com” (the pipe is used when an instance is in multiple Octopus roles).

# This script is used to set custom tags on the EC2 instance
# It currently only sets the name tag to a combination of the Environment & Roles tags
# ex. "Production www.yoursite.com"
#     "Development www.yoursite.com"

Write-Output "####################################"
Write-Output "Starting SetInstanceNameTag.ps1"

# Get our variables
. .\SetVariables.ps1

# Name formatting happens here.  If you want to change the format, modify this.
# This needs to be after SetVariables so we have the environment & roles
$instanceName = $environment + " " + $roles

Import-Module AWSPowerShell

# Sets the name tag to $instanceName
function Set-InstanceNameTag()
{
    $instanceId = (Invoke-RestMethod -Method Get -Uri http://169.254.169.254/latest/meta-data/instance-id)

    # Remove the existing Name tag before setting the new value
       
	Remove-EC2Tag `
        -Resource $instanceId `
        -Tag @{ Key="Name" } `
        -Region $region `
        -Force
		
	Write-Output "Setting Name to $instanceName"
		
	New-EC2Tag `
        -Resource $instanceId `
        -Tag @{ Key="Name"; Value=$instanceName } `
        -Region $region
}

Set-InstanceNameTag

You may notice that we make some calls that were made previously in here like Import-Module.  This is done so that the scripts can be run independently as well.  We try to keep each script completely contained so that it doesn’t break if another script is modified.


 

InstallCertificates.ps1 – Optional – Installing your SSL Certs

The “InstallCertificates.ps1” script is optional if you don’t have any certificates.  There is no harm in leaving it in though, so you can add certificates at a later date.  If no certificates exist for the Octopus role the EC2 instance is a member of, nothing will happen.

# This script is used to install required HTTPS certificates on the EC2 instance
# It looks at the roles the instance is in, then downloads and installs any required certificates from S3
# ex. Production www.yourwebsite.com  
#     Development www.jasonweimann.com

Write-Output "####################################"
Write-Output "Starting InstallCertificates.ps1"

# Get our variables
. .\SetVariables.ps1

# Import any custom modules here
Import-Module AWSPowerShell

set-location cert:\localMachine\my


function Import-PfxCertificate 
{
    param([String]$certPath,[String]$certRootStore = "LocalMachine",[String]$certStore = "My")
	
	$pfxPass = "YOUR_PFX_PASSWORD_HERE" | ConvertTo-SecureString -AsPlainText -Force
	$pfx = new-object System.Security.Cryptography.X509Certificates.X509Certificate2
	
	$pfx.import($certPath,$pfxPass,"PersistKeySet")
	$store = new-object System.Security.Cryptography.X509Certificates.X509Store($certStore,$certRootStore)
	$store.open("MaxAllowed")
	$store.add($pfx)
	Write-Output "Store $certStore RootStore $certRootStore"
	$store.close()
}

function Download-Certificates()
{
	param 
	(
		[Parameter(Mandatory=$True)]
		[string]$role
	)
	
	$keyPrefix = "Certificates/$role"

	$objects = get-s3object -bucketname $sourceBucketName -KeyPrefix $keyPrefix
	foreach($object in $objects) 
	{
		$localFileName = $object.Key -replace $keyPrefix, ''
		Write-Output "Copying File to $localFileName"
		if ($localFileName -ne '' -and $localFileName -ne '/') 
		{
			$localFilePath = Join-Path $localPath $localFileName
			Write-Output "Copying File $localFileName to $localFilePath"
			Copy-S3Object -BucketName $sourceBucketName -Key $object.Key -LocalFile $localFilePath
			Write-Output "Installing Certificate $localFilePath"
			Import-PfxCertificate  $localFilePath
			Write-Output "Certificate Install Complete"
		}
		
	}
}

foreach($roleName in $roles.Split("{|}"))
{
	Download-Certificates $roleName
}
This script has two main functions.

If you don’t have any certificates, there’s still no harm in leaving the script there.  If you decide to add one later, it’s as easy as dropping it into your S3 bucket.

Import-PfxCertificate

This handles importing the certificate file once it’s been downloaded from your S3 bucket.

Download-Certificates

This function will download each certificate that the EC2 instance requires.  The certificates are placed in a folder per role. In the example below, I have two certificate folders.  These both match Octopus role names in my Octopus Deploy server. If I ever need to add new certificates to my setup, I just place them in the folder for the corresponding role and they’ll be auto installed when new instances come up.

We could also run this script manually to update all of the certificates on EC2 instances that are already running if needed.
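The folder-per-role convention boils down to building one S3 key prefix per role. A quick Python sketch of that mapping (the function name is just for illustration):

```python
def certificate_prefixes(roles):
    # Roles arrive as a pipe-separated string from the EC2 "Roles" tag;
    # each role maps to its own Certificates/<role> folder in the bucket.
    return ["Certificates/" + role for role in roles.split("|") if role]

print(certificate_prefixes("www.jasonweimann.com|api.jasonweimann.com"))
```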

Certificate Subfolders By Role

 


 

InstallTentacle.ps1 – Doing the actual installation and registration with Octopus

The installTentacle.ps1 script is actually a modified version of one I found online (I can’t find the original anywhere or I’d link it here).

It does 3 things:

  1. Download the tentacle installer from S3
  2. Install the tentacle service
  3. Register the tentacle with the Octopus server

The reason I download the installer from S3 is that we had an issue recently where a newer tentacle installer was released and was incompatible with our server.  To avoid an issue like that in the future, we just store the tentacle version we’re happy with on S3 and let the EC2 instances grab it from there.

You will need to check the version number and make sure you change $octopusInstallerName in SetVariables.ps1 to match whichever tentacle version you’re using.  The one in the script that you’d replace is “Octopus.Tentacle.3.2.13-x64.msi“.

The tentacle installer MUST be in the “Scripts” subfolder of your S3 bucket.

# This script runs installation of the octopus deploy tentacle
# and registration with the Octopus server located at octopusServerUrl

Write-Output "####################################"
Write-Output "Starting InstallTentacle.ps1"

# Get our variables
. .\SetVariables.ps1

# Store original working path so we can change back to it at the end of the script
$originalWorkingPath = (Get-Item -Path ".\" -Verbose).FullName

# Installation Function
function Install-Tentacle 
{
  param (
     [Parameter(Mandatory=$True)]
     [string]$apiKey,
     [Parameter(Mandatory=$True)]
     [System.Uri]$octopusServerUrl,
     [Parameter(Mandatory=$True)]
     [string]$environment,
     [Parameter(Mandatory=$True)]
     [string]$role
  )

  Write-Output "Beginning Tentacle installation"

  # The tentacle.msi file must be in the current working directory before this script launches
  # ServerSetup.ps1 downloads the version of the tentacle we use to c:\temp\tentacle.msi
  $tentaclePath = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath(".\Tentacle.msi")
  
  # Remove any existing instance of the tentacle - This is only needed when re-running the script after the initial install
  # to prevent issues with certificates changing during installation.  It is expected to fail harmlessly on a new EC2 instance.
  Write-Output "Removing Previous Installation if it exists"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/x $octopusInstallerName /quiet" -Wait -Passthru).ExitCode
  
  # Start the actual installation of the tentacle
  Write-Output "Installing MSI"
  $msiExitCode = (Start-Process -FilePath "msiexec.exe" -ArgumentList "/i $octopusInstallerName /quiet" -Wait -Passthru).ExitCode
  Write-Output "Tentacle MSI installer returned exit code $msiExitCode"
  if ($msiExitCode -ne 0) { throw "Installation aborted" }

  # Open the firewall port
  Write-Output "Open port $tentacleListenPort on Windows Firewall"
  & netsh.exe firewall add portopening TCP $tentacleListenPort "Octopus Tentacle"
  if ($lastExitCode -ne 0) { throw "Installation failed when modifying firewall rules" }
    
  
  Write-Output "Configuring and registering Tentacle"
  
  # Change directory to where tentacle.exe is located
  # tentacle.exe is a tool provided with Octopus deploy to handle registration and other tasks on a tentacle
  cd "${env:ProgramFiles}\Octopus Deploy\Tentacle"

  # Run the required tentacle.exe commands to register with the Octopus server
  & .\tentacle.exe create-instance --instance "Tentacle" --config $tentacleConfigFile --console | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on create-instance" }
  
  & .\tentacle.exe configure --instance "Tentacle" --home $tentacleHomeDirectory --console | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on configure" }
  
  & .\tentacle.exe configure --instance "Tentacle" --app $tentacleAppDirectory --console | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on configure" }
  
  & .\tentacle.exe configure --instance "Tentacle" --port $tentacleListenPort --console | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on configure" }
  
  & .\tentacle.exe new-certificate --instance "Tentacle" --console | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on creating new certificate" }
  
  & .\tentacle.exe configure --instance "Tentacle" --trust $octopusServerThumbprint --console  | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on configure" }
  
  # We may need to register with multiple roles.  To accomplish that, we need to build a variable with a --role for each one.
  # Concatenating this in the Invoke-Expression call does not work.
  $roleExp = ""
  foreach($roleName in $roles.Split("{|}"))
  {
    $roleExp += " --role '$roleName' " 
  }
  
  # Create the register expression (needed because of multiple roles)
  $registerExp = ".\tentacle.exe register-with --instance ""Tentacle"" --server $octopusServerUrl --environment $environment $roleExp --name $instanceId --publicHostName $ipAddress --apiKey $apiKey --comms-style TentaclePassive --force --console | Write-Host"
  
  Write-Output $registerExp # Log the expression for debugging
  Invoke-Expression $registerExp
  if ($lastExitCode -ne 0) 
  { 
    Write-Output "Environment: $environment Role: $role Name: $instanceId PublicHostName: $ipAddress"
    throw "Installation failed on register-with"
  }
 
  & .\tentacle.exe service --instance "Tentacle" --install --start --console | Write-Host
  if ($lastExitCode -ne 0) { throw "Installation failed on service install" }
 
  Write-Output "Tentacle commands complete"
}


# Call the installation function
Install-Tentacle -apikey $octopusApiKey -octopusServerUrl $octopusServerUrl -environment $environment -role $roles

# Change back to original working directory
cd $originalWorkingPath
I won’t cover everything this script is doing, but if you scan through, you’ll see that most of it is just setting up the Octopus Deploy Tentacle. The part I want to point out though is this:
# We may need to register with multiple roles. To accomplish that, we need to build a variable with a --role for each one.
 # Concatenating this in the Invoke-Expression call does not work.
 foreach($roleName in $roles.Split("{|}"))
 {
     $roleExp += " --role '$roleName' " 
 }
 
 # Create the register expression (needed because of multiple roles)
 $registerExp = ".\tentacle.exe register-with --instance ""Tentacle"" --server $octopusServerUrl --environment $environment $roleExp --name $instanceId --publicHostName $ipAddress --apiKey $apiKey --comms-style TentaclePassive --force --console | Write-Host"
 
 Write-Output $registerExp # Log the expression for debugging
 Invoke-Expression $registerExp
This is the section that’s registering our new tentacle with the Octopus Deploy server. Most examples you’ll see only show how to register with a single role. In our case, we want to be able to host multiple roles on a single instance for the Development environment. When we set up roles later, you’ll see that we split them with a pipe, and in this part of the script, we Split them based on the pipe, then add a --role entry for each role we want to be a member of.
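The role-splitting logic is simple enough to sketch on its own; in Python it would look something like this (a hypothetical helper using the same pipe-splitting rule as the script):

```python
def role_arguments(roles):
    # One --role flag per pipe-separated role, matching the foreach
    # loop that builds $roleExp before the register-with call.
    return " ".join("--role '{}'".format(role) for role in roles.split("|") if role)

print(role_arguments("www.jasonweimann.com|api.jasonweimann.com"))
# --role 'www.jasonweimann.com' --role 'api.jasonweimann.com'
```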


 

AddOctopusMachineIdTag.ps1 – Tagging the instance so it can be removed later

It may seem like we’ve already gone through a bunch of scripts, and we have, but that’s just because we want everything to be very modular. You may also be wondering why we’re doing another tag change in a separate file.  It’s only because this must be done after the tentacle registration is complete, and we wanted the Name set before anything else runs.

The “AddOctopusMachineIdTag.ps1” script works by querying the Octopus Deploy server for a machine by name (our Octopus target names are the AWS instanceId). When it finds the correct machine, it places a tag on the EC2 instance named “OctopusMachineId“.  This tag is used later for automatic deregistration.

The Octopus machine Id is what you see in the address bar when you look at one of your deployment targets.  Ex. 192.168.1.1/app#/machines/Machines-167

Before you use this script, be sure to add your Octopus IP & API keys.

# This script is used to set the OctopusMachineId  tags on the EC2 instance
# This tag is used by cloudwatch to auto deregister the tentacle when the instance is unavailable

Write-Output "####################################"
Write-Output "Starting AddOctopusMachineIdTag.ps1"


# Get our variables
. .\SetVariables.ps1

Add-Type -Path "C:\Program Files\Octopus Deploy\Tentacle\Newtonsoft.Json.dll" # Path to Newtonsoft.Json.dll 
Add-Type -Path "C:\Program Files\Octopus Deploy\Tentacle\Octopus.Client.dll" # Path to Octopus.Client.dll 


$endpoint = new-object Octopus.Client.OctopusServerEndpoint $octopusServerUrl,$octopusApiKey 
$repository = new-object Octopus.Client.OctopusRepository $endpoint 
$findmachine = $repository.Machines.FindByName("$instanceId") 
$octopusMachineid = $findmachine.id

# Set the OctopusMachineId tag
New-EC2Tag `
        -Resource $instanceId `
        -Tag @{ Key="OctopusMachineId"; Value=$octopusMachineid } `
        -Region $region

Write-Output "Set OctopusMachineId to $octopusMachineid"
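The lookup the script performs boils down to: fetch the machine list, find the one whose Name equals the EC2 instanceId, and take its Id. Here’s that matching step sketched in Python (the machine dicts are a hypothetical minimal shape of the /api/machines response):

```python
def octopus_machine_id(machines, instance_id):
    # Octopus target names are the EC2 instanceId, so a name match
    # gives us the internal machine Id (e.g. "Machines-167").
    for machine in machines:
        if machine.get("Name") == instance_id:
            return machine.get("Id")
    return None

machines = [{"Id": "Machines-167", "Name": "i-0abc123"}]
print(octopus_machine_id(machines, "i-0abc123"))  # Machines-167
```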

 


 

 

Auto Deploy

This is the final script of the bootstrapping process.  This script is where our new EC2 instance requests the latest release from your Octopus Deploy server. It works by using the Octopus server’s REST API to determine which release(s) should be on the instance based on its Environment and Roles.  It then requests a deployment to itself of those releases.

# This script tells the Octopus server to deploy any releases this 
# instance should be running.  It looks at the Environment and roles
# to determine what release # should be deployed, then sends commands 
# to the server to begin that deployment.
# This is run only on the initial startup, and is launched by ServerSetup.ps1

Write-Output "####################################"
Write-Output "Starting AutoDeploy.ps1"

Write-Output "Starting AutoDeploy"

# Get our variables
. .\SetVariables.ps1

$Header =  @{ "X-Octopus-ApiKey" = $octopusApiKey } # This header is used in all the octopus rest api calls

# Get the project names and replace any periods with - for the rest calls (periods in a URI are of course problematic)
$projectNames  = $roles.replace(".", "-")
 
# Function to request the release for a project
# This is called once for each project/role this EC2 instance is a member of
function GetReleaseForProject
{
    param (
		[Parameter(Mandatory=$True)]
		[string]$projectName
	)
	
	Write-Output "Getting Build for $projectName"
  
	# Get our Octopus Machine Id
	# This is needed for our rest calls and is the internal machine ID Octopus uses
	# We get all machines here, then find the one whose name matches our instanceId, then select the Id to use later
	$instanceId = (Invoke-RestMethod -Method Get -Uri http://169.254.169.254/latest/meta-data/instance-id)
	$global:allMachines = Invoke-WebRequest -UseBasicParsing $octopusServerUrl/api/machines -header $Header | ConvertFrom-Json
    Write-Output "Getting MachineID for InstanceID: $instanceId"
    Write-Output "allMachines: $($allMachines.Count)"
    $OctopusMachineId =  $allMachines.Items.where({$_.Name -eq "$instanceId"}).Id
    Write-Output "OctopusMachineId: $OctopusMachineId"	
	
	# Getting Environment and Project By Name - NOTE: This is not the same as the Environment Tag
	$fullUri = "$octopusServerUrl/api/projects/$ProjectName"
	$Project = Invoke-WebRequest -UseBasicParsing  -Uri $fullUri -Headers $Header| ConvertFrom-Json
	$Environments = Invoke-WebRequest -UseBasicParsing  -Uri $octopusServerUrl/api/Environments/all -Headers $Header| ConvertFrom-Json
    Write-Output "Environments: $Environments"
	$OctopusEnvironment = $Environments | ?{$_.name -eq $environment}
	
	# Finally set the environment and project id strings
	$environmentId = $OctopusEnvironment.Id
	$projectId = $Project.Id
	
	# Get the most recent release that matches our environmentId & projectId
	$fullUri = "$octopusServerUrl/api/deployments?Environments=$environmentId&Projects=$projectId&SpecificMachineIds=$OctopusMachineId&Take=1"
	$currentRelease =  Invoke-WebRequest -UseBasicParsing  -Uri "$fullUri"  -Headers $Header  | ConvertFrom-Json

	# Set our machine name in an array of MachineNames to be converted into a JSON array for the rest call
	[string[]] $MachineNames = $OctopusMachineId
	
	# Generate the JSON for our rest call by creating an object in powershell and using ConvertTo-Json
	$DeploymentBody = @{ 
				ReleaseID = $currentRelease.Items[0].ReleaseID
				EnvironmentID = $OctopusEnvironment.id
				SpecificMachineIds = $MachineNames
			  } | ConvertTo-Json
			  
	$fullUri =  "$octopusServerUrl/api/deployments"

	Write-Output "Full Uri: $fullUri"
	Write-Output "DeploymentBody: $DeploymentBody"
    Write-Output "Headers: $Header"

	# Make the rest call to start a deployment
	$deploymentCall = Invoke-WebRequest -UseBasicParsing  -Uri $fullUri  -Method Post -Headers $Header -Body $DeploymentBody
}


# Split all of our project names into an array to loop over
Write-Output "Project Names: $projectNames"
$projectsSplit = $projectNames.split("|")
Write-Output "Split Projects $projectsSplit"

# Call GetReleaseForProject for each role/project this instance is a member of
foreach($projectName in $projectsSplit)
{
	GetReleaseForProject $projectName
}
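The JSON body that GetReleaseForProject posts to /api/deployments can be sketched like this (the release, environment, and machine ids are made-up examples):

```python
import json

def deployment_body(release_id, environment_id, machine_id):
    # Mirrors the ConvertTo-Json block above: ask Octopus to deploy
    # one release to this specific machine only.
    return json.dumps({
        "ReleaseID": release_id,
        "EnvironmentID": environment_id,
        "SpecificMachineIds": [machine_id],
    })

print(deployment_body("Releases-42", "Environments-1", "Machines-167"))
```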

 

The S3 Bucket

Next, we need to set up your S3 bucket. If you don’t feel comfortable with S3 via command line, I’d recommend you use a tool like S3 Browser for the next part.

Your S3 bucket should look like this.

S3 Bucket

In the screenshot, the “serverSetup.ps1” script is placed in the root of the bucket “deploy”.

There are 3 subfolders that each serve a unique purpose.  If you read the scripts above, you probably already know what two of them are used for.  But just in-case it’s not clear, here’s a quick description.

  • Certificates - Holds SSL certificates you want installed on your machines (this assumes you have certificates you want installed)
  • Scripts - All of the scripts from above that do the real work are placed here (in the “Scripts” folder of the download).  The Tentacle installer and AWSSDK.dll files are also placed here.
  • ServerSetupLogs - Contains subfolders for each EC2 instance with any logs you upload from it. (The serverSetup.ps1 script will upload the full log here automatically at the end)

 

Create Your Folders

Let’s create the folders from the screenshot in your own bucket now.

  1. Upload the “serverSetup.ps1” script to the root of your S3 Bucket.
  2. Upload the Scripts folder contents, including your selected Octopus Tentacle installer and the AWSSDK.dll file.
  3. Create a Certificates folder in the root of your S3 Bucket.
    1. Create a folder for each role you have.  ex. www.jasonweimann.com
    2. Place all certificates you want installed on instances tagged with that role into the sub-folder for their role. (ex. if I have certificates for jasonweimann.com, they’d be placed in “Certificates\www.jasonweimann.com\mycertificatefile.pfx“)
  4. Create an empty folder named “ServerSetupLogs”

 

You can download the scripts and README files as a zip below

Download “Octopus Deploy Automated Registration” (octopus-deploy-automated-registration.zip, 4 MB)


Be sure to replace the required strings in each script with your own Octopus Server IP, S3 Bucket Name, API key.

 

Octopus Setup

This part is just meant to show how our environments are set up and how to use the roles in the scripts above.  You don’t need to match our naming scheme, but you do need to understand how the roles are defined.

What are these roles???

Roles are just projects in Octopus.  Each EC2 instance can have one or more roles (host one or more websites/projects).

With the default naming scheme in the scripts, your server’s Name tag will be set to {EnvironmentName} – {RoleName}

For the example projects below, if you ran each role on its own EC2 instance in the Production environment, their names would be

  • Production – api.jasonweimann.com
  • Production – www.jasonweimann.com

If you ran both sites on the same EC2 instance in Development, it would be named

  • Development – www.jasonweimann.com|api.jasonweimann.com
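The naming rule is easy to express directly. A small Python sketch (note: the stock SetInstanceNameTag.ps1 joins with a plain space, so adjust the separator there if you prefer the dash shown above):

```python
def name_tag(environment, roles):
    # Multiple roles collapse into one pipe-separated string,
    # then get prefixed with the environment name.
    return "{} - {}".format(environment, "|".join(roles))

print(name_tag("Development", ["www.jasonweimann.com", "api.jasonweimann.com"]))
# Development - www.jasonweimann.com|api.jasonweimann.com
```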

 

 

Starting your EC2 instance

Because we’ll be using an Auto Scaling Group, the first thing we need to do is create a Launch Configuration.  If you already know how to do this, then just pay attention for the parts in red.  For everyone else, just follow step by step.

 

Before we can create the instance, you need to have 2 things setup.

  1. An IAM role with access to call into AWS services – In the screenshots below, mine is named “PowerUser

  2. A security group with the Octopus port 10933 open from your Octopus Server IP – In the screenshots below, mine is named “Octopus Deploy

 

The Launch Configuration

Open the Launch Configurations page from the EC2 menu

Select Create launch configuration

Select your AMI

Click Next: Configure Details

Choose a name for the Launch Configuration

Select the IAM role you created above (it needs access to call AWS services)

For “User data“, select “As file”.

Check the “Monitoring” checkbox

Choose the “bootstrap.ps1” file

Click next.

In Storage, you’ll want to create a D: drive for your deployments

Don’t forget to check Delete on Termination unless you want your HDD images staying around even after a server has been terminated.

 

Now, select your Security Group that gives the Octopus server access to connect to the Tentacle (created above)

 

Double check your settings, then click “Create launch configuration”

 

 

Creating the Auto Scaling Group

In the first step, you don’t need to do anything special.

Select a name, subnet, and group size.

Click “Configure Notifications”

 

Tags – The really important part

This is how you will determine what environment and roles the instance(s) in your Auto Scaling Group will be in.

Do this by setting 2 tags: Environment & Roles.

You can have other tags if you need them, but Environment & Roles must be there.

  • Environment - This must match the name of your Octopus Environment.  If you name it Development, use Development here.  If your Octopus environment name doesn’t match what’s in here, the deployment won’t work.
  • Roles - This must match your Octopus Project Name.  If you want multiple roles, split them with a pipe |  Do not add extra spaces.  (ex. www.jasonweimann.com|api.jasonweimann.com)

Environment and Roles set

 

Multiple roles selected are split by a pipe |

 

Click “Create Auto Scaling group”

Looking at Errors / Issues

After a few minutes, your EC2 instance should start up.  If it doesn’t automatically register, get renamed, and take a deployment from your server, don’t worry.

Remember, one of the scripts we added above copies its logs to your S3 bucket.

Inspect the S3 bucket’s subfolder “ServerSetupLogs“.

You’ll see a sub-folder for each EC2 instance that’s come up with the deployment scripts running.

Look into those logs and search for the error/issue…

If you can’t find your log there, it should also be available on the root of the D: drive.  Just remote connect to the instance and look at the log there.

If you’re unsure what happened and need help, feel free to comment below.

If your server never got renamed, and the scripts didn’t even get executed, it may be from using a custom AMI.

If you want to use a custom AMI, make sure you have checked the Execute User Data option in the EC2 Config Services application before creating the AMI.  (you may need to re-save your AMI with this option checked)

 

Handling Deregistration

The last thing you need to do is set up deregistration.

The deregistration process works by monitoring CloudWatch events and triggering a Lambda function.

First, we need to create the Lambda

Open the Lambda page and hit Create a Lambda Function.

2016-02-08 17_40_45-AWS Lambda

On the “Select blueprint” page, just click “Skip”

2016-02-08 17_41_14-AWS Lambda

Now, give your lambda a name and description.

Select Node.js for the Runtime

Select Edit code inline for Code entry type

2016-02-08 17_42_33-AWS Lambda

Paste the code below into the Code area

IMPORTANT: You need to put in your Octopus Server IP & API Key in the script before pasting it.

var aws = require("aws-sdk");


exports.handler = function(event, context) {
    // Only continue for terminated instances; succeed immediately otherwise
    if (event.detail.state != "terminated") {
      context.succeed(event);
      return;
    }

    var http = require('http');
  
    var instanceId = event.detail["instance-id"]; // [""] required because of the hyphen
    var currentRegion = event.region;

    console.log('EC2InstanceId =', instanceId);

    var ec2 = new aws.EC2({region: currentRegion}); //event.ResourceProperties.Region});

    var params = {
        DryRun: false,
        Filters: [
          {
            Name: 'resource-id',
            Values: [
              instanceId,
            ]
          },
           {
            Name: 'key',
            Values: [
              'OctopusMachineId',
            ]
          },
        ],
        MaxResults: 5
    };
    
    console.log("Getting MachineName for InstanceID: " + instanceId);
    
    ec2.describeTags(params, function(err, data) {
        if (err) 
        {
            console.log(err, err.stack); // an error occurred
            context.succeed(err);
        }
        else 
        {
            console.log(data);           // successful response
            var octopusMachineId = data.Tags[0].Value;
            
            
            var fullPath = '/api/machines/' +  octopusMachineId + '?apiKey=YOUR_OCTOPUS_API_KEY_HERE'; // API-XXXXXXXXXXXXXXXXXXXXXXXXXX
    
            var options = {
              host: 'YOUR_OCTOPUS_SERVER_IP_HERE',
              port: 81,
              path: fullPath,
              method: 'Delete'
            };
            
            var callback = function(response) {
              var str = '';
            
              response.on('data', function (chunk) {
                str += chunk;
              });
            
              response.on('end', function () {
                console.log(str);
                context.succeed(str);
              });
            }
            
            http.request(options, callback).end();
        }
    });
};

 

2016-02-08 17_43_59-AWS Lambda

Leave the Handler with the default value.

Set the role to one that has access to S3 (if you don’t have one, you’ll need to make one now)

You can leave the memory and timeout values at the defaults.

Continue to the Review page

Review your lambda then click “Create function”

 

CloudWatch Events

The last thing we need to do is set up a CloudWatch event for EC2 termination.

This event will trigger the Lambda that does deregistration of the tentacle.

Open the CloudWatch page.

2016-02-08 17_45_14-AWS Management Console

Under Events, select Rules and click “Create rule”

2016-02-08 17_45_43-CloudWatch Management Console

 

Select “EC2 instance state change notification” for the event source.

2016-02-08 17_46_00-CloudWatch Management Console

 

Select “Specific state(s)” and choose “Terminated“.

Add a target and set it to the Lambda you just created.

It should look like this when you’re done.

2016-02-09 08_29_36-CloudWatch Management Console

 

Continue to the next page.

Give the Rule a name and make sure Enabled is checked.

2016-02-09 08_30_01-CloudWatch Management Console

Click “Create Rule” and you’re done.

You’re Done!

With this rule created, any EC2 instance that terminates should automatically de-register from your octopus server.
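If you'd rather define the rule without clicking through the console, the event pattern behind the rule described above should look roughly like this (a sketch of the standard EC2 state-change pattern; verify it against the pattern your console generates):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["terminated"]
  }
}
```

With this pattern on the rule and your Lambda set as the target, the behavior is the same as the console walkthrough above.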

 

Questions?

This is a big subject, with many parts.  There’s a lot to learn, but if you download the scripts and make the few configuration changes, you should be able to get it working.

If you have Questions about anything here, please post them in the comments and I’ll do my best to assist.

You can download the scripts and README files as a zip below

Download "Octopus Deploy Automated Registration" octopus-deploy-automated-registration.zip – 4 MB

 

 


Dependency Injection and Unit Testing Unity

This post will be the first in what will likely be a long series. Dependency Injection and Unit Testing are generally considered staples in modern software development. Game development, for a variety of reasons, is one of the few areas where this isn't true. While they're starting to become more popular, there are quite a few things holding game programmers back from embracing these valuable paradigms.

I’d like to open by giving a quick description of the benefits you’ll gain by embracing dependency injection.  Before you jump away thinking “I don’t need this” or “this will be too complicated”, just take a look at what you have to gain.  And let me try to quell any fears: this is something you can definitely take advantage of. It won’t be hard, it’ll save you time, and your code will be improved.

 

Benefits – What do I have to gain?

Loose Coupling

Dependency Injection by its nature encourages very loose coupling.  This means your objects and classes don’t have tight dependencies on other classes.  Loose coupling leads to code that is much less rigid and brittle.

Re-usability Across Projects

Loose coupling also makes your classes much easier to use across projects.  When your code has dependencies on many other classes or static global variables, it’s much harder to re-use that code in other projects.  Once you get into the habit of good separation and dependency injection, you’ll find that reuse becomes a near trivial task.

Encourages Coding to Interfaces

While you can certainly code to interfaces without Dependency Injection, using it will naturally encourage this behavior.  The benefits of this will become a bit more obvious in the example below, but it essentially allows you to swap behavior and systems in a clean way.

Cleaner Code

When you start injecting your dependencies, you quickly end up with less “spaghetti-code”.  It becomes much clearer what a class’s job is, and how the class interacts with the rest of your project.  You’ll find that you no longer have to worry about a minor change in one piece of code having an unexpected consequence in something you thought was completely unrelated.

As an example, once while working on a major AAA MMO game, I saw a bug fix to a specific class ability completely break the entire crafting system.  This is exactly the kind of thing we want to avoid.

Unit Testing

This is one of the most commonly stated benefits to Dependency Injection.  While it’s a huge benefit, you can see above that it’s not the only one.  Even if you don’t plan to unit test initially (though you should),  don’t rule out dependency injection.

If you haven’t experienced a well unit tested project before, let me just say that it’s career changing.  A well tested project is less likely to slip on deadlines, ship with bugs, or fail completely.  When your project is under test, there’s no fear when you want to make a change, because you know immediately when something is broken.  You no longer need to run through your entire game loop to verify that your changes work, and more importantly, that you haven’t broken other functionality.

 

 

If it’s so good, why isn’t this commonplace?

Now, I’d like to cover a few of the reasons the game industry has been slow to adopt Dependency Injection & Unit Testing.

C++

While this doesn’t apply to Unity specifically, the game industry as a whole has primarily relied on C++. There were of course studios that developed in other languages, but to date, outside of Unity, the major engines are all C++ based. C++ has not had nearly the same movement towards Dependency Injection or Unit Testing as other common enterprise languages (Java, C#, JS, Ruby, etc.).  This is changing though, and with the proliferation of unit testing and dependency injection in C#, it’s the perfect time to jump in with your games.

Performance

Dependency Injection adds overhead to your game. In the past, that overhead could be too much for the hardware to handle.  Given the option between 60fps and dependency injection, 60fps is almost always the correct answer.  Now though, hardware is really fast, and 99% of games can easily support Injection without giving up any performance.

Mindset

While there are countless other “reasons” you could come across from game programmers, the key one is just an issue of mindset.  Too many people have been programming without Injection and Unit testing and just haven’t been exposed to the benefits.  They think “that’s for enterprise software”, “that’s something web developers do”, or “that doesn’t work for games”.  My goal here is to convince you that it’s worth trying.  I promise if you dig in and try dependency injection and unit testing, you’ll quickly start to see the benefits, and you’ll want to spread the word as well.

 

Dependency Injection Frameworks

When you’re searching, you may also see the DI frameworks referred to as IOC containers.

You may be wondering how you get started with Dependency Injection in Unity.  It’s not something built into the Unity engine, but there are a variety of options to choose from on GitHub and in the Asset Store.

Personally, I’ve been using Zenject, and my samples will be done using it.  But that doesn’t mean you shouldn’t look into the other options available.

Zenject

I think the description provided on the Zenject asset page does a better job describing it than I could, so here it is:

Zenject is a lightweight dependency injection framework built specifically to target Unity. It can be used to turn the code base of your Unity application into a collection of loosely-coupled parts with highly segmented responsibilities. Zenject can then glue the parts together in many different configurations to allow you to easily write, re-use, refactor and test your code in a scalable and extremely flexible way.

While I hope that after reading this series you have a good idea why you should use a dependency injection framework, and how to get started, I must highly recommend you take a look at the documentation provided on the Zenject GitHub page.

Constructor Injection

Most Dependency Injection is done via what’s called Constructor Injection.  This means that anything your class relies on outside itself is passed in via the constructor.

Example

I want to give an example of how you’d use Constructor Injection in a more general sense before diving too deep into the differences with Unity.  What I’m presenting here is a simplified version of a game state system I’m using in a Unity project currently.

In my project, I have a variety of game state classes.  The different “GameStates” handle how the game functions at different stages throughout the game cycle.  There are game states for things like generating terrain, lost/gameover, building, attacking, and in this example, ready to start.

In the game state “ready to start“, all we want to do is wait for the user to start the game.  The game state doesn’t care how the user starts the game, only that they do.  The simplest way to implement this would be to check on every update and see if the user pressed the “Fire1” button.

It may look something like this:

using UnityEngine;

public class GameStateReadyToStart : MonoBehaviour
{
    void Update()
	{
		if (Input.GetButton("Fire1"))
			SwitchToNextGameState();
	}

	private void SwitchToNextGameState()
	{
		// Logic to go to next gamestate here
	}
}

This will work, it’s simple, and quick to implement, but there are some issues with it.

 

Problems

  • Our gamestate is a monobehaviour so we can read the input during the update.
  • The Input logic is inside a class whose job isn’t Input.  The gamestate should handle game state/flow, not input.
  • Changing our input logic requires us to touch the gamestate class.
  • We’ll have to add input logic to every other gamestate.
  • Input can’t be easily varied across different devices.  If we want a touch button on iPad and the A button on an xbox, we have to make bigger changes to our gamestate class.
  • We can’t write unit tests against our gamestate because we can’t trigger input button presses.

You may be thinking that’s a long list, but I guarantee there are more problems than just those.

Why not just use a Singleton?

The first answer you may come across to some of these problems is the Singleton pattern.
While it’s very popular, simple, and resolves half of our issues, it doesn’t fix the rest.
Because of that, outside the game development world, the singleton pattern is generally considered bad practice and is often referred to as an anti-pattern.

 

Let’s try some separation

Now, let me show you an easy way to resolve all of the problems above.

public class GameStateReadyToStart
{
    public GameStateReadyToStart(IHandleInput inputHandler)
	{
		inputHandler.OnContinue += () =>
		{
			SwitchToNextGameState();
		};
	}

	private void SwitchToNextGameState()
	{
		// Logic to go to next gamestate here
	}
}

Here, you can see we’ve moved input handling out of the “gamestate” object into its own “inputHandler” class.  Instead of reading input in an update loop, we simply wait for the InputHandler to tell us when the user is ready to continue.  The gamestate doesn’t care how the user tells us to continue.  All the gamestate cares about is that the user told it to switch to the next state.  It’s now properly separated and doing only what it should do, nothing more.

The “IHandleInput” interface for this example is very simple:

using System;

public interface IHandleInput
{
    Action OnContinue { get; set; }
}

Now, if we want to switch input types across devices, we simply write different implementations of the “IHandleInput” interface.
We could for example have implementations like:

  • TouchInputHandler – Continues when the user presses anything bound to “Fire1”
  • GUIBasedInputHandler – Continues when the user clicks a GUI button
  • VoiceInputHandler – Continues when the user says a phrase
  • NetworkInputHandler – Continues when the user presses something on another device (think controlling a PC game with a phone)
  • TestInputHandler – Continues via a unit test designed to verify state switching doesn’t break
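As a rough sketch, the first implementation in that list might look like this (illustrative only — the class and its wiring are my reconstruction, not the exact code from the project):

```csharp
using System;
using UnityEngine;

// Sketch of an IHandleInput implementation that raises OnContinue
// when the user presses anything bound to "Fire1".
public class TouchInputHandler : MonoBehaviour, IHandleInput
{
    public Action OnContinue { get; set; }

    private void Update()
    {
        // Fire the continue event once per button press.
        if (Input.GetButtonDown("Fire1") && OnContinue != null)
            OnContinue();
    }
}
```

The important part is that all knowledge of *how* input happens lives here; the game state only sees the `IHandleInput` interface.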

 

Time to Inject!

Now without going too much deeper into my example, you may be thinking “that’s nice, but now I have to pass in an input handler and manage that”.

This is where dependency injection comes into play.  Instead of creating your handler and passing it into the constructor, what we’ll do is Register the handler(s) we want with our Dependency Injection Container.

To do that, we need to create a new class that derives from the Zenject class “MonoInstaller”.

using System;
using UnityEngine;
using Zenject;
using Zenject.Commands;

public class TouchGameInstaller : MonoInstaller
{
    public override void InstallBindings()
	{
		Container.Bind<IHandleInput>().ToTransient<TouchInputHandler>();
		Container.Bind<GameStateReadyToStart>().ToTransient<GameStateReadyToStart>();

		Container.Resolve<GameStateReadyToStart>();
	}
}

In the TouchGameInstaller class, we override InstallBindings and register our 2 classes.

The call to Container.Resolve at the end simply asks the container for an instance of the game state.

This is a very simplified version of the Installer with a single game state, later parts of this series will show how we handle multiple game states.

What we’ve done here though is avoid having to manage the life and dependencies of our “gamestate” class.
The Dependency Injection Container will inspect our classes and realize that the “GameStateReadyToStart” class has a dependency on an “IHandleInput“, because the constructor has it as a parameter.
It will then look at its bindings and find that “IHandleInput” is bound to “TouchInputHandler“, so it will instantiate a “TouchInputHandler” and pass it into our “gamestate” automatically.

Now, if we want to switch our implementations on different devices, we simply swap out our “TouchGameInstaller” for a new installer for the device, and make no changes to our GameState classes or any existing InputHandler classes.  We no longer risk breaking anything existing when we want to add a new platform.  And we can now hook up our GameState to unit tests by using an Installer that registers a TestInputHandler.
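To make the testing point concrete, here is one hedged sketch of what a test double and test could look like. The `TestInputHandler` class and its `TriggerContinue` method are hypothetical names I'm introducing for illustration; the assertion is left as a placeholder because `SwitchToNextGameState` is private in the example above, so what you assert on depends on what your state machine exposes:

```csharp
using System;
using NUnit.Framework;

// Hypothetical test double for IHandleInput. No Unity APIs involved,
// so the game state can be exercised in a plain unit test.
public class TestInputHandler : IHandleInput
{
    public Action OnContinue { get; set; }

    // Simulates the user choosing to continue.
    public void TriggerContinue()
    {
        if (OnContinue != null)
            OnContinue();
    }
}

[TestFixture]
public class GameStateReadyToStartTests
{
    [Test]
    public void ContinueInput_SwitchesToNextState()
    {
        var input = new TestInputHandler();
        var state = new GameStateReadyToStart(input);

        input.TriggerContinue();

        // Assert against whatever your state machine exposes, e.g.:
        // Assert.AreEqual(expectedNextState, stateMachine.Current);
    }
}
```

Notice the test never touches `UnityEngine.Input` at all — that's the payoff of injecting the input handler.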

 

You may realize that I haven’t injected any gameobjects yet, and could be wondering how this works with monobehaviours, which can’t have constructors.

In the next part of this series, I’ll explain how to hook up your gameobjects and monobehaviors with the dependency injection framework and continue the example showing how the entire thing interacts.

 

 


Editing Unity variables – Encapsulation & [SerializeField]

Editing Unity script variables in the Inspector – The case for Encapsulation & [SerializeField]

If you’ve read many Unity tutorials, you may already know that it’s very easy to edit your script fields in the Unity inspector.

Most Unity tutorials (including on the official page) tell you that if you have a MonoBehaviour attached to a GameObject, any public field can be edited.

While that does technically work, I want to explain why it’s not the best way to setup your scripts, and offer an alternative that I think will help you in the future.

In this article, you’ll learn how to use proper Encapsulation while still taking full advantage of the Unity Inspector.

Take a look at this “Car.cs” script.

 

using UnityEngine;

public class Car : MonoBehaviour
{
    public Tire FrontTires;
	public Tire RearTires;

	public Tire FrontRightTire;
	public Tire FrontLeftTire;
	public Tire RearRightTire;
	public Tire RearLeftTire;

	private void Start()
	{
		// Instantiate Tires
		FrontRightTire = Instantiate(FrontTires);
		FrontLeftTire = Instantiate(FrontTires);

		RearRightTire = Instantiate(RearTires);
		RearLeftTire = Instantiate(RearTires);
	}
}

 

If you look at the Start method, you can tell that the fields “FrontTires” & “RearTires” are referring to prefabs that will be used to instantiate the 4 tires of the car.

Once we’ve assigned some Tire prefabs, it looks like this in the Inspector.

In play mode, the Start method will instantiate the 4 actual tires on our car and it’ll look like this.


 

Problem #1 – Confusion

The first thing you might realize is that there could be some confusion about which fields to assign the prefab to.
You’ve just seen the code, or in your own projects, perhaps you’ve just written it, and it may seem like a non-issue.

But if your project ever grows, it’s likely others will need to figure out the difference, and to do so, they’ll need to look at the code too.
If your project lasts more than a few days/weeks, you also may forget and have to look back through the code.

Now you could solve this with special naming.  I’ve seen plenty of projects where the “Prefab” fields had a prefix or suffix, like “Front Tires Prefab”.

That can also work, but then you still have 4 extra fields in there that you have to read every time.  And remember, this is a simple example, your real classes could have dozens of these fields.

Fix #1 – Let’s Use Properties for anything public

To resolve this, let’s change the entries we don’t want to be editable into Properties.

Microsoft recommends you make your fields all private and use properties for anything that is public.  There are plenty of benefits not described in this article, so feel free to read the details in Microsoft’s article Fields (C# Programming Guide)

Now let’s change the “Car.cs” script to match this.

using UnityEngine;

public class Car : MonoBehaviour
{
    public Tire FrontTires;
	public Tire RearTires;

	public Tire FrontRightTire { get; set; }
	public Tire FrontLeftTire { get; set; }
	public Tire RearRightTire { get; set; }
	public Tire RearLeftTire { get; set; }

	private void Start()
	{
		// Instantiate Tires
		FrontRightTire = Instantiate(FrontTires);
		FrontLeftTire = Instantiate(FrontTires);

		RearRightTire = Instantiate(RearTires);
		RearLeftTire = Instantiate(RearTires);
	}
}

Here’s what it looks like in the Inspector

With that change, you may be thinking we’ve resolved the issue and everything is good now.
While it’s true that confusion in the editor is all cleared up, we still have one more problem to address.
That problem is lack of Encapsulation.

 


Problem #2 – No Encapsulation

“In general, encapsulation is one of the four fundamentals of OOP (object-oriented programming). Encapsulation refers to the bundling of data with the methods that operate on that data.”

There are countless articles and books available describing the benefits of encapsulation.

The key thing to know is that properly encapsulated classes only expose what’s needed to make them operate properly.

That means we don’t expose every property, field, or method as public.
Instead, we only expose the specific ones we want to be accessed by other classes, and we try to keep them to the bare minimum required.

Why?

We do this so that our classes/objects are easy to interact with.  We want to minimize confusion and eliminate the ability to use the classes in an improper way.

You may be wondering why you should care if things are public.  After all, public things are easy to get to, and you know what you want to get to and will ignore the rest.
But remember, current you will not be the only one working on your classes.

If your project lasts beyond a weekend, you need to think about:

  • other people – make it hard for them to misuse your classes.
  • and just as important, there’s future you.

Unless you have a perfect memory, good coding practices will help you in the future when you’re interacting with classes you wrote weeks or months ago.

 


Problem #2 – The Example

Let’s look at this “Wall” script now to get an idea of why proper encapsulation is so important.

using UnityEngine;

public class Wall : MonoBehaviour
{
    public void Update()
	{
		if (Input.GetButtonDown("Fire1"))
			DamageCar(FindObjectOfType<Car>());
	}
	public void DamageCar(Car car)
	{
		car.FrontTires.Tread -= 1;
		car.RearTires.Tread -= 1;
	}
}

The “DamageCar” method is supposed to damage all of the wheels on the car by reducing their Tread value by 1.

Do you see what’s wrong here?

If we look back to the “Car.cs” script, “FrontTires” & “RearTires” are actually the prefabs, not the instantiated tires the car should be using.

In this case, if we execute the method, we’re not only failing to properly damage our tires, we’re actually modifying the prefab values.

This is an easy mistake to make, because the prefab fields we shouldn’t be interacting with aren’t properly encapsulated.

Problem #2 – How do we fix it?

If we make the “FrontTires” & “RearTires” fields private, we won’t be able to edit them in the inspector… and we want to edit them in the inspector.

Luckily, Unity developers knew this would be a need and gave us the ability to flag our private fields as editable in the inspector.

[SerializeField]

Adding the [SerializeField] attribute before private fields makes them appear in the Inspector the same way a public field does, but allows us to keep the fields properly encapsulated.

Take a look at the updated car script

using UnityEngine;

public class Car : MonoBehaviour
{
    [SerializeField]
	private Tire _frontTires;
	[SerializeField]
	private Tire _rearTires;

	public Tire FrontRightTire { get; set; }
	public Tire FrontLeftTire { get; set; }
	public Tire RearRightTire { get; set; }
	public Tire RearLeftTire { get; set; }

	private void Start()
	{
		// Instantiate Tires
		FrontRightTire = Instantiate(_frontTires);
		FrontLeftTire = Instantiate(_frontTires);

		RearRightTire = Instantiate(_rearTires);
		RearLeftTire = Instantiate(_rearTires);
	}
}

Here you see we no longer expose the “FrontTires” and “RearTires” fields outside of our class (by marking them private).

In the inspector, we still see them available to be assigned to.

Now our problems are solved and our class is properly encapsulated!
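As a follow-up, the “Wall” from earlier still needs a legitimate way to damage the car. One hedged way to finish the picture (a sketch, assuming `Tire` exposes a public `Tread` member as the Wall example did) is to give Car a single public method, so callers never touch tire fields directly:

```csharp
// Sketch: a public method on Car so callers damage the instantiated
// tires instead of reaching into prefab fields. Assumes Tire has a
// public Tread member, as in the Wall example above.
public void DamageAllTires(int amount)
{
    FrontRightTire.Tread -= amount;
    FrontLeftTire.Tread -= amount;
    RearRightTire.Tread -= amount;
    RearLeftTire.Tread -= amount;
}
```

Wall’s `DamageCar` method then shrinks to `car.DamageAllTires(1);`, and it’s no longer possible for it to accidentally modify the prefabs.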

You may also notice that the casing on them has been changed.  While this is not required to properly encapsulate your objects, it is very common practice in the C# community to denote private fields with camel case prefixed by an underscore.  If you don’t like the underscore, consider at least using camel casing for your private fields and reserve pascal casing for public properties.

Video Version

Project Download

Want the source for this project to try it out yourself? Here it is: https://unity3dcollege.blob.core.windows.net/site/Downloads/Encapsulation%20SerializeField.zip
