I am Joshua Poehls.

Go 101: Methods on Pointers vs. Values

Methods can be declared on both pointers and values. The difference is subtle but important.

type Person struct {
     age int
}

// Method's receiver is the value, `Person`.
func (p Person) Age() int {
     return p.age
}

// Method's receiver is a pointer, `*Person`.
func (p *Person) SetAge(age int) {
     p.age = age
}

This is how you define getter and setter functions in Go. Notice that we defined the Age() function on the value but SetAge() on the pointer (i.e. *Person). This is important.

In reality, you’d only define getter and setter functions like this if you needed to implement additional logic. In an example this simple you’d just make the Age field public.

Go always passes by value. Function parameters are always copied rather than passed by reference.

Even pointers are technically passed by value. The memory address is copied, the value it points to is not copied.
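You can see this in isolation with a minimal sketch (the setTo42 function is hypothetical, purely for illustration): the address is copied into the function, but both copies point at the same int.

```go
package main

import "fmt"

// setTo42 receives a copy of the pointer. Both copies hold the
// same address, so writing through it is visible to the caller.
func setTo42(p *int) {
	*p = 42
}

func main() {
	n := 7
	setTo42(&n) // the address is copied, not the int it points to
	fmt.Println(n) // prints 42
}
```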

Here is the wrong way to define SetAge. Let’s see what happens.

func (p Person) SetAge(age int) {
     p.age = age
}

p := Person{}
p.SetAge(10)
fmt.Printf("Age: %v", p.Age()) // Age: 0


Notice that the output is 0 instead of 10? This is ‘pass by value’ in action.

Calling p.SetAge(10) passes a copy of p to the SetAge function. SetAge sets the age property on the copy of p that it received which is discarded after the function returns.

Now let’s do it the right way.

func (p *Person) SetAge(age int) {
     p.age = age
}

p := Person{}
p.SetAge(10)
fmt.Printf("Age: %v", p.Age()) // Age: 10


My rule of thumb is this: declare the method on the pointer unless your type is one that you never take a pointer to.

Two reasons:

  1. Performance. Calling a method on a pointer will almost always be faster than copying the value. There may be cases where the copy is faster, but those are edge cases.
  2. Consistency. It is common for at least one of your methods to need a pointer receiver, and if any of the type’s methods have a pointer receiver then they all should. This recommendation comes straight from the FAQ.

Read the FAQ “Should I define methods on values or pointers?” for more insight.
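One detail that makes pointer receivers convenient: Go automatically takes the address for you when you call a pointer-receiver method on an addressable value, which is why p.SetAge(10) works without writing (&p).SetAge(10). A small sketch with a hypothetical Counter type that follows the consistency rule (all methods on the pointer):

```go
package main

import "fmt"

// Counter is a hypothetical type, just for illustration.
type Counter struct{ n int }

// Increment needs a pointer receiver: it must modify the caller's Counter.
func (c *Counter) Increment() { c.n++ }

// Value could use a value receiver, but stays on the pointer for consistency.
func (c *Counter) Value() int { return c.n }

func main() {
	c := Counter{}
	c.Increment() // shorthand for (&c).Increment(); works because c is addressable
	fmt.Println(c.Value()) // prints 1
}
```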

Update: Thanks to the fine folks on reddit for suggesting some improvements.
Join the discussion on reddit!

Update 2: Here are even more rules of thumb to help you choose whether to use a value or pointer receiver.

⦿

Go 101: Constructors and Overloads

Go doesn’t have constructors in the traditional sense. The convention is to make the zero value useful whenever possible.

type Person struct {
     Age int
}

// These are equivalent.
// `p1` and `p2` are initialized to the zero value of Person.
// Neither of these are nil.
var p1 Person // type Person
p2 := Person{} // type Person

// You could also use `new` to allocate, which returns a pointer.
p3 := new(Person) // type *Person

It is most common to use the struct initializer. e.g. p := Person{} or p := &Person{} if you need the pointer.

Sometimes you want special initialization logic. If your type is named Person then the convention would be to create a function named NewPerson that returns a pointer to an initialized Person.

func NewPerson(age int) *Person {
     p := Person{age}
     return &p
}

myPerson := NewPerson(10) // type *Person

Multiple constructors can be implemented by having multiple initializer functions. Go doesn’t support function overloads so you will need to name your functions intelligently.

import "time"

func NewPersonAge(age int) *Person {
     p := Person{age}
     return &p
}

func NewPersonBirthYear(birthYear int) *Person {
     p := Person{time.Now().Year() - birthYear}
     return &p
}

Read more in Effective Go.

Update: Thanks to Joe Shaw for the comments! I’ve updated the article with his suggestions.

⦿

PowerShell Script Module Boilerplate

One of the things I always look for when getting familiar with a new language or environment is examples of the physical file structure and logical organization of the code. How do you layout your project files? What is idiomatic?

Unsurprisingly, this isn’t always as simple as finding a few open-source projects on GitHub to reference. Believe it or not, there are a lot of pretty unorganized coders out there. I admit to being a bit OCD with my project structures. I like them clean, organized, and consistent.

In this post I’m going to cover my preferred boilerplate for PowerShell Script Modules.

Script Modules are about as simple as it gets. Typically you have one or more PS1 or PSM1 files that contain your module’s cmdlets. Beyond that you should have a PSD1 manifest.

Fork this!

This entire boilerplate is on GitHub. If you just want a solid starting point, download this repo. If you want to know more, keep reading.

File Structure

+- src/
| +- source_file.ps1
| +- ...
+- tools/
| +- release.ps1
+- LICENSE
+- README.md
  • src/ contains the PS1, PSM1, PS1XML, and any other source files for the module.
  • tools/ is where I put any meta scripts for the project. Usually there is just one script here that builds a release version of my module.
  • LICENSE - if your module is open-source, always specify what license you are releasing it under.
  • README.md - always have a README file. Even if it is only a one sentence description. Markdown is a great format to use for this.

Explicit Exports

By default, PowerShell will export all of the functions in your module. I recommend being explicit and always specifying which functions should be publicly exported. This way it is easy to add private helper functions that are internal to your module without worrying about accidentally making them public.

You do this by calling Export-ModuleMember at the bottom of your source file. If you have a Show-Calendar.ps1 file that contains a Show-Calendar function that should be public, you would do something like this:

# Show-Calendar.ps1
#
# Show-Calendar will be public.
# Any other functions in this file will be private.

function Show-Calendar {
    # ...
}

Export-ModuleMember -Function Show-Calendar

Full example →

Release Script

Every project should have a release script. You should never be manually building your distributable release. At a minimum my release script will:

  1. Generate the PSD1 manifest for my module.
  2. Save the manifest into a temporary ./dist folder.
  3. Copy all of the module source files into ./dist.
  4. Add all of the module’s source files to a ZIP file ready for me to distribute.

Here is what a simple release.ps1 script might look like.

View on GitHub →

<#
.SYNOPSIS
    Generates a manifest for the module
    and bundles all of the module source files
    and manifest into a distributable ZIP file.
#>

[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)]
    [version]$ModuleVersion
)

$ErrorActionPreference = "Stop"

Write-Host "Building release for v$moduleVersion"

$scriptPath = Split-Path -LiteralPath $(if ($PSVersionTable.PSVersion.Major -ge 3) { $PSCommandPath } else { & { $MyInvocation.ScriptName } })

$src = (Join-Path (Split-Path $scriptPath) 'src')
$dist = (Join-Path (Split-Path $scriptPath) 'dist')
if (Test-Path $dist) {
    Remove-Item $dist -Force -Recurse
}
New-Item $dist -ItemType Directory | Out-Null

Write-Host "Creating module manifest..."

$manifestFileName = Join-Path $dist 'YourModule.psd1'

# TODO: Tweak the manifest to fit your module's needs.
New-ModuleManifest `
    -Path $manifestFileName `
    -ModuleVersion $ModuleVersion `
    -Guid fe524c79-95a6-4d02-8e15-30dddeb8c874 `
    -Author 'Your Name' `
    -CompanyName 'Your Company' `
    -Copyright "(c) $((Get-Date).Year) Your Company. All rights reserved." `
    -Description 'Description of your module.' `
    -PowerShellVersion '3.0' `
    -DotNetFrameworkVersion '4.5' `
    -NestedModules (Get-ChildItem $src -Exclude *.psd1 | % { $_.Name })

Write-Host "Creating release archive..."

# Copy the distributable files to the dist folder.
Copy-Item -Path "$src\*" `
          -Destination $dist `
          -Recurse

# Requires .NET 4.5
[Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null

$zipFileName = Join-Path ([System.IO.Path]::GetDirectoryName($dist)) "$([System.IO.Path]::GetFileNameWithoutExtension($manifestFileName))-$ModuleVersion.zip"

# Overwrite the ZIP if it already exists.
if (Test-Path $zipFileName) {
    Remove-Item $zipFileName -Force
}

$compressionLevel = [System.IO.Compression.CompressionLevel]::Optimal
$includeBaseDirectory = $false
[System.IO.Compression.ZipFile]::CreateFromDirectory($dist, $zipFileName, $compressionLevel, $includeBaseDirectory)

Move-Item $zipFileName $dist -Force

Version Control

Always exclude your ./dist folder from source control. As a rule of thumb, you never want to store the build output of any project in source control.

Depending on how you plan to release your module, you may prefer to exclude *.psd1 manifest files from source control. This just keeps things clean and enforces that you use your release script to build the distributable.
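For this layout, the relevant .gitignore entries might look like the following (a sketch; adjust to your module's needs):

```
# Build output. Never commit this.
dist/

# Optional: manifests are generated by the release script.
*.psd1
```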

Good Examples

Here are a few open-source PowerShell modules that I’ve found to be good examples to follow.

⦿

PowerShell Batch File Wrapper

Sometimes you want a .cmd wrapper for your PowerShell script. Usually for me this is so people who aren’t familiar with the command line can double-click to execute the script.

This batch file should be saved alongside your PowerShell script, like so.

.\
 |- my_script.ps1
 |- my_script.cmd

my_script.cmd will execute the same named .ps1 file in the same directory, so my_script.ps1 in this case. Any arguments passed to my_script.cmd will pass-through to the PowerShell script.

@ECHO OFF

SET SCRIPTNAME=%~d0%~p0%~n0.ps1
SET ARGS=%*
IF [%ARGS%] NEQ [] GOTO ESCAPE_ARGS

:POWERSHELL
PowerShell.exe -NoProfile -NonInteractive -NoLogo -ExecutionPolicy Unrestricted -Command "& { $ErrorActionPreference = 'Stop'; & '%SCRIPTNAME%' @args; EXIT $LASTEXITCODE }" %ARGS%
EXIT /B %ERRORLEVEL%

:ESCAPE_ARGS
SET ARGS=%ARGS:"=\"%
SET ARGS=%ARGS:`=``%
SET ARGS=%ARGS:'=`'%
SET ARGS=%ARGS:$=`$%
SET ARGS=%ARGS:{=`{%
SET ARGS=%ARGS:}=`}%
SET ARGS=%ARGS:(=`(%
SET ARGS=%ARGS:)=`)%
SET ARGS=%ARGS:,=`,%
SET ARGS=%ARGS:^%=%

GOTO POWERSHELL

What’s going on here?

  • %SCRIPTNAME% variable holds the name of the PowerShell script to execute. %~d0%~p0%~n0 magic gets the full path of the current batch script without the file extension. By specifying the full path of the PowerShell script like this we can guarantee that it is always executed from the right place no matter what your working directory is.
  • Escapes special characters in the arguments so that they are passed to PowerShell as you would expect.
  • Runs PowerShell.exe with:
    • -NoProfile to improve startup performance. Scripts you are distributing shouldn’t rely on anything in your profile anyway.
    • -NonInteractive because usually my scripts don’t need input from the user.
    • -ExecutionPolicy Unrestricted to ensure that the PowerShell script can be executed regardless of the machine’s default Execution Policy.
    • -Command syntax for executing the command ensures that PowerShell returns the correct exit code from your script. Using -Command with $ErrorActionPreference = 'Stop' also ensures that errors thrown from your script cause PowerShell.exe to return a failing exit code (1). PowerShell is quite buggy when it comes to bubbling exit codes. This is the safest method I’ve found.

Batch file tips

Special characters in arguments

Remember that certain special characters need to be escaped in arguments passed to the batch file. These characters are: ^ & < > and the sequence /?, which is recognized as a help flag when passed to a batch file.

my_script.cmd "I ""am"" quoted" passes a single argument I "am" quoted to PowerShell.

my_script.cmd "^&<>/?" passes ^&<>/?

Environment variable expansion

Environment variables get automatically expanded in batch file arguments.

my_script.cmd %PROGRAMFILES% passes C:\Program Files

⦿

Killing Processes In Disconnected Terminal Service Sessions

Usually it is best to configure your terminal service policy to log out disconnected sessions, however there may be cases where you only want to kill specific applications when the user disconnects.

This is easy to do with a PowerShell script that you run periodically as a scheduled task. The processes won’t be killed immediately when the user disconnects but will be killed shortly after.

This script will, by default, only kill OUTLOOK processes in disconnected sessions, after waiting 15 seconds for them to close gracefully. It is trivial to tweak the script to kill whichever processes you care about.

# kill-processes.ps1

# This script supports being run with -WhatIf and -Confirm parameters.
[CmdletBinding(SupportsShouldProcess=$true, ConfirmImpact='Medium')]
param (
    # Regex of the session states whose processes should be killed.
    [string]$IncludeStates = '^(Disc)$', # Only DISCONNECTED sessions by default.
    # Regex of the processes to kill
    [string]$KillProcesses = '^(OUTLOOK)$', # Only OUTLOOK by default.
    [int]$GracefulTimeout = 15 # Number of seconds to wait for a graceful shutdown before forcefully closing the program.
)

function Get-Sessions
{
    # `query session` is the same as `qwinsta`

    # `query session`: http://technet.microsoft.com/en-us/library/cc785434(v=ws.10).aspx

    # Possible session states:
    <#
    http://support.microsoft.com/kb/186592
    Active. The session is connected and active.
    Conn.   The session is connected. No user is logged on.
    ConnQ.  The session is in the process of connecting. If this state
            continues, it indicates a problem with the connection.
    Shadow. The session is shadowing another session.
    Listen. The session is ready to accept a client connection.
    Disc.   The session is disconnected.
    Idle.   The session is initialized.
    Down.   The session is down, indicating the session failed to initialize correctly.
    Init.   The session is initializing.
    #>

    # Snippet from http://poshcode.org/3062
    # Parses the output of `qwinsta` into PowerShell objects.
    $c = query session 2>&1 | where {$_.gettype().equals([string]) }

    $starters = New-Object psobject -Property @{"SessionName" = 0; "Username" = 0; "ID" = 0; "State" = 0; "Type" = 0; "Device" = 0;};
     
    foreach($line in $c) {
         try {
             if($line.trim().substring(0, $line.trim().indexof(" ")) -eq "SESSIONNAME") {
                $starters.Username = $line.indexof("USERNAME");
                $starters.ID = $line.indexof("ID");
                $starters.State = $line.indexof("STATE");
                $starters.Type = $line.indexof("TYPE");
                $starters.Device = $line.indexof("DEVICE");
                continue;
            }
           
            New-Object psobject -Property @{
                "SessionName" = $line.trim().substring(0, $line.trim().indexof(" ")).trim(">")
                ;"Username" = $line.Substring($starters.Username, $line.IndexOf(" ", $starters.Username) - $starters.Username)
                ;"ID" = $line.Substring($line.IndexOf(" ", $starters.Username), $starters.ID - $line.IndexOf(" ", $starters.Username) + 2).trim()
                ;"State" = $line.Substring($starters.State, $line.IndexOf(" ", $starters.State)-$starters.State).trim()
                ;"Type" = $line.Substring($starters.Type, $starters.Device - $starters.Type).trim()
                ;"Device" = $line.Substring($starters.Device).trim()
            }
        } catch {
            throw $_;
            #$e = $_;
            #Write-Error -Exception $e.Exception -Message $e.PSMessageDetails;
        }
    }
}

# Helper function for getting the singular or plural form of
# a word based on the given count.
# Because we want proper log messages.
function Get-ProperWord([string]$singularWord, [int]$count) {
    if ($count -eq 0 -or $count -gt 1) {
        if ($singularWord.EndsWith("s")) {
            return "$($singularWord)es";
        }
        else {
            return "$($singularWord)s";
        }
    }
    else {
        return $singularWord;
    }
}

# Get a list of all terminal sessions that are in the state we care about.
$IncludedSessions = Get-Sessions `
                        | Where { $_.State -match $IncludeStates } `
                        | Select -ExpandProperty ID

# Get a list of all processes in one of those terminal sessions
# that match a process we want to kill.
$SessionProcesses = $IncludedSessions `
    | % { $id = $_;
          Get-Process `
            | Where { $_.SessionID -eq $id -and $_.Name -match $KillProcesses } }

# Get some words to use in log output.
$wordSecond = $(Get-ProperWord 'second' $GracefulTimeout)
$wordProcess = $(Get-ProperWord 'process' $SessionProcesses.Length)

if ($SessionProcesses.Length -gt 0) {
    # Initiate a graceful shutdown of the processes.
    # http://powershell.com/cs/blogs/tips/archive/2010/05/27/stopping-programs-gracefully.aspx
    Write-Output "Gracefully closing $($SessionProcesses.Length) $wordProcess"
    $SessionProcesses `
        | % { if ($PSCmdlet.ShouldProcess("$($_.Name) ($($_.Id))", "CloseMainWindow")) { $_.CloseMainWindow() } } `
        | Out-Null

    # Wait X seconds for the programs to close gracefully.
    Write-Output "Waiting $GracefulTimeout $wordSecond for the $wordProcess to close"
    if ($GracefulTimeout -gt 0) {
        if ($PSCmdlet.ShouldProcess("Current Process", "Start-Sleep")) {
            Start-Sleep -Seconds $GracefulTimeout
        }
    }

    # Force any remaining processes to close, the hard way.
    Write-Output "Forcefully closing any remaining processes"
    $SessionProcesses `
        | Where { $_.HasExited -ne $true } `
        | Stop-Process -ErrorAction SilentlyContinue
}
else {
    Write-Output "No processes to close"
}
⦿

Crime Rates Are Dropping

We all know the news is biased. Media slants to the left or the right. Bias is natural and very difficult to avoid.

I don't read the news much but the recent public shootings in Aurora, Colorado and at the Sikh temple in Wisconsin have made my social media feeds flutter with the topic of gun control with points from all sides.

Today I listened to an episode of Common Sense with Dan Carlin, a podcast that I occasionally listen to and always enjoy. He does a good job, in my opinion, of thinking about situations from both sides of the fence.

Dan listed several statistics from the DOJ's website. Some I expected, but a few caught me off guard.

As I said, media is slanted, we know this. So when I hear how our country is violent and things are worse than ever, I figure it isn't nearly that bad. But, I do figure it is worse than it was to some degree. Well the numbers don't seem to agree.

In fact, it would seem that this country has never been a safer place.

The chart above was just for 2005 and doesn't give you much for comparison. Here are the numbers for victims of all types of crime for 1993 and 2010. See for yourself.

Bureau of Justice Statistics. Generated using the NCVS Victimization Analysis Tool at www.bjs.gov. 13-Aug-12

This isn't an opinion or bias, just facts. Crime is going down, not up. As of 2010, violent crime specifically was a mere 29% of what it was just 17 years prior. That is a massive drop.

I'm not saying that this is good enough, that we should stop trying. What I am saying is that things are a heck of a lot better than they were.

I don't believe the solution to violence in America will be as simple as banning weapons. Just like I wouldn't propose banning phones as a solution to the telling of lies. Medium is not cause. We need to be more creative when trying to solve this problem.

Raw data from the Bureau of Justice Statistics:

Victimization by Type       1993         2010
Violent Victimization       16,822,618   4,935,983
Rape/Sexual Assault         898,239      268,574
Robbery                     1,752,667    568,510
Aggravated Assault          3,481,055    857,751
Simple Assault              10,690,657   3,241,148
Property Victimization      35,093,887   15,411,610
Household Burglary          6,378,721    3,176,181
Motor Vehicle Theft         1,921,179    606,991
Theft                       26,793,987   11,628,437
⦿

PowerShell, batch files, and exit codes. Recipes & Secrets.

TL;DR

Update: If you want to save some time, skip reading this and just use my PowerShell Script Boilerplate. It includes an excellent batch file wrapper, argument escaping, and error code bubbling.

PowerShell.exe doesn’t return correct exit codes when using the -File option. Use -Command instead. (Vote for this issue on Microsoft Connect.)

This is a batch file wrapper for executing PowerShell scripts. It forwards arguments to PowerShell and correctly bubbles up the exit code (when it can).

PowerShell.exe still returns a passing (0) exit code when a ParserError is thrown. Even when using -Command. I haven’t found a workaround for this. (Vote for this issue on Microsoft Connect.)

You can use black magic to include spaces and quotes in the arguments you pass through the batch file wrapper to PowerShell.

PowerShell

PowerShell is a great scripting environment, and it is my preferred tool for writing build scripts for .NET apps. Exit codes are vital in build scripts because they are how your Continuous Integration server knows whether the build passed or failed.

This is a quick tour of working with exit codes in PowerShell scripts and batch files. I’m including batch files because they are often necessary to wrap the execution of your PowerShell scripts.

Let’s start easy. Say you need to run a command line app or batch file from your PowerShell script. How can you check the exit code of that process?

# script.ps1

cmd /C exit 1
Write-Host $LastExitCode    # 1

$LastExitCode is a special variable that holds the exit code of the last Windows based program that was run. So says the documentation.

Remember though, $LastExitCode doesn’t do squat for PowerShell commands. Use $? for that.

# script.ps1

Get-ChildItem "C:\"
Write-Host $?    # True

Get-ChildItem "Z:\some\non-existant\path"
Write-Host $?    # False

Anytime you run an external command like this, you need to check the exit code and throw an exception if needed. Otherwise the PowerShell script will keep right on trucking after a failure.

# script.ps1

cmd /C exit 1
if ($LastExitCode -ne 0) {
    throw "Command failed with exit code $LastExitCode."
}
Write-Host "You'll never see this."

Writing these assertions all the time will get old. Fortunately you can use a helper function, like this one found in the excellent psake project.

# script.ps1

function Exec
{
    [CmdletBinding()]
    param (
        [Parameter(Position=0, Mandatory=1)]
        [scriptblock]$Command,
        [Parameter(Position=1, Mandatory=0)]
        [string]$ErrorMessage = "Execution of command failed.`n$Command"
    )
    & $Command
    if ($LastExitCode -ne 0) {
        throw "Exec: $ErrorMessage"
    }
}

Exec { cmd /C exit 1 }
Write-Host "You'll never see this."

Throwing & exit codes

The throw keyword is how you generate a terminating error in PowerShell. It will, sometimes, cause your PowerShell script to return a failing exit code (1). Wait, when does it not cause a failing exit code, you ask? This is where PowerShell’s warts start to show. Let me demonstrate some scenarios.

# broken.ps1

throw "I'm broken."

From the PowerShell command prompt:

PS> .\broken.ps1
I'm broken.
At C:\broken.ps1:1 char:6
+ throw <<<<  "I'm broken."
    + CategoryInfo          : OperationStopped: (I'm broken.:String) [], RuntimeException
    + FullyQualifiedErrorId : I'm broken.
    
PS> $LastExitCode
1

From the Windows command prompt:

> PowerShell.exe -NoProfile -NonInteractive -ExecutionPolicy unrestricted -Command ".\broken.ps1"
I'm broken.
At C:\broken.ps1:1 char:6
+ throw <<<<  "I'm broken."
    + CategoryInfo          : OperationStopped: (I'm broken.:String) [], RuntimeException
    + FullyQualifiedErrorId : I'm broken.
    
> echo %errorlevel%
1

That worked, too. Good.

Again, from the Windows command prompt:

> PowerShell.exe -NoProfile -NonInteractive -ExecutionPolicy unrestricted -File ".\broken.ps1"
I'm broken.
At C:\broken.ps1:1 char:6
+ throw <<<<  "I'm broken."
    + CategoryInfo          : OperationStopped: (I'm broken.:String) [], RuntimeException
    + FullyQualifiedErrorId : I'm broken.

> echo %errorlevel%
0

Whoa! We still saw the error, but PowerShell returned a passing exit code. What the heck?! Yes, this is the wart.

A workaround for -File

-File allows you to pass in a script for PowerShell to execute, however terminating errors in the script will not cause PowerShell to return a failing exit code. I have no idea why this is the case. If you know why, please share!

A workaround is to add a trap statement to the top of your PowerShell script. (Thanks, Chris Oldwood, for pointing this out!)

# broken.ps1

trap
{
    Write-Error $_
    exit 1
}
throw "I'm broken."

From the Windows command prompt:

> PowerShell.exe -NoProfile -NonInteractive -ExecutionPolicy unrestricted -File ".\broken.ps1"
C:\broken.ps1 : I'm broken.
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,broken.ps1

> echo %errorlevel%
1

Notice that we got the correct exit code this time, but our error output didn’t include as much detail. Specifically, we didn’t get the line number of the error like we were getting in the previous tests. So it isn’t a perfect workaround.

Remember. Use -Command instead of -File whenever possible. If for some reason you must use -File or your script needs to support being run that way, then use the trap workaround above.

-Command can still fail

I’ve discovered that PowerShell will still exit with a success code (0) when a ParserError is thrown. Even when using -Command.

From the Windows command prompt:

> PowerShell.exe -NoProfile -NonInteractive -Command "Write-Host 'You will never see this.'" "\"
The string starting:
At line:1 char:39
+ Write-Host 'You will never see this.'  <<<< "
is missing the terminator: ".
At line:1 char:40
+ Write-Host 'You will never see this.' " <<<<
    + CategoryInfo          : ParserError: (:String) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : TerminatorExpectedAtEndOfString

> echo %errorlevel%
0

I’m not aware of any workaround for this behavior. This is very disturbing, because these parser errors can be caused by arguments (as I demonstrated above). This means there is no way to guarantee your script will exit with the correct code when it fails.

Note: This was tested in PowerShell v2, on Windows 7 (x64).

There are other known bugs with PowerShell’s exit codes. Beware.

Batch files

I mentioned early on that it is often necessary to wrap the execution of your PowerShell script in a batch file. Some common reasons for this might be:

  • You want users of your script to be able to double-click to run it.
  • Your build runner doesn’t support execution of PowerShell scripts directly.

Whatever the reason, writing a batch file wrapper for a PowerShell script is easy. You just need to make sure that your batch file properly returns the exit code from PowerShell. Otherwise, your PowerShell script might fail and your batch file would return a successful exit code (0).

This is a safe template for you to use. Bookmark it.

Update: I’ve created a much better batch file wrapper for my PowerShell scripts. I recommend you ignore the one below and use my new one instead.

:: script.bat

@ECHO OFF
PowerShell.exe -NoProfile -NonInteractive -ExecutionPolicy unrestricted -Command "& %~d0%~p0%~n0.ps1" %*
EXIT /B %errorlevel%

This wrapper will execute the PowerShell script with the same file name (i.e., script.ps1 if the batch file is named script.bat), and then exit with the same code that PowerShell exited with. It will also forward any arguments passed to the batch file, to the PowerShell script.

Let’s test it out.

# script.ps1

param($Arg1, $Arg2)
Write-Host "Arg 1: $Arg1"
Write-Host "Arg 2: $Arg2"

From the Windows command prompt:

> script.bat happy scripting
Arg 1: happy
Arg 2: scripting

What if we want “happy scripting” to be passed as a single argument?

> script.bat "happy scripting"
Arg 1: happy
Arg 2: scripting

Well that didn’t work at all. This is the secret recipe.

> script.bat "'Happy scripting with single '' and double \" quotes!'"
Arg 1: Happy scripting with single ' and double " quotes!
Arg 2:

Please don’t ask me to explain this black magic, I only know that it works. Much credit to this StackOverflow question for helping me solve this!

For comparison, here is how you would do it if you were executing the script from PowerShell, without using the batch file wrapper.

From the PowerShell command prompt:

PS> .\script.ps1 happy scripting
Arg 1: happy
Arg 2: scripting

PS> .\script.ps1 "Happy scripting with single ' and double `" quotes included!"
Arg 1: Happy scripting with single ' and double " quotes included!
Arg 2:

That’s all folks!

⦿

Syntax highlighting for Nginx in VIM

Thanks to Evan Miller, adding VIM syntax highlighting for Nginx config files is a breeze.

First, install VIM if you haven’t already. On Arch Linux, it goes like this:

> pacman -Sy vim

Create a folder for your VIM syntax files.

> mkdir -p ~/.vim/syntax/

Download the syntax highlighting plugin.

> curl "http://www.vim.org/scripts/download_script.php?src_id=14376" -o ~/.vim/syntax/nginx.vim

Add it to VIM’s file type definitions. Make sure to adjust the path to your Nginx installation if you need to.

> echo "au BufRead,BufNewFile /etc/nginx/conf/* set ft=nginx" >> ~/.vim/filetype.vim

Now enable syntax highlighting in your .vimrc file.

> echo "syntax enable" >> ~/.vimrc

That’s it. Now you’ll have nice colors when you edit your Nginx configs with VIM!

> vim /etc/nginx/conf/nginx.conf
Screenshot of VIM with syntax highlighting in an Nginx config file

⦿

Storing your Raspberry Pi configuration in Git

Storing your Raspberry Pi’s configuration files in Git is a great way to protect yourself from really bad accidents. You get a backup of all your configs and revision control to rollback those nasty changes. Best of all, you don’t have to manually create backup copies of each individual file. (cp rc.conf rc.conf.bak anyone?)

I should note that I’m running Arch Linux ARM, but this should apply fairly equally to Debian and other distros.

First, install Git (if you haven’t already).

> pacman -Sy git

Arch has a convention of storing all configuration files in /etc. So we will initialize our Git repo there.

> cd /etc
> git init

We only want to store the configuration files that we’ve actually changed in Git. We’ll use a .gitignore file for that.

> vim .gitignore

Here is what mine looks like right now.

# Blacklist everything.
*

# Whitelist the files we care about.
!rc.conf
!rc.local
!ntp.conf
!resolv.conf

!ddclient/
!ddclient/ddclient.conf

!nginx/
!nginx/conf/
!nginx/conf/nginx.conf

The ! prefix negates the pattern, basically creating a whitelist. Cool, huh?

Now we can do our initial commit.

> git add -A
> git commit -m "Added initial configs."

Remember to add any new config files to your .gitignore file and always commit your changes!

For added security you should push your repository to a remote. BitBucket offers free private repositories if you don’t have a paid GitHub account.

⦿

Soft links, hard links, junctions, oh my! Symlinks on Windows, a how-to

First, a quick definition of terms. There are three kinds of “symlinks” on Windows.

  • soft links (also called symlinks, or symbolic links)
  • hard links
  • junctions (a type of soft link only for directories)

Soft links can be created for files or directories.

Hard links can only be created for files.

Hard links must be created on the same volume as the target; i.e. you can’t hard link something on C:\ to something on D:\. Soft links don’t have this restriction; they can cross volumes and can even point to network (UNC) paths.

You can read more about hardlinks and junctions on MSDN.

This is where the difference between soft and hard links is most evident.

Deleting the target will cause soft links to stop working. What it points to is gone. Hard links however will keep right on working until you delete the hard link itself. The hard link acts just like the original file, because for all intents and purposes, it is the original file.
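An illustrative console session makes this concrete (mklink’s confirmation output is omitted, and the exact error text may vary by Windows version):

```
c:\symlink_test> echo hello > real_file.txt
c:\symlink_test> mklink /H hardlink_file.txt real_file.txt
c:\symlink_test> mklink symlink_file.txt real_file.txt
c:\symlink_test> del real_file.txt

c:\symlink_test> type hardlink_file.txt
hello

c:\symlink_test> type symlink_file.txt
The system cannot find the file specified.
```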

Junctions

Windows also has another type of link just for directories, called Junctions.

Junctions look and act like soft links to directories. The differences are subtle: a junction can only target a directory on a local volume (you can’t create a junction to a network location), and its target is always stored as an absolute path, whereas a soft link can be relative.

Create a soft link to a directory.

c:\symlink_test> mklink /D symlink_dir real_dir
symbolic link created for symlink_dir <<===>> real_dir

Create junction link to a directory.

c:\symlink_test> mklink /J junction_dir real_dir
Junction created for junction_dir <<===>> real_dir

Create a soft link to a file.

c:\symlink_test> mklink symlink_file.txt real_file.txt
symbolic link created for symlink_file.txt <<===>> real_file.txt

Create a hard link to a file.

c:\symlink_test> mklink /H hardlink_file.txt real_file.txt
Hardlink created for hardlink_file.txt <<===>> real_file.txt

What they look like.

c:\symlink_test> dir
Volume in drive C is OS
Volume Serial Number is 7688-08EC

Directory of c:\symlink_test

06/07/2012  10:32 AM    <DIR>          .
06/07/2012  10:32 AM    <DIR>          ..
06/07/2012  09:51 AM                15 hardlink_file.txt
06/07/2012  09:59 AM    <JUNCTION>     junction_dir [c:\symlink_test\real_dir]
06/07/2012  09:47 AM    <DIR>          real_dir
06/07/2012  09:51 AM                15 real_file.txt
06/07/2012  10:00 AM    <SYMLINKD>     symlink_dir [real_dir]
06/07/2012  10:31 AM    <SYMLINK>      symlink_file.txt [real_file.txt]
               3 File(s)             30 bytes
               5 Dir(s)  145,497,268,224 bytes free
Screenshot of folder in Windows Explorer

Note for PowerShell users:
MKLINK isn’t an executable that you can just call from PowerShell. You have to call it through the command prompt.

cmd /c mklink /D symlink_dir real_dir

Alternatively, you can use this module I wrote that has native PowerShell wrappers for MKLINK.

Read about MKLINK on MSDN.

Using FSUTIL

FSUTIL is another way to create hard links (but not soft links). This is the same as mklink /H.

c:\symlink_test> where fsutil
c:\Windows\System32\fsutil.exe

c:\symlink_test> fsutil hardlink create hardlink_file.txt real_file.txt
Hardlink created for c:\symlink_test\hardlink_file.txt <<===>> c:\symlink_test\real_file.txt

Read about FSUTIL on MSDN.

Using Junction

Junction is a tool provided by Sysinternals and provides another way to create junctions. Same as mklink /J. It also has some other tools for working with junctions that I won’t cover here.

c:\symlink_test> junction junction_dir real_dir
Junction v1.06 - Windows junction creator and reparse point viewer
Copyright (C) 2000-2010 Mark Russinovich
Sysinternals - www.sysinternals.com

Created: c:\symlink_test\junction_dir
Targetted at: c:\symlink_test\real_dir

Download the Junction tool from Sysinternals.

⦿