I am Joshua Poehls.

Storing your Raspberry Pi configuration in Git

Storing your Raspberry Pi’s configuration files in Git is a great way to protect yourself from really bad accidents. You get a backup of all your configs and revision control to roll back those nasty changes. Best of all, you don’t have to manually create backup copies of each individual file. (cp rc.conf rc.conf.bak, anyone?)

I should note that I’m running Arch Linux ARM, but this should apply fairly equally to Debian and other distros.

First, install Git (if you haven’t already).

> pacman -Sy git

Arch has a convention of storing all configuration files in /etc. So we will initialize our Git repo there.

> cd /etc
> git init

We only want to store the configuration files that we’ve actually changed in Git. We’ll use a .gitignore file for that.

> vim .gitignore

Here is the general shape of mine; your whitelist entries will be different, of course.

# Blacklist everything.
*

# Whitelist the files we care about.
!.gitignore
!rc.conf

The ! prefix negates the pattern, basically creating a whitelist. Cool, huh?
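You can sanity-check the blacklist-then-whitelist behavior in a throwaway repo before touching /etc. This sketch assumes rc.conf is one of the whitelisted files; swap in whatever you actually track:

```shell
# Build a scratch repo and verify that only whitelisted files show up.
demo=$(mktemp -d)
cd "$demo"
git init -q .
printf '*\n!.gitignore\n!rc.conf\n' > .gitignore
touch rc.conf passwd shadow      # passwd/shadow stand in for files we do NOT track
status=$(git status --porcelain)
echo "$status"                   # only .gitignore and rc.conf appear as untracked
```

If passwd or shadow ever show up in that output, the blacklist pattern isn’t doing its job.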

Now we can do our initial commit.

> git add -A
> git commit -m "Added initial configs."

Remember to add any new config files to your .gitignore file and always commit your changes!

For added security, you should push your repository to a remote. Bitbucket offers free private repositories if you don’t have a paid GitHub account.


Soft links, hard links, junctions, oh my! Symlinks on Windows, a how-to

First, a quick definition of terms. There are three kinds of “symlinks” on Windows.

  • soft links (also called symlinks, or symbolic links)
  • hard links
  • junctions (a type of soft link only for directories)

Soft links can be created for files or directories.

Hard links can only be created for files.

Hard links must be created on the same volume as the target. i.e. You can’t hard link something on C:\ to something on D:\. Soft links have no such restriction and can even point at network (UNC) paths.

You can read more about hard links and junctions on MSDN.

Deleting the target is where the difference between soft and hard links is most evident.

A soft link stops working when its target is deleted; what it pointed to is gone. A hard link, however, keeps right on working until you delete the hard link itself. The hard link acts just like the original file, because for all intents and purposes, it is the original file.
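These semantics aren’t Windows-specific; any POSIX filesystem behaves the same way, so you can watch it happen with ln on Linux or macOS (shown here purely as an illustration):

```shell
# Demonstrate soft vs. hard link behavior when the target is deleted.
d=$(mktemp -d)
cd "$d"
echo "hello" > real_file.txt
ln real_file.txt hardlink_file.txt       # hard link: another name for the same data
ln -s real_file.txt symlink_file.txt     # soft link: a pointer to the path
rm real_file.txt                         # delete the original

cat hardlink_file.txt                    # still prints "hello"
cat symlink_file.txt 2>/dev/null || echo "symlink is dangling"
```

The hard link survives the delete; the soft link is left dangling.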


Windows also has another type of link just for directories, called Junctions.

Junctions look and act like soft links to directories. The key differences: a junction’s target must be a local absolute path (you can’t create a junction to a network location), and creating a junction doesn’t require the elevated privileges that creating a soft link with mklink does.

Create a soft link to a directory.

c:\symlink_test> mklink symlink_dir real_dir
symbolic link created for symlink_dir <<===>> real_dir

Create a junction to a directory.

c:\symlink_test> mklink /J junction_dir real_dir
Junction created for junction_dir <<===>> real_dir

Create a soft link to a file.

c:\symlink_test> mklink symlink_file.txt real_file.txt
symbolic link created for symlink_file.txt <<===>> real_file.txt

Create a hard link to a file.

c:\symlink_test> mklink /H hardlink_file.txt real_file.txt
Hardlink created for hardlink_file.txt <<===>> real_file.txt

What they look like.

c:\symlink_test> dir
Volume in drive C is OS
Volume Serial Number is 7688-08EC

Directory of c:\symlink_test

06/07/2012  10:32 AM    <DIR>          .
06/07/2012  10:32 AM    <DIR>          ..
06/07/2012  09:51 AM                15 hardlink_file.txt
06/07/2012  09:59 AM    <JUNCTION>     junction_dir [c:\symlink_test\real_dir]
06/07/2012  09:47 AM    <DIR>          real_dir
06/07/2012  09:51 AM                15 real_file.txt
06/07/2012  10:00 AM    <SYMLINKD>     symlink_dir [real_dir]
06/07/2012  10:31 AM    <SYMLINK>      symlink_file.txt [real_file.txt]
               3 File(s)             30 bytes
               5 Dir(s)  145,497,268,224 bytes free
Screenshot of folder in Windows Explorer

Note for PowerShell users:
MKLINK isn’t a standalone executable; it’s built into cmd.exe, so from PowerShell you have to call it through the command prompt.

cmd /c mklink /D symlink_dir real_dir

Alternatively, you can use this module I wrote that has native PowerShell wrappers for MKLINK.

Read about MKLINK on MSDN.


Using FSUTIL

FSUTIL is another way to create hard links (but not soft links). This is the equivalent of mklink /H.

c:\symlink_test> where fsutil

c:\symlink_test> fsutil hardlink create hardlink_file.txt real_file.txt
Hardlink created for c:\symlink_test\hardlink_file.txt <<===>> c:\symlink_test\real_file.txt

Read about FSUTIL on MSDN.

Using Junction

Junction is a Sysinternals tool that provides another way to create junctions, equivalent to mklink /J. It also has options for inspecting and deleting junctions that I won’t cover here.

c:\symlink_test> junction junction_dir real_dir
Junction v1.06 - Windows junction creator and reparse point viewer
Copyright (C) 2000-2010 Mark Russinovich
Sysinternals - www.sysinternals.com

Created: c:\symlink_test\junction_dir
Targetted at: c:\symlink_test\real_dir

Download the Junction tool from Sysinternals.


Can't access network resources over VPN connection on Mac OS X?

So you have your shiny OS X connected to a VPN, good deal! The problem is, you can’t connect to any of the servers and workstations on the VPN. What could be wrong?

It could be that OS X is still trying to find those machines on the internet instead of looking for them on the VPN connection. We can tell OS X to check the VPN connection first by giving it a higher priority than the other network connections on your Mac.

To change the priority of your VPN connection:

  1. Choose Apple menu > System Preferences and click Network.
  2. Choose Set Service Order from the Action pop-up menu (looks like a gear).
  3. Drag your VPN connection to the top of the list.
  4. Click OK, and then click Apply to make the new settings active.

This solution saved my day when I couldn’t Remote Desktop into my workstation over the VPN.

For more information, refer to Apple’s documentation for this.


Use AutoHotkey to remap your numpad keys to something useful

Are you tired of having to remember that ALT+0176 is the degree symbol °? Maybe there are other special characters that you want to be able to type more easily.
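Incidentally, 0176 is the decimal value of code point 0xB0 (U+00B0 DEGREE SIGN), which is why getting the Unicode/UTF-8 details right matters later on. You can verify the character from any POSIX shell:

```shell
# ALT+0176 is decimal for code point 0xB0 (U+00B0), the degree sign.
deg=$(printf '\302\260')   # 0xC2 0xB0: the two UTF-8 bytes for U+00B0
echo "$deg"                # prints: °
```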

Personally it was the degree symbol that got me, and since I never use my numpad, I decided it would be much more useful if those keys actually entered stuff that I cared about.

Here is a quick tutorial on using AutoHotkey to remap one of your numpad keys to the degree symbol. Of course, you can easily expand on this to solve whatever woes you are having.

Start out by going to the AutoHotkey website.

Download the AutoHotkey_L installer. This is the most recent version of AutoHotkey and it supports Unicode, which is important for us.

Screenshot of the AutoHotkey download page

Run the installer and leave all the defaults.

Screenshot of the AutoHotkey installer

Look in your My Documents folder and open up AutoHotkey.ahk in a text editor that you can save as UTF-8 in.

If you don’t know what this means or don’t have an editor that can do this then download Notepad2 and open the file in that program.

Screenshot of the AutoHotkey.ahk file in Windows Explorer

Now that you have AutoHotkey.ahk opened, delete everything in there (after reading it if you care) and paste the following in its place.

NumpadIns::Send °

Now save this file with UTF-8 encoding. If you are using Notepad2 you can do this by going to File | Encoding | UTF-8 and clicking Yes to the warning. Then click File | Save as usual.

Screenshot of saving the file with UTF-8 encoding

Finally, run the AutoHotkey program from your Start Menu.

Screenshot of AutoHotkey in the Start Menu

Now to try it out!

Put your focus in any text box. Anywhere. Go ahead. Now make sure you have NumLock turned OFF and hit the 0 key on your numpad.

See what just happened? No, that’s not a superscripted zero. That’s a degree symbol. Now that it’s easy to type, you’ll just have to learn what it means! ;)

You can get a list of other special key names here: http://www.autohotkey.com/docs/KeyList.htm

This is just scratching the surface of what you can do with AutoHotkey scripts. Be sure to read up on the documentation for more ideas. Knowledge is power!

Here is a teaser of just a few of the tricks you can do:

  • Create global shortcuts to run programs (ex. CTRL+Z to open a browser and navigate to a specific website.)
  • Replace acronyms or common spelling mistakes (ex. replace “restraunt” -> “restaurant” automatically.)
  • Toggle hidden files in Windows Explorer with a shortcut key.

What I miss in vanilla Visual Studio

For the past week or so I’ve gone without any Visual Studio enhancements. No ReSharper. No CodeRush. I knew that I didn’t leverage even a fraction of what these tools offer so I wanted to find out what I would miss.

Here’s what I’m missing the most. In no particular order.

  1. Detection of errors before I compile.

  2. Shortcut for moving a class to its own file.

    I like to build up features in the same file (for speed) and then refactor the classes into their own files later. A tedious process of copy/paste.

  3. A rename function that is smart enough to rename the file as well.

  4. Unit Test debugger.

    I don’t mind running tests in the NUnit GUI, it even feels faster sometimes, but I miss the ability to step into my tests. I use this feature a lot for exploring APIs.

  5. Ability to generate constructors.

    Especially useful for creating custom exceptions where you want to implement all of the base constructors.

  6. Fast code snippets.

    Visual Studio’s built-in snippets are functional, but slow to access compared to the alternatives, so I ended up not using them. I definitely prefer CodeRush’s snippets.


Tortoise SVN PowerShell helper function

Do you work from the command line? Use PowerShell? SVN?

While the SVN command line client is certainly usable, there are definitely times where a GUI is more convenient. Specifically during a commit where you want to easily check/uncheck files and view diffs.

I got tired of having to open an explorer window to get into Tortoise SVN’s commit screen so I wrote this helper function.

Put this into your profile and you can start typing tsvn commit (or just tsvn) to open the Tortoise SVN commit dialog. Enjoy!

# Helper function for opening the Tortoise SVN GUI from a PowerShell prompt.
# Put this into your PowerShell profile.
# Ensure Tortoise SVN is in your PATH (usually C:\Program Files\TortoiseSVN\bin).
function Svn-Tortoise([string]$Command = "commit") {
  # Launches TortoiseSVN with the given command.
  # Opens the commit screen if no command is given.
  # See the TortoiseProc documentation for the list of supported commands.
  TortoiseProc.exe /command:$Command /path:"$pwd"
}
Set-Alias tsvn "Svn-Tortoise"

Take your PowerShell profile everywhere with Dropbox

I’ll cover these prerequisites in brief, but for this tip I’m going to assume a few things:

  • You are running Windows and have PowerShell installed.
  • You are familiar enough with PowerShell to care about your $PROFILE.

What is PowerShell?

In short, PowerShell is a new scripting language from Microsoft designed to replace the use of batch files and VBScript for administering and automating Windows machines. The language leverages the .NET Framework such that anything you can do with .NET you can do with PowerShell. Typically you will be using PowerShell from the command line, but there are lots of other ways to use it.

What is a PowerShell profile?

PowerShell has the concept of a script that will run each time you launch PowerShell. This is a really handy place to put any functions that you want to be available all the time. Initialize some variables, create some aliases, anything you do here will be there when you need it.

What are we doing here?

I’m going to give you one possible solution for keeping all of your scripts and settings (i.e. your profile) in sync no matter which computer you are on. No copy/pasting. No more wearing that USB dongle around your neck - seriously, that just screams geek.

Most steps have a sample PowerShell command that will do the trick. So fire up PowerShell and put on your scripting hat ‘cuz here we go!

  1. Get Dropbox.
    Among other things, Dropbox makes keeping your files in sync across computers dead simple. If you aren’t using it you should be. It’s just that good.

  2. Create a folder in Dropbox to hold your scripts.

    PS> New-Item -ItemType Directory $dropbox\scripts

    Anywhere you see $dropbox, assume this to be the path to your Dropbox folder. By default on Windows 7 this is %USERPROFILE%\Dropbox.

  3. Copy your profile ps1 file into your new scripts folder.

    PS> Copy-Item $PROFILE $dropbox\scripts\profile.ps1

  4. Edit your profile ps1 file (the original one) and “dot source” the one in your Dropbox folder.

    PS> ". $dropbox\scripts\profile.ps1" > $PROFILE

Pro Tip! As an alternative to dot sourcing your profile from Dropbox, you could instead create a symlink to it.

PS> cmd /c mklink `"$PROFILE`" `"$dropbox\scripts\profile.ps1`"
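The thin-stub-that-dot-sources-the-synced-file idea isn’t PowerShell-specific; the same layout works for any shell profile. Here’s a POSIX-shell sketch of it (paths and the greet function are purely illustrative), simulated in a scratch directory:

```shell
# Simulate the "stub profile sources the synced profile" layout.
base=$(mktemp -d)
dropbox="$base/Dropbox"
mkdir -p "$dropbox/scripts"

# The "real" profile lives in Dropbox...
printf 'greet() { echo "hello from the synced profile"; }\n' > "$dropbox/scripts/profile.sh"

# ...and the local profile is just a one-line stub that dot-sources it.
printf '. "%s/scripts/profile.sh"\n' "$dropbox" > "$base/.profile"

. "$base/.profile"   # what the shell does at startup
greet                # → hello from the synced profile
```

Every machine keeps its own tiny stub; everything interesting syncs through Dropbox.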

Installing Ruby (and the DevKit) on Windows

Here’s my recipe for a painfree installation of Ruby on Windows.

  1. Use the RubyInstaller from www.ruby-lang.org. I used Ruby 1.9.2-p0.
  2. Install the Ruby DevKit by following the instructions in the Installation Overview. I used version DevKit-4.5.0-20100819-1536-sfx.

The DevKit is needed so that you can install gems that build native extensions. Without it, when you try to gem install jekyll, you will see an error like this:

ERROR: Error installing jekyll:
ERROR: Failed to build gem native extension.

'make' is not recognized as an internal or external command,
operable program or batch file.

After installing the DevKit, gem install jekyll will work like a champ.

Good luck!


RIA Services - Lessons Learned While Getting Started

Recently my company has been working on a rewrite of one of our line-of-business applications. We have decided to leverage RIA Services in the new application. At the moment we are also using Entity Framework; that may or may not change.

Here are a few lessons I’ve learned so far when working with RIA Services.

Potentially massive (in terms of lines of code & logic) domain services

RIA doesn’t handle multiple domain services well. What I mean by this is that an entity cannot be shared between multiple services. This means that in most cases you are going to have domain services with lots of methods. Worst case, think having an insert, update and delete method for each entity in your domain.

Lesson: Accept that having lots of methods in your domain service is inevitable.

Aid: Keep the methods slim by not putting any business or data access logic in them.

Shared database entity temptation

LinqToEntitiesDomainService<> wraps up your Entity Framework ObjectContext and lets you access it directly in your domain service. There is also a LinqToSqlDomainService<>. This makes it really tempting to simply return your Entity Framework or LINQ to SQL entities from your domain service methods. This is a very bad idea.

Lesson: Don’t return your database entities from your domain service.


  • Create a new DTO (Data Transfer Object) for each entity you need to return from your domain service.
  • Make sure your domain service operations return these DTOs and not the entities that Entity Framework or LINQ to SQL generate for you.

DomainService.Submit() summary

When your client application makes changes and then calls SubmitChanges() on the client-side, RIA Services sends this set of changes to the server as a ChangeSet. This ChangeSet will include deletes, updates and inserts - basically anything your client did before calling SubmitChanges().

DomainService is the base class that all RIA Services domain services inherit from. This includes the LinqToEntitiesDomainService<> and LinqToSqlDomainService<> classes.

This base class has a Submit() method, which is essentially what gets invoked when your client calls SubmitChanges(). The DomainService’s Submit() method handles parsing that set of changes and invoking the methods on your domain service that you have written to handle the inserts, updates and deletes of your entities.

Think of the Submit() method as a router that takes all the changes and routes them to the specific operations that need to happen in order to persist those changes to your backing data store.

Using DAOs (or Repositories)

I realized early on that I didn’t want to shove all my data access code directly into my domain service methods. I like to isolate the code that talks to the database in DAOs (Data Access Objects). These DAOs act as a weld point between the world of my database and the world of my application. This makes it easier for me to handle inevitable database changes, or to switch out my entire data access strategy, without having to rewrite the whole application.

At first I thought that to use my DAOs I couldn’t/shouldn’t use a LinqToEntitiesDomainService<> and should instead just inherit from DomainService. So this is how I started and I had my DAO methods handle the creation (and disposing) of my ObjectContext as needed.

Come to find out, this works; however, it isn’t the best way to do things. As I said before, RIA Services will ask your domain service to process a whole set of changes at once. If you have 5 changes to save, do you really want to create a new ObjectContext for each one? Or would you rather create the ObjectContext once, do the 5 changes and then save them all at once to the database?

Here’s what I ended up doing:

  • Went back to using LinqToEntitiesDomainService<> as my base class.
  • Updated my DAO constructors to take an instance of the ObjectContext.

The LinqToEntitiesDomainService<> will handle creating my ObjectContext as well as calling SaveChanges() on it when all the changes have been processed. This is much more efficient.

In code

Here’s a code example of what I discovered:

This is bad.

public class PersonDomainService : DomainService {
    public void UpdatePerson(Person dto) {
        var dao = new PersonDao();
        dao.Update(dto);
    }
}

public class PersonDao {
    public void Update(Person dto) {
        // A brand new ObjectContext (and a save) for every single change.
        using (var context = new MyObjectContext()) {
            var existingPerson = context.People.Where(p => p.ID == dto.ID).SingleOrDefault();
            if (existingPerson != null) {
                existingPerson.Name = dto.Name;
            }
            context.SaveChanges();
        }
    }
}

This is good.

public class PersonDomainService : LinqToEntitiesDomainService<MyObjectContext> {
    public void UpdatePerson(Person dto) {
        var dao = new PersonDao(ObjectContext);
        dao.Update(dto);
    }

    public override bool Submit(ChangeSet changeSet) {
        // base.Submit() is what will take all the changes in the ChangeSet
        // and call your insert, update and delete methods for each one.
        // When this is all done, the ObjectContext.SaveChanges() method
        // will be called.
        return base.Submit(changeSet);
    }
}

public class PersonDao {
    private readonly MyObjectContext _context;

    public PersonDao(MyObjectContext context) {
        _context = context;
    }

    public void Update(Person dto) {
        var existingPerson = _context.People.Where(p => p.ID == dto.ID).SingleOrDefault();
        if (existingPerson != null) {
            existingPerson.Name = dto.Name;
        }
    }
}


Why I Love Silverlight

Update on June 9th, 2012

It is now the year 2012. All of the previous love I had for Silverlight has long since moved on to web standards, specifically HTML5. I don’t miss Silverlight, even a little bit. Even so, I will leave this post around for fun.

Silverlight is at the top of my “to learn” list this year. I’m already pretty familiar with the basics but now I need to dig in and write some real apps using it.

This afternoon I decided to outline why I’m so excited about Silverlight. Enjoy.

  • Cross-platform. Microsoft supports Silverlight on Windows and Mac. This is ground breaking for me. I can write applications using .NET and they will run on a Mac. Score. Moonlight also enables some Silverlight support on Linux. As of this writing I believe they support Silverlight 2.
  • Rapid development. Not my rapid development but Microsoft’s! Microsoft has been ramping up Silverlight at a phenomenal rate. This says a lot. At the very least it says Silverlight is here to stay. More than that, it says if Silverlight isn’t a viable medium for your application now… it probably will be soon.
  • Web technology. I’ve ventured into WinForms and desktop apps since my beginnings as a web developer, but I still favor the always up-to-date, easy deployment and lack of client install that web applications offer. Being a web technology, Silverlight brings all this to the table.
  • User experience and easy UI building. Another reason I prefer web apps is that I prefer building my UI in HTML/CSS instead of with WinForms/GDI. Designing a unique and creative UI in WinForms is very difficult. Restyling controls, complex layouts, and theming an application are all painful in WinForms but very natural and easy with HTML/CSS.

XAML is what makes this so much better in Silverlight. The often annoyingly verbose syntax aside, you get all the easy creativity and power of HTML/CSS.

  • Offline and out of browser. Silverlight apps can be installed to run offline and out of browser on users’ machines. Remember, this isn’t just on Windows, either. Mac and Linux users get to share the love. This is the feature that makes me say Auf Wiedersehen to WinForms and even WPF. Not only do I get all the deployment and easy update benefits of a web app, but I can also interact with the user’s desktop, have shortcuts, file associations, and drag & drop, just like a native application.

WPF may be the next WinForms, but for me, Silverlight is the next WinForms and the next ASP.NET. Sure Silverlight won’t work for everything but this year I’m hoping to make Silverlight my MasterCard. As in, “…for everything else, there’s MasterCard Silverlight.”