Wednesday, April 2, 2014

Note to self: Machine Setup

Every few years we developers find the need to recreate our precious development environments on new hardware. It's an exciting time, because "Hey, new hardware!" However, we always find ourselves copying bits and pieces from the old machine while trying to reconstruct things. Some of us take good notes; I myself have been slacking. I had some notes, but not all, so I took this opportunity to write a note to self so that next time I can cruise through this process much faster.

Let's get started!

Package Manager

Previously I always rolled with MacPorts, but I'm finally caving. Install Homebrew:
ruby -e "$(curl -fsSL"


You can't build a whole lot without a compiler:
brew install apple-gcc42
sudo ln -s /usr/local/bin/gcc-4.2 /usr/bin/gcc-4.2
sudo xcodebuild -license

RbEnv or RVM

I recently switched to RbEnv from RVM and am sticking with it.
brew install rbenv ruby-build
rbenv init
With RbEnv you need to load its shims in your shell profile to make life easier when referencing ruby:

echo 'if which rbenv > /dev/null; then eval "$(rbenv init -)"; fi' >> ~/.bash_profile
Install Ruby!
rbenv install 2.1.1

Version Control

Visit Git and download the DMG installer. Then generate your RSA key and attach it to your GitHub account.

ssh-keygen -t rsa -C ""

To make branch creation and deletion much easier, follow this great tutorial from Scott Bradley.

Also don't forget to configure your .gitconfig. Mine is saved in a repo on GitHub with my bash_profile and bashrc.
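If you don't have one saved away yet, a minimal .gitconfig looks something like this (the values are placeholders and the aliases are a matter of taste):

```ini
# ~/.gitconfig -- values below are placeholders
[user]
	name = Your Name
	email = you@example.com
[color]
	ui = auto
[alias]
	co = checkout
	st = status
	br = branch
```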

Rack server

Use Pow; thank me later.
curl | sh
Don't forget to install the powder gem for ease of use.

Editor of choice

I choose you, Emacs! And let's set up Cask with Pallet to manage packages.
Download and install the dmg from

Install Cask (curl worked better than Homebrew, IMO):
curl -fsSkL | python

Download my .emacs.d repo into ~/.emacs.d to configure all the awesome packages and Cask. The setup is loosely based on this post.

Run Cask within the .emacs.d folder to install all of the packages listed in the Caskfile.

Bash prompt

I like my prompt to tell me a little bit about the directory I'm in, especially if it's a Git repo. Follow this easy tutorial to set up a clean prompt that renders VCS info.

And for even more cool bashness, install autojump! I highly recommend cloning the repo and installing it manually.

And install iTerm2

Within my .bashrc I've setup some aliases for running Emacs locally as a server so I can work entirely within iTerm while developing. I prefer alternating tabs in iTerm vs alternating between GUI Emacs and the terminal.
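The aliases amount to something like this (the names are my own shorthand; the emacsclient flags are standard):

```shell
# In ~/.bashrc -- alias names are personal preference
alias es='emacs --daemon'                 # start Emacs as a background server
alias e='emacsclient -t'                  # open a file in a terminal frame
alias ek="emacsclient -e '(kill-emacs)'"  # shut the server down
```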

See you again in 1-2 years!

Monday, January 6, 2014

Carrying on great conversations #hashu

Some of the best conversations are unplanned, and they tend to happen when I surround myself with smart people I've just met. This usually occurs at conferences or meetups, over beverages and/or grub. However, I'm terrible at remembering names.

That's where Hashu comes in. One day while pairing with Andre Ortiz he mentioned an approach he took to starting conversations and recalling them at SXSW. He had written a web app that leveraged Twitter and the power of the #hashtag to keep a record of people he met at SXSW. It was a great idea! So we teamed up and put together Hashu!

The Nickel Tour

To use Hashu all you need is a Twitter account and a Twitter client, that's it! Then just follow these 3 quick steps:

1. Use your phone, iPad, laptop, etc. to open the app in a browser and follow the prompts to authenticate with Twitter.

2. Send a tweet and mention somebody you found interesting, and tag the tweet with #hashu.

3. Return to the app at a later time to view all your tweets tagged with #hashu.

3a. Aside from showing you your tweets in a cool tiled format with images and Google maps, it will also show you the profiles of all of the people you have mentioned.
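Under the hood, step 2 boils down to pulling the @mentions and #hashtags back out of each tweet. A rough sketch of that kind of parsing in Ruby (illustrative only, not Hashu's actual code):

```ruby
# Extract @mentions and #hashtags from a tweet's text with simple
# regexes. This is a sketch of the idea, not Hashu's implementation.
def parse_tweet(text)
  {
    mentions: text.scan(/@(\w+)/).flatten,
    hashtags: text.scan(/#(\w+)/).flatten
  }
end

parse_tweet("Great chatting with @andre_ortiz at SXSW! #hashu")
# => {:mentions=>["andre_ortiz"], :hashtags=>["hashu"]}
```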

The Technical

Hashu is a Rails 4 app, using Ruby 2 and Mongo. My original intent was to demonstrate modeling for NoSQL and Riak, but given budgetary constraints, and that I was using Heroku's free tier for hosting, I switched from Riak to Mongo. However, I chose to model as I would for Riak and used Braintree's Curator gem, so that at a later date we could easily switch back from Mongo to Riak.
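The idea behind Curator is the repository pattern: models stay plain Ruby objects and all persistence lives in a repository class, so the backing store can be swapped without touching the models. A bare-bones sketch of the pattern (with an in-memory hash standing in for Mongo/Riak, and class names that are illustrative, not Hashu's actual code):

```ruby
# The model knows nothing about persistence.
class Tweet
  attr_reader :id, :handle, :text

  def initialize(id:, handle:, text:)
    @id     = id
    @handle = handle
    @text   = text
  end
end

# All storage concerns live here; swap the @store layer for Mongo or
# Riak without changing Tweet at all.
class TweetRepository
  def initialize
    @store = {}  # in-memory stand-in for the real data store
  end

  def save(tweet)
    @store[tweet.id] = tweet
  end

  def find_by_id(id)
    @store[id]
  end

  def find_by_handle(handle)
    @store.values.select { |t| t.handle == handle }
  end
end
```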

The code is public and can be found on GitHub. Please don't go stealing our ideas! We're so cool we opened up our code so you could have a peek under the hood :)

The Extra Credit

I'll be attending CodeMash this week and plan on using Hashu throughout the week. Don't be surprised if I #HashU! If you'd like to chat about Hashu or Riak, pair, or have a beer at CodeMash, just send me a tweet (@msnyder) or look for the guy who looks like me. And for those attending CodeMash, you can also access Hashu at

The Leftovers

This is the first production iteration of this application and Andre and I are working to improve it. We'd like to add in some cool per conference branding, better parsing of tweets, and some other coolness. If you're interested in helping out or have some feedback for us, send us a tweet or leave a comment on the CranialPulse blog.
Enjoy using Hashu!

Thursday, December 26, 2013

Simple & Easy Git awesomeness for your bash prompt

Previously I had been using an old recipe published by Neil Ford some years ago to display source control info in my bash prompt. It worked fairly well for a long time and was compatible with SVN, Git, and even CVS. However, lately it's been acting a bit flaky, so I decided to refactor it. My goal was to display the repo name and current branch, and to indicate if the working state was dirty.

In doing my homework I learned that Git now distributes some helpful functions for displaying Git info in your Bash prompt. They also provide bash tab completion for Git commands!! WIN!

Here's a quick Gist showing what I had to do:
And here is the result: repo name followed by branch name in parens. The '%' indicates I have untracked files in my working directory. For a better description of the indicators used, read through the documentation in
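The Gist boiled down to something like the following in ~/.bash_profile (the paths assume a Homebrew-installed git; adjust them to wherever your install puts the scripts):

```shell
# Load git's bundled completion and prompt helpers
source /usr/local/etc/bash_completion.d/git-completion.bash
source /usr/local/etc/bash_completion.d/git-prompt.sh

export GIT_PS1_SHOWDIRTYSTATE=1        # '*' = unstaged, '+' = staged changes
export GIT_PS1_SHOWUNTRACKEDFILES=1    # '%' = untracked files present
export PS1='\W$(__git_ps1 " (%s)")\$ '
```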

Friday, August 23, 2013

Do you know who your server is talking to?

Meet Hong Kong Fuey, the world's greatest detective and agent behind FueyClient and the Fuey Dashboard.

FueyClient is a Ruby gem that you can install on your servers to monitor which resources are currently available to your server and which are not. FueyClient currently supports Pings, SNMPWalks, and RFC Pings for those out there working with SAP.

Configuring FueyClient to inspect what resources are available is a matter of configuring a simple YAML file.
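The YAML ends up looking something along these lines (the keys here are illustrative, not FueyClient's exact schema; check the README for the real format):

```yaml
# Hypothetical inspection config -- key names are assumptions
inspections:
  - type: ping
    name: "My DNS"
    host: 8.8.8.8
  - type: snmp_walk
    name: "VPN connection"
    host: 10.0.0.1
    community: public
```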

This configuration would ping "My DNS" and upon success would SNMPWalk to verify my VPN connection was available. If we wanted we could then go on to add another Ping to validate resources on the VPN network were available.

Currently the execution, success, and failure of each inspection is logged locally and sent to a Redis queue. Tools like PapertrailApp are great for looking through the FueyClient logs, but even better is the soon-to-be-open-sourced Fuey Dashboard, which handles push notifications and shows real-time status on each of your inspections.

In the meantime, running FueyClient is easy. Just setup the config files and run it from the command line with the location of your config files passed as an argument. Even better, set it up as a cron job to run every 5 minutes! For more info on installation and configuration, check out the README.
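A crontab entry for the every-5-minutes setup might look like this (the executable name and paths are illustrative assumptions; see the README for the actual invocation):

```shell
# Run FueyClient every 5 minutes -- binary name and config path are assumptions
*/5 * * * * /usr/local/bin/fuey_client /etc/fuey/config
```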

At B2b2dot0 we value knowing that each of our clients' servers can communicate with their required resources, and we assembled Fuey just for that purpose! Hopefully Fuey can help you ensure your servers can communicate with their resources.

(Definitely expect an update in the next week or so when the Fuey dashboard is available!)

UPDATE: The Fuey Dashboard is now available for your consumption! Check it out!

Thursday, August 8, 2013

Install Redis on your Mac with MacPorts

To install Redis using MacPorts just run
sudo port install redis
Then to load it as a startup item run
sudo port load redis 
Or if you prefer to start a redis server manually
redis-server /opt/local/etc/redis.conf
As of the writing of this post, this will install Redis 2.6.10, while 2.6.14 is available directly from Redis.

Monday, July 29, 2013

Technical Friction

While working on a roadmap for the future of our products at B2b2dot0, I realized we needed a consistent way of prioritizing internal efforts. Several of these efforts require modifications to fundamental pieces of our current products and processes, so much so that they would require moving slower on current "billable" work. Justifying that level of structural change is difficult, especially at a startup where bandwidth is at a premium. But we also cannot forget to spend time on increasing efficiency.

Each of these changes represented a certain risk, but they also offered a great benefit to all future work and efforts. Lean methodologies teach us about constraints and maximizing flow through constraints. In thinking about flow, I realized it's about friction. When there is high friction it becomes hard for things to flow through the system. If we can reduce friction, we increase flow.

Technical Friction is the resistance that all improvements encounter from existing technical infrastructure (and culture). We have all encountered this before, typically in the form of "we can't do X because of Y."

Reduce technical friction. Aspire to be a polar bear on ice.
In identifying Technical Friction it became obvious how to prioritize internal efforts: the projects that reduced the most friction deserved the most attention. Simple enough. But what about reducing friction in a system that is currently running, receives little to no updates, and works as is? Reducing friction in those systems has low ROI and would more than likely introduce more work into the system, as most systems that fit that description are fragile and difficult to test. So for the purposes of prioritizing, it is also important to consider the flow and utilization of the component that has high friction.

This helps. The roadmap is now clearer and we can start moving by focusing on reducing friction in the systems that are constantly moving and changing. The goal is not to start moving, but to continue moving with low friction.

Thursday, July 4, 2013

Benefits and Positive Value of Pair Programming

After reading High costs and negative value of pair programming, I just had to write a post about the benefits and positive value of pair programming.
Where to start? How about a quick statement to debunk Capers Jones's analysis of pair programming. Capers uses 4 data tables to statistically prove pair programming ineffective and expensive. The basis for all the data is LOC (lines of code). LOC is a vanity metric, which only serves to document the current state of a product but offers no insight into how we got here or what to do next. To borrow from Scott Bradley's blog post on Cohort Analysis,

vanity metrics give the “rosiest possible picture” of a startup’s progress, but does not track how people are actually interacting with the application.
That is exactly what LOC is: a vanity metric that offers no insight into what is really happening.

So how should we measure the effectiveness of pair programming? Great question! Let's discuss actionable metrics. Actionable metrics tie specific and repeatable actions to observed results. Agile teams are full of repeatable actions, the smallest being Red, Green, Refactor, up through Iteration Planning and on to Release Planning. During the Red, Green, Refactor cycle a pair is constantly inspecting the code and reacting to it. Capers cites that lone programmers produce 20 LOC/hr while pairs only produce 15 LOC/hr. Those 5 fewer LOC/hr are a direct result of constantly refactoring and evolving the code. Those 15 lines of code are going to be more maintainable and easier to hand off to another developer than the 20 LOC written by a solo programmer that have not been reviewed by a peer. Not to mention you now have cross-pollination: instead of just one developer being able to maintain those 15 lines, at least two can, reducing your single point of failure.

What other actionable metrics can we observe about pairing? How about delivery of features? It's repeatable and produces a result. Features may differ across teams and projects. Based solely on my experience, from pairing on large teams at companies like Progressive and Nationwide to small teams at startups like Save Love Give, the teams that paired effectively produced features more often and of higher quality, with fewer defects. Those teams also exhibited a higher degree of cohesiveness and shared understanding that allowed them to be more effective in other aspects of the project, not just programming. The teams I have been on that did not pair, or utilized what I'd call "partner programming," had to rely on lengthy inspection processes and larger QA teams to ensure a higher degree of quality. The non-pairing teams also suffered from the side effects of turnover: whenever a developer would leave the project, code produced by that developer immediately became more costly to extend and maintain.

Speaking of turnover and changes within a team, one less often mentioned benefit of pairing is easier onboarding of new team members. New team members become contributors on day one, unlike on non-pairing teams, where new members must endure weeks if not months of learning standards, development process, the code base, etc. Pairing teams are able to introduce new members to the standards, process, and code base as they work "hands on" in the code.

So why would a team not pair? Culture. A culture that promotes strong egos, limited collaboration, and creates silos will certainly struggle with pairing. In order to pair the team needs to commit to setting their egos aside and working as a team. Even the most senior developer has something to learn from the youngest of developers!

The most difficult thing about writing this post is keeping it focused solely on pairing and ignoring other process changes that typically accompany teams that pair: TDD, iterative development cycles, and continuous integration, among other things. Pairing is a process change. Actually, it's a social change in your team culture, and your entire team needs to be open to that change. In 2002 I was part of a team that underwent an agile transformation and took on the entire book of eXtreme Programming practices. It took a while, but the entire team remained open to the change, and in the end we delivered some outstanding software ahead of schedule without working crazy hours. It turned out to be a project I frequently refer to when I describe the ideal scenario. We went from changing pairs on a 1-2 day basis to multiple times per day as it made sense.

Just because you do not pair does not mean you are evil. Well, not entirely evil :) It just means you have a different preference than I do. I respect your skills; however, I and others have experienced the benefits of pairing and prefer to work on teams that are open to collaboration and pairing, as such teams can deliver much more value in less time.