Thursday, December 26, 2013

Simple & Easy Git awesomeness for your bash prompt

Previously I had been using an old recipe published by Neal Ford some years ago to display source-control info in my bash prompt. It worked well for a long time and was compatible with SVN, Git, and even CVS. Lately, though, it had been acting a bit flaky, so I decided to refactor it. My goal was to display the repo name and current branch, and to indicate whether the working tree was dirty.

In doing my homework I learned that Git now ships with helper functions (git-prompt.sh) for displaying Git info in your Bash prompt. It also provides bash tab completion for Git commands! WIN!

Here's a quick Gist showing what I had to do:
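The original Gist embed is gone, so here is a minimal sketch of the relevant ~/.bash_profile additions. The source paths assume a MacPorts-installed Git; adjust them for your setup.

# Pull in Git's bundled prompt helpers and tab completion
source /opt/local/share/git/contrib/completion/git-completion.bash
source /opt/local/share/git/contrib/completion/git-prompt.sh

# '*' = unstaged changes, '+' = staged changes, '%' = untracked files
export GIT_PS1_SHOWDIRTYSTATE=1
export GIT_PS1_SHOWUNTRACKEDFILES=1

# Current directory (the repo name when you're at the repo root), then branch + status
export PS1='\W$(__git_ps1 " (%s)")\$ '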
And here is the result: the repo name followed by the branch name in parens. The '%' indicates I have untracked files in my working directory. For a full description of the indicators, read through the documentation in git-prompt.sh

Friday, August 23, 2013

Do you know who your server is talking to?

Meet Hong Kong Fuey, the world's greatest detective and agent behind FueyClient and the Fuey Dashboard.

FueyClient is a Ruby gem that you can install on your servers to monitor which resources are currently available to your server and which are not. FueyClient currently supports Pings, SNMPWalks, and RFC Pings for those out there working with SAP.

Configuring FueyClient to inspect what resources are available is a matter of configuring a simple YAML file.
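The embedded example config did not survive, so the YAML below is a hypothetical reconstruction based on the description that follows; it is not FueyClient's actual schema, so check the README for the real format.

# Hypothetical inspection config (key names are illustrative only)
inspections:
  - ping:
      name: "My DNS"
      host: 10.0.1.2
  - snmpwalk:
      name: "VPN connection"
      host: 10.0.1.1
      community: public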

This configuration would ping "My DNS" and, upon success, SNMPWalk to verify the VPN connection is available. If we wanted, we could then add another Ping to validate that resources on the VPN network were available.

Currently the execution, success, and failure of each inspection are logged locally and sent to a Redis queue. Tools like PapertrailApp are great for looking through the FueyClient logs, but even better is the soon-to-be-open-sourced Fuey Dashboard, which handles push notifications and shows realtime status for each of your inspections.


In the meantime, running FueyClient is easy. Just set up the config files and run it from the command line with the location of your config files passed as an argument. Even better, set it up as a cron job to run every 5 minutes, as in the sketch below! For more info on installation and configuration, check out the README.
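A crontab entry for that might look like the following (the executable name and paths here are assumptions; check the README for the actual invocation):

# Run the Fuey inspections every 5 minutes, appending output to a log
*/5 * * * * fuey_client /etc/fuey >> /var/log/fuey.log 2>&1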

At B2b2dot0 we value knowing that each of our clients' servers can communicate with their required resources, and we assembled Fuey just for that purpose! Hopefully Fuey can help you ensure your servers can communicate with theirs.

(Definitely expect an update in the next week or so when the Fuey dashboard is available!)

UPDATE: The Fuey Dashboard is now available for your consumption! Check it out!

Thursday, August 8, 2013

Install Redis on your Mac with MacPorts

To install Redis using MacPorts just run
sudo port install redis
Then to load it as a startup item run
sudo port load redis 
Or if you prefer to start a redis server manually
redis-server /opt/local/etc/redis.conf
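Either way, you can sanity-check that the server is responding with redis-cli:
redis-cli ping   # should print PONG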
As of the writing of this post, this installs Redis 2.6.10, while 2.6.14 is available directly from Redis.


Monday, July 29, 2013

Technical Friction

While working on a roadmap for the future of our products at B2b2dot0, I realized we needed a consistent way of prioritizing internal efforts. Several of our internal efforts require modifications to fundamental pieces of our current products and processes, so much so that they would require moving slower on current "billable" work. Justifying that level of structural change is difficult, especially at a startup where bandwidth is at a premium. But we also cannot forget to spend time on increasing efficiency.

Each of these changes represented a certain risk, but they also offered a great benefit to all future work and efforts. Lean methodologies teach us about constraints and maximizing flow through constraints. In thinking about flow, I realized it's about friction. When there is high friction it becomes hard for things to flow through the system. If we can reduce friction, we increase flow.

Technical Friction is the resistance that all improvements encounter from existing technical infrastructure (and culture). We have all encountered it before, typically in the form of "we can't do X because of Y."

Reduce technical friction. Aspire to be a polar bear on ice.
In identifying Technical Friction it became obvious how to prioritize internal efforts. The projects that reduced the most friction deserved the most attention. Simple enough. But what about reducing friction in a system that is currently running, receives little to no updates, and works as-is? Reducing friction in those systems has low ROI and would more than likely introduce more work into the system, as most systems fitting that description are fragile and difficult to test. So for the purposes of prioritizing, it is also important to consider the flow and utilization of the component that has high friction.

This helps. The roadmap is now clearer and we can start moving by focusing on reducing friction in the systems that are constantly moving and changing. The goal is not to start moving, but to continue moving with low friction.

Thursday, July 4, 2013

Benefits and Positive Value of Pair Programming

After reading High costs and negative value of pair programming, I just had to write a post about the Benefits and Positive Value of Pair Programming.
Where to start? How about a quick statement to debunk Capers Jones's analysis of pair programming. Capers uses 4 data tables to statistically "prove" pair programming expensive and of little value. The basis for all the data is LOC (lines of code). LOC is a vanity metric, which only serves to document the current state of a product but offers no insight into how we got here or what to do next. To borrow from Scott Bradley's blog post on Cohort Analysis,

vanity metrics give the “rosiest possible picture” of a startup’s progress, but does not track how people are actually interacting with the application.
That is exactly what LOC is: a vanity metric that offers no insight into what is really happening. So how should we measure the effectiveness of pair programming? Great question! Let's discuss actionable metrics. Actionable metrics tie specific and repeatable actions to observed results. Agile teams are full of repeatable actions, from Red-Green-Refactor at the smallest scale up through iteration planning and on to release planning. During the Red-Green-Refactor cycle a pair is constantly inspecting the code and reacting to it. Capers cites that lone programmers produce 20 LOC/hr while pairs produce only 15 LOC/hr. Those 5 fewer LOC/hr are a direct result of constantly refactoring and evolving the code. Those 15 lines of code are going to be more maintainable and easier to hand off to another developer than the 20 LOC written by a solo programmer that no peer has reviewed. Not to mention you now get cross-pollination: not just one developer can maintain those 15 lines, but at least two, reducing your single point of failure.

What other actionable metrics can we observe about pairing? How about delivery of features? It's repeatable and produces a result. Features may differ across teams and projects, but based solely on my experience pairing on large teams at companies like Progressive and Nationwide and on small teams at startups like Save Love Give, the teams that paired effectively produced features more often, of higher quality, and with fewer defects. Those teams also exhibited a higher degree of cohesiveness and shared understanding that allowed them to be more effective in other aspects of the project, not just programming. The teams I have been on that did not pair, or that utilized what I'd call "partner programming", had to rely on lengthy inspection processes and larger QA teams to ensure a high degree of quality. The non-pairing teams also suffered from the side effects of turnover: whenever a developer left the project, the code produced by that developer immediately became more costly to extend and maintain.

Speaking of turnover and changes within a team, one less often mentioned benefit of pairing is easier onboarding of new team members. New team members become contributors from day one, unlike on non-pairing teams, where new members must endure weeks if not months of learning the standards, development process, code base, etc. Pairing teams are able to introduce new members to the standards, process, and code base as they work hands-on in the code.

So why would a team not pair? Culture. A culture that promotes strong egos, limited collaboration, and creates silos will certainly struggle with pairing. In order to pair the team needs to commit to setting their egos aside and working as a team. Even the most senior developer has something to learn from the youngest of developers!

The most difficult thing about writing this post is keeping it focused solely on pairing and ignoring the other process changes that typically accompany teams that pair: TDD, iterative development cycles, and continuous integration, among other things. Pairing is a process change. Actually, it's a social change in your team culture. Your entire team needs to be open to that change. In 2002 I was part of a team that underwent an agile transformation and took on the entire book of eXtreme Programming practices. It took a while, but the entire team remained open to the change, and in the end we delivered some outstanding software ahead of schedule without working crazy hours. It turned out to be a project I frequently refer to when I describe the ideal scenario. We went from changing pairs on a 1-2 day basis to multiple times per day as it made sense.

Just because you do not pair does not make you evil. Well, not entirely evil :) It just means you have a different preference than I do. I respect your skills; however, I and others have experienced the benefits of pairing and prefer to work on teams that are open to collaboration and pairing, as they can deliver much more value in less time.

Tuesday, June 4, 2013

Exposing your controllers intent in Rails

Every time I see an instance variable assigned some value in a controller action, it bothers me. It just violates some part of my soul, so I've been toying with a way to make it more palatable and the intent clearer. I would love to hear your feedback on this approach.
Check out this gist to get the idea.
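The gist embed has not survived, so here is a sketch of the shape of the idea: replace ad hoc instance-variable assignment with a small expose-style class macro that declares a memoized, view-visible method. The names are illustrative (the decent_exposure gem is a fuller take on this same pattern):

class ApplicationController < ActionController::Base
  # Declares a lazily evaluated, memoized helper method instead of an ivar.
  def self.expose(name, &block)
    define_method(name) do
      @_exposed ||= {}
      @_exposed[name] ||= instance_eval(&block)
    end
    helper_method name
  end
end

class PostsController < ApplicationController
  expose(:post) { Post.find(params[:id]) }

  # The action now reads as pure intent; the view simply calls `post`.
  def show
  end
end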

Tuesday, May 28, 2013

Installing Ruby 2.0.0 with RVM and without Homebrew

Yes, I said "without Homebrew". I'm a MacPorts guy and have been for a while, so my MBP is laden with MacPorts-managed packages. I actually have Homebrew installed, but have only used it once. So you can flame all you want, but unless you want to redo my entire machine, lay off.

With that aside, I ran into problems using RVM to install 2.0.0 because it wanted to install dependencies using Homebrew, dependencies which I had already installed using MacPorts. Of course I didn't want to uninstall MacPorts and its packages and let Homebrew handle everything, especially just to have openssl compiled by Homebrew. Until today (my last attempt was 3 weeks ago) I could not get around the Homebrew requirement, and it's for one of two reasons, which I have not verified: either RVM stopped forcing Homebrew in their last update (not likely), or the rvm pkg command allows the default package manager to run vs. forcing Homebrew (more likely; I had not previously tried rvm pkg to install dependencies).

In the end, here is what I needed to do. (Note: I already had Ruby 1.9.3 and its dependencies installed.)

rvm get stable                          # update RVM itself first
rvm pkg install openssl                 # have RVM build openssl into ~/.rvm/usr
rvm install 2.0.0 --with-openssl-dir=$HOME/.rvm/usr --verify-downloads 1
rvm use 2.0.0

Tuesday, April 9, 2013

Ruby 1.8.7-p358 on Mountain Lion with RVM

While trying to install 1.8.7-p358 I ran into a compile error:
/usr/include/tk.h:78:23: error: X11/Xlib.h: No such file or directory
I found the answer to the problem here and an explanation here.

Resolving the problem is easy. Just follow these steps:

  1. Install XQuartz
  2. Tell the compiler to use the correct installation with: export CPPFLAGS=-I/opt/X11/include
  3. Then attempt the install again with RVM: rvm install 1.8.7-p358

Wednesday, January 16, 2013

Riak: What's faster? 2i or Key Filters

Recently we've experienced a large surge in the size of our data, and it has called into question some of our querying approaches and node configuration.

We had been using 2i for most of our querying, and a combination of "Data Point Objects" and MapReduce for our more analytical needs. However, when our MapReduce started bombing we questioned and reviewed our querying approach. (For the record, it was bombing due to preflist_exhausted errors, since resolved; more on this in a subsequent post.)

It didn't take long to find posts like this on Google:
Be aware that key filters are just a thin layer on top of full-bucket key listings. You'd be better off storing the field you want to filter in a secondary index, which more efficiently supports range queries (note that only the LevelDB storage engine currently supports secondary indexes). Barring that, you could use the special "$key" index for a range query on the key. - Sean Cribbs (http://riak-users.197444.n3.nabble.com/preflist-exhausted-error-td3935922.html)
Key Filters are the backbone of our querying approach. We were (and are) under the impression this is the best way to get a subset of the data for MapReduce. Since most of our queries are over date ranges, all of our keys are prefixed with yyyymmddhhmmss. This allows us to use date-based Key Filters in our MapReduce. According to the post, however, we should be using 2i and the special $key field for performance reasons.

First of all, what is $key??? Here is what the Riak Handbook has to say about the $key field:
There are special field names at your disposal too, namely the field $key, which automatically indexes the key of the Riak object. Saves you the trouble of specifying it twice. Riak automatically indexes the key as a binary field for your convenience ...
So I put it to the test. For the record, we are using Ruby and Rails with Ripple (ODM) and Ripplr (RiakSearch). I opened a Rails console and ran my Key Filter based MapReduce. It returned 31,830 rows in a little over a minute on average. I then queried the same bucket using 2i and $key with the same parameters and no Reduce phase. It ran for over 3 minutes before I canceled it! For a sanity check I shrank the range to two months that I knew had much less data ... it returned results over a minute later!

The code for our Key Filter:
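(The gist is gone; this is a close approximation using the riak-client gem, with the bucket name, date range, and map function as illustrative stand-ins.)

require 'riak'

client = Riak::Client.new
mr = Riak::MapReduce.new(client)
# Key filter: a lexical range over our yyyymmddhhmmss-prefixed keys
mr.filter("events") do
  between "20130101000000", "20130131235959"
end
mr.map("Riak.mapValuesJson", keep: true)
results = mr.run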
The code for the comparable 2i range query:
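(Again an approximation with the same illustrative values; the special $key index takes the same range as input, and there is no Reduce phase.)

mr = Riak::MapReduce.new(client)
# Secondary-index input over the special $key index
mr.index("events", "$key", "20130101000000".."20130131235959")
mr.map("Riak.mapValuesJson", keep: true)
results = mr.run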
So my question to all of you is: why is 2i/$key being recommended over Key Filters? What am I missing here?

As a side note ... Riak handled an extraordinary spike in load like a champ, writing and reading data. Our concerns strictly revolve around how best to query that data now that we have it.

Monday, January 14, 2013

Why CodeMash continues to be the awesome!

What is CodeMash? 

"CodeMash is a unique event that will educate developers on current practices, methodologies, and technology trends in a variety of platforms and development languages such as Java, .Net, Ruby, Python and PHP." - Codemash.org 

This was CodeMash's seventh year and my fourth year in attendance. While the CodeMash description is accurate, what it fails to describe is the CodeMash culture. Education does not just happen during speakers' sessions or precompilers. It also occurs while sitting down and having a brew with a signatory of the Agile Manifesto, or while having a casual conversation about Rails 4.0 with a core contributor. It even occurs while sitting in the hot tub/swim-up bar with other agile coaches talking about real-world scenarios. There are no rules at CodeMash, except one ... be yourself. Oh, and maybe lots of crispy bacon.

2013 tech trends at CodeMash

I tend to be a Ruby-centric developer these days, but these were the buzzwords I continued to hear while at CodeMash.
  • JavaScript
  • Gamification
  • Single Page Applications
  • Bacon Bar
  • and more JavaScript

JavaScript

It was extremely difficult to find a talk that did not mention JavaScript. Several frameworks have evolved around JavaScript, including AngularJS, Backbone, Node, and Knockout. Testing for JavaScript has also matured significantly with tools like Jasmine and Lineman (see my notes on the Test Double talk).
On a similar note, CoffeeScript is the language of choice for crafting JavaScript, as it compiles down to (and promotes) best-practice JavaScript code that is 99.9% guaranteed to work in IE! And you don't need to work in Rails to use CoffeeScript. You can use the coffee command-line tool to watch a directory and have it (re)compile your CoffeeScript as you make changes; see the example below. Keep reading for more on testing in JavaScript!
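For instance, this recompiles everything in src/ into lib/ on every change (the directory names are just an example):
coffee --compile --watch --output lib/ src/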

Gamification and Single Page Applications

Dennis and Brian from SRT Solutions have crafted an application for exploring different ways to write single page applications. If you have ever read a Choose Your Own Adventure book, you're going to love their application, titled Choose Your Own Application. The focus is on building your own single page application with your choice of technologies. The application has been "gamified" and "players" earn badges for each choice they make. This is a great opportunity to explore a new technology in a fun way. Technology choices include Backbone, Knockout.js, .NET, Rails, Node.js, Heroku, CoffeeScript, and Azure.
Rails and Single Page Applications ... with the release of Rails 4.0, Rails adds default support for single page applications (Turbolinks). Currently it can be disabled by removing the turbolinks gem from the Gemfile; otherwise you will need to disable it on a per-link basis. DHH has stated he intends to drive Rails in the direction that is best for Basecamp, a single page application, so expect more changes like this in the near future. My $.02: expect a community fork of Rails in the near future.
Brian Prince delivered an excellent talk on Gamification. He discussed several real-world examples where gamification has modified user behavior, including applications that encourage diabetics to test their blood glucose regularly and elderly people living at home alone to stay active and engaged. The important thing to remember is to identify the behavior you want to change and then gamify that aspect of your application to encourage it. Adding badges for the sake of adding badges often encourages the wrong behavior.

And gamification does not just mean badges. Take the bottomless trash can, for example. It changes behavior by encouraging people to put their trash into a trash can: when they do, it sounds like their garbage is falling into a deep chasm. It's fun and gets people to do it again. They actually found people collecting trash nearby just to throw it in!

Machine Learning

Seth Juarez delivered two excellent presentations on Machine Learning. Machine learning allows us to find and exploit patterns in data. There are two main classifications of machine learning: supervised and unsupervised. Supervised learning allows us to be predictive, while unsupervised learning helps us understand the structure of the data. For more details, read my notes from Seth's talks (part 1, part 2).
Seth also has a NuGet package that can be imported into Visual Studio. It is called NuML and can be found here. It was demoed during his talk and looks awesome! As the number of Big Data projects grows, this is going to become a more and more common topic of discussion and application.

Real world Javascript testing

JavaScript testing has really improved since I last looked into it. Jasmine appears to be the front runner and, from what I saw and experienced, is my preferred choice. It looks a lot like RSpec and can use the rspec-given syntax thanks to Justin Searls and jasmine-given. Justin demonstrated a combination of tools that makes testing JavaScript extremely easy. Lineman is one of those tools; it requires Node.js and NPM to install, and it is used to run your Jasmine specs. You can read more about JavaScript testing in my detailed notes on his talk.

Better Metrics for your team

Nayan Hajratwala gave a fantastic demonstration on measuring your team's effectiveness. Traditionally teams have been measured by cyclomatic complexity, velocity, hours in the office, etc. However, none of those answer what the customer really wants to know: what is the team's throughput?
Throughput is the rate at which features are passing through the system. Most often teams try to deliver more by putting more work in progress into the system. This often results in lower quality, bottlenecks, and overall lower throughput.
Cycle time is the time between two successfully delivered features, and Little's Law can be used to compute it. Little's Law is described as:
The average number of work items in a stable system is equal to their average completion rate, multiplied by their average time in the system.
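In symbols: WIP = throughput × cycle time. With made-up numbers (mine, not Nayan's), a team delivering 2 features per week with an average cycle time of 3 weeks is carrying 2 × 3 = 6 features in progress; if throughput stays fixed, the only way to cut WIP is to cut cycle time, and vice versa.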
To demonstrate, Nayan created 4 teams and had each team play "The Dot Game". The game divides the team into 8 roles and measures how fast the team can assemble the "product". The demonstration showed that adding more work in progress only resulted in less being delivered. Nayan then changed the rules of the game so that there was less work in progress, requiring each role to work on only one product at a time, and repeated the exercise. Each of the 4 teams saw an average 8x improvement in cycle time, a huge improvement in quality, and an increase in the amount of product produced.
The goal should not be 100% utilization of the workforce; it should be maximizing throughput. This demonstration showed that minimizing work in progress and having each role focus on one thing at a time resulted in less than 100% utilization, but also in much higher throughput and higher quality.


Bacon Bar!

Several stations were assembled, each with its own mouth-watering trays of bacon and selection of toppings. 350 lbs of bacon were consumed in a very short amount of time, and no heart attacks were reported. Thanks to Josh Walsh and Designing Interactive for coming up with this great idea and sponsoring the activity!!


I was, however, surprised that Duct Tape beat Bacon 34-29 in the first round of Manifest's MashMadness. Duct Tape even went on to beat Gandalf the Grey in the championship round. Gonzaga??