June 2009 Archives

Just Effing Do It


The first chapter of Ben Hunt’s book Save the Pixel contains a great snippet of advice:

Once you’re clear what needs to be done, stop all analysis, and apply the JFDI process (“Just F*ing Do It”).

I love this. I’ve been developing an analysis allergy over the past few years. In design meetings, I can’t help but think how much quicker it would be to just try one of the options, or ALL of them, and find out the best design by experimentation.

Now, when I feel like I’m stuck in a rut, I start writing code. It crystallizes my understanding of the problem and has the pleasant side-effect of producing something useful.

Fat Setups, Skinny Tests


Dave Stanek recently turned me on to a different style of unit testing:

  • write a test class for each scenario
  • put all the execution in the setup method
  • verify outcomes in tiny test methods

I’m sure someone has a better name for this, but I think of it as fat-setup/skinny-test.

A Simple Example

Here’s a fictional class that is good at finding Jedi of both good and evil inclination. Its main job is to return a tuple containing a list of light jedi and another of dark jedi. Empty lists mean no matches.

Unfortunately, the adapter JediFinder uses returns None when there are no matches, so our class needs to do a little logic to detect that and return empty lists instead.


class JediFinder(object):
    """A class good at finding different kinds of Jedi in the Universe"""
    def __init__(self, search_adapter):
        self.search_adapter = search_adapter

    def find_jedi(self):
        light_jedi, dark_jedi = self.search_adapter.find()
        if light_jedi is None:
            light_jedi = []
        if dark_jedi is None:
            dark_jedi = []
        return light_jedi, dark_jedi

The Tests

So now we want to test our finder. Here’s one way I might do fat setups. First I have a base test class that sets up things used in many/all of the test cases:

    from mock import Mock
    class JediFinderTest(object):
        def setup(self):
            self.search_adapter = Mock()
            self.finder = JediFinder(self.search_adapter)

Now I can define other test cases, each based on a different scenario. In this case, making sure the code does the right thing when no jedi are found, when only dark side jedi are found, when only light side jedi are found, and when both are found. Note that each scenario’s setup calls the base class setup first, so the mock adapter and finder exist before the scenario configures them:

    class TestWithNoResults(JediFinderTest):
        def setup(self):
            super(TestWithNoResults, self).setup()
            self.search_adapter.find.return_value = (None, None)
            self.results = self.finder.find_jedi()

        def test_returns_empty_lists(self):
            assert self.results == ([], [])

    class TestOnlyLightJediFound(JediFinderTest):
        def setup(self):
            super(TestOnlyLightJediFound, self).setup()
            self.search_adapter.find.return_value = (["Obi-Wan"], None)
            self.results = self.finder.find_jedi()

        def test_light_is_obiwan(self):
            assert self.results[0] == ["Obi-Wan"]

        def test_dark_is_empty(self):
            assert self.results[1] == []

    class TestOnlyDarkFound(JediFinderTest):
        def setup(self):
            super(TestOnlyDarkFound, self).setup()
            self.search_adapter.find.return_value = (None, ["Vader"])
            self.results = self.finder.find_jedi()

        def test_dark_is_vader(self):
            assert self.results[1] == ["Vader"]

        def test_light_is_empty(self):
            assert self.results[0] == []

    class TestLightAndDarkFound(JediFinderTest):
        def setup(self):
            super(TestLightAndDarkFound, self).setup()
            self.search_adapter.find.return_value = (["Obi-Wan"], ["Vader"])
            self.results = self.finder.find_jedi()

        def test_dark_is_vader(self):
            assert self.results[1] == ["Vader"]

        def test_light_is_obiwan(self):
            assert self.results[0] == ["Obi-Wan"]


I like this style because the smaller test methods get right to the point. They express what the test is verifying instead of filling lines with setup and configuration.

Another developer can come along, pretty much ignore the setups, and just look at test methods to learn what’s going on.

I think it also helps with maintenance. You can add new test methods to each case when new features are added without refactoring or duplicating test setups.

Nature Hates Your Project



The Universe wants your project to fail. Here are a few ways that Nature issues the beat-down:


All things tend toward chaos. Software by its nature is the organizing of thoughts, procedures, and electrons. Very unnatural. Advantage: Nature.

The Relative Rarity of Psychic Powers in Humans

Your big project plan assumes you have some reasonable skill at predicting the future. Sadly, humans are bad at this. Advantage: Nature.

The Inexorable Progression of Time

Time doesn’t take a break. So every minute you’re sleeping and NOT coding, the Universe pulls ahead. Oh, and those new features you want to add in the same amount of time? Sorry. No deal. Advantage: Nature.


Is it any wonder that software projects have dismal success rates with the odds stacked so high?

One of the awesome things about Agile/Lean/XP is that they admit defeat in each of these cases. Rather than beating our heads against these monumental obstacles, they slice and dice our activities to minimize the damage Nature can do to our fragile efforts.

Can’t predict the future? Use sprints to limit your focus to shorter time spans. Our guesses get more accurate as the time we need to predict gets lowered.

Hard to fit all your features in before the deadline? Don’t focus on finishing everything before your deadline. Ensure you work on the most important features, and that those features work.

Code turning to mush? Refactor. Refactor. Refactor. This one is a drag-out brawl with entropy. You have to keep up on this, and there’s no good way around it. Left to itself, your codebase will rot into a steaming pile of poo. You mow your lawn every week. Refactor your code regularly.

And so on. This is why many projects are so painful to developers. We’re being asked to undo the very fabric of the Universe and deliver on time. Of course, that’s exactly what we do, but it hurts like hell. Why not go for something a little more fun?

[photo courtesy of Jon Wiley, some rights reserved]

Dojo Codeswarm


I love codeswarms. Though their utility may be questioned, it’s neat to see your hard work translated into organically moving points of light.

It’s also a useful way to help people understand just how much activity goes into software.

So, I was very happy to see the Dojo codeswarm on Alex Russell’s blog.

I’ve actually rigged one for our project at work. It’s not too hard if you follow the instructions and have a little patience.

Jakob Nielsen has an awesome post about the importance of the first 2 words of a link. He sets the cutoff at eleven characters.

Users typically see about 2 words for most list items; they’ll see a little more if the lead words are short, and only the first word if they’re long. Of course, people don’t see exactly 11 characters every time, but we picked this number to ensure uniformity across the sites we tested.

This prompted me to write a (bad) greasemonkey script that shows only the first 11 characters of hyperlinks.
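The truncation rule itself is trivial; in Python terms, the script just does the equivalent of this to each link’s text (the function name is mine, not Nielsen’s):

```python
def truncate_link_text(text, limit=11):
    """Keep only what a scanning reader sees: the first `limit` characters."""
    return text[:limit]
```

Run it over one of this page’s own post titles and “Fat Setups, Skinny Tests” comes out as “Fat Setups,”.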

After I installed it, I visited a few blogs to see how post titles looked.

I found that “good” titles tended to have exotic or unique words, and made me want to click:

  • “Ready, Aim.”
  • “Best Mac Ev”
  • “Maps of tun”

Bad headlines tended to have boring or null words, and felt as if they’d been cut short:

  • “Build Your”
  • “Why it’s wi”
  • “A Reminder”

Give the script a shot, and see if you have a similar experience.

(And to save you the trouble, the first 11 characters of this post’s title are “11 Characte”. Riveting.)

Just saw this well-made video that explores what things must be like for enterprise software companies. It’s easy and fun to vilify vendors, but this is a nice look at things from their perspective.

[via Scofield Editorial]

I was blown away when I read about how IMVU runs a million tests a day in the course of about 50(!) production deployments.

The way it happens is downright shuttle-like:

The code is rsync’d out to the hundreds of machines in our cluster. Load average, cpu usage, php errors and dies and more are sampled by the push script, as a basis line. A symlink is switched on a small subset of the machines throwing the code live to its first few customers. A minute later the push script again samples data across the cluster and if there has been a statistically significant regression then the revision is automatically rolled back. If not, then it gets pushed to 100% of the cluster and monitored in the same way for another five minutes.

I freaking LOVE that. The machine dipping its toes in the water before moving code, then again after to make sure things are OK. The Pragmatic Programmer talks about ubiquitous automation, but this is something approaching a whole new level, man.
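The loop the quote describes can be sketched in a few lines of Python. To be clear, everything here is invented for illustration: the function names, the single error-rate metric, and the fixed threshold. The real push script samples many metrics and does a proper statistical test, not a crude comparison.

```python
def regressed(baseline, sample, threshold=0.05):
    """Crude stand-in for 'statistically significant regression'."""
    return sample > baseline + threshold

def deploy(revision, hosts):
    """Placeholder for the rsync + symlink switch."""
    pass

def rollback(revision, hosts):
    """Placeholder for reverting the symlink."""
    pass

def push(revision, cluster, error_rate, canary_fraction=0.05):
    """Deploy to a small subset first; roll back on regression."""
    baseline = error_rate(cluster)            # sample metrics before the push
    n = max(1, int(len(cluster) * canary_fraction))
    canary = cluster[:n]                      # code goes live to a few customers
    deploy(revision, canary)                  # "switch the symlink"
    if regressed(baseline, error_rate(canary)):
        rollback(revision, canary)            # automatic rollback
        return "rolled back"
    deploy(revision, cluster)                 # push to 100% of the cluster
    return "deployed"
```

The interesting design choice is that the machines, not the humans, decide whether the revision survives.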

I think we can all admit to taking a shot or two at Perl Guys from time to time: I like to do it with a thick east-European accent. “Vat? You need billing system? I write in Perl. Ten lines.”

Well who’s laughing now? Here’s Sig Wik, a wiki in four lines of Perl. Four!

use CGI':all';path_info=~/\w+/;$_=`grep -l $& *`.h1($&).escapeHTML$t=param(t)
||`dd<$&`;open F,">$&";print F$t;s/htt\S+|([A-Z]\w+){2,}/a{href,$&},$&/eg;
print header,pre"$_<form>",submit,textarea t,$t,9,70

This all comes courtesy of the Shortest Wiki Contest. Good food for thought in the whole brevity vs. clarity thing.


The next time someone quotes me the Project Triangle, I will set fire to the conference room.

Seriously. A grade-school student understands this thing.

If I spend less time building this ramp for my BMX bike, it won’t be as good.

I could pay for a fancy pre-fab ramp and save myself some time.

I could take two weeks and make one from twigs I collect from the ditch.

Knowing about the triangle isn’t the problem. The challenge is in how you deal with it.

When you show someone that triangle, you’re telling them something they already know: software projects are hard. Hell, all projects are hard.

If something is easy, we call it a game and serve beer.

In my nascent quest to resurrect Code Softly, I’ve stumbled upon a great set of 3D icons for things like RSS feeds and Twitter accounts. Pretty sweet for zero dollars. You can check out the RSS and Twitter versions in my kickass sidebar to the right —>

[link] Free AJI 3D social media icon set #1

[link] Free 3D social media icon set #2 - by AJI


If you’re frustrated by endless technical design meetings, try short-circuiting one with some behavioral tests.

Meetings hurt. They take precious time away from building things. Technical design meetings are especially painful because we know we could use that time to write code. Instead, we come out of the hour (or two!) bleary-eyed and with little to show.

If you feel this pain, try replacing some of those meetings with behavioral tests. These tests can demonstrate the interface or behavior you’re trying to design, then live on as verification and documentation.

I’ve discovered some ground rules for doing this:

  1. No business-folk. You’re writing code, not prose. This is for developers only.
  2. Be flexible. Your initial tests are brainstorms and straw men to get the conversation going.
  3. Keep your tests alive. It’s important that these initial tests survive through to implementation, and remain valuable as verification AND documentation.

An example: Pretend we’re building a web front-end for a system that books hotel rooms. The back end is responsible for looking up availability and is to be built by another team. We need to design the interface between the web code, which has collected the information about the room, and the backend code, which can look it up. I might write an initial behavior test like this:

class TestRoomChecker(object):
    def should_get_available_single_room(self):
        checker = RoomChecker()
        room = checker.check(beds=2, start="12/1/2009", duration=4)
        assert room is not None

Even with this rinky-dink test, I’m communicating TONS of useful things to the backend team:

  1. The name of the method the front end will invoke to look for rooms
  2. That I want to send the date as a string, and not a datetime
  3. That I’ve forgotten about smoking rooms, accessible rooms, and bed size.

This is all great stuff that came about because I wrote my behavioral test before meeting with people to discuss technical details. Now when we do get together and discuss the test, I can add TODOs directly in the code to note the issues and immediately update my tests to reflect them.

It’s easy for someone from the backend team to look at this and point out all the flaws and defects. If you’re really feeling your oats, ask them to stay at your desk and write a few more with them looking over your shoulder. You’ll end up with a rich set of tests that define and verify the new interface.
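After one round of that feedback, the test might look something like this. The stub class and the extra parameter are invented for the example; the point is that the TODOs live right next to the code they affect, and the test keeps running while the design evolves:

```python
class RoomChecker(object):
    """Throwaway stub so the evolving test can run before the backend exists."""
    def check(self, beds, start, duration, smoking):
        return {"beds": beds}  # canned result, stands in for a real lookup

class TestRoomChecker(object):
    def should_get_available_single_room(self):
        checker = RoomChecker()
        # TODO: backend team wants a datetime, not a "12/1/2009" string
        # TODO: still missing accessible rooms and bed size
        room = checker.check(beds=2, start="12/1/2009", duration=4,
                             smoking=False)
        assert room is not None
```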