JAW Speak

Jonathan Andrew Wolter

Future of Work at London Business School: My Take on ThoughtWorks Globally

with one comment

Reading time: 2 – 4 minutes

Thanks to Roy’s urging, over the last six months I’ve been involved with London Business School’s Lynda Gratton in a consortium on the Future of Work. What will work look like in 2020, and how can companies become “future proof”? Specifically, what will global corporations look like? We kicked it off in London last November, where we met people from several dozen companies. It’s been an honor, and a chance to meet many interesting people.

Today, looking back at it, I am extraordinarily proud to call myself a ThoughtWorker. As an employer, we have a unique position: a global culture, lived-out social values, and world-changing technology.

Go visit any ThoughtWorks office in the world and you’ll find many people on international assignment. Further globalization and virtual-globalization will permeate work in 2020 — but today we clearly stand out as a leader amongst our peers. On vacation last winter in Beijing (pictures), I visited our office. It felt no more like I was in China than if I were in our offices in Bangalore, San Francisco, London or Chicago. It was not an “American culture,” but a “smart, interesting, passionate” culture of people with deep and varied interests — surrounded by great technology and delivering some game-changing products.

We encourage frequent cross-pollination of ideas and experiences through short and long term transfers. I felt right at home, met some amazing people (fellow TW’er and father of the Chinese internet Michael Robinson), got set up with daily Chinese lessons, and returned to visit in the office (for said lessons) almost every day.

There are so many more things to write about. Work in 2020 will inevitably involve more cultures, countries, and languages. The marketplace for your goods could be halfway around the world. Virtual workers and virtual meetings will explode. Yet I also predict more face time with global coworkers. (Think how the world will change when we have US-to-Asia, or Europe-to-South-America, point-to-point travel in a few hours via spaceflight.) This will usher in increased complexity and security measures; however, I am excited for the future.

I have many more ideas, especially about the social values side of work in 2020 — but in true agile fashion, I’d like to see if anyone is interested in this before putting in more time up front.


Written by Jonathan

April 9th, 2010 at 7:41 pm

Posted in career, thoughtworks

“No Project Was To Extend Beyond 90 Days”

with one comment

Reading time: 2 – 4 minutes

McKinsey has an interview (pdf) with Kundapur Vaman Kamath, ICICI’s award-winning MD and CEO from 1996 until 2009. He explains why he had no CIO: technology was so strategic that he took it on as part of the CEO’s responsibilities. (An unusual move, considering how thin the two roles would stretch him.) His boldness is evident in the following quote.

[Startups] in Silicon Valley were taking products from concept to market in 90 days, because if they didn’t, somebody else would. So we asked, “Why can’t we?” We made it a rule: no project was to extend beyond 90 days. People were skeptical at first, but it was achievable, and it gave us a huge competitive edge. When I first heard about the 90-day rule at a seminar, we were building a platform for online brokerage almost from scratch. I got on the phone to Bombay from New York and said, “We need to get this done in 90 days.” The project had already been going for 30 days, so in the end I said, “OK, you can have 90 days from today.” The trading platform was up and running 90 days later. It cost us just over $1 million, and with some marginal tweaking–nothing more–it is still operating today.

Imagine that! Every project must go to market in 90 days. What would your organization look like if you instituted such an aggressive policy?

If implemented in most organizations, I predict two outcomes:

  1. Many projects would be canceled, saving millions of dollars.
  2. Surviving projects would release incrementally and progressively. No big up-front design followed by years of waterfall; instead, iterative enhancements and frequent production deployments. You won’t build what you don’t need, and you’ll get customer feedback faster, so you deliver more of what customers want.

His other quote was great as well:

We decided to run technology in a radically different way from anyone else, so we don’t have a technology department or a glorious title like chief information officer. There is no CIO. Technology is embedded in every business, and the head of the business runs the technology.

Closer business and technology interaction: a recipe for success.

ICICI is India’s largest private bank, which succeeds primarily because it can rapidly implement technologies that give it a competitive edge, says the bank’s chief executive, K. V. Kamath. Kamath, CEO of the Industrial Credit and Investment Corporation of India, considers information technology so central to the bank’s achievements that he manages it himself, without a CIO. Drawing inspiration from the culture and methodologies of Silicon Valley, Kamath has turned a stodgy industrial lender into a regional powerhouse with assets of $56 billion. Having learned to serve low-income consumers cost-effectively in India, ICICI now is exploring other markets.


Written by Jonathan

February 23rd, 2010 at 9:43 pm

Simplicity is Better for Deploying in Production Web Architectures

with one comment

Reading time: 5 – 8 minutes

Engineering something to be scalable, highly available, and easily manageable has been the focus of much of my time recently. Last time I talked about spiderweb architecture: it has attributes of scalability and high availability, yet comes with a hidden cost. Complexity.

Here is a fictional set of questions, and my responses for the application architecture.

Q: Why does complexity matter?
JAW: Because when your system is complex, there is less certainty. Logical branches in the possible states of the system mean more work for engineers to build a mental model and decide what action to take. Complexity means more unique points of failure.

Q: But my team is really, really smart; my engineers can handle clever and complex mental models!
JAW: That wasn’t a question, but I do have a response. At any moment in time, there is a finite amount of complexity a team can deal with. Complexity can go into the application’s logic, delivering business value; or it can go into non-functional requirements (NFRs). If the NFRs can be met with lower complexity, that translates directly into more business value. A team grows in its ability to manage complexity as it understands more of the system and as team size increases, but those productivity gains can be spent either on business value or on complex architectures. And often, NFRs can be met while still achieving simplicity.

Q: So how do I deal with a large, complex application which needs an emergency fix on one of the small components?
JAW: Yes, I know the scenario. You want to make a small change in production, and it sounds less risky to push only one part. Here’s my recipe for success: make every deployment identical, and automated. (Ideally, push into production from continuous builds, with automated testing.) In the event of an emergency push into production, alter the code from your version control tag, and deploy that as you would every other push. My colleague Paul Hammant calls non-standard, risky pushes “white knuckle three-in-the-morning deployments.”

Don’t make the e-fix a one-off, non-standard production push. Keep the entire system simple and repeatable. With repeatability and automated repetition comes security. Very flexible (read: complex), extensible (read: rarely exercised day to day) hooks can be built into a system to make it possible to push just one small component into production. In reality, though, unused code becomes stale, and when a production emergency happens people will be afraid to try those hooks. Or if they do, there is a greater risk of a misconfiguration and failure, which then necessitates a fix of the failed fix that tried to fix the original tiny defect. More complexity, blowing the original availability requirements out of the water.
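As a minimal sketch of that one deploy path (the repository URL, tag layout, build, and target host below are made up; your steps will differ):

#!/bin/bash
# deploy.sh <svn-tag> -- the one and only deploy path, for routine pushes
# and e-fixes alike; URL, paths, and host here are hypothetical
set -e
TAG=${1:?usage: deploy.sh <svn-tag>}

svn export "http://svn.example.com/repo/tags/$TAG" "/tmp/build-$TAG"
cd "/tmp/build-$TAG"
mvn package                                    # same build, same tests, every time
scp target/app.war deploy@app-node:/opt/app/   # same target layout, every time

An emergency fix is then just another tag fed through the same script, not a special one-off procedure.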

Q: So, what is simplicity?
JAW: My definition: simplicity is the preference for fewer combinatorial states a system can be in. Choose defaults over configuration options.

I recently read a quote from High Scalability, which I think gives a good definition of what simplicity is (emphasis added):

“Keep it simple! Simplicity allows you to rearchitect more quickly so you can respond to problems. It’s true that nobody really knows what simplicity is, but if you aren’t afraid to make changes then that’s a good sign simplicity is happening.”

[Caveat: some complexity makes sense; it’s just that too much in the wrong places increases risk. And there is a threshold everyone needs to find: how much risk, how much flexibility, and how much energy to devote to reducing the risk while keeping flexibility high.]

Update: Thanks to Lucas for pointing me to an interesting article about Second Life scaling:

A precondition of modern manufacturing, the concept of interchangeable parts that can help simplify the lower layers of an application stack, isn’t always embraced as a virtue. A common behavior of small teams on a tight budget is to tightly fit the building blocks of their system to the task at hand. It’s not uncommon to use different hardware configurations for the webservers, load balancers (more bandwidth), batch jobs (more memory), databases (more of everything), development machines (cheaper hardware), and so on. If more batch machines are suddenly needed, they’ll probably have to be purchased new, which takes time. Keeping lots of extra hardware on site for a large number of machine configurations becomes very expensive very quickly. This is fine for a small system with fixed needs, but the needs of a growing system will change unpredictably. When a system is changing, the more heavily interchangeable the parts are, the more quickly the team can respond to failures or new demands.

In the hardware example above, if the configurations had been standardized into two types (say Small and Large), then it would be possible to muster spare hardware and re-provision as demand evolved over time. This approach saves time and allows flexibility, and there are other advantages: standardized systems are easy to deploy in batches, because they do not need assigned roles ahead of time. They are easier to service and replace. Their strengths and weaknesses can be studied in detail.

All well and good for hardware, but in a hosted environment this sort of thing is abstracted away anyway, so it would seem to be a non-issue. Or is it? Again using the example above, replace “hardware” with “OS image” and many of the same issues arise: an environment where different components depend on different software stacks creates additional maintenance and deployment headaches and opportunities for error. The same could be said for programming languages, software libraries, network topologies, monitoring setups, and even access privileges.

The reason that interchangeable parts become a key scaling issue is that a complex, highly heterogeneous environment saps a team’s productivity (and/or a system’s reliability) to an ever-greater degree as the system grows. (Especially if the team is also growing, and new developers are introducing new favorite tools.) The problems start small, and grow quietly. Therefore, a great long-term investment is to take a step back and ask, “what parts can we standardize? Where are there differences between systems which we can eliminate? Are the specialized outliers truly justified?” A growth environment is a good opportunity to standardize on a few components for future expansion, and gradually deprecate the exceptions.
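To make the interchangeable-parts idea concrete, here is a tiny sketch of assigning a role at provision time rather than baking it into the image (the role names and services are hypothetical, not from the article):

#!/bin/bash
# bootstrap.sh <role> -- every node starts from the same standard image;
# the role is assigned here, not baked in, so any spare machine can fill
# any role (role names and services below are hypothetical)
ROLE=${1:?usage: bootstrap.sh <webserver|batch>}

case "$ROLE" in
    webserver) service nginx start ;;
    batch)     service batch-worker start ;;
    *)         echo "unknown role: $ROLE" >&2; exit 1 ;;
esac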


Written by Jonathan

January 30th, 2010 at 1:20 pm

Posted in architecture, java

Tips for Replacing a Broken iPhone 3G glass and touch sensor

with 5 comments

Reading time: 2 – 3 minutes

My glass screen broke a few weeks ago by popping and spinning up out of my jacket, landing glass-side-down on a bumpy pothole. It was right before going to China, so I didn’t have time to take it to an Apple Store. I covered the glass with a screen protector (to stop shards from falling off) and waited until I had more time. I even tried having a phone store in China look at repairing it, but the language barrier got in the way. They all kept trying to use styluses to touch the screen. Now that I’m back, I decided to repair it myself, and here are my findings.

Order a replacement screen and touch sensor together. Only my glass was broken, but they’re replaced as one unit.

First watch these two helpful videos for instructions. Make sure you remove the glass top with a suction cup (thanks Chris!); don’t pry it. Also, when removing the (unbroken, in my case) LCD, do not pry on it. Instead, pry on the metal frame it is attached to.

Do not use too much heat when loosening the glue of the broken glass. This was my only mishap: I used a 2200-watt heat gun and warped and melted off a piece of the plastic frame, then spent an hour trying to reheat and bend it back. Also, watch out when repositioning the center button – mine went back in a millimeter lower on one side, so it feels different. (Actually, this is probably because of the warped frame.) As for the rubber gasket: be careful, but some damage to it may be unavoidable.

It took us about 2.5 hours, and I recovered from the heat gun mishap well enough that it isn’t visible and everything eventually fit back together. Plus, it was fun to see the insides of the iPhone. Good luck!


Written by Jonathan

January 2nd, 2010 at 7:01 am

Posted in mac

maven + growlnotify for notification when your build finishes

with one comment

Reading time: 1 – 2 minutes

Working on OS X with Spaces means I want to read something on another space instead of waiting idly for a 50-second build. But I don’t want to get distracted. So I use Growl and growlnotify for a notification when the build completes.

#!/bin/sh
# this file is called: mvn (it is executable, and on the PATH ahead of the real mvn)

# We need the client's specific settings.xml, so always specify it now.
# "$@" forwards all arguments to the real mvn with their quoting intact
# (capturing them with ARGS=$* would break arguments containing spaces).
/usr/bin/mvn -s /Volumes/TrueCryptClient/opt/maven/conf/settings.xml "$@"

# when you have growlnotify installed and on your path, this will pop up
# a notification when the build is done
growlnotify -m "DONE: maven $*"

Note: if you get this error from growlnotify: could not find local GrowlApplicationBridgePathway, falling back to NSDNC, it probably means Growl is not started. Start Growl from your System Preferences.

Update: Thanks Cosmin, for the enhancement. Use this snippet in the script: keep the notify command in an environment variable, and report the build status in the Growl notification:

# ($command holds the real build invocation from above; NOTIFY, if set,
# is your notifier, e.g. NOTIFY="growlnotify -m")
if [[ -n $NOTIFY ]]; then
    ($command && $NOTIFY "Build Complete" && exit 0) || ($NOTIFY "Build Failed" && exit 127)
else
    $command
fi
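
For reference, here is how the pieces might fit together as one wrapper. This is just a sketch, not the exact script either of us runs: it assumes growlnotify is on your PATH, and it lets you override (or empty) NOTIFY from the environment.

#!/bin/bash
# sketch of the combined wrapper; assumes growlnotify is on the PATH,
# and NOTIFY may be overridden or emptied in the environment
NOTIFY=${NOTIFY-"growlnotify -m"}
command="/usr/bin/mvn -s /Volumes/TrueCryptClient/opt/maven/conf/settings.xml $*"

if [[ -n $NOTIFY ]]; then
    ($command && $NOTIFY "Build Complete" && exit 0) || ($NOTIFY "Build Failed" && exit 127)
else
    $command
fi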

Written by Jonathan

December 31st, 2009 at 1:23 pm

Posted in automation, code, mac


Can you spot the Java Puzzler in this snippet?

without comments

Reading time: < 1 minute

I ran across this last week. It was marvelous when we saw what was happening, but entirely puzzling at first.

Boolean someFlag = complicatedLogicToFigureOutFlag();
Person person = new Person(someFlag);

Any cause for concern? How about if Person’s constructor is:

Person(boolean someFlag) {
    this.someFlag = someFlag;
}

Any warning signs?

Will it compile?

Read more for the full puzzler.



Written by Jonathan

September 30th, 2009 at 2:51 pm

Posted in code, java, puzzle

How to do 3-way merges with Subversion and Kdiff3

with 4 comments

Reading time: 4 – 7 minutes

I do not endorse branch-based development; I prefer trunk-based development. Specifically, I like what my colleague Paul calls Branch By Abstraction, coined by Stacy Curl and recently mentioned by Martin Fowler (all one-time ThoughtWorkers, two of them current).

If you’re stuck with merging, though, 3-way merges make it much easier. Setting this up with Subversion is easy. Instructions are for Linux.

  1. apt-get or yum install kdiff3.
  2. Edit your /etc/subversion/config and find the line with diff3-cmd; set it to: diff3-cmd=/usr/local/bin/svndiff.sh
  3. Next, create the file /usr/local/bin/svndiff.sh. See below for the script you’ll want to enter in it.

Now when you get a merge conflict, you choose M and the merge will open in kdiff3. On the left is the base revision, in the middle is your working copy, and on the right is the incoming change. This is a little more to look at, but it is invaluable when dealing with merges. I wouldn’t ever go back to a 2-way diff.

#!/bin/bash

# tim/paul: this is a copy of the file located at http://www.yolinux.com/TUTORIALS/src/svndiffwrapper.txt
#    modified to do a non-conflicting merge automatically. see #HERE#

# Return an errorcode of 0 on successful merge, 1 if unresolved conflicts
# remain in the result.  Any other errorcode will be treated as fatal.
# Author: Michael Bradley

#NOTE: all output must be redirected to stderr with "1>&2" as all stdout output is written to the output file

VDIFF3="kdiff3"
DIFF3="diff3"
DIFF="kdiff3"

promptUser ()
{
    read answer
    case "${answer}" in

        "M"         )
        echo "" 1>&2
        echo "Attempting to merge ${baseFileName} with ${DIFF}" 1>&2
        $VDIFF3 $older $mine $theirs --L1 $labelOlder --L2 $labelMine --L3 $labelTheirs -o $output 1>&2
        bLoop=1
        if [ -f $output ]; then
            if [ -s $output ]; then
                #output successfully written
                bLoop=0
            fi
        fi
        if [ $bLoop = 0 ]; then
            cat $output
            rm -f $output
            exit 0
        else
            echo "Merge failed, try again" 1>&2
        fi

        ;;

        "m"         )
        echo "" 1>&2
        echo "Attempting to auto-merge ${baseFileName}" 1>&2
        diff3 -L $labelMine -L $labelOlder -L $labelTheirs -Em $mine $older $theirs > $output
        if [ $? = 1 ]; then
            #Can't auto merge
            rm -f $output
            $VDIFF3 $older $mine $theirs --L1 $labelOlder --L2 $labelMine --L3 $labelTheirs -o $output --auto 1>&2
            bLoop=1
            if [ -f $output ]; then
                if [ -s $output ]; then
                    #output successfully written
                    bLoop=0
                fi
            fi
            if [ $bLoop = 0 ]; then
                cat $output
                rm -f $output
                exit 0
            else
                echo "Merge failed, try again" 1>&2
            fi
        else
            #We can automerge, and we already did it
            cat $output
            rm -f $output
            exit 0
        fi
        ;;

        "diff3" | "Diff3" | "DIFF3"  )
        echo "" 1>&2
        echo "Diffing..." 1>&2
        $VDIFF3 $older $mine $theirs --L1 $labelOlder --L2 $labelMine --L3 $labelTheirs 1>&2
        ;;

        "diff" | "Diff" | "DIFF"  )
        echo "" 1>&2
        echo "Diffing..." 1>&2
        $DIFF $mine $theirs -L $labelMine -L $labelTheirs 1>&2
        ;;

        "A" | "a"   )
        echo "" 1>&2
        echo "Accepting remote version of file..." 1>&2
        cat ${theirs}
        exit 0
        ;;

        "I" | "i"   )
        echo "" 1>&2
        echo "Keeping local modifications..." 1>&2
        cat ${mine}
        exit 0
        ;;

        "R" | "r"   )
        echo "" 1>&2
        echo "Reverting to base..." 1>&2
        cat ${older}
        exit 0
        ;;

        "D" | "d"   )
        echo "" 1>&2
        echo "Running diff3..." 1>&2
        diff3 -L $labelMine -L $labelOlder -L $labelTheirs -Em $mine $older $theirs
        #Exit with return value of the diff3 (to write out files if necessary)
        exit $?
        ;;

        "S" | "s"   )
        echo "" 1>&2
        echo "Saving for later..." 1>&2
        cat ${mine}
        #Exit with return value of 1 to force writing of files
        exit 1
        ;;

        "Fail" | "fail" | "FAIL"   )
        echo "" 1>&2
        echo "Failing..." 1>&2
        exit 2
        ;;

        "H" | "h"   )
        echo "" 1>&2
        echo "USAGE OPTIONS:" 1>&2
        echo "  [A]ccept    Accept $labelTheirs and throw out local modifications" 1>&2
        echo "  [D]efault   Use diff3 to merge files (same behavior as vanilla SVN)" 1>&2
        echo "  [Fail]      Kills the command (not suggested)" 1>&2
        echo "  [H]elp      Print this message" 1>&2
        echo "  [I]gnore    Keep your locally modified version as is" 1>&2
        echo "  [M]erge     Manually merge using ${VDIFF3}" 1>&2
        echo "  [m]erge     Same as 'M' but attempts to automerge if possible" 1>&2
        echo "  [R]evert    Revert to base version (${labelOlder})" 1>&2
        echo "  [S]ave      Same as 'I' but writes out rold, rnew, and rmine files to deal with later" 1>&2
        echo "  [diff]      Type 'diff' to diff versions $labelMine and $labelTheirs before making a decision" 1>&2
        echo "  [diff3]     Type 'diff3' to diff all three versions before making a decision" 1>&2
        echo "" 1>&2
        ;;

        *   )
        echo "'${answer}' is not an option, try again." 1>&2
        ;;
    esac
}

if [ -z $2 ]
then
    echo ERROR: This script expects to be called by subversion
    exit 1
fi

if [ $2 = "-m" ]
then
    #Setup vars
    labelMine=${4}
    labelOlder=${6}
    labelTheirs=${8}
    mine=${9}
    older=${10}
    theirs=${11}
    output=${9}.svnDiff3TempOutput
    baseFileName=`echo $mine | sed -e "s/.tmp$//"`

#HERE#
    diff3 -L $labelMine -L $labelOlder -L $labelTheirs -Em $mine $older $theirs > $output
    if [ $? = 1 ]; then
        #Can't auto merge
        #Prompt user for direction
        while [ 1 ]
        do
            echo "" 1>&2
            echo "${baseFileName} requires merging." 1>&2
            echo "" 1>&2
            echo "What would you like to do?" 1>&2
            echo "[M]erge [A]ccept [I]gnore [R]evert [D]efault [H]elp" 1>&2
            promptUser
        done
    else
        #We can automerge, and we already did it
        cat $output
        rm -f $output
        exit 0
    fi
else
    L="-L"         #Argument option for left label
    R="-L"         #Argument option for right label
    label1=$3       #Left label
    label2=$5       #Right label
    file1=$6        #Left file
    file2=$7        #Right file

    $DIFF $file1 $file2 $L "$label1" $L "$label2" &
    #$DIFF $file1 $file2 &
    #wait for the command to finish
    wait
fi
exit 0

Note: I also posted this to a gist on github: svndiff.sh.


Written by Jonathan

September 17th, 2009 at 8:49 pm

Ruby Script to Organize Mp3’s based on ID3 Genre Tag

without comments

Reading time: 2 – 4 minutes

I had one gigantic directory of all my tagged and organized mp3 files. The problem is, it was too big to use. It bloated my library, and I haven’t since been able to fit my music on my laptop. I needed to manipulate mp3 files by genre and extract them out of this single directory to create smaller libraries. I spent all of about two minutes looking for a program to do this before deciding to write a script. Truthfully, it was worse than it sounds: once upon a time I over-enthusiastically downloaded StepMania and 493 DDR games/songs, and then added all the songs into my music library. It’s a great party game, but not the kind of music I want to listen to.

Many implementations exist for reading ID3 tags. I first tried ruby-mp3info; however, it didn’t read my custom genre (‘DDR’), so I moved to id3lib-ruby, which uses the C++ id3lib library.

This worked like a charm. I ran the script over all my directories and built up a list of the directories containing DDR tracks.

#!/usr/bin/env ruby
# find_music.sh
require "rubygems"
require 'id3lib'
require 'find'
require 'set'
 
ddr_files = []
ddr_dirs = Set.new
 
search_dir = File.expand_path('~/media/music/music_categorized') # Find won't expand '~' itself
 
Find.find(search_dir) do |file|
  next if file !~ /.*mp3$/
  mp3 = ID3Lib::Tag.new(file)
  next if mp3.genre != 'DDR'
  ddr_dirs << File.dirname(file)
  ddr_files << file
  puts "%s, %s --> AT: %s" % [mp3.genre, mp3.album, file]
end
 
File.open('result-ddr-files.txt', 'w') do |f|
  f.write(ddr_files.join("\n"))
end
 
File.open('result-ddr-dirs.txt', 'w') do |f|
  ddr_dirs.each { |d| f.write("%s\n" % d)}
end

Next I reviewed the two output files, then passed result-ddr-dirs.txt as the argument to this next script. That removed almost a gig of music from my library.

#!/usr/bin/env ruby
 
if (ARGV.length != 1)
  puts "Usage: #{__FILE__} input_file"
  exit(1)
end
 
destination="/home/jwolter/media/music/music_ddr_questionable_value/"
 
File.foreach(ARGV[0]) do |line|
  next if line.strip == ""
  cmd =  "mv \"#{line.strip}\" \"#{destination}\""
  #puts cmd
  `#{cmd}`
end

Bonus: in the process of searching for this, I ran into the ID3 Tags RubyQuiz.

One of the nicest benefits of being a software engineer is that I can avoid doing boring manual tasks on my computer. Writing a script is more fun, and faster. I’ve got many scripts to automate file manipulation, online banking, and more. Which of your automation scripts do you think is the most helpful?


Written by Jonathan

September 5th, 2009 at 7:06 pm

Posted in automation, music

Movie Review: Bigger Stronger Faster. How Far Would You Go for Better Performance?

with one comment

Reading time: 5 – 8 minutes

Watching a movie that entertains is fun, one that teaches benefits you tomorrow, and one that makes you think in a new way is the best of all. I have a friend who says that periodically it’s time to go to a conference, “in order to introduce randomness into the system.” Shake things up. Movies in Netflix’s “Cerebral” category are a new way I’ve found to do this. My hope is for comments and further recommendations of thought-worthy movies.

Bigger Stronger Faster is a documentary. Technically it’s about steroids in American culture, but it also exposes how inconsistently we treat other performance enhancers. The director, Christopher Bell, examines his brothers as they use steroids.

What is an ethical and responsible limit to how far you are willing to go for success? Is it okay to wake up in the morning and say you are destined for greatness – that somehow you were born to give something to the world? (And how far will you then go?) Is it okay to just become a normal, average person?

  • When Tiger Woods had laser eye correction to 20/15 vision, was that an unethical performance enhancement?
  • How about professional musicians taking beta blockers to eliminate anxiety before performances and auditions?
  • Athletes are dependent on cortisone shots (a legal steroid); should those be held equal to anabolic steroids?
  • Red blood cell count can be increased by doping, taking EPO (details), high altitude training, or sleeping in an altitude chamber. Two options are illegal, two are legal. Should the end result (higher than natural RBC’s) be the determiner of ethics, rather than the mechanism used to reach it?
  • The US Air Force gives fighter pilots speed (amphetamine) to perform better, is that a rational decision?
  • He interviews a member of the Olympic Doping Committee and is told that US athletes are routinely flagged for failing drug tests, but still allowed to compete.
  • Visiting a chiropractor anti-aging specialist, Chris merely says he suspects a hormone deficiency; that leads to tests judged against a “healthy” range that has never been defined, which gets him a Human Growth Hormone prescription – legally.
  • Students are interviewed about how easy it is to get Adderall (just tell your doctor you have trouble focusing, or have it passed around from friends). Are these and other “study drugs” (long but really interesting article) worth it? (Or should everyone be taking them?)

I’m not ready to jump on the film’s open skepticism of “are steroids actually a health risk?” I don’t think they are necessary, and a cautious approach to my health comes intuitively; they cross my line of fair competition. Throughout sports and recreational fitness I was never tempted to try them. But maybe that was just because I wasn’t, and didn’t want to become, good enough to compete at the highest level?

But how far will we go for performance outside of sports? If you could close 70% more sales by taking “Synthesized Aquatic Maltose” (which I just invented), would you take it? Health supplements in the US are not regulated to be proven healthy; the FDA has the job of proving them unhealthy.

Under the Dietary Supplement Health and Education Act of 1994 (DSHEA), the dietary supplement manufacturer is responsible for ensuring that a dietary supplement is safe before it is marketed. FDA is responsible for taking action against any unsafe dietary supplement product after it reaches the market. Generally, manufacturers do not need to register their products with FDA nor get FDA approval before producing or selling dietary supplements. – FDA on DSHEA

Therefore I could start selling this new supplement without a single doctor or nutritionist ever looking at what my customers would be ingesting. Chris actually does this. Entertainingly, he picks up a few illegal day laborers, invents a product, and fills pills with his “proprietary blend” of powders. He takes “before/after” pictures the same day at a photo shoot, and can start selling this $40/bottle tonic. (Of course, manufacturing costs are under $5/bottle for him.)

There is more, such as how the health supplement industry is Utah’s third largest ($2.5–4 billion/year; nice article here about Utah’s supplement industry). Legislation from Utah’s Senator Orrin Hatch enabled the passage of DSHEA, and continues to let those too squeamish for “real steroids” get something that promises the same benefits.

He goes on to show a breed of cow, the Belgian Blue. Through 100 years of selective breeding, these cows are deficient in myostatin, a growth factor that limits muscle growth. The video below gives a peek. Researchers are looking to mimic that for fighting Muscular Dystrophy in humans. See more freakish links about this gene mutation in humans, cows, or other animals. Note: the clip below is from National Geographic, not from the movie.

Chris goes on to say that Americans are all about Bigger Stronger Faster, and that it’s un-American to be #2. We have even romanticized the concept, describing anything bigger than expected as “Xyz, on steroids.” We must win, and we must win better than we previously won.


Written by Jonathan

August 30th, 2009 at 4:25 am

Posted in Movie/Book Reviews

Large Web App Architecture: Yes to Thicker Stack on One Hardware Node, No to Beautiful “Redundant” Spiderwebs

with 4 comments

Reading time: 4 – 7 minutes

The last client our team worked with had a large ecommerce operation. Yearly revenue on the new site is in the high single-digit billions of dollars, which necessitates extremely high availability. I will draw an initially favorable-looking configuration for this high availability (“beautiful spiderwebs”), but then tear it apart and suggest an alternative (“thicker stack on one hardware node”).

1. “Beautiful Spiderwebs” – Often Not Recommended

Here’s one common way people implement high availability. Notice how there are always multiple routes available for servicing a request: if one BIG-IP goes down, there is another to help. And this could be doubled with multiple data centers, failed over with DNS.

The visible redundancy and complexity in one diagram may be appealing. One can run through scenarios to make sure that yes, we can actually survive any failure, and the ecommerce will not stop.

not recommended spiderweb tiers

So then what could make this my Not Recommended option?

2. Martin’s Reminder how to Think About Nodes

Fowler reminded us in Patterns of Enterprise Application Architecture how to look at distribution and tiers. For some reason people keep wanting to have certain machines running certain services, stitching together everything they need with a few remote service calls. If you’re concerned about performance, though, you’re asking for punishment: remote calls are several orders of magnitude slower than in-process calls, or even calls within the same machine. And this architectural preference is rarely necessary.

One might arrive at the first design with this logic: “We can run each component on a separate box. If one component gets too busy we add extra boxes for it, so we can load-balance our app.” Is that a good idea?

fowler distributed objects not recommended

The above is not recommended:

A procedure call between two separate processes is orders of magnitude slower [than in-process]. Make that a process running on another machine and you can add another order of magnitude or two, depending on the network topography involved. [PoEAA Ch 7]

This leads into his First Law of Distributed Object Design: Don’t distribute your objects!

The solution?

Put all the classes into a single process and then run multiple copies of that process on the various nodes. That way each process uses local calls to get the job done and thus does things faster. You can also use fine-grained interfaces for all the classes within the process and thus get better maintainability with a simpler programming model. [PoEAA Ch 7]

fowler clustered application recommended

3. “Strive for Thicker Stack on One Hardware Node” – Recommended

Observe the recommended approach below. There is still an external load balancer, but after a request is routed to an Apache/Nginx/etc front end, you’re all on one* machine.

If one tier fails on a node, pull the whole node out from rotation. Replace it. And re-enter it in the mix.
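As a sketch of what pulling a node can look like in practice (the URLs, port, and health file here are hypothetical, and your load balancer’s health-check mechanism will vary):

#!/bin/bash
# node-health.sh -- run from cron on each node; the load balancer polls
# /health/ok over HTTP, so removing the file pulls the whole node from
# rotation (all names here are hypothetical)
HEALTH_FILE=/var/www/health/ok

# probe every tier on this node, all over localhost
if curl -fsS http://localhost:8080/status >/dev/null \
   && curl -fsS http://localhost/status >/dev/null; then
    touch "$HEALTH_FILE"    # all tiers healthy: stay in rotation
else
    rm -f "$HEALTH_FILE"    # any tier down: node leaves rotation as a unit
fi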

Your company’s teams have worked together to be able to deploy modular services. So when your ecommerce site needs a merchant gateway processing service, you can include it (as a library or binary) and run it locally on your node, making a call through to it as needed.

Services are also simpler to deploy, upgrade, and monitor, as there are fewer processes and fewer differently-configured machines.

recommended thicker nodes tiers

(* I understand there may be the occasional exception for remote calls that need to be made to other machines: possibly databases, memcached, and obviously third-party hosted services. But the point is that most everything else need not be remote.)

4. But, Practically Speaking, How Far Do We Go?

A caveat first: these benefits become more pronounced as you have more and more nodes (and thus more and more complex spiderwebs of unnecessary failover).

Should there be a database server running on each node? Probably not at first; there is maintenance overhead associated with that. But after sharding your database and running with replication, why not? This way, if a node fails, you simply pull it out and replace it with a functioning one.

5. Checklist of Takeaway Lessons

  1. Keep it local. Local calls are orders of magnitude faster than remote calls.
  2. Make services modular so they don’t need to be remote, yet still have all the organizational benefits of separate teams.
  3. Simplicity via node-level redundancy is preferred over tier-level redundancy.

Often, people think of high availability in terms such as round robin, load balancing, and failover. What do you think of? Leave a comment below with how you weigh the trade-offs of designing for HA, as well as the architectural decisions behind low latency.


Written by Jonathan

August 19th, 2009 at 12:55 am

Posted in architecture, code, java
