Archive

Archive for October, 2009

Best red wine braised short rib ravioli in Washington DC

October 31, 2009 Leave a comment

My wife and I were big fans of Maestro restaurant at the Ritz-Carlton in McLean, VA (also known as Tysons Corner to DC-area locals).

In August 2007, Fabio Trabocchi, the executive chef at Maestro, decided to move to New York, where he opened Fiamma restaurant.  Sadly, Fiamma closed in early 2009, a casualty of the worsening economy.  In late 2008 and early 2009, some restaurants in NYC reported a 30-40% decline in business.

There is a new restaurant in Tysons Corner called Inox. The lunch menu can be found at http://www.inoxrestaurant.com/lunch_prix_fixe.html.

I think Inox is one of the best restaurants in the Washington DC metropolitan area.  The food is exceptional but one dish truly excels.

Red wine braised short rib ravioli with wild mushrooms, roasted shallots, and butternut squash has very few equals.   Make time for the prix-fixe lunch and order this dish.  Do not forget to sample Inox's olive bread.

My next ‘the lighter side’ blog post will focus on dining options in Vienna, Austria, one of my favorite cities.

Inox Restaurant location in Tysons Corner (McLean) Virginia


Automated build / deploy process: glimpse of the future

October 31, 2009 Leave a comment

My team is working very hard to put finishing touches on an automated build / deployment process which will accomplish two key objectives:

– Serve as a platform to perform multiple automated regression tests, each with a specific purpose

– Increase code coverage by automated tests to more than 70%

So – what will this process look like in the very near future?   I am very excited to share a preview.

First – all product components will be rebuilt (no partial compiles).  This also includes product documentation maintained in a single-source database, where each documentation element – usually a paragraph or one or more sentences – is managed with XML tags.  The build process will extract documentation 'snippets' and automatically assemble online help files and printed documentation (user and administration guides).  This step has to work or the build process will fail.
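The snippet-extraction step could look roughly like the following minimal sketch. All element and attribute names here ("snippet", "id", "target") are assumptions for illustration, not the actual schema:

```python
# Hypothetical sketch: pull tagged documentation snippets out of a
# single-source XML database and assemble them for one output target.
# Element/attribute names ("snippet", "id", "target") are assumptions.
import xml.etree.ElementTree as ET

def assemble_docs(xml_source: str, target: str) -> str:
    """Collect every <snippet> marked for the given target (e.g. 'online_help')
    and join them, failing loudly so the build breaks if a snippet is empty."""
    root = ET.fromstring(xml_source)
    parts = []
    for snippet in root.iter("snippet"):
        if target in snippet.get("target", "").split(","):
            text = (snippet.text or "").strip()
            if not text:
                # Mirror the post's rule: the doc step must work or the build fails.
                raise ValueError(f"empty snippet {snippet.get('id')}")
            parts.append(text)
    return "\n\n".join(parts)

source = """<docs>
  <snippet id="intro" target="online_help,user_guide">Welcome to the product.</snippet>
  <snippet id="admin" target="admin_guide">Administration steps.</snippet>
</docs>"""
```

Raising on an empty snippet is the key design choice: the documentation build fails the overall build instead of silently shipping incomplete help files.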

Second – the automated deployment process will initiate installation procedures (the same procedures an actual customer may use in the future) and install the new release of the software in 3 distinct QA environments:

– QA environment ONE:   will support execution of a standard regression suite to exercise core functionality

– QA environment TWO:  will support execution of another regression suite, which consists of critical end-to-end scenarios performed by the majority of customers. The best part: these scenarios can be changed before each nightly execution.  For example, if the new release addresses many bugs for a small group of customers, scenarios relevant to these customers can be easily included in the execution profile.

– QA environment THREE:  will run multiple scalability and performance tests.   There is an extra step in this process.  One of the engineers will schedule a process which will automatically generate test databases that mirror certain customer profiles.  It’s essential for scalability tests to operate on real or almost real customer data.
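The test-database step for environment THREE might be sketched as below. The profile fields ("row_count", "regions", "amount_range") are invented for illustration; a real generator would mirror the actual customer schema:

```python
# Hypothetical sketch: generate a synthetic dataset shaped like a customer
# profile (row counts, value distributions), so scalability tests run
# against near-real data. Profile fields are assumptions.
import random

def generate_test_rows(profile: dict, seed: int = 42) -> list:
    """Produce 'row_count' synthetic orders shaped like the customer's data."""
    rng = random.Random(seed)  # seeded, so nightly runs are repeatable
    rows = []
    for i in range(profile["row_count"]):
        rows.append({
            "order_id": i,
            "region": rng.choice(profile["regions"]),
            "amount": round(rng.uniform(*profile["amount_range"]), 2),
        })
    return rows

acme_profile = {"row_count": 10_000, "regions": ["US", "EU"], "amount_range": (5.0, 500.0)}
rows = generate_test_rows(acme_profile)
```

Seeding the generator matters here: if a scalability test regresses, the same dataset can be regenerated exactly to reproduce the run.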

Finally, the deployment process will also execute and validate 'uninstall new release / install & activate old release' procedures.

One of the features I like the most is automated notification of which scenarios worked and which did not.   Each software engineer receives an e-mail after each scenario is executed.  The e-mail also contains detailed diagnostic information if the scenario failed during execution.
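The notification step could be sketched like this. The result-tuple format and scenario names are assumptions; a real system would deliver the summary via SMTP rather than return a string:

```python
# Hypothetical sketch: after a nightly run, summarize which scenarios
# passed or failed and include diagnostics only for the failures.
def build_notification(results: list) -> str:
    """results: list of (scenario, passed, diagnostics) tuples."""
    lines = []
    for scenario, passed, diagnostics in results:
        status = "PASS" if passed else "FAIL"
        lines.append(f"{status}  {scenario}")
        if not passed:
            # Detailed diagnostic information appears only for failed scenarios.
            lines.append(f"      diagnostics: {diagnostics}")
    failed = sum(1 for _, passed, _ in results if not passed)
    header = f"Nightly regression: {len(results) - failed} passed, {failed} failed\n"
    return header + "\n".join(lines)

report = build_notification([
    ("login_flow", True, ""),
    ("bulk_import", False, "timeout after 300s in step 4"),
])
```

Keeping diagnostics out of passing entries keeps the e-mail scannable: an engineer sees failures and their root-cause hints first.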

It was a long journey to get to this point but it was more than worth it.

Categories: Software Engineering

Why it’s always difficult to find exceptional software engineers

October 30, 2009 1 comment

It’s 6:30 PM.  My cell phone rings.  “Leon, how are you?  By any chance do you know …

– Senior Java architects with deep SaaS experience and exceptional problem solving skills

– A rockstar product manager who has launched at least 3 enterprise software products

– Another rockstar build engineer with experience automating software build procedures (automated deployment and testing experience also required)

… and they have to live within 20 miles of the future employer".

Of course I try to help.  But great people, especially those who can play their part in making a software product sell itself, are rare.  Why?

Rockstar software engineers are usually identified as early as their 3rd or 4th year in college, work as interns during the summer, and then join a team that knows how to attract and retain real talent before anyone else does.  These individuals tend to have fewer jobs during their careers, perhaps 4-5.   Job offers 4 and 5 are extended by people they most likely worked for in the past, so they pursue new opportunities with those they trust and believe in.

There isn’t much room in this career cycle to attract a real superstar, and that’s why it’s so difficult to find these people.  Recruiters who can find these rockstars have my utmost respect and gratitude.  Thank you.

Categories: Hiring

What does a good error message look like?

October 30, 2009 1 comment

It’s not easy to write good error messages.  One of my mentors many years ago told me, “it’s easy to determine if an error message is good.  One week later, wake up at 2:30 in the morning and try to understand it.  If you can easily understand what the message indicates, then the error message passes the test of being a good one”.

It’s true.

Best-of-breed error messages share these attributes:

– Unique component or caller identifier + a unique number

– Severity level:  I = informational, E = error, W = warning

– Error description (which passes the ‘2:30 in the morning sanity test’)

– Action taken by the system as a result of the error

– Action recommended to the user – if applicable

– Detailed diagnostic information (if enabled) to quickly determine the root cause and reduce diagnostic time to the bare minimum

This is a real error message that was the subject of a recent design discussion.

Before:

– “Error:  transmission failed”

After:

– TRX-SEND-0012E Transmission failed.  File=<name>, Source=<location>, Destination=<new location>.   10 MB of 50 MB transmitted.  Transmission will restart in 3 minutes.  No action required from the user.

This message clearly passes the ‘2:30 in the morning sanity test’.   No one on the software engineering team will get a call from the client or the technical support team.
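A message in this format could be assembled with a small helper like the sketch below. The function name, parameters, and field names are invented for illustration; only the TRX-SEND example itself comes from the discussion above:

```python
# Hypothetical sketch of the message format above: component id + number,
# severity suffix, description, structured fields, and action taken.
def format_error(component: str, number: int, severity: str,
                 description: str, action: str, fields: dict = None) -> str:
    # e.g. component="TRX-SEND", number=12, severity="E" -> "TRX-SEND-0012E"
    msg_id = f"{component}-{number:04d}{severity}"
    detail = "  ".join(f"{k}={v}" for k, v in (fields or {}).items())
    parts = [f"{msg_id} {description}."]
    if detail:
        parts.append(detail + ".")
    parts.append(action)
    return "  ".join(parts)

msg = format_error("TRX-SEND", 12, "E", "Transmission failed",
                   "Transmission will restart in 3 minutes.  No action required.",
                   {"File": "report.csv", "Source": "/out", "Destination": "/in"})
```

Forcing every message through one helper is what keeps the unique identifier, severity, and action fields from being forgotten in ad hoc messages.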

Error management: treat it as a functional requirement

October 29, 2009 Leave a comment

Why is it so important to treat error management as just another category of critical functional requirements?

Because when errors occur, it’s an opportunity to delight the customer, precisely in the situation when the customer may be at risk of losing faith in the product.

Cryptic or confusing error messages create significant doubt, and that doubt feeds further doubt.

Just imagine when the customer asks …

– “If this simple functionality is not working, what else may fail tomorrow?”

– “What’s the status of my payment?  I submitted the order and have no idea if the payment has been accepted”

– “What does ‘unknown error’ mean?  Who writes this stuff?”

Or – the customer may react to a confusing error message and decide to take steps that may make matters even worse.

Let me paint another scenario, one that no VP of Sales wants to see: a customer gives a poor reference to a potential customer because they are very unhappy with the quality of the software.   The sales cycle just doubled or tripled, delaying the sale and the revenue recognition milestone (let alone cash).

That’s why it’s important to recognize error management as a critical functional element in design activities and subsequent design reviews.   Is the depth and complexity of the functional problem properly reflected in the error management approach?  I suggest asking this question during the next design review.

I already hear some rumblings in the distance.  🙂  Who has time to do this?  Well, the cost of not doing it is even greater.  My last client was spending more than 60% of total engineering capacity looking for the root cause of reported problems instead of developing customer-facing functionality.   The competition was more than happy to turn the win-loss ratio in their favor.

I will be writing new blog posts about best practices in error management in the very near future.

Stay tuned!


Defensive programming: move the concept from art to practice

October 29, 2009 Leave a comment

A few days ago, the nightly regression test suite completely melted down.   There were so many errors that I had to schedule a very long conference call to work through the details and determine what happened.

Despite many errors, several distinct themes emerged from the discussion:

– The code did not have good run-time awareness.   Certain changes in the run-time environment (changes that could occur in a client environment) were simply not detected and caused unexpected behaviors.

– The code did not have good error localization techniques.  An error that surfaced in component A was in fact rooted in a problem that occurred much earlier.    The code architecture made it very difficult to quickly find the root cause of the problem.

– Error messages were neither descriptive nor self-explanatory.  For example, there is a difference between stating ‘error happened while transmitting a file’ and ‘error during file transmission.  50 MB of 100 MB were successfully transferred.   The remaining 50 MB will be transmitted when the service restarts in 3 minutes.  No user action required’.

– When the team spent on average 20 hours solving several problems, more than 70% of that time was spent looking for the root cause.  The diagnostic time was simply excessive, a symptom of poorly engineered code that does not generate helpful error diagnostic information.
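The error localization theme above can be sketched in a few lines. The component names and config-loading scenario are invented for illustration; the point is the wrapping pattern, not any specific API:

```python
# Hypothetical sketch of error localization: each layer wraps a low-level
# failure with its own context instead of letting it bubble up bare, so the
# surfaced error names the component where the problem actually started.
class ComponentError(Exception):
    def __init__(self, component: str, message: str):
        super().__init__(f"[{component}] {message}")
        self.component = component

def read_config(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # 'raise ... from exc' preserves the root cause in the traceback.
        raise ComponentError("config-loader", f"cannot read {path}: {exc}") from exc

def start_service(path: str) -> str:
    try:
        return read_config(path)
    except ComponentError:
        raise  # already localized; do not re-wrap and hide the origin
```

When the error finally surfaces in component A, its text names the component where the problem began, which is exactly the diagnostic shortcut the melted-down suite was missing.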

I am a big fan of Code Complete by Steve McConnell.  One can easily find it on Amazon.com.

Categories: Software Engineering

What to do when one fix creates 7 new defects

October 29, 2009 Leave a comment

This is probably one of the most stressful scenarios.

A major software release is due at the end of the quarter.  The team is still fixing many defects and discovering that every new change creates many new – previously undetected – problems.   It’s a difficult situation, one that I had to deal with recently while working with my client’s software engineering organization.

Step 1:  prioritize all defects and determine which ones are likely to be experienced by customers of the new release.   Some defects are very rare and occur only under certain conditions.   Deal with these later.

Step 2:  delay the next release to create time for several important tasks:

a) Perform a deep-dive audit of the technical architecture.    How clean is the ‘separation of concerns’?   In products where components are tightly coupled and concerns are not well separated, it’s very common for a single fix to create multiple defects.

b) Examine regression testing strategy.   How much of the code can be automatically exercised by a suite of regression tests?   What’s the percentage:  40% or 80%?   How many critical functions are covered by regression tests?  How many critical customer scenarios (common & frequent functional paths) are also covered by regression tests?

c) Look for refactoring targets and accommodate refactoring activities in the next series of releases.  It’s very hard to stop and build a new architecture for a commercial software product while delaying customer-facing functionality.  “On the go” refactoring is a way of life in a highly competitive market.


Categories: Software Engineering