
Archive for the ‘Product Management’ Category

Steve Ballmer’s resignation through former Microsoft superstars’ resignation letters

August 23, 2013

The resignation of Steve Ballmer as Microsoft’s CEO will undoubtedly create an endless stream of comments.

I will step aside for a moment (and refrain from comments) by referring to the resignation letters from two former Microsoft superstars:  Dave Stutz and Ray Ozzie.

In 2003, Dave Stutz resigned and published a resignation letter which – with full credit going to Dave – follows below.  The letter is titled Advice to Microsoft regarding commodity software.  For those who have little time, the last sentence in Dave’s letter says it all:  “Stop looking over your shoulder and invent something!”

In 2010, Ray Ozzie resigned from Microsoft and wrote a memo called Dawn of a New Day where he asked everyone at Microsoft to imagine a post-PC world.

Full text can be found at http://ozzie.net/docs/dawn-of-a-new-day/

Advice to Microsoft regarding commodity software – Copyright 2003 by Dave Stutz
The market for shrink-wrap PC software began its slow upmarket ooze into Christensen obsolescence right around the time that Microsoft really hit its stride. That was also the time of the Internet wave, a phenomenon that Microsoft co-opted without ever really internalizing into product wisdom. While those qualified to move the state of the art forward went down in the millennial flames of the dotcom crash, Microsoft’s rigorous belief in the physics of business reality saved both the day and the profits. But the tide had turned, and a realization that “the net” was a far more interesting place than “the PC” began to creep into the heads of consumers and enterprises alike.

During this period, most core Microsoft products missed the Internet wave, even while claiming to be leading the parade. Office has yet to move past the document abstraction, despite the world’s widespread understanding that websites (HTML, HTTP, various embedded content types, and Apache mods) are very useful things. Windows has yet to move past its PC-centric roots to capture a significant part of the larger network space, although it makes a hell of a good client. Microsoft developer tools have yet to embrace the loosely coupled mindset that today’s leading edge developers apply to work and play.

Microsoft’s reluctance to adopt networked ways is understandable. Their advantaged position has been built over the years by adhering to the tenet that software running on a PC is the natural point at which to integrate hardware and applications. Unfortunately, network protocols have turned out to be a far better fit for this middleman role, and Microsoft, intent on propping up the PC franchise, has had to resist fully embracing the network integration model. This corporate case of denial has left a vacuum, of course, into which hardware companies, enterprises, and disgruntled Microsoft wannabes have poured huge quantities of often inferior, but nonetheless requirements-driven, open source software. Microsoft still builds the world’s best client software, but the biggest opportunity is no longer the client. It still commands the biggest margin, but networked software will eventually eclipse client-only software.

As networked computing infrastructure matures, the PC client business will remain important in the same way that automotive manufacturers, rail carriers, and phone companies remained important while their own networks matured. The PC form factor will push forward; the Pocket PC, the Tablet PC, and other forms will emerge. But automakers, railroads, and phone companies actually manufacture their products, rather than selling intangible bits on a CD to hardware partners. Will Microsoft continue to convince its partners that software is distinctly valuable by itself? Or will the commodity nature of software turn the industry on its head? The hardware companies, who actually manufacture the machines, smell blood in the water, and the open source software movement is the result.

Especially in a maturing market, software expertise still matters, and Microsoft may very well be able to sidestep irrelevance as it has in the past. The term “PC franchise” is not just a soundbite; the number of programs written for the PC that do something useful (drive a loom, control a milling machine, create a spreadsheet template, edit a recording…) is tremendous. But to continue leading the pack, Microsoft must innovate quickly. If the PC is all that the future holds, then growth prospects are bleak. I’ve spent a lot of time during the last few years participating in damage-control of various sorts, and I respect the need for serious adult supervision. Recovering from current external perceptions of Microsoft as a paranoid, untrustworthy, greedy, petty, and politically inept organization will take years. Being the lowest cost commodity producer during such a recovery will be arduous, and will have the side-effect of changing Microsoft into a place where creative managers and accountants, rather than visionaries, will call the shots.

If Microsoft is unable to innovate quickly enough, or to adapt to embrace network-based integration, the threat that it faces is the erosion of the economic value of software being caused by the open source software movement. This is not just Linux. Linux is certainly a threat to Microsoft’s less-than-perfect server software right now (and to its desktop in the not-too-distant future), but open source software in general, running especially on the Windows operating system, is a much bigger threat. As the quality of this software improves, there will be less and less reason to pay for core software-only assets that have become stylized categories over the years: Microsoft sells OFFICE (the suite) while people may only need a small part of Word or a bit of Access. Microsoft sells WINDOWS (the platform) but a small org might just need a website, or a fileserver. It no longer fits Microsoft’s business model to have many individual offerings and to innovate with new application software. Unfortunately, this is exactly where free software excels and is making inroads. One-size-fits-all, one-app-is-all-you-need, one-api-and-damn-the-torpedoes has turned out to be an imperfect strategy for the long haul.

Digging in against open source commoditization won’t work – it would be like digging in against the Internet, which Microsoft tried for a while before getting wise. Any move towards cutting off alternatives by limiting interoperability or integration options would be fraught with danger, since it would enrage customers, accelerate the divergence of the open source platform, and have other undesirable results. Despite this, Microsoft is at risk of following this path, due to the corporate delusion that goes by many names: “better together,” “unified platform,” and “integrated software.” There is false hope in Redmond that these outmoded approaches to software integration will attract and keep international markets, governments, academics, and most importantly, innovators, safely within the Microsoft sphere of influence. But they won’t.

Exciting new networked applications are being written. Time is not standing still. Microsoft must survive and prosper by learning from the open source software movement and by borrowing from and improving its techniques. Open source software is as large and powerful a wave as the Internet was, and is rapidly accreting into a legitimate alternative to Windows. It can and should be harnessed. To avoid dire consequences, Microsoft should favor an approach that tolerates and embraces the diversity of the open source approach, especially when network-based integration is involved. There are many clever and motivated people out there, who have many different reasons to avoid buying directly into a Microsoft proprietary stack. Microsoft must employ diplomacy to woo these accounts; stubborn insistence will be both counterproductive and ineffective. Microsoft cannot prosper during the open source wave as an island, with defenses built out of litigation and proprietary protocols.

Why be distracted into looking backwards by the commodity cloners of open source? Useful as cloning may be for price-sensitive consumers, the commodity business is low-margin and high-risk. There is a new frontier, where software “collectives” are being built with ad hoc protocols and with clustered devices. Robotics and automation of all sorts is exposing a demand for sophisticated new ways of thinking. Consumers have an unslakable thirst for new forms of entertainment. And hardware vendors continue to push towards architectures that will fundamentally change the way that software is built by introducing fine-grained concurrency that simply cannot be ignored. There is no clear consensus on systems or application models for these areas. Useful software written above the level of the single device will command high margins for a long time to come.

Stop looking over your shoulder and invent something!


To a CTO: “you are now responsible for alignment”. What to do next …

It’s not uncommon for a new CTO to receive a mission to align the evolution of the technology roadmap with the evolution of the company’s business.

So the immediate questions in the new CTO’s mind are …

– Is it a minor problem that requires a corrective action?

– Is it a fairly difficult problem to solve?  Probably – because someone very senior, with the ability to directly control or influence the outcome, would be needed to engage and get it done.

– Or perhaps this is a symptom of a larger problem?  Very likely.

As companies grow and become more complex, the lack of alignment becomes more evident – just as the cars we drive eventually need wheel alignment.  This is not an automotive blog, but it’s worth mentioning that wheel alignment is a complex procedure of adjusting multiple suspension components to achieve the desired driving characteristics.

Software companies also consist of major components.  It’s important to recognize that the CTO cannot align all of these components singlehandedly.  The CTO can only influence the alignment process by asking the right questions – some of which may not be very popular.  More on asking unpopular questions in a moment.  But then again, wheel alignment is not an easy procedure either.

The major components of a software company:

–  Turning ideas into product ideas (product management)
–  Turning product ideas into real products (engineering)
–  Evangelizing products in target markets and customer  segments (marketing / product marketing)
–  Selling products (sales)
–  Servicing customers (professional services and support)
–  Supporting the company operations (HR, administrative)

That’s it.  Only a few components – or functional areas – to align.

Alignment is first and foremost a leadership challenge, not a process or technology challenge.  And that’s why alignment starts with the most senior leader in the company: the CEO.  The CEO needs to set the tone and adjust the measures of success of each senior leader in such a way that their success cannot be achieved without working effectively with other leaders.  Only then can alignment become what I believe is the right way to recognize alignment: continuous, effective, and never mentioned again as a separate initiative.

I will share an experience many readers can relate to: the launch of a new product with many problems, which highlight an (extreme) lack of alignment throughout the company.  For each problem, I will also include the questions – perhaps unpopular yet very necessary – the CTO can ask.

The new product was intended for a new market segment outside of North America.

Problems and questions that were never asked at the right time:

– Resellers were not trained to sell the new product, leading to revenue significantly below expectations.  “What is the plan to audit existing resellers and select those interested in selling this product?  When do we start training?  Where are the training materials?”

– Direct sales force did not receive any incentives to sell a new product.   “What changes do we have to consider in the sales compensation model to fuel adoption of the new product?”

– The first few customers did not like certain capabilities, and the formal launch had to be delayed.  “How can the product management team incorporate an early adoption cycle in the product launch plan?  How can the engineering team respond to problems or feedback identified during the early adoption cycle?”

– The customer support team did not hire technical support engineers in the target country who could speak three additional languages.  “What are the customer support requirements?  What is the hiring plan?”

– The budget for launching the new product was not accurate.   Sales compensation changes were not included.   Reseller training costs were also not included.  “Did we recognize all the costs of launching the new product?  Who maintains an accurate financial model which incorporates the financial impact of all decisions and changes?”

– The company did not have an Integrated Product Launch (IPL) process providing full visibility into all cross-functional activities, milestones, and dependencies.  While the engineering team was busy building the new product, the customer support team wasn’t ready to support it on day one.  “Do we have a process to manage all activities, milestones, and dependencies?  Who owns it?  Does this person have the authority?”
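As a minimal sketch of what such a process could track – every milestone name and owner below is hypothetical, not drawn from any real launch – an IPL plan can be modeled as a set of milestones with owners and dependencies, plus a simple check for anything that is blocked:

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    name: str
    owner: str            # functional area accountable for this milestone
    done: bool = False
    depends_on: list = field(default_factory=list)  # prerequisite milestone names

def blocked_milestones(plan):
    """Return (name, unmet prerequisites) for every unfinished milestone
    whose dependencies are not all complete."""
    by_name = {m.name: m for m in plan}
    blocked = []
    for m in plan:
        if m.done:
            continue
        unmet = [d for d in m.depends_on if not by_name[d].done]
        if unmet:
            blocked.append((m.name, unmet))
    return blocked

# A toy launch plan: engineering is done, but launch is blocked on
# reseller training and support readiness.
plan = [
    Milestone("product build complete", owner="engineering", done=True),
    Milestone("reseller training", owner="sales",
              depends_on=["product build complete"]),
    Milestone("support team hired", owner="customer support"),
    Milestone("launch", owner="product management",
              depends_on=["reseller training", "support team hired"]),
]

for name, unmet in blocked_milestones(plan):
    print(f"{name}: waiting on {unmet}")
```

The point of even a toy model like this is the question it forces: every milestone has exactly one accountable owner, and the blocked list is visible to everyone, not discovered on launch day.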

The last point is one I cannot say enough about.  Alignment can never be achieved without a set of real-time measures, fully supported by all components of the organization – ready to adjust in real time when needed.  That’s when alignment becomes a non-event: organic, continuous, and effective.  And that’s where any software company wants to be.  Great CTOs believe in and practice alignment every day.

Why it’s good to be paranoid if you are a product management executive

November 24, 2009

One of my all-time favorite books is “Only the Paranoid Survive” by Andy Grove, former chairman and CEO of Intel Corporation.

In the early 1980s, Intel’s business was driven by DRAM chips.  Andy Grove saw the potential in microprocessors, and the rest is history, including a complete transformation of Intel’s business model and product strategy.

It’s good to be paranoid if you are a product management executive and have early visibility into indicators that may compel the company to change its product strategy.

At the same time, it’s also good to ask a very basic question before the company is facing a competitive threat.   What kind of change is the company capable of?  Complete transformation – if needed – or incremental change that may not be good enough to face a competitive threat?

Even an incremental change in a product strategy will require re-alignment of product marketing, sales, engineering, and professional services organizational resources.  There will be many late nights chaired by the CFO reviewing and comparing different revenue forecasts and P&L scenarios.

Significant product strategy change will challenge everyone in the organization, from executives to individual employees – regardless of role or experience.

The inconvenient truth: very few companies are capable of significant product strategy change unless there is a conscious and deliberate effort – well in advance – to build a culture capable of great leaps.

Go beyond the basics of normal paranoia when leading a product management organization in a software company.  Be paranoid about the ability to change and respond first.   There are plenty of product strategy presentations collecting dust as an outcome of being unable to execute.


Part 2: Identify critical design elements – early

November 10, 2009

Breakthrough.

That’s the word that comes to mind after a very long (and at times very vocal) design session that ultimately led to good design decisions.

Why was the decibel level a little higher than normal?

The current design was based on the aggregate ‘weight’ of over 500 functional requirements, to be delivered in 6 major releases.  There were an additional 300+ requirements in the pipeline that had not yet been discussed in detail.  Also, the current design already had a number of known items that concerned everyone on the engineering team.

The team faced a decision point: do we invest the time to understand whether the future 300+ requirements might substantially influence the current design?  The upside: the design will accommodate future requirements and reduce the risk of refactoring.  The downside: missing time with our families …

It turned out that some future requirements indeed had a significant impact on the current design.  In fact, had the team decided to develop the next 6 major releases based only on the known 500+ requirements, at some point the probability of having to completely redesign several critical components would have reached 100%.  Even more time would have been spent with fellow colleagues, and not families, in that case.

The breakthrough was achieved by going back to an example that I have used in the past to illustrate how important it is to identify critical design elements as early as possible, even at the cost of spending a little more time upfront.

Let’s assume there are separate design teams working on these 2 problems:

A. Fly from New York to Paris in 3.5 hours

B. Fly from New York to Paris in 8 hours

The design for Option A would yield an aircraft that looks like a Concorde, while the design for Option B would produce a Boeing 767-300ER (or equivalent). These aircraft could not be more different.

Just imagine if by sheer chance the team had pursued the Option B (Boeing 767-300ER) design as an option to fly from New York to Paris in 3.5 hours.  It’s impossible for the Boeing 767-300ER to accomplish this because of several intentional design decisions made very early in the process: choice of engines, alloys, wing geometry, etc.  This is not intended to be a lesson in aircraft engineering, but I hope the message emerges.

This example is particularly helpful as an illustration of potential risks: a Boeing 767-300ER can never become a Concorde.  If software has fundamental design constraints, those constraints cannot be refactored away.  Only a new design can solve these problems, and committing to a new design is expensive, time-consuming, and at times necessary.  But if this commitment has to be made, going back to the fundamentals of good software design is step one.

Identify critical design elements as early as possible, even if the requirements are not well known. Make the design sustainable over time. Test the design assumptions against known and likely requirements, even if these have to be delivered 12-18 months from now.

It’s always a good idea to separate needs from requirements

November 6, 2009

Anyone who has spent enough time building complex software products will probably agree that – when it comes to design – hindsight is truly 20/20.

There are many definitions of good software design.  Regardless of the definition …

– Good software design always tries to balance known as well as potential requirements

– Good software design creates a stable platform to accommodate multiple major releases without the need to significantly change the underlying architecture

Why is it important to separate needs from requirements when evaluating essential elements of a good design?   Perhaps needs and requirements are the same?  They are not – and it’s good to introduce very basic definitions for both.

– Needs are the specific business outcomes the customer must achieve by using your software – for example, “the ability to pay invoices online” or “the ability to accept credit cards online”.

– Requirements are – simply put – the functionality in the software product that must exist to support customer needs.

There is a very important cycle that cannot be ignored when designing software products.

Needs -> influence -> requirements -> influence -> design

Let’s illustrate why this is so important.

– The customer expresses a need to have invoices paid online.  The requirements can include: supporting 100,000 users vs supporting 1,000 users, downloading invoices, viewing invoices on one or more mobile device types, different security / authentication mechanisms … and the list goes on.  Only by exploring and validating requirements against the customer need can one begin to accumulate the raw material for subsequent design consideration.

– Some customers will want to see all requirements mentioned above

– Good, anticipatory design will accommodate these requirements (in future product releases over time) while preventing unplanned refactoring efforts.
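The needs → requirements → design cycle above can be sketched in code.  This is a hedged illustration only – the requirement list and the design_critical flag are assumptions for this example, not a formal method – showing how design consideration starts from the subset of requirements that constrain the architecture:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    description: str
    design_critical: bool  # True if it constrains the architecture (scale, security, ...)

# One customer need, decomposed into candidate requirements (illustrative only)
need = "ability to pay invoices online"
requirements = [
    Requirement("support 100,000 concurrent users", design_critical=True),
    Requirement("download invoices as PDF", design_critical=False),
    Requirement("view invoices on mobile devices", design_critical=False),
    Requirement("pluggable authentication mechanisms", design_critical=True),
]

# The design-critical subset is what must be identified early: these are
# the elements that are expensive (or impossible) to refactor later.
design_inputs = [r.description for r in requirements if r.design_critical]
print(design_inputs)
```

The design choice worth noting: scale and security land in the critical subset while PDF download does not – the former shape the Concorde-vs-767 decision, the latter can be added in any release.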

Refactoring is a very costly effort which carries many risks, including higher defect rates and delayed software releases.  In a highly competitive market, unsuccessful refactoring efforts can have a significant negative effect on the bottom line of the software company.

The “right side up” organization (or how not to demonstrate software)

November 4, 2009

Many years ago, I worked for PepsiCo in Purchase, NY.  I learned a lot at PepsiCo.  In addition to building a large-scale data warehouse which gave SQL optimizers more than one headache, I also learned what PepsiCo called at that time “the right-side-up organization”.

In the right-side-up organization, employees are on top and managers are on the bottom.  It’s the manager’s job to enable their employees, set clear goals, remove barriers, and get out of the way.

Similarly, in the right-side-up software company, customers are always on top.  Always.  And when the sales account manager and sales engineer travel to a customer location, the entire company should work for those two employees while they are on site – everyone, even the CEO.

Being invited to demonstrate mission-critical software at a customer location is a privilege.  It takes a lot of preparation to ensure that demonstrations are executed flawlessly.  And should trouble unexpectedly surface, the right-side-up organization will be the difference.  Here is what can happen when a right-side-up organization is not in place:

Before the demonstration:

– Another account team was using the only demo environment and promised to reset all customer databases (“trust us, you will never know we changed all the data”)

– The grapevine has its benefits.  When our sales account manager learned that the demo environment had been used by another account team, she called to confirm whether it had been reset and was ready for her customer demonstration.

– The demo environment was still down for maintenance, so no one could check.  Our sales account manager had to jump on a plane without ever knowing the true state of the demo environment.

During the demo:

– The demo environment was still down and unavailable.  The Operations team did not know that a critical, on-site customer presentation was about to take place.

– When our sales account manager called everyone she knew, every call went directly to voicemail.  The demo was eventually conducted, but only later that afternoon.

In the right-side-up organization, this would occur very differently.

– Even if the demo environment had to be down for maintenance, someone would be 100% accountable to log in later in the evening and reset / restore all databases to ensure that the demo environment was available for the customer demonstration.

– The schedule of all customer demonstrations would be confirmed and shared in a cross-functional meeting on Monday morning, or even on Sunday evening via a conference call.  In addition, sales account managers and sales engineers would be able to reach the right person on the first ring, because at least two people would be on standby to act as problem resolution coordinators.

In the right-side-up organization, everyone always asks, “Whom do I enable, and how can I help?”  Try it.