Improving Vendor Performance

As we discussed in our previous post on the inefficient technology marketplace, the typical IT shop spends 60% or more of its budget on external vendors – buying hardware, software, and services. Often, once the contract has been negotiated, signed, and initial deliveries commence, attention drifts elsewhere. There are, of course, plenty of other fires to put out. But maintaining an ongoing, fact-based focus on your key vendors can result in significant service improvement and corresponding value to your firm. This ongoing, fact-based focus is proper vendor management.

Proper vendor management is the right complement to a robust, competitive technology acquisition process. For most IT shops, your top 20 or 30 vendors account for about 80% of your spend. And once you have achieved outstanding pricing and terms through a robust procurement process, you should ensure you have effective vendor management practices in place that result in sustained strong performance and value by your vendors.

Perhaps the best vendor management programs are those run by manufacturing firms. Firms such as GE, Ford, and Honda have large dedicated supplier teams that work closely with their suppliers on a continual basis on all aspects of service delivery. Not only do the supplier teams routinely review delivery timing, quality, and price, but they also work closely with their suppliers to help them improve their processes and capabilities as well as identify issues within their own firm that impact supplier price, quality, and delivery. The work is data-driven and leans heavily on process improvement methodologies like Lean. For the average IT shop in services or retail, a full-blown manufacturing program may be overkill, but by implementing a modest yet effective vendor management program you can spur 5 to 15% improvements in performance and value, which accumulate to considerable benefits over time.

The first step to implementing a vendor management program is to segment your vendor portfolio. You should focus on your most important suppliers (by spend or critical service). Focus on the top 10 to 30 suppliers and segment them into the appropriate categories. It is important to group like vendors together (e.g., telecommunications suppliers or server suppliers). Then, if not already in place, assign executive sponsors from your company’s management team to each vendor. They will be the key contact for the vendor (not the sole contact, but the escalation and coordination point for all spend with this vendor) and will pair up with the procurement team’s category lead to ensure appropriate and optimal spend and performance for this vendor. Ensure both sides (your management and the vendor) know the expectations for suppliers (and what they should expect of your firm). Now you are ready to implement a vendor management program for each of these vendors.
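To make the segmentation step concrete, here is a rough Python sketch. The vendor names, spend figures, and the 80%-of-spend cut-off are purely illustrative assumptions, not data from any particular shop; it simply ranks vendors by spend, groups like vendors by category, and flags which tier of the program each would receive.

    # Minimal sketch: segment a vendor portfolio by annual spend.
    # Vendor names, spend figures, and the 80% cut-off are illustrative assumptions.
    from collections import defaultdict

    vendors = [  # (name, category, annual_spend_usd)
        ("TelcoOne",   "telecommunications", 4_200_000),
        ("ServerCo",   "server hardware",    3_100_000),
        ("StoragePro", "storage hardware",   1_900_000),
        ("NicheTools", "software",             120_000),
    ]

    total_spend = sum(spend for _, _, spend in vendors)
    by_category = defaultdict(list)

    # Rank by spend (largest first) and group like vendors together by category.
    ranked = sorted(vendors, key=lambda v: v[2], reverse=True)
    for name, category, spend in ranked:
        by_category[category].append(name)

    cumulative = 0
    for name, category, spend in ranked:
        cumulative += spend
        tier = "top tier (full program)" if cumulative / total_spend <= 0.80 else "next tier (annual review)"
        print(f"{name:12s} {category:20s} ${spend:>12,}  {tier}")

    print(dict(by_category))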

So what are the key elements of an effective vendor management program? First and foremost, there should be three levels of vendor management:

  • regular operational service management meetings
  • quarterly technical management sessions, and
  • executive sessions every six or twelve months.

The regular operational service management meetings – which occur at the line management level – ensure that regular service or product deliveries are occurring smoothly, issues are noted, and teams conduct joint working discussions and efforts to improve performance. At the quarterly management sessions, performance against contractual SLAs is reviewed as well as progress against outstanding and jointly agreed actions. The actions should address issues noted at the operational level in order to improve performance. At the next level, the executive sessions include a comprehensive performance review for the past 6 or 12 months as well as a survey completed by and for each firm. (The survey data to be collected will vary, of course, by the product or service being delivered.) Generally, you should measure along the following categories:

  • product or service delivery (on time, on quality)
  • service performance (on quality, identified issues)
  • support (time to resolve issues, effectiveness of support)
  • billing (accuracy, clarity of invoice, etc)
  • contractual (flexibility, rating of terms and conditions, ease of updates, extensions or modifications)
  • risk (access management, proper handling of data, etc)
  • partnership (willingness to identify and resolve issues, willingness to go above and beyond, how well the vendor understands your business and your goals)
  • innovation (track record of bringing ideas and opportunities for cost improvement or new revenues or product features)

Some of the data (e.g., service performance) will be summarized from operational data collected weekly or monthly as part of the ongoing operational service management activities. The operational data is supplemented by additional data and assessments captured from participants and stakeholders from both firms. It is important that the data collected be as objective as possible – ratings that are very high or very low should be backed up with specific examples or issues. The data is then collated and filtered for presentation to a joint session of senior management representing both firms. The focus of the executive session is straightforward: to review how both teams are performing and to identify the actions that can make the relationship more successful for both parties. The usual effect of a well-prepared assessment with data-driven findings is strong commitment and a redoubling of effort to ensure improved performance.
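As a rough illustration of the scorecard roll-up, here is a short Python sketch. The category weights, the 1–5 rating scale, and the rule that extreme ratings must cite an example are assumptions for illustration, not a prescribed standard.

    # Illustrative sketch of a vendor scorecard roll-up for the executive session.
    # Category names follow the list above; weights, the 1-5 scale, and the
    # "extreme ratings need a cited example" rule are illustrative assumptions.
    CATEGORIES = {
        "delivery": 0.20, "service": 0.20, "support": 0.15, "billing": 0.10,
        "contractual": 0.10, "risk": 0.10, "partnership": 0.10, "innovation": 0.05,
    }

    def score_vendor(ratings: dict) -> float:
        """ratings: category -> list of (score 1-5, example_or_None) tuples."""
        total = 0.0
        for category, weight in CATEGORIES.items():
            entries = ratings.get(category, [])
            for score, example in entries:
                # Keep the data objective: very high or very low ratings must cite
                # a specific example or issue, otherwise they are rejected.
                if (score >= 4.5 or score <= 2.0) and not example:
                    raise ValueError(f"{category}: score {score} needs a cited example")
            avg = sum(s for s, _ in entries) / len(entries) if entries else 0.0
            total += weight * avg
        return round(total, 2)

    print(score_vendor({
        "delivery": [(4.0, None), (3.5, None)],
        "support":  [(2.0, "ticket open six weeks")],   # hypothetical example
    }))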

Vendors rarely get clear, objective feedback from customers, and if your firm provides such valuable information, you will often be the first to reap the rewards. And by investing your time and effort into a constructive report, you will often gain an executive partner at your vendor willing to go the extra mile for your firm when needed. Lastly, the open dialogue will also identify areas and issues within your team and processes, such as poor specifications or cumbersome ordering processes that can easily be improved and yield efficiencies for both sides.

It is also worthwhile to use this supplier scorecard to rate the vendor against other similar suppliers. For example, you can show their total score in all categories against other vendors in an anonymized fashion (e.g., Vendor A, Vendor B, etc.) where they can see their own score and also see other vendors doing better and worse. Such a position often brings out the competitive nature of any group, also resulting in improved performance in the future.
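One simple way to present that anonymized comparison is sketched below; the vendor names and scores are invented for illustration.

    # Sketch: show a vendor its total score against anonymized peers.
    # Vendor names and scores are invented for illustration.
    scores = {"AcmeStorage": 3.9, "BetaStorage": 4.3, "GammaStorage": 3.1}

    def peer_view(vendor: str) -> list[str]:
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        lines = []
        for i, (name, score) in enumerate(ranked):
            label = "You" if name == vendor else f"Vendor {chr(ord('A') + i)}"
            lines.append(f"{label}: {score}")
        return lines

    print(peer_view("AcmeStorage"))  # ['Vendor A: 4.3', 'You: 3.9', 'Vendor C: 3.1']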

Given the investment of time and energy by your team, the vendor management program should be focused on your top suppliers. Generally, this is the top 10 to 30 vendors, depending on your IT spend. The next tier of vendors (31 through 50 or 75) should get an annual or biannual review and risk assessment, but not the regular operational meetings or management assessments unless performance is below par. Such a vendor’s performance can often be turned around by applying the full program.

Another valuable practice, once your program is established and is yielding constructive results, is to establish a vendor awards program. With an objective and thoughtful perspective on your vendors, you can then establish awards for your top vendors – vendor of the year, vendor partner of the year, most improved vendor, most innovative, and so on. Inviting the senior management of the vendors receiving awards to an awards dinner, with your firm’s senior management presenting the awards, will further spur both those who win the awards and those who don’t. Those who win will pay attention to your every request; those who don’t will have their senior management focused on winning the award next year. The end result, from the weekly operational meetings to the regular management sessions and the annual gala, is that vendor management positively impacts your significant vendor relationships and enables you to drive greater value from your spend.

Of course, the vendor management process outlined here is a subset of the procurement lifecycle applied to technology. It complements the technology acquisition process and enables you to repair or improve and sustain vendor performance and quality levels for a significant and valuable gain for your company.

It would be great to hear about your experience with leveraging vendor management.

Best, Jim Ditmore

 


Expect More Casualties

Smart phones, tablets, and their digital ecosystems have had a stunning impact on a range of industries in just a few short years. Those platforms changed how we work, how we shop, and how we interact with each other. And their disruption of traditional product companies has only just begun.

The first casualties were the entrenched smart phone vendors themselves, as iOS and Android devices and their platforms rose to prominence. It is remarkable that BlackBerry, which owned half of the US smart phone market at the start of 2009, saw its share collapse to barely 10% by the end of 2010 and to less than 1% in 2014, even as it responded with comparable devices. It’s proving nearly impossible for BlackBerry to re-establish its foothold in a market where your ‘platform’ – including your OS software and its features, the number of apps in your store, the additional cloud services, and the momentum in your user or social community – is as important as the device.

A bit further afield is the movie rental business. Unable to compete with electronic delivery to a range of consumer devices, Blockbuster filed for bankruptcy protection in September 2010, just 6 years after its market peak. Over in another content business, Borders, the slowest of the big bookstore chains, filed for bankruptcy shortly after, while the other big bookstore chain, Barnes & Noble, has hung on with its Nook tablet and better store execution — a “partial” platform play. But the likes of Apple, Google, and Amazon have already won this race, with their vibrant communities, rich content channels, value-added transactions (Geniuses and automated recommendations), and constantly evolving software and devices. Liberty Media recently voted on the likely outcome of this industry with its divestment from Barnes & Noble.

What’s common to these early casualties? They failed to anticipate and respond to fundamental business model shifts brought on by advances in mobile, cloud computing, application portfolios, and social communities. Combined, these technologies have evolved into lethal platforms that can completely overwhelm established markets and industries. They failed to recognize that their new competitors were operating on a far more comprehensive level than their traditional product competitors. Competing on product versus platform is like a catapult going up against a precision-guided missile.

Sony provides another excellent example of a superior product company (remember the Walkman?) getting mauled by platform companies. Or consider the camera industry: IDC predicts that shipments of what it calls “interchangeable-lens cameras,” or high-end digital cameras, peaked in 2012 and will decline 9.1% this year compared with last year as the likes of Apple, HTC, Microsoft, and Samsung build high-quality cameras into their mobile devices. By some estimates, the high-end camera market in 2017 will be half what it was in 2012 as those product companies try to compete against the platform juggernauts.

The casualties will spread throughout other industries, from environmental controls to security systems to appliances. Market leadership will go to those players using Android or iOS as the primary control platform.

Over in the gaming world, while the producers of content (Call of Duty, Assassin’s Creed, Madden NFL, etc.) are doing well, the console makers are having a tougher time. The market has already split forever into games on mobile devices and those for specialized consoles, making the market much more turbulent for the console makers. Wii console maker Nintendo, for example, is expected to report a loss this fiscal year. If not for some dedicated content (e.g., Mario), the game might already be over for the company. In contrast, however, Sony’s PS4 and Microsoft’s Xbox One had successful launches in late 2013, with improved sales and community growth bolstering both “partial” platforms for the long term.

In fact, the retail marketplace for all manner of goods and services is changing to where almost all transactions start with the mobile device, leaving little room for traditional stores that can’t compete on price. Those stores must either add physical value (touch and feel, in-person expertise), experience (malls with ice skating rinks, climbing walls, aquariums), or exclusivity/service (Nordstrom’s) to thrive.

It is difficult for successful product companies to move in the platform direction, even as they start to see the platforms eating their lunch. Even for technology companies, this recognition is difficult. Only earlier this year did Microsoft drop the license fee for its ‘small screen’ operating systems. After several years, Microsoft finally realized that it can’t win against mobile platform behemoths that give away their OS while it charges steep licensing fees for its mobile platform.

It will be interesting to see if Microsoft’s hugely successful Office product suite can compete over the long term with a slew of competing ecosystem plays. By extending Office to the iPad, Microsoft may be able to graft onto that platform and maintain its strong performance. While it’s still early to predict who will ultimately win that battle, I can only reference the battle of consumer iPhone and Android versus corporate BlackBerry — and we all know who won that one.

Over the next few years, expect more battles and casualties in a range of industries, as players leveraging Android, iOS, and other cloud/community platforms take on entrenched companies. Even icons such as Sony and Microsoft are at risk, should they cling to traditional product strategies.

Meantime, the likes of Google, Apple, Amazon, and Facebook are investing in future platforms — for homes, smart cars, robotics and drones, and more. As the impacts from the smart phone platforms continue, new platforms will add further disruption, so expect more casualties among traditional product companies, even in seemingly unrelated industries.

This post first appeared in InformationWeek in February. It has been updated. Let me know your thoughts about platform futures. Best, Jim Ditmore.


Overcoming the Inefficient Technology Marketplace

The typical IT shop spends 60% or more of its budget on external vendors – buying hardware, software, and services. Globally, the $2 trillion IT marketplace (2013 estimate by Forrester) is quite inefficient: prices and discounts vary widely between purchasers, and often not for reasons of volume or relationship. As a result, many IT organizations fail to effectively optimize their spend, often overpaying by 10%, 20%, or even much more.

Considering that IT budgets continue to be very tight, overspending your external vendor budget by 20% (a total budget overrun of 12%) means that you must reduce the remaining 40% of budget spend (which is primarily for staff) by almost one-third! What better way to get more productivity and results from your IT team than to spend only what is needed on external vendors and plow the savings back into IT staff and investments or to the corporate bottom line?
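To make the arithmetic explicit, here is the calculation using the illustrative 60/40 budget split and a 20% vendor overspend from above:

    # Worked arithmetic for the figures above: a 60/40 vendor/staff budget split
    # and a 20% overspend on the vendor portion (illustrative numbers).
    vendor_share, staff_share, vendor_overspend = 0.60, 0.40, 0.20

    total_overrun = vendor_share * vendor_overspend    # 0.12 -> 12% of the total budget
    staff_cut_to_absorb = total_overrun / staff_share  # 0.30 -> cut staff spend ~30%, almost 1/3

    print(f"total budget overrun: {total_overrun:.0%}")
    print(f"staff budget cut needed to absorb it: {staff_cut_to_absorb:.0%}")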

IT expenditures are easily one of the most inefficient areas of corporate spending due to opaque product prices and uneven vendor discounts. The inefficiency occurs across the entire spectrum of technology purchases – not just highly complex software purchases or service procurements. I learned from my experience in several large IT shops that there is rarely a clear rationale for the pricing achieved by different firms other than they received what they competitively arranged and negotiated. To overcome this inefficient marketplace, the key prerequisite is to set up strong competitive playing fields for your purchases. With competitive tension, your negotiations will be much stronger, and your vendors will work to provide the best value. In several instances, when comparing prices and discounts between firms where I have worked that subsequently merged, it became clear that many IT vendors had no consistent pricing structures, and in too many cases the firm with greater volume had a worse discount rate than the smaller-volume firm. The primary difference? The firm that robustly and competitively arranged and negotiated always had the better discount. The firms that based their purchases on relationships or that had embedded technologies limiting their choices typically ended up with technology pricing well over optimum market rates.

As an IT leader, to recapture the 6 to 12% of your total budget lost to vendor overspend, you need to address inadequate technology acquisition knowledge and processes in your firm — particularly with the senior managers and engineers who participate in or make the purchase decisions. To achieve best practice in this area, the basics of a strong technology acquisition approach are covered here, and I will post on the reference pages the relevant templates that IT leaders can use to seed their own best practice acquisition processes. The acquisition processes will only work if you are committed to creating and maintaining competitive playing fields and not making decisions based on relationships. As a leader, you will need to set the tone with a value culture and focus on your company’s return on value and objectives – not the vendors’.

Of course, the technology acquisition process outlined here is a subset of the procurement lifecycle applied to technology. The technology acquisition process provides additional details on how to apply the lifecycle to technology purchases, leveraging the teams, and accommodating the complexities of the technology world. As outlined in the lifecycle, technology acquisition should then be complemented by a vendor management approach that repairs or sustains vendor performance and quality levels – this I will cover in a later post.

Before we dive into the steps of the technology acquisition process, what are the fundamentals that must be in place for it to work well? First, a robust ‘value’ culture must be in place. A ‘value’ culture is one where IT management (at all levels) is committed to optimizing its company’s spending in order to make sure that the company gets the most for its money. It should be part of the core values of the group (and even better — a derivative of corporate values). The IT management and senior engineers should understand that delivering strong value requires constructing competitive playing fields for their primary areas of spending. If IT leadership instead allows relationships to drive acquisitions, this quickly robs the organization of negotiating leverage, and cost increases will quickly seep into acquisitions. IT vendors will rapidly adapt to how the IT team selects purchases — if it is relationship oriented, they will have lots of marketing events, and they will try to monopolize the decision makers’ time. If they must be competitive and deliver outstanding results, they will instead focus on getting things done, and they will try to demonstrate value. For your company, one barometer of how you conduct your purchases is the type of treatment you receive from your vendors. Commit to break out of the mold of most IT shops by changing the cycle of relationship purchases and locked-in technologies with a ‘value’ culture and competitive playing fields.

Second, your procurement team should have thoughtful category strategies for each key area of IT spending (e.g., storage, networking equipment, telecommunications services). Generally, your best acquisition strategy for a category is to establish 2 or 3 strong competitors in a supply sector such as storage hardware. Because you will have leveled most of the technical hurdles that prevent substitution, your next significant acquisition could easily go to any of the vendors. In such a situation, you can drive all vendors to compete strongly and lower their pricing to win. Of course, such a strong negotiating position is not always possible due to your legacy systems, new investments, or a limited set of actual competitors. For these situations, the procurement team should seek to understand what the best pricing is on the market and what critical factors the vendor seeks (e.g., market share, long-term commitment, marketing publicity, end-of-quarter revenue), and then use these to trade for more value for their company (e.g., price reductions, better service, lower long-term cost, etc.). This work should be done up front and well before a transaction initiates so that the conditions favoring the customer in negotiations are in place.

Third, your technology decision makers and your procurement team should be on the same page with a technology acquisition process (TAP). Your technology leads who are making purchase decisions should work arm in arm with the procurement team in each step of the TAP. Below is a diagram outlining the steps of the process. A team can do very well simply by executing each of the steps as outlined. Even better results are achieved by understanding the nuances of negotiations, maintaining competitive tension, and driving value.

[Diagram: Technology Acquisition Process (TAP), steps A through K]

Here are further details on each TAP step:

A. Identify Need – Your source for new purchasing can come from the business or from IT. Generally, you would start at this step only if it is a new product or significant upgrade or if you are looking to introduce a new vendor (or vendors) to a demand area. The need should be well documented in business terms and you should avoid specifying the need in terms of a product — otherwise, you have just directed the purchase to a specific product and vendor and you will very likely overpay.

B. Define Requirements – Specify your needs and ensure they mesh with the overall technology roadmap that the architects have defined. Look to bundle or gather up needs so that you can attain greater volumes in one acquisition and possibly gain better pricing. Avoid specifying requirements in terms of products to prevent ‘directing’ the purchase to a particular vendor. Try to gather requirements in a rapid process (some ideas here) and avoid stretching this task out. If necessary, subsequent steps (including an RFI) can be used to refine requirements.

C. Analyze Options – Utilize industry research and high-level alternatives analysis to down-select to the appropriate vendor/product pool. Ensure you maintain a strong competitive field. At the same time, do not waste time or resources on options that are unlikely.

D, E, F, G. Execute these four steps concurrently. First, ensure the options will all meet critical governance requirements (risk, legal, security, architectural) and then drive the procurement selection process as appropriate based on the category strategy. As you narrow or extend options, conduct the appropriate financial analysis (a simple cost-comparison sketch follows these steps). If you do wish to leverage proofs of concept or other trials, ensure you have pricing well established before the trial; otherwise, you will have far less leverage in vendor negotiations after the trial has been successful.

H. Create the Contract – Leverage robust terms and conditions via well-thought-out contract templates to minimize the work and ensure higher-quality contracts. At the same time, don’t forgo the business objectives of price, quality, and capability by trading them away for some unlikely liability term. The contract should be robust and fair with highly competitive pricing.

I. Acquire the Product – This is the final step of the procurement transaction, and it should be as accurate and automated as possible. Ensure proper receiving and sign-off as well as prompt payment. Often a further 1% discount can be achieved with prompt payment.

J & K. These steps move into lifecycle work to maintain good vendor performance and manage the assets. Vendor management will be covered in a subsequent post; it is an important activity that corrects or sustains vendor performance to high levels.
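As referenced in steps D–G, here is a minimal sketch of the kind of financial comparison to run across the remaining options. The cost components, figures, and 5-year horizon are illustrative assumptions; a real analysis would add migration costs, support tiers, negotiated discounts, and so on.

    # Minimal sketch: compare total cost of ownership (TCO) across vendor options.
    # Figures, cost components, and the 5-year horizon are illustrative assumptions.
    def tco(purchase: float, annual_support: float, annual_ops: float, years: int = 5) -> float:
        """Simple TCO: one-time purchase plus recurring support and operations costs."""
        return purchase + years * (annual_support + annual_ops)

    options = {
        "Vendor A": tco(purchase=900_000, annual_support=120_000, annual_ops=80_000),
        "Vendor B": tco(purchase=750_000, annual_support=160_000, annual_ops=95_000),
    }

    for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
        print(f"{name}: 5-year TCO ${cost:,.0f}")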

By following this process and ensuring your key decision makers set a competitive landscape and hold your vendors to high standards, you should be able to achieve better quality, better services, and significant cost savings. You can then plow these savings back into strategic investments, including more staff, or reduce IT costs for your company. And at these levels, that can make a big difference.

What are some of your experiences with technology acquisition and suppliers? How have you tackled or optimized the IT marketplace to get the best deals?

I look forward to hearing your views. Best, Jim Ditmore


Moving from Offshoring to Global Service Centers II

As we covered in our first post on this topic, since the mid-90s, companies have used offshoring to achieve cost and capacity advantages in IT. Offshoring was a favored option to address Y2K issues and has continued to expand at a steady rate throughout the past twenty years. But many companies still approach offshoring as  ‘out-tasking’ and fail to leverage the many advantages of a truly global and high performance work force.

With out-tasking, companies take a limited set of functions or ‘tasks’ and move these to the offshore team. They often achieve initial economic advantage through labor arbitrage and perhaps some improvement in quality as the tasks are documented and standardized in order to make it easier to transition the work to the new location. This constitutes the first level of a global team: offshore service provider. But larger benefits are often lost, and only select organizations have matured the model to its highest performance level: ‘global service centers’.

So, how do you achieve high performance global service centers instead of suboptimal offshore service providers? As discussed previously, you must establish the right ‘global footprint’ for your organization. Here we will cover the second half of getting to global service centers:  implementing a ‘global team’ model. Combined with the right footprint, you will be able to achieve global service centers and enable competitive advantage.

Global team elements include:

  • consistent global goals and vision across global sites with commensurate rewards and recognition by site
  • a matrix team structure that enables both integrated processes and local and global leadership and controls
  • clarity on roles based on functional responsibility and strategic competence rather than geographic location
  • the opportunity for growth globally from a junior position to a senior leader
  • close partnership with local universities and key suppliers at each strategic location

To understand the variation in performance for the different structures, first consider the effectiveness of your entire team – across the globe – on several dimensions:

  • level of competence (skill, experience)
  • productivity, ability to improve current work
  • ownership and engagement
  • customization and innovation contributions
  • source of future leaders

For an offshore service provider, where work has been out-tasked to a particular site, the team can provide similar or, in some cases, better levels of competence. Because of the lower cost in the offshore location, if there is adequate skilled labor, the offshore service provider can more easily acquire such skill and experience within a given budget. A recognizable global brand helps with this talent acquisition. But since only tasks are sent to the center, productivity and continuous improvement can only be applied to the portions of the process within the center. Requirements, design, and other early-stage activities are often left primarily to the ‘home office’ with little ability for the offshore center to influence them. Further, the process standards and ownership typically remain at the home office as well, even though most implementation may be done at the offshore service provider. This creates a further gap where the implications of new standards or home office process ‘improvements’ must be borne by the offshore service provider even if the theory does not work well in actual practice. And since implementation and customer interfaces are often limited as well, the offshore service provider receives little real feedback, further constraining the improvement cycle.

For the offshore service provider, the ability to improve processes and productivity is limited to local optimization only, and capabilities are often at the whim of poor decisions from a distant home office. More comprehensive productivity and process improvements can be achieved by devolving competency authority to the primary team executing the work. So, if most testing is done in India, then the testing process ownership and testing best practices responsibility should reside in India. By shifting process ownership closer to the primary team, there will be a natural interchange and flow of ideas and feedback that will result in better improvements, better ownership of the process, and better results. The process can and should still be consistent globally; the primary competency ownership just resides at its primary practice location. This will result in a highly competent team striving to be among the best in the world. Even better, the best test administrators can now aspire to become test best practice experts and see a longer career path at the offshore location. Their productivity and knowledge levels will improve significantly. These improvements will reduce attrition and increase employee engagement in the test team, not just in India but globally. In essence, by moving from proper task placement to proper competency placement, you enable both the offshore site and the home sites to perform better on team skill and experience as well as team productivity and process improvement.

Proper competency placement begins the movement of your sites from offshore service providers to global service excellence. Couple competency placement with transparent reporting on the key metrics for the selected competencies (e.g., all test teams, across the globe, should report based on best-in-class operational metrics) and drive improvement cycles (local and global) based on findings from the metrics. Full execution of these three adjustments will enable you to achieve sustained productivity improvements of 10 to 30% and lower attrition rates (of your best staff) by 20 to 40%.

It is important to understand that pairing competency leadership with primary execution is required in IT disciplines much more so than in other fields, due to the rapid fluidity and advance of technology practices, the frequent need to engage multiple levels of the same expertise to resource and complete projects, and the ambiguity and lack of clear industry standards in many IT engineering areas. In many other industries (manufacturing, chemicals, petroleum), stratification between engineering design and implementation is far more rigorous and possible given the standardization of roles and slower pace of change. Thus, those organizations can operate far closer to optimum even with task offshoring, in a way that is just not possible in the IT space over any sustained time frame.

To move beyond global competency excellence, the structures around functions (the entire processes, teams, and leadership that deliver a service) must be optimized and aligned. First and foremost, goals and agenda must be set consistently across the globe for all sites. There can be no sub-agendas where offshore sites focus only on meeting their SLAs or capturing a profit; instead, the goals must be the appropriate IT goals globally. (Obviously, for tax purposes, certain revenue and profit overheads will be achieved, but that is an administrative process, not an IT goal.)

Functional optimization is achieved by integrating the functional management across the globe where it becomes the primary management structure. Site and resource leadership is secondary to the functional management structure. It is important to maintain such site leadership to meet regulatory and corporate requirements as well as provide local guidance, but the goals, plans, initiatives, and even day-to-day activities flow through a natural functional leadership structure. There is of course a matrix management approach where often the direct line for reporting and legal purposes is the site management, but the core work is directed via the functional leadership. Most large international companies have mastered this matrix management approach and staff and management understand how to properly work within such a setup.

It is worth noting that within any large services corporation, ‘functional’ management will reign supreme over ‘site’ management. For example, in a debate deciding the critical projects to be tackled by the IT development team, it is the functional leaders working closely with the global business units that will define the priorities and make the decisions. And if the organization has a site-led offshore development shop, that shop will find out about the resources required long after the decisions are made (and be required to simply fulfill the task). Site management is simply viewed as not having worthy knowledge or authority to participate in any major debate. Thus, if you have your offshore centers singularly aligned to site leadership all the way up the corporate chain, the ability to influence or participate in corporate decisions is minimal. However, if you have matrixed the structure to include a primary functional reporting mechanism, then the offshore team will have some level of representation. This increases particularly as managers and senior managers populate the offshore site and are able to exercise functional control back into home offices or other sites. Thus the testing team discussed earlier, if it is primarily located in India, would have not just responsibility for the competency and process direction and goals but also the global test senior leader at its site, who would have test teams back at the home office and other sites. This structure enables functional guidance and leadership from a position of strength. Now priorities, goals, initiatives, and functional direction can flow smoothly from around the globe to best inform the functional direction. Staff in offshore locations now feel committed to the function, resulting in far more energy and innovation arising from these sites. The corporation now benefits from having a much broader pool of strong candidates for leadership positions. And not just more diverse candidates, but candidates who understand a global operating model and are comfortable reaching across time zones and cultures – just what is needed to compete globally in the business. The chart below represents this transition from task to competency to function optimization.

[Chart: Global Team Progression]

If you combine the functional optimization with a highly competitive site structure, you can typically organize each key function in 2 or 3 locations where global functional leadership will reside. This then adds time-of-day and business continuity advantages. By having the same function at a minimum of two sites, even if one site is down the other can operate. Or IT work can be started at one site and handed off at the end of the day to the next site that is just beginning its day (in fact, most world-class IT command centers operate this way). Thus no one ever works the night shift. And time to market can be greatly improved by leveraging such time advantages.

While it is understandably complex to optimize across many variables (site location, contractor and skill mix, location cost, functional placement, competency placement, talent and skill availability), IT teams that achieve a global team model and put in place global service centers reap substantial benefits in cost, quality, innovation, and time to market.

To properly weigh these factors, I recommend a workforce plan approach where each function or sub-function maps out its staff and leaders across site, contractor/staff mix, and seniority mix. Lay out the target to optimize across all key variables (cost, capability, quality, business continuity, and so on) and then construct a quarterly trajectory of the function’s composition from the current state until it achieves the target. Balance for critical mass, leadership, and likely talent sources. Now you have a draft plan of the moves and transactions that must be made to meet your target. Every staff transaction (hires, rotations, training, layoffs, etc.) going forward should be weighed against whether it meshes with the workforce plan trajectory. Substantial progress toward an optimized global team can then be made by leveraging a rising tide of accumulated transactions executed in a strategic manner. These plans must be accompanied or even introduced by an overall vision of the global team and reinforcement of the goals and principles required to enable such an operating model. But once the plans are laid, you and your organization can expect to achieve far better capabilities and results than from just dispersing tasks and activities around the world.
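As a minimal sketch of what one snapshot in such a workforce plan might look like, consider the Python fragment below. The sites, headcounts, and target mix are invented for illustration; a real plan would carry quarterly snapshots out to the target date and record the planned transactions between them.

    # Sketch: check one function's current workforce mix against the target mix.
    # Sites, headcounts, and the target mix are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class SiteMix:
        site: str
        cost_tier: str    # "high", "medium", or "low"
        staff: int
        contractors: int

    def mix_by_tier(plan: list[SiteMix]) -> dict[str, float]:
        total = sum(s.staff + s.contractors for s in plan)
        mix: dict[str, float] = {}
        for s in plan:
            mix[s.cost_tier] = mix.get(s.cost_tier, 0.0) + (s.staff + s.contractors) / total
        return mix

    current_quarter = [
        SiteMix("HomeOffice", "high", 120, 40),
        SiteMix("Pune",       "low",   60, 80),
    ]
    target_mix = {"high": 0.20, "medium": 0.40, "low": 0.40}

    print("current:", {tier: round(share, 2) for tier, share in mix_by_tier(current_quarter).items()})
    print("target: ", target_mix)
    # Each proposed staff transaction (hire, rotation, layoff) can then be simulated
    # against the plan to confirm it moves the mix toward the target.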

In today’s global competition, this global team approach is absolutely key for competitive advantage and essential for competitive parity if you are or aspire to be a top international company. It would be great to hear your perspectives and any feedback on how you or your company has been either successful (or unsuccessful) at achieving a global team.

I will add a subsequent reference page with Workforce Plan templates that can be leveraged by teams wishing to start this journey.

Best, Jim Ditmore


Moving from Offshoring to Global Shared Service Centers

My apologies for the delay in my post. It has been a busy few months and it has taken an extended time since there is quite a bit I wish to cover in the global shared service center model. Since my NCAA bracket has completely tanked, I am out of excuses to not complete the writing, so here is the first post with at least one to follow. 

Since the mid-90s, companies have used offshoring to achieve cost and capacity advantages in IT. Offshoring was a favored option to address Y2K issues and has continued to expand at a steady rate throughout the past twenty years. But many companies still approach offshoring as  ‘out-tasking’ and fail to leverage the many advantages of a truly global and high performance work force.

With out-tasking, companies take a limited set of functions or ‘tasks’ and move these to the offshore team. They often achieve initial economic advantage through labor arbitrage and perhaps some improvement in quality as the tasks are documented and standardized in order to make it easier to transition the work to the new location. This constitutes the first level of a global team: offshore service provider. But larger benefits are often lost; these typically include:

  • further ongoing process improvement,
  • better time to market,
  • wider service times or ‘follow the sun’,
  • and leverage of critical innovation or leadership capabilities of the offshore team.

In fact, the work often stagnates at whatever state it was in when it was transitioned, with little impetus for further improvement. And because lower-level tasks are often the work that is shifted offshore while higher-level design work remains in the home country, key decisions on design or direction can take an extended period – actually lengthening time to market. In fact, design or direction decisions often become arbitrary or disconnected because the groups – one in the home office, the other in the offshore location – retain significant divides (time of day, perspective, knowledge of the work, understanding of the corporate strategy, etc.). At its extreme, the home office becomes the ivory tower and the offshore teams become serf task executors and administrators. Ownership, engagement, initiative, and improvement energies are usually lost in these arrangements. And it can be further exacerbated by having contractors at the offshore location, who have a commercial interest in maintaining the status quo (and thus revenue) and who are viewed with less regard by the home country staff. Any changes required are used to increase contractor revenues and margins. These shortcomings erase many of the economic advantages of offshoring over time and further impact the competitiveness of the company in areas such as agility, quality, and leadership development.

A far better way to approach your workforce is to leverage a ‘global footprint and a global team’. This approach is absolutely key for competitive advantage and essential for competitive parity if you are an international company. There are multiple elements of the ‘global footprint and team’ approach that, when effectively orchestrated by IT leadership, can achieve far better results than any other structure. By leveraging a high performance global approach, you can move from an offshore service provider to a shared service excellence center and, ultimately, to a global service leadership center.

The key elements of a global team approach can be grouped into two areas: high performance global footprint and high performance team. The global footprint elements are:

  • well-selected strategic sites, each with adequate critical mass, strong labor pools and higher education sources
  • proper positioning to meet time-of-day and improved skill and cost mix
  • knowledge and leverage of distinct regional advantages to obtain better customer interface, diverse inputs and designs, or unique skills
  • proper consolidation and segmentation of functions across sites to achieve optimum cost and capability mixes

Global team elements include:

  • consistent global goals and vision across global sites with commensurate rewards and recognition by site
  • a team structure that enables both integrated processes and local and global controls
  • the opportunity for growth globally from a junior position to a senior leader
  • close partnership with local universities and key suppliers at each strategic location
  • opportunity for leadership at all locations

Let’s tackle global footprint today and in a follow on post I will cover global team. First and foremost is selecting the right sites for your company. Your current staff total size and locations will obviously factor heavily into your ultimate site mix. Assess your current sites using the following criteria:

  • Do they have critical mass (typically at least 300 engineers or operations personnel, preferably 500+) that will make the site efficient, productive and enable staff growth?
  • Is the site located where IT talent can be easily sourced? Are there good universities nearby to partner with? Are there business units co-located or customers nearby?
  • Is the site in a low, medium, or high cost location?
  • What is the shift (time zone) of the location?

Once you have classified your current sites with these criteria, you can then assess the gaps. Do you have sites in low-cost locations with strong engineering talent (e.g., India, Eastern Europe)? Do you have medium-cost locations (e.g., Ireland or 2nd tier cities in the US midwest)? Do you have too many small sites (e.g., under 100 personnel)? Do you have sites close to key business units or customers? Do you have no sites located in 3rd shift time zones? Remember that your sites are more about the cities they are located in than the countries. A second tier city in India or a first or second tier city in Eastern Europe can often be your best site location because of improved talent acquisition and lower attrition than 1st tier locations in your country or in India.

It is often best to locate your service center where there are strong engineering and business universities nearby that will provide an influx of entry-level staff eager to learn and develop. Given that staff will be the primary cost factor in your service, ensure you locate in lower-cost areas that have good language skills, access to engineering universities, and appropriate time zones. For example, if you are in Europe, you should look to have one or two consolidated sites located just outside 2nd tier cities with strong universities. Do not locate in Paris or London; instead, base your service desk in or just outside Manchester, Budapest, or Vilnius. This will enable you to tap into a lower-cost yet high-quality labor market that is also likely to provide more part-time workers who will help you cover peak call periods. You can use a similar approach in the US or Asia.

A highly competitive site structure enables you to meet a globally optimal cost and capability mix as well. At the most mature global teams in very large companies, we drove for a 20/40/40 cost mix (20% high cost, 40% medium, and 40% low cost) where each site is in a strong engineering location. Where possible, we also co-located with key business units. Drive to the optimal mix by selecting 3, 4, or 5 strategic sites that meet the mix target and that will also give you the greatest spread of shift coverage. Once you have located your sites correctly, you must then of course drive effective recruiting, training, and management of each site to achieve outstanding service. Remember also that you must properly consolidate functions to these strategic sites. Your key functions must be consolidated to 2 or 3 of the sites – you cannot run a successful function where there are multiple small units scattered around your corporate footprint. You will be unable to invest in the needed technology and provide an adequate career path to attract the right staff if the function is highly dispersed.

You can easily construct a matrix and assess your current sites against these criteria. Remember, these sites are likely among the most important investments your company will make. If you have a poor portfolio of sites, with inadequate labor resources, ineffective talent pipelines, or other issues, it will impact your company’s ability to attract and retain its most important asset for achieving competitive success. It may take substantial investment and an extended period of time, but achieving an optimal global site portfolio and global team will provide lasting competitive advantage.
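Here is a small Python sketch of such a matrix. The sites, headcounts, and time zones are invented for illustration, while the 300-person critical mass threshold and the 20/40/40 cost-mix target come from the discussion above.

    # Sketch: assess a site portfolio against the criteria above (critical mass,
    # university/talent pipeline, cost tier, time zone) and check the cost mix.
    # Site names and numbers are invented; thresholds follow the text (300+ staff,
    # 20/40/40 high/medium/low cost mix).
    sites = [
        # (name, headcount, cost_tier, utc_offset, universities_nearby)
        ("HomeOffice HQ", 450, "high",   -6.0, True),
        ("Vilnius",       350, "medium",  2.0, True),
        ("Pune",          600, "low",     5.5, True),
        ("SmallSite",      80, "high",   -5.0, False),
    ]
    target_mix = {"high": 0.20, "medium": 0.40, "low": 0.40}

    total_staff = sum(headcount for _, headcount, *_ in sites)

    for name, headcount, tier, offset, universities in sites:
        flags = []
        if headcount < 300:
            flags.append("below critical mass")
        if not universities:
            flags.append("no university pipeline")
        print(f"{name:15s} {headcount:4d} {tier:6s} UTC{offset:+.1f}  {', '.join(flags) or 'ok'}")

    actual_mix = {tier: round(sum(h for _, h, t, *_ in sites if t == tier) / total_staff, 2)
                  for tier in ("high", "medium", "low")}
    print("cost mix actual vs target:", actual_mix, "vs", target_mix)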

I will cover the global team aspects in my next post, along with the key factors in moving from an offshore service provider to shared service excellence to shared service leadership.

It would be great to hear your perspectives and any feedback on how you or your company has been either successful (or unsuccessful) at achieving a global team.

Best, Jim Ditmore


Keeping Score and What’s In Store for 2014

Now that 2013 is done, it is time to review my predictions from January last year. For those keeping score, I had six January predictions for Technology in 2013:

6. 2013 is the year of the ‘connected house’ as standards and ‘hub’ products achieve critical mass. Score: Yes! - A half dozen hubs were introduced in 2013 including Lowe’s and AT&T’s as well as SmartThings and Nest. The sector is taking off but is not quite mainstream as there is a bit of administration and tinkering to get everything hooked in. Early market share could determine the standards and the winners here.

5. The IT job market will continue to tighten requiring companies to invest in growing talent as well as higher IT compensation. Score: Nope! - Surprisingly, while the overall job market declined from a 7.9% unemployment rate to 7.0% over 2013, the tech sector had a slight uptick from 3.3% to 3.9% in the 3rd quarter (4Q numbers not available). However, this uptick seems to be caused by more tech workers switching jobs (and thus quitting old jobs) perhaps due to more confidence and better pay elsewhere. Look for a continued tight supply of IT workers as the Labor department predicts that by 2020, another 1.4M IT workers are required and there will only be 400K IT graduates during that time!

4. Fragmentation will multiply in the mobile market, leaving significant advantage to Apple and Samsung as the only companies commanding premiums for their products. Score: Yes and no - Fragmentation did occur in the Android segment, but the overall market consolidated greatly. And Samsung and Apple continued in 2013 to capture the lion’s share of all profits from mobile and smart phones. Android picked up market share (and fragmented into more players), as did Windows Phone, notably in Europe. Apple dipped some, but the greatest drop was in ‘other’ devices (Symbian, BlackBerry, etc.). So expect a 2014 market dominated by Android and iOS, with Windows Phone a distant third. And Apple will be hard pressed to come out with lower-cost volume phones to encourage entry into its ecosystem. Windows Phone will need to continue to increase well beyond current levels, especially in the US or China, in order to truly compete.

3. HP will suffer further distress in the PC market both from tablet cannibalization and aggressive performance from Lenovo and Dell. Score: Yes! - Starting with the 2nd quarter of 2013, Lenovo overtook HP as the worldwide leader in PC shipments and then widened its lead in the 3rd quarter. Dell continued to outperform the overall market sector and finished a respectable second in the US and third in the world. Overall PC shipments continued to slide with an 8% drop from 2012, in large part due to tablets. Windows 8 did not help shipments, and there does not appear to be a major resurgence in the market in the near term. Interestingly, as with smart phones, there is a major consolidation occurring around the top 3 vendors in the market — again, ‘other’ is the biggest loser of market share.

2. The corporate server market will continue to experience minimal increases in volume and flat or downward pressure on revenue. Score: Yes! - Server revenues declined year over year from 2012 to 2013 in the first three quarters (declines of 5.0%, 3.8%, and 2.1% respectively). Units shipped treaded water with a decline of 0.7% in the first quarter, an uptick of 4% in the second quarter, and a slight increase of 2% in the third quarter. I think 2014 will show more robust growth with greater business investment.

1. Microsoft will do a Coke Classic on Windows 8. Score: Yes and no - Windows 8.1 did put back the Start button, but retained much of the ‘Metro’ interface. Perhaps best cast as the ‘Great Compromise’, Windows 8.1 was a half step back to the ‘old’ interface and a half step forward to a better integrated user experience. We will see how the ‘one’ user experience across all devices works for Microsoft in 2014.

So, the final score was 3 came true, 2 mostly came true, and 1 did not – for a total score of 4. Not too bad, though I expected a 5 or 6 :) . I will do one re-check of the score when the end-of-year IT unemployment figures come out to see if the strengthening job market made up for the 3rd quarter dip.

As an IT manager, it is important to have strong, robust competition – it was good to see both Microsoft and HP come out swinging in 2013. Maybe they did not land many punches, but it is good to have them back in the game.

Given it is the start of the year, I thought I would map out some of the topics I plan to cover in my posts this coming year. As you know, the focus of Recipe for IT is useful best practice techniques and advice that works in the real world and enables IT managers to be more successful. In 2013, we had a very successful year with over 43,000 views from over 150 countries (most from the US, UK, India, and Canada). And I wish to thank the many who have contributed comments and feedback — it has really helped me craft a better product. So with that in mind, please provide your perspective on the upcoming topics, especially if there are areas you would like to see covered that are not.

For new readers, I have structured the site into two main areas: posts – which are short, timely essays on a particular topic – and reference pages – which often take a post and provide a more structured and possibly deeper view of the topic. The pages are intended to be an ongoing reference of best practice for you to leverage. You can reach the reference pages from the drop-down links on the home page.

For posts, I will continue the discussion on cloud and data centers. I will also explore flash storage and the continuing impact of mobile. Security will invariably be a topic. Some of you may have noticed that some posts are placed first on InformationWeek and then subsequently here. This helps increase the exposure of Recipe for IT and also ensures good editing (!).

For the reference pages, I have recently refined and will continue to improve the production and quality areas. Look also for updates and improvements to leadership as well as the service desk.

What other topics would you like to see explored? Please comment and provide your feedback and input.

Best, and I wish you a great start to 2014,

Jim Ditmore


Celebrate 2013 Technology or Look to 2014?

The year is quickly winding down and 2013 will not be remembered as a stellar year for technology. Between the NSA leaks and Orwellian revelations, the Healthcare.gov mishaps, the cloud email outages (and Yahoo’s is still lingering) and now the 40 million credit identities stolen from Target, 2013 actually was a pretty tough year for the promise of technology to better society.

While the breakneck progress of technology continued, we witnessed many shortcomings in its implementation. Fundamental gaps in large project delivery and in availability design and implementation continue to plague large and widely used systems. It is as if the primary design lessons of ‘Galloping Gertie’ regarding resonance were never absorbed by bridge builders. The costs of such major flaws in these large systems are certainly similar to those of a failed bridge. And as it turns out, if there is a security flaw or loophole, either the bad guys or the NSA will exploit it. I particularly like the NSA’s use of ‘smiley faces’ on internal presentations when they find a major gap in someone else’s system.

So, given 2013 has shown the world we live in all too clearly, as IT leaders let’s look to 2014 and resolve to do things better. Let’s continue to up the investment in security within our walls and be more demanding of our vendors to improve their security. Better security is the number 2 focus item (behind data analytics) for most firms and the US government. And security spend will increase an out-sized amount even as total spend goes up by 5%. This is good news, but let’s ensure the money is spent well and we make greater progress in 2014. Of course, one key step is to get XP out of your environment by March since it will no longer be patched by Microsoft. For a checklist on security, here is a good start at my best practices security reference page.

As for availability, remember that quality provides the foundation of availability. Whether in design, implementation, or change, quality must be woven throughout these processes to enable robust availability and meet the demands of today’s 7×24 mobile consumers. Resolve to move your shop from craft to science in 2014, and make a world of difference for your company’s interface to its customers. Again, if you are wondering how best to start this journey and make real progress, check out this primer on availability.

Now, what should you look for in 2014? As with last January, where I made 6 predictions for 2013, I will make 6 technology predictions for 2014. Here we go!

6. There will be consolidation in the public cloud market as smaller companies fail to gather enough long term revenue to survive and compete in a market with rapidly falling prices. Nirvanix was the first of many.

5. NSA will get real governance, though it will be secret governance. There is too much of a firestorm for this to continue in current form.

4. Dual SIM phones become available in major markets. This is my personal favorite wish list item and it should come true in the Android space by 4Q.

3. Microsoft’s ‘messy’ OS versions will be reduced, but Microsoft will not deliver on the ‘one’ platform. Expect Microsoft to drop RT and continue to incrementally improve Pro and Enterprise to be more like Windows 7. As for Windows Phone OS, it is a question of sustained market share and the jury is out. It should hang on for a few more years though.

2. With a new CEO, a Microsoft breakup or spinoffs are in the cards. The activist shareholders are holding fire while waiting for the new CEO, but will be applying the flame once again. Effects? How about Office on the iPad? Everyone is giving away software and charging for hardware and services, forcing an eventual change in the Microsoft business model.

1. Flash revolution in the enterprise. What looked at the start of 2013 to be three or more years out now looks like this year. With de-duplication, flash storage is reaching prices comparable to traditional storage while cutting environmentals (power, cooling, floor space) by roughly 90%, and the shift will become a stampede as the next generation of flash costs significantly less than disk storage.
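
To see how de-duplication tips the cost comparison in prediction 1, here is a back-of-envelope sketch in Python. Every price and ratio below is a hypothetical placeholder for illustration, not a vendor figure.

```python
# Back-of-envelope comparison of effective cost per usable GB.
# All numbers below are hypothetical placeholders, not vendor pricing.
FLASH_RAW_COST_PER_GB = 4.00   # assumed raw flash cost
DISK_RAW_COST_PER_GB = 1.20    # assumed raw enterprise disk cost
FLASH_DEDUP_RATIO = 4.0        # assumed de-duplication ratio achievable on flash
DISK_DEDUP_RATIO = 1.0         # assume little or no de-dup on traditional arrays

flash_effective = FLASH_RAW_COST_PER_GB / FLASH_DEDUP_RATIO
disk_effective = DISK_RAW_COST_PER_GB / DISK_DEDUP_RATIO

print(f"Flash effective cost: ${flash_effective:.2f}/GB")
print(f"Disk effective cost:  ${disk_effective:.2f}/GB")
# With these assumptions flash lands at roughly $1.00/GB versus $1.20/GB for disk,
# before counting the ~90% savings in power, cooling, and floor space.
```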

What are your top predictions? Anything to change or add?

I look forward to your feedback and next week I will assess how my predictions from January 2013 did — we will keep score!

Best, and have a great holiday,

Jim Ditmore

Posted in Information Security, Just for fun, Looking Ahead, Vision and Leadership | Tagged , | 1 Comment

How Did Technology End Up on the Sunday Morning Talk Shows?

It has been two months since the Healthcare.gov launch, and by now nearly every American has heard of or witnessed the poor performance of the websites. Early on, only one of every five users was able to actually sign in to Healthcare.gov, and poor performance and unavailable systems continued to plague the federal and some state exchanges. Performance was still problematic several weeks into the launch, and even as of Friday, November 30, the site was down for 11 hours for maintenance. As of today, December 1, the promised ‘relaunch day’, it appears the site is ‘markedly improved’, but there are plenty more issues to fix.

What a sad state of affairs for IT. So, what do the Healthcare.gov website issues teach us about large project management and execution? Or further, about quality engineering and defect removal?

Soon after the launch, former federal CTO Aneesh Chopra, in an Aspen Institute interview with The New York Times‘ Thomas Friedman, shrugged off the website problems, saying that “glitches happen.” Chopra compared the Healthcare.gov downtime to the frequent appearances of Twitter’s “fail whale” as heavy traffic overwhelmed that site during the 2010 soccer World Cup.

But given that the size of the signup audience was well known and that website technology is mature and well understood, how could the government create such an IT mess? Especially given how much lead time the government had (more than three years) and how much it spent on building the site (estimated between $300 million and $500 million).

Perhaps this is not quite so unusual. Industry research suggests that large IT projects are at far greater risk of failure than smaller efforts. A 2012 McKinsey study revealed that 17% of IT projects budgeted at $15 million or higher go so badly as to threaten the company’s existence, and more than 40% of them fail. As bad as the U.S. healthcare website debut is, there are dozens of examples, both government-run and private, of similar debacles.

In a landmark 1995 study, the Standish Group established that only about 17% of IT projects could be considered “fully successful,” another 52% were “challenged” (they didn’t meet budget, quality or time goals) and 30% were “impaired or failed.” In a recent update of that study conducted for ComputerWorld, Standish examined 3,555 IT projects between 2003 and 2012 that had labor costs of at least $10 million and found that only 6.4% of them were successful.

Combining the inherent problems associated with very large IT projects with outdated government practices greatly increases the risk factors. Enterprises of all types can track large IT project failures to several key reasons:

  • Poor or ambiguous sponsorship
  • Confusing or changing requirements
  • Inadequate skills or resources
  • Poor design or inappropriate use of new technology

Unfortunately, strong sponsorship and solid requirements are difficult to come by in a political environment (read: Obamacare), where too many individual and group stakeholders have reason to argue with one another and change the project. Applying the political process of lengthy debates, consensus-building and multiple agendas to defining project requirements is a recipe for disaster.

Furthermore, based on my experience, I suspect the contractors doing the government work encouraged changes, as they saw an opportunity to grow the scope of the project with much higher-margin work (change orders are always much more profitable than the original bid). Inadequate sponsorship and weak requirements were undoubtedly combined with a waterfall development methodology and overall big bang approach usually specified by government procurement methods. In fact, early testimony by the contractors ‘cited a lack of testing on the full system and last-minute changes by the federal agency’.

Why didn’t the project use an iterative delivery approach to hone requirements and interfaces early? Why not start with healthcare site pilots and betas months or even years before the October 1 launch date? The project was underway for three years, yet nothing was made available until October 1. And why did the effort leverage only an already occupied pool of virtualized servers that had little spare capacity for a major new site? For less than 10% of the project costs, a massive dedicated server farm could have been built. Further, there was no backup site, nor were any monitoring tools implemented. And where was the horizontal scaling design within the application to enable easy addition of capacity for unexpected demand? It is disappointing to see such basic misses in non-functional requirements and design in a major program for a system that is not that difficult or unique.

These basic deliverables and approaches appear to have been entirely missed in the implementation of the website. Further, the website code appears to have been quite sloppy, not even using common caching techniques to improve performance. Thus, in addition to suffering from weak sponsorship and ambiguous requirements, this program failed to leverage well-known best practices for the technology and design.
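
As an illustration of the caching point (and only an illustration; this is not how Healthcare.gov is built), here is a minimal Python/Flask sketch of the kind of static-asset cache headers a high-traffic site would normally set.

```python
# Minimal illustration of HTTP caching headers; assumes Flask is installed.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "<html><body>Enrollment home page</body></html>"

@app.after_request
def add_cache_headers(response):
    # Let browsers and CDNs cache static assets for a day;
    # keep dynamic, user-specific pages uncacheable.
    if response.mimetype in ("text/css", "application/javascript", "image/png"):
        response.headers["Cache-Control"] = "public, max-age=86400"
    else:
        response.headers["Cache-Control"] = "no-store"
    return response

if __name__ == "__main__":
    app.run()
```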

One would have thought that, given the scale and expenditure of the program, top technical resources would have been allocated and would have ensured these practices were used. The feds are now scrambling with a ‘surge’ of tech resources for the site. And while the new resources and leadership have made improvements so far, the surge will bring its own problems. It is very difficult to effectively add resources to an already large program. New ideas introduced by the ‘surge’ resources may not be accepted or easily integrated. And if the issues are deeply embedded in the system, it will be difficult for the new team to fully fix the defects. For every 100 defects identified in the first few weeks, my experience with quality suggests there are two or three times more defects buried in the system. Furthermore, if the project couldn’t handle the ‘easy’ technical work of sound website design and horizontal scalability, how will it handle the more difficult challenges of data quality and security?

These issues will become more apparent in the coming months when the complex integration with backend systems from other agencies and insurance companies becomes stressed. And already the fraudsters are jumping into the fray.

So, what should be done and what are the takeaways for an IT leader? Clear sponsorship and proper governance are table stakes for any big IT project, but in this case more radical changes are in order. Why have all 36 states and the federal government roll out their healthcare exchanges in one waterfall or big bang approach? The sites that are working reasonably well (such as the District of Columbia’s) developed them independently. Divide the work up where possible, and move to an iterative or spiral methodology. Deliver early and often.

Perhaps even use competitive tension by having two contractors compete against each other for each such cycle. Pick the one that worked the best and then start over on the next cycle. But make them sprints, not marathons. Three- or six-month cycles should do it. The team that meets the requirements, on time, will have an opportunity to bid on the next cycle. Any contractor that doesn’t clear the bar gets barred from the next round. Now there’s no payoff for a contractor encouraging endless changes. And you have broken up the work into more doable components that can then be improved in the next implementation.

Finally, use only proven technologies. And why not ask the CIOs or chief technology architects of a few large-scale Web companies to spend a few days reviewing the program and designs at appropriate points? It’s the kind of industry-government partnership we would all like to see.

If you want to learn more about how to manage (and not to manage) large IT programs, I recommend “Software Runaways,” by Robert L. Glass, which documents some spectacular failures. Reading the book is like watching a traffic accident unfold: it’s awful, but you can’t tear yourself away. Also, I expand on the root causes of and remedies for IT project failures in my post on project management best practices.

And how about some projects that went well? Here is a great link to the 10 best government IT projects in 2012!

What project management best practices would you add? Please weigh in with a comment below.

Best, Jim Ditmore

This post was first published in late October in InformationWeek and has been updated for this site.

Posted in Best Practices, Efficiency and Cost Reduction, Looking Ahead, Project Management and Delivery, Uncategorized, Vision and Leadership | Tagged , , | 5 Comments

Whither Virtual Desktops?

The enterprise popularity of tablets and smartphones at the expense of PCs and other desktop devices is also sinking desktop virtualization. Beyond the clear evidence that tablets and smartphones are cannibalizing PC sales, mobility and changing device economics are also undermining corporate desktop virtualization, or VDI.

The heyday of virtual desktop infrastructure came around 2008 to 2010, as companies sought to cut their desktop computing costs — VDI promised savings from 10% to as much as 40%. Those savings were possible despite the additional engineering and server investments required to implement the VDI stack. Some companies even anticipated replacing up to 90% of their PCs with VDI alternatives. Beyond cost, companies also looked to VDI to address specific issues not well-served by local PCs (e.g., smaller overseas sites with local software licensing and security complexities).

But something happened on the way to VDI dominance: the market changed faster than VDI could mature. Employee demand for mobile devices, in line with the BYOD phenomenon, has refocused IT shops on delivering mobile device management capabilities rather than VDI, so employees can securely use their smartphones for work. On-the-go employees are gravitating toward new lightweight laptops, a variety of tablets, and other non-desktop innovations that aren’t VDI-friendly. They want to use multiple devices; they don’t want to be tied down to a single VDI-based interface. And because the VDI experience is at best cumbersome on a touch device running an OS other than Windows, there will be less and less demand for VDI as the way to connect. As the dominance of smartphones and tablets only increases over the next few years, with the client device war between Apple, Android, and Microsoft (Nokia) heating up further and producing better and cheaper products, VDI’s appeal will fall even farther.

Meanwhile, PC prices, both desktop and laptop, which have declined steadily over the past four years, dropping 30-40% (other than Apple’s products, of course), will fall even faster. With shipments declining over the past 18 months, the entire industry has overcapacity, and the only way out is to spur demand and consumer interest in PCs through further cost reductions. (Note that the answer is not that Windows 8 will spur demand.) Already Dell and Lenovo are using lower prices to try to hold their volumes steady. And with other devices entering the market (e.g., smart TVs, smart game stations), it will become a very bloody marketplace. The end result for IT shops will be $300 laptops that are pretty slick and come fully equipped with Windows (perhaps even Office). At those prices, VDI will have minimal or no cost advantage, especially taking into account the backend VDI engineering costs. And if most employees prefer a fully equipped $300 laptop or tablet, IT shops will be hard pressed to pass that up and impose VDI. In fact, by late 2014, corporate IT shops could find their VDI solutions costing more than traditional client devices (e.g., that $300 laptop), because the major components of VDI cost (servers, engineering work, and support) will not drop nearly as quickly as distressed-market PC prices.
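
A rough per-seat comparison shows the squeeze. The only number anchored in the discussion above is the $300 laptop; the VDI backend and engineering figures in this sketch are hypothetical placeholders, so substitute your own.

```python
# Hypothetical per-seat, per-year comparison of VDI versus a cheap laptop.
# All inputs are illustrative assumptions, not benchmarked costs.
LAPTOP_PRICE = 300.0             # fully equipped low-end laptop (from the discussion above)
LAPTOP_LIFE_YEARS = 3.0

VDI_THIN_CLIENT = 150.0          # assumed endpoint cost
VDI_BACKEND_PER_SEAT = 120.0     # assumed annual server/storage/licensing share
VDI_ENGINEERING_PER_SEAT = 60.0  # assumed annual engineering and support share
VDI_CLIENT_LIFE_YEARS = 5.0

laptop_per_year = LAPTOP_PRICE / LAPTOP_LIFE_YEARS
vdi_per_year = (VDI_THIN_CLIENT / VDI_CLIENT_LIFE_YEARS
                + VDI_BACKEND_PER_SEAT
                + VDI_ENGINEERING_PER_SEAT)

print(f"Laptop: ${laptop_per_year:.0f}/seat/year, VDI: ${vdi_per_year:.0f}/seat/year")
# With these assumptions the laptop wins (~$100 vs ~$210 per seat per year);
# the backend costs are the piece that does not fall with distressed PC prices.
```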

There is no escaping the additional engineering time and attention VDI requires. The complex stack (whether Citrix or VMware) still requires more engineering than a traditional solution. And with this complexity, there will still be bugs between the various client, VDI, and server layers that impact the user experience. Recent implementations still show far too many defects between the layers. At Allstate, we have had more than our share of defects in our recent rollout between the virtualization layer, Windows, and third-party products. And this is for what should be, by now, a mature technology.

Faced with greater costs, demands on scarce engineering resources, and employee preference for the latest mobile client devices, organizations will begin to throw in the towel on VDI. Some companies now deploying will reduce the scope of current VDI deployments. Some now looking at VDI will jump instead to mobile-only alternatives more focused on tablets and smartphones. And those with extensive deployments will allow significant erosion of their VDI footprint as internal teams opt for other solutions, employee demand moves to smartphones and tablets, or lifecycle events occur. This is a long fall from the lofty goal of 90% deployment from a few years ago. IT shops do not want to support VDI for an employee who also has a tablet, laptop, or desktop, because that essentially doubles the cost of the client technology environment. In an era of very tight IT budgets, excess VDI deployments will be shed.

One of the more interesting phenomena in the rapidly changing world of technology is when a technology wave gets overtaken well before it peaks. This has occurred many times before (think optical disk storage in the data center), but perhaps most recently with netbooks, whose primary advantages of cost and simplicity were overwhelmed by smartphones from below and ultrabooks from above. Carving out a sustainable market niche on cost alone in the technology world is a very difficult task, especially when you consider that you are reversing long term industry trends.

Over the past 50 years of computing history, the intelligence and capability has been drawn either to the center or to the very edge. In the 60s, mainframes were the ‘smart’ center and 3270 terminals were the ‘dumb’ edge device. In the 90s, client computing took hold and the ‘edge’ became much smarter with PCs but there was a bulging middle tier of the three tier client compute structure. This middle tier disappeared as hybrid data centers and cloud computing re-centralized computing. And the ‘smart’ edge moved out even farther with smartphones and tablets. While VDI has a ‘smart’ center, it assumes a ‘dumb’ edge, which goes against the grain of long term compute trends. Thus the VDI wave, a viable alternative for a time, will be dissipated in the next few years as the long term compute trends overtake it fully.

I am sure there will still be niche applications, like offshore centers (especially where VDI also enables better control of software licensing), and there will still be small segments of the user population who swear by the flexibility of accessing their desktop from anywhere they can log in, without carrying anything. But these are long term niches. Long term, VDI solutions will hold a smaller and smaller portion of device share, perhaps 10%, maybe even 20%, but not more.

What is your company’s experience with VDI? Where do you see its future?

Best, Jim Ditmore

 This post was first published in InformationWeek on September 13, 2013 and has been slightly revised and updated.
Posted in Best Practices, Efficiency and Cost Reduction, Looking Ahead, Mobile | 6 Comments

Getting to Private Cloud: Key Steps to Build Your Cloud

Now that I am back from summer break, I want to continue to further the discussion on cloud and map out how medium and large enterprises can build their own private cloud. As we’ve discussed previously, software-as-a-service, engineered stacks and private cloud will be the biggest IT winners in the next five to ten years. Private clouds hold the most potential — in fact, early adopters such as JP Morgan Chase and Fidelity are seeing larger savings and greater benefits than initially anticipated.

While savings is a key driver for moving to a private cloud, faster development cycles and better time to market are turning out to be even more significant and more valuable to early adopter firms than initially estimated. And it is not just a speed improvement but a qualitative one: organizations can trial smaller projects and riskier pilots as small, low-cost efforts, quickly dispensing with those that fail and accelerating those that show promise. This enables a ‘fast fail’ approach to corporate innovation that greatly speeds the selection process, avoids extensive wasted investment in lengthier traditional pilots (that would have failed anyway), and greatly improves time to market for the ideas that succeed.

As for the larger savings, early implementations at scale are seeing savings well in excess of 50%. This is well beyond my estimate of 30% and is occurring in large part because of the vastly reduced labor requirements to build and administer a private cloud versus traditional infrastructure.

So with greater potential benefits, how should an IT department go about building a private cloud? The fundamental building blocks are a base of virtualized, commodity servers leveraging open systems, along with the server engineering and administration expertise to support the platform. There is also a strong early trend toward leveraging open source software for private clouds, from the Linux operating system to OpenNebula and Eucalyptus for infrastructure management. But just having a virtualized server platform does not make a private cloud; several additional elements are required.

First, establish a set of standardized images that constitute most of the stack. Preferably, that stack will go from the hardware layer to the operating system to the application server layer, and it will include systems management, security, middleware and database. Ideally, go with a dozen or fewer server images and certainly no more than 20. Consider everything else to be custom and treated separately and differently from the cloud.
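
As a sketch of what ‘a dozen or fewer images’ might look like when written down, here is a hypothetical catalogue expressed as simple Python data structures; the image names, contents, and prices are illustrative only, not a recommended stack.

```python
# Hypothetical standard-image catalogue; names, contents, and prices are illustrative only.
from dataclasses import dataclass

@dataclass
class StandardImage:
    name: str
    os: str
    middleware: str
    database: str
    monitoring_agent: str
    cost_per_processor_month: float  # published so users see the cost up front

CATALOGUE = [
    StandardImage("web-small",  "Linux", "Apache httpd", "none", "agent-std", 45.0),
    StandardImage("app-medium", "Linux", "Java app server", "none", "agent-std", 90.0),
    StandardImage("db-medium",  "Linux", "none", "relational DB", "agent-std", 140.0),
    StandardImage("win-app",    "Windows Server", ".NET app server", "none", "agent-std", 110.0),
]
# Anything not on this short list is treated as a custom build outside the cloud.
```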

Once you have established your target set of private cloud images, you should build a catalogue and ordering process that is easy, rapid, and transparent. The costs should be clear, and the server units should be processor-months or processor-weeks. You will need to couple the catalogue with highly automated provisioning and de-provisioning. Your objective should be to deliver servers quickly, certainly within hours, preferably within minutes (once the costs are authorized by the customer). And de-provisioning should be just as rapid and regular. In fact, you should offer automated ‘sunset’ servers in test and development environments (e.g., 90 days after the servers are allocated, they are automatically returned to the pool). I strongly recommend well-published and clear cost and allocation reporting to drive the right behaviors among your users; it will encourage quicker adoption, better and more efficient usage, and rapid turn-in when servers are no longer needed. With these four prerequisites in place (standard images, a catalogue and easy ordering process, clear costs and allocations, and automated provisioning and de-provisioning), you are ready to start your private cloud.
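
The automated sunset can be as simple as a scheduled sweep of the allocation records. Here is a minimal sketch, assuming a hypothetical allocation list and a hypothetical deprovision() hook into your provisioning tooling.

```python
# Minimal sketch of an automated 'sunset' sweep for dev/test servers.
# The allocation records and deprovision() call are hypothetical stand-ins
# for whatever your provisioning/CMDB tooling actually provides.
from datetime import datetime, timedelta

SUNSET_AFTER = timedelta(days=90)

allocations = [
    {"server": "dev-0142", "owner": "team-a", "allocated_on": datetime(2013, 5, 1)},
    {"server": "dev-0217", "owner": "team-b", "allocated_on": datetime(2013, 8, 20)},
]

def deprovision(server_name: str) -> None:
    # Placeholder: call your provisioning API to return the server to the pool.
    print(f"Returning {server_name} to the pool")

def sunset_sweep(now: datetime) -> None:
    for record in allocations:
        if now - record["allocated_on"] >= SUNSET_AFTER:
            # In a real implementation, notify the owner first, then reclaim.
            deprovision(record["server"])

sunset_sweep(datetime(2013, 11, 1))
```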

Look to build your private cloud in parallel with your traditional data center platforms. There should be both a development and test private cloud as well as a production private cloud. Seed the cloud with an initial investment of servers of each standard type. Then transition demand into the private cloud as new projects initiate, and grow it project by project.

Begin by routing small and medium-sized projects to the private cloud environment; as it builds scale and the provisioning kinks are ironed out, migrate more and more server requests until nearly all are routed through your private cloud path. As you achieve scale and prove out your ordering, provisioning, and de-provisioning processes, you can tighten the criteria for projects to proceed with traditional custom servers. Within six months, custom, traditional servers should be the rare exception and should be charged fully for the excess costs they generate.

Once the private cloud is established, you can verify the cost savings and advantages. There will be additional benefits, such as improved time to market, because server deployment is no longer the long pole in the tent for your development efforts. Well-armed with this data, you can circle back and tackle existing environments and legacy custom servers. While the business case for a standalone platform transition is often not a good investment, a transition to private cloud during another event (e.g., a major application release or server end-of-life migration) should easily become a winning investment. A few early adopters (such as JPMC or Fidelity) are seeing outsized benefits and strong developer push into these private cloud environments. So, if you build it well, you should be able to reap the same advantages.

How is your cloud journey proceeding? Are there other key steps necessary to be successful? I look forward to hearing your perspective.

Best, Jim Ditmore

 

Posted in Best Practices, Cloud, Data Centers, Looking Ahead | 2 Comments