A Cloudy Future: The Rise of Appliances and SaaS

As I mentioned in my previous post, I will be exploring infrastructure trends, and in particular, cloud computing. But while cloud computing is getting most of the marketing press, there are two additional phenomena capturing as much of the market, if not more: computer appliances and SaaS. So, before we dive deep into cloud, let’s explore these other two trends and then set the stage for a comprehensive cloud discussion that will yield effective strategies for IT leaders.

Computer appliances have been available for decades, typically in the network, security, database, and specialized compute spaces. Firewalls and other security devices have long taken an appliance approach, where generic technology (CPU, storage, OS) is closely integrated with additional special-purpose software and sold and serviced as a packaged solution. Specialized database appliances for data warehousing were quite successful starting in the early 1990s (remember Teradata?).

The tighter integration of appliances gives them a significant advantage over traditional approaches built on generic systems. First, the integrator of the package is often also the supplier of the software and can therefore tune the software’s performance and capacity for a specific OS and hardware set. Further, this integrated stack requires much less installation and implementation effort by the customer. The end result can be impressive performance at a cost similar to a traditional generic stack, without the implementation effort or difficulties. Thus appliances can present a compelling performance and business case for the typical medium or large enterprise. And they are compelling for the technology supplier as well, because they command higher prices and carry much higher margins than the individual components.

It is important to recognize that appliances are part of a normal tug and pull between generic and specialized solutions. Throughout the past 40 years of computing, there has been constant improvement in generic technologies under the march of Moore’s law. With each advance there are two paths to take: leverage generic technologies and keep your stack loosely coupled, so you can continue to ride the advance of generic components; or closely integrate your stack with the then-current components and drive much better performance from that integration.

By their very nature, though, appliances become rooted in a particular generation of technology. The initial iteration can be built with the latest technology, but the integration will likely create tight links to the OS, hardware, and other underlying layers to wring out every available performance improvement. These tight links yield both the performance advantage and the chains to a particular generation of technology. Once an appliance is developed and marketed successfully, ongoing evolutionary improvements will continue to be made, layering in further links to the original base technology. And the margins themselves are addictive: suppliers will do everything possible to maintain them, so evolutionary, low-cost advances will occur, but a revolutionary (next-generation) redesign will likely require too large an investment to preserve the margins. This spells the eventual fading and demise of the appliance, as generic technologies continue their relentless advance and typically surpass it within 2 or 3 generations. This is represented in the chart below and can be seen in the evolution of data warehousing.

The Leapfrog of Appliances and Generics

The first instances of data warehousing were built on the primary generic platform of the time (the mainframe) and mainstream databases. But with the rise of another generic technology, proprietary chipsets out of the midrange and high-end workstation sector, Teradata and others combined these chipsets with specialized hardware and database software to develop much more powerful data warehouse appliances. From the late 1980s through the 1990s the Teradata appliance maintained a significant performance and value edge over generic alternatives. That edge began to fray around 2000, as mainstream databases and server chipsets, combined with low-cost operating systems and storage, could match Teradata’s performance at much lower cost. In this instance, the Teradata appliance held a significant performance advantage for about 10 years before falling back into or below mainstream generic performance; its value advantage diminished much sooner, of course. Typically, an appliance’s performance advantage lasts 4 to 6 years at most. Thus, early in the cycle (typically the first 3 to 4 generic generations, or 4 to 5 years), an appliance offering will present material performance and possibly cost advantages over traditional, generic solutions.
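This leapfrog dynamic can be sketched with a simple model. All the numbers here are illustrative assumptions, not figures from the data warehousing history above: generics double in price-performance roughly every two years per Moore’s law, while the appliance launches with a fixed head start and then improves only incrementally.

```python
# Illustrative leapfrog model. Assumed parameters (not from the post):
# generics double every 2 years; the appliance launches 4x ahead of
# generics and, being locked to its base technology, gains only 15%/year.

GENERIC_DOUBLING_YEARS = 2.0
APPLIANCE_HEAD_START = 4.0      # initial performance multiple over generics
APPLIANCE_ANNUAL_GAIN = 1.15    # evolutionary-only improvement

def years_until_generics_catch_up():
    generic, appliance = 1.0, APPLIANCE_HEAD_START
    year = 0
    while appliance > generic:
        year += 1
        generic *= 2 ** (1 / GENERIC_DOUBLING_YEARS)   # Moore's-law march
        appliance *= APPLIANCE_ANNUAL_GAIN             # incremental tuning
    return year

print(years_until_generics_catch_up())  # → 7
```

With these made-up parameters the crossover lands at about 7 years, in the same neighborhood as the 4-to-10-year windows described above; the point of the sketch is only that a fixed head start against an exponential always gets overtaken.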

As a technology leader, I recommend the following considerations when looking at appliances:

  • If you have real business needs that will drive significant benefit from such performance, then investigate the appliance solution.
  • Keep in mind that in the mid-term the appliance solution will steadily lose its advantage and will eventually cost more than the generic solution. Understand where the appliance is in its evolution – this will determine its effective life and the likely duration of your advantage over generic systems.
  • Factor in the hurdle, or ‘switchback’, costs at the end of the appliance’s life. (The appliance will likely require a hefty investment to transition back to generic solutions that have steadily marched forward.)
  • The switchback costs will be much higher where business logic is layered in (e.g., for middleware, database, or business software appliances) than for network or security appliances, where minimal custom business logic is involved.
  • Include the level of integration effort and cost required. A few appliances within a generic infrastructure will often integrate smoothly at modest cost. On the other hand, weaving multiple appliances into a single service stack can drive much higher integration costs and fail to yield the desired results. Remember that an appliance’s integrated nature gives you limited flexibility, which can cause issues when appliances are strung together (e.g., a security appliance with a load-balancing appliance with a middleware appliance with a business application appliance and a data warehouse appliance (!)).
  • Note that for certain areas, security and network in particular, the follow-on to an appliance will often be a next-generation appliance from the same or a different vendor. This is because there is minimal business logic incorporated in the system (yes, there are lots of parameter settings, like firewall rules customized for a business, but the firewall operates essentially the same regardless of the business that uses it).
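The switchback guideline above can be folded into a simple lifecycle cost comparison. All dollar figures below are purely hypothetical placeholders for your own numbers; the only point is that the appliance’s exit cost belongs in the business case from day one.

```python
# Hypothetical lifecycle cost comparison. Every figure here is an
# illustrative assumption, not data from the post.

def total_cost(acquisition, annual_run_rate, years, exit_cost=0):
    """Total cost of ownership over the solution's life, including
    any end-of-life transition (switchback) cost."""
    return acquisition + annual_run_rate * years + exit_cost

# Appliance: cheaper to run, but carries a switchback migration cost.
appliance = total_cost(acquisition=1_200_000, annual_run_rate=150_000,
                       years=5, exit_cost=400_000)

# Generic stack: more tuning/ops effort, but no switchback at end of life.
generic = total_cost(acquisition=700_000, annual_run_rate=250_000,
                     years=5)

print(appliance, generic)  # → 2350000 1950000
```

In this made-up case the appliance looks cheaper on run rate alone, but loses once the switchback investment is included – exactly the mid-term reversal the guidelines warn about.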

With these guidelines, you should be able to make better decisions about when to use an appliance and how much of a premium you should pay.

In my next post, I will cover SaaS and I will then bring these views together with a perspective on cloud in a final post.

What changes or additions would you make when considering appliances? I look forward to your perspective.

Best, Jim Ditmore


About Jim D

Jim has worked in the IT field for over 25 years and as a senior leader for over 15 years. He has successfully turned around a number of IT shops to become high performing teams and a competitive advantage for their companies.

6 Responses to A Cloudy Future: The Rise of Appliances and SaaS

  1. Victor M says:

    Hi Jim,

    Another great blog and it provokes deep thinking as always.

    I like the way you summarized the nature of the issue – highlighting it as “part of a normal tug and pull between generic and specialized solutions” – and framed the discussion around business needs and value.

    Another key word in the blog is “integration”, and you provided thorough analysis and insightful guidelines regarding the tightly coupled software-and-hardware solution that a (hardware) appliance represents.

    From an integration perspective, there are a couple of other use cases that more or less relate to the “appliance” concept:

    1. Virtualization (decoupling): more software providers are leveraging virtual appliances to package, deliver, and deploy their packaged software solutions, lowering operational cost for both supplier and user, and increasing flexibility by allowing the user to freely choose / replace / migrate among hardware, as long as it is supported by the hypervisor.

    2. For lack of a better word, I’ll call it “convergence”: for instance,
    a. A smarter firewall appliance can also carry out the functions of IDS/IPS, DLP (Data Loss Prevention), etc.; or,
    b. Various types / forms of converged infrastructure, which may be marketed under different terms, such as “cloud in a box”

    For sure, those dynamics bring more opportunities as well as complexity to our technology landscape; and the best way to solve the puzzle is to keep sharpening the strategic thinking – based on analysis of business needs, industry trends, internal IT shop maturity, competition and substitution, entrance and exit costs… as exhibited in this blog.

    I am looking forward to hearing your thoughts about SaaS and Cloud Computing in general.

    Cheers,
    Victor

  2. Jim K says:

    This is the first blog post I’ve read by you. Overall, it’s pretty good.

    What do you think the effects of Moore’s law becoming relatively obsolete by 2020 will be on software technology?

    Also curious to hear what you think about companies like Intel artificially slowing down technological growth since they have no real competitors. Slowing down the growth would let them throttle the supply, effectively increasing the demand. At some point, the chip in your phone will be enough to power every PC in your house. There isn’t nearly as much profit in this for companies like Intel, because you probably own 5 devices with Intel chips yourself and will probably buy more at least every two years.

    Regards,
    JK

    • Jim D says:

      Jim,

      Glad you have found the site and found it to be ‘pretty good’ :).

      Thanks for your good comments and observations. In response, a few thoughts:

      – Regarding Moore’s Law: every 5 years or so for the past 40 years we have heard Moore’s law will likely not continue. And yes, as you mention, there is concern that it will not continue beyond the next 2 or 3 generations of semiconductors. Yet, check out this most recent development: http://bits.blogs.nytimes.com/2012/10/28/i-b-m-reports-nanotube-chip-breakthrough/
      So, there is perhaps another way for Moore’s law to continue for another 10 generations.

      Regarding Intel slowing down its development because it has no competition… I would suggest that doing so would be a major misstep by Intel. They do have competition – the ARM chips in particular – and any letup by Intel would provide an opening for a new entrant in the market. Developing a new generation of chips requires huge capital investment (billions for a new plant alone). The difficulty Intel faces is the switch from a PC model to a mobile and sensor-based model, where power efficiency is at a premium. Will Intel dominate the new markets like it did the PC and server markets? Not likely. But I think advances will continue as we have seen for the past 3 or 4 decades!

      I appreciate your comments and hope to hear from you again. Since you are new, there are some very good, relevant older posts that you might check out: the Peloton post and the Australian pilot ones in particular.

      Best, Jim Ditmore

  3. Pingback: A Cloudy Future: SaaS and the Balkanization of the Data Center | Recipes for IT

  4. Pingback: A Cloudy Future: How to Best Leverage the Cloud, SaaS and Appliances | Recipes for IT

  5. Pingback: Cloud Trends: Turning the Tide on Data Centers | Recipes for IT
