We're Just a Freakin' App on a Hypervisor
Nutanix is simply arguing that storage is just an app on the hypervisor. And just as there is no expectation for an app vendor to OEM ESXi (simply because they are running in a virtualized environment), there should be no expectation for us to OEM any hypervisor as part of our offering.
"The worst thing for OpenStack right now is Red Hat," McKenty says.
I think this is much more insightful from the Mirantis folks: The Real Reason Open Source Startups Fail by Alex Freedland at TechCrunch.

At the end of the day, OpenStack will actually be truly successful when we stop talking about it quite so much. Customers will buy a solution from EMC or IBM or Mirantis or Canonical or Blue Box or HP or Rackspace or whoever it might be. That solution will have the capabilities and the interoperability and the extensibility and the cost profile that it has… because it’s built on OpenStack. OpenStack, ultimately, is a feature not a product. And that’s not a bad thing.
Open source ecosystem markets behave differently and therefore require a very different playbook. There, the differentiation is not in the technology you build; it is in the process and expertise that you slowly amass over an extended period of time. All of the successful entrants (Red Hat, Cloudera, Hortonworks, etc.) have followed the same playbook.
Still, some dislike the corporate influence for another, more troublesome reason. “I think pretty soon we're going to see how bad it is when every successful [open source] project is backed by a company, most of which fail,” declares Puppet Labs founder and CEO Luke Kanies.
For me, it doesn’t matter whether you think these new steps are good ones or bad ones – I’m simply amazed that the company has been able to turn the ship. They could easily not have done so, you know. They could probably have sat for another decade, collected licensing fees and maintenance fees, and been more or less fine. But they didn’t. They’re taking risks, they’re challenging “traditions,” and they’re expressing their vision. Whether you agree with the vision or not is important, since you’re the customer, but at least they’re moving with confidence. The fact that the company can do so is astounding, and it speaks to the courage, dedication, and genius of the men and women leading it and doing all the work.
The biggest challenge today is the introduction of so many new technologies and design paradigms that the write-off period of the investment may no longer align with how long the technology stays relevant. On top of that, a storage system might perform extremely well in the beginning yet appear to become slower over time. A system should keep performing well, but not at the cost of large sums of money or an inability to incorporate new technology to serve the new application landscapes and services that will come to depend on it.
Vendors that go down a combined hardware + software route handle more of the stack for me so that I can focus on building a platform instead of being a plumber (which is fun in a home lab but wastes too much of my time and energy in the data center). It also allows additional pieces of the stack to be tested and validated thoroughly by the vendor when new drivers, firmware, and patches are released. As long as the solution is built to scale in a deterministic way, this is usually the simplest method for dropping gear into the environment while maintaining an architecture roadmap and incurring CapEx costs in an iterative, linear fashion. While hardware is a commodity, you (Troyer) have pointed out that none of the compute offerings act the same way - even out-of-band access is wildly different across vendors (iLO, CIMC, DRAC, and other flavors of iKVM based on CIM).
Solutions that combine hardware and software can take many forms. I like my firewall, which is a combined package and also an appliance, and I think it is good like that. I could use virtual storage in my lab, but I chose an appliance like Synology because it is easier to manage. If one environment (DNS, the Internet, or virtualization) has issues, it doesn't hurt my Synology or my firewall. So there is a place for appliances. In big companies with big virtualization environments, however, I can see the case for appliances being different. Virtual firewalls and virtual storage work differently in that world, and I think not being an appliance has value there. And that becomes a lead-in for the cloud: appliances in the cloud are much tougher.
I've no hands-on experience with the recent converged/hyperconverged infrastructures, but we've been using combined hardware/software offerings for years - even 'legacy' SANs have management software custom-written for the hardware. Are they an evolution of what we've had previously? Yes, I'd say so, but everything improves over time - I'm no longer tweaking HIGHMEM in DOS because every modern OS handles that for me. I expect the new kids on the block have a temporary advantage, having coded from scratch to what companies need today, but in five or ten years' time they'll be bogged down by legacy support and a fractured product line just as the larger companies are today (think NetApp's 'unified' storage back in the day).
Does it really change much from a customer perspective? No. As always, it's just a matter of risk vs. cost and knowing your requirements. Everything else is marketing, spin, and vendor battles, with 'true cloud' being the pinnacle of hype mountain.
There’s a third business aspect of HW+SW bundling beyond margin, especially for early-stage companies like Nutanix: getting run-rate numbers as high as possible in a shorter timeframe. Try getting to a $100 million run rate with software only.