OpenStack is gradually gaining acceptance as an enterprise-grade framework for automating datacentre infrastructure and enabling organisations to operate a diverse array of applications and services.
The platform started life in 2010 as a joint project between hosting provider Rackspace and Nasa. It has since grown into one of the largest open source projects ever, with releases driven by the twice-yearly meetings of the OpenStack community, where priorities for the next version are thrashed out.
Market research indicates a growing number of enterprise OpenStack deployments are moving from pilot projects (or test and development platforms) to full production status, but there are still issues to be ironed out. Chief among these is ensuring a smooth update of the myriad components that make up OpenStack when upgrading to the latest release.
Upgrading was always problematic with early releases of OpenStack, partly because much of the development effort focused on the capabilities needed for it to fully operate as an infrastructure as a service platform.
Early adopters often found themselves faced with an unpalatable choice. They could either take their OpenStack infrastructure offline while installing the new code, or simply move workloads to an entirely separate deployment based on the new code.
Newer OpenStack versions, such as the Ocata release unveiled earlier this year, have focused more on stability and reliability, with an emphasis across all modules on the ability to upgrade from one release to the next with as close to zero downtime as possible.
Yet more than half of all OpenStack users are still running a version of the platform that is more than two releases old, the findings from its most recent user survey suggest.
This means their releases are “unsupported”, as far as the official OpenStack lifecycle goes. However, firms that package and distribute OpenStack builds typically provide commercial support for much longer – often three to five years.
More importantly, it could mean they are using OpenStack software modules that have been identified as having security vulnerabilities and issues since their release.
Early adopters using an old release
Many of these users, it turns out, are still running an old release because they were early adopters who modified their OpenStack build to make it a better fit for their requirements, or to dovetail better with their existing IT environment.
“A lot of the OpenStack early adopters looked at the technology and saw that it was going to offer great potential in terms of agility and flexibility, but also that it may have needed a bit of work to fit it into their organisation and integrate it with their own existing platforms and systems,” says Mark Baker, Ubuntu Product Manager for OpenStack at Canonical, developers of one of the most widely used OpenStack distributions.
This may have involved customising the code itself, or importing modules from upstream – taking a project such as the Keystone authentication service, for example, and deploying it into an OpenStack build that pre-dates its development.
“We’ve seen examples where customers have based their initial build on Ubuntu 14.04, and because they wanted some particular network or virtualisation features, they’ve applied patches to provide that functionality,” says Baker.
“As OpenStack has matured and been able to take advantages of things in the virtualisation layer, they might have pulled down later versions of that virtualisation stack or things like Open vSwitch, and you end up running a hybrid environment which is not updatable in a standard way.”
In some cases, upgrading OpenStack also means updating the operating system layer, and this can no longer be accomplished using the automated update tools available, he says.
Ironically, the ability to adapt OpenStack to the organisation’s exact requirements has been one of its major selling points.
“Unfortunately, the value proposition of OpenStack has largely revolved around it being very easy to customise and highly pluggable. That very value proposition is an anti-pattern when it comes to cloud disruption, where the infrastructure stack is highly vertically integrated,” says Boris Renski, co-founder of OpenStack specialist Mirantis.
Mirantis styles itself as the “pure-play” OpenStack distributor because it offers services and support around a straight build of the platform, instead of integrating this with a version of Linux, as many other distributors such as Canonical and Red Hat do.
Perhaps not surprisingly, Mirantis has been lecturing OpenStack users for some time on the merits of deploying a standard OpenStack distribution and using techniques such as continuous integration/continuous deployment (CI/CD) to keep their infrastructure up to date with the current release code.
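The kind of CI gate this implies can be sketched in a few lines of Python. This is a hypothetical pipeline check, not anything Mirantis ships – the release names are real, but the gate itself is illustrative. It fails when the deployed series drifts out of the official support window of the current release and its immediate predecessor:

```python
# Hypothetical CI/CD gate: fail the pipeline when the deployed OpenStack
# release falls outside the officially supported window (the current
# release plus its immediate predecessor).

# OpenStack release series are named alphabetically; this list runs up
# to Ocata, the current release at the time of writing.
SERIES = [
    "Austin", "Bexar", "Cactus", "Diablo", "Essex", "Folsom",
    "Grizzly", "Havana", "Icehouse", "Juno", "Kilo", "Liberty",
    "Mitaka", "Newton", "Ocata",
]

def releases_behind(deployed: str, latest: str = "Ocata") -> int:
    """How many releases the deployed series lags behind the latest."""
    return SERIES.index(latest) - SERIES.index(deployed)

def is_supported(deployed: str, latest: str = "Ocata") -> bool:
    """Supported means the latest release or its immediate predecessor."""
    return releases_behind(deployed, latest) <= 1

if __name__ == "__main__":
    for series in ("Ocata", "Newton", "Mitaka"):
        status = "supported" if is_supported(series) else "unsupported"
        print(f"{series}: {releases_behind(series)} behind -> {status}")
```

A real pipeline would read the deployed version from the environment rather than hard-coding it, but the principle – block deployments that fall off the support window – is the same.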
OpenStack customisation complexity
This does not mean OpenStack users should avoid customising their deployments, but they should tread carefully when making changes to the code itself, according to Baker.
“One of the beauties of OpenStack is that it’s got a comprehensive set of application programming interface (API) services, and you can plug different storage technologies and different networking technologies into it, and there’s a very healthy ecosystem that sits around OpenStack and distributions like ours,” he says.
“The challenge isn’t so much using OpenStack in conjunction with a third-party module, because just about everybody is doing that. That itself isn’t the problem – it’s where you see companies that have basically developed their own OpenStack, either starting with the upstream code or basing it on a distribution like ours and then customising the code, customising the packaging, and customising the tooling around it.”
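Staying on those standard APIs typically means driving OpenStack through stock client tooling rather than forked code. A minimal clouds.yaml for the openstacksdk client library illustrates the idea – the endpoint, credentials and cloud name below are hypothetical:

```yaml
# Illustrative clouds.yaml - standard API access, no custom packaging.
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With a file like this in place, the same unmodified clients and automation tools work against any standards-following deployment, which is precisely what heavy code customisation breaks.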
Renski backs this view, and advocates the use of “standard deployment paradigms and plug-ins” that are broadly used by the entire community for this reason.
“It is OK to tune some knobs here and there or integrate with external systems like lightweight directory access protocol (LDAP) or billing, but changes down the stack should be avoided at all costs,” he says.
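The LDAP integration Renski mentions is a good example of tuning knobs rather than changing code: Keystone can be pointed at an existing corporate directory purely through configuration. An illustrative keystone.conf fragment – hostnames, DNs and credentials are hypothetical – might look like this:

```ini
# Illustrative only: back Keystone's identity service with an
# existing LDAP directory instead of patching the code.
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
password = secret
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
```

Because the change lives entirely in configuration, it survives an upgrade of the underlying packages – unlike a patched build.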
The path to enterprise open source adoption
The message from the OpenStack suppliers is to stick with the release code, and deploy a commercially-supported OpenStack distribution from a supplier such as themselves, naturally.
As self-serving as this might sound, Linux’s journey to enterprise acceptance has followed a similar path over the past 20 years or so. While Linux is still open source, businesses now typically source it from just a handful of commercial suppliers such as Red Hat, Canonical or Suse.
This highlights the fact that, while many open source software projects may be free to download and use, technical support and operational assistance are worth paying for when your business depends on the smooth running of any particular technology.
“We see that more OpenStack users are now turning to a distribution like ours for their infrastructure because their understanding is that they don’t want to be investing their valuable development resources in innovating at the infrastructure layer. They want to focus those resources on innovating at the application space where they can compete more effectively,” says Baker.
This does not help the early adopters who face a tricky manual upgrade process that will most likely require the assistance of the developers or consultants that helped customise their OpenStack deployment in the first place.
All of this points to the fact that the processes for deploying and maintaining OpenStack infrastructure need to improve, according to Ovum principal analyst Roy Illsley.
“OpenStack must come up with an easier upgrade process (which is on the road map for the next release) so that they do not end up with even more workloads on unsupported releases than are run on supported releases,” he says.
OpenStack Foundation’s lifecycle policy questioned
Furthermore, Illsley questions the OpenStack Foundation’s lifecycle policy of only supporting the most current release and its immediate predecessor, when new releases come just six months apart.
“The current approach of ending support based on time is clearly not working for the users, so OpenStack must think differently and use the composable concept to develop a better model,” he says.
Composable here means that OpenStack needs to become more modular, so that individual components of the stack can more easily be replaced with something else to satisfy specific user requirements.
Coincidentally, or perhaps not, this is the direction the OpenStack Foundation now seems to be pushing its development teams to take. At the recent OpenStack Summit event in Boston, chief operating officer Mark Collier spoke of “composable open infrastructure” and urged OpenStack developers to work towards this model.
“The opportunities to bring together more value by putting different projects, different services together, is greater than ever, but it’s also a challenge if we don’t create them in a way that is designed to be composable,” he said.
Cutting out complexity
Designing to be composable means there must be good interfaces between modules, but developers also need to cut out unnecessary complexity.
One example of this approach cited by Collier is the Swift object storage module, which was developed to provide storage for OpenStack deployments but can also be deployed and operated as a software-defined storage platform in other contexts.
“Swift is probably one of the most forward-looking projects, as it was built from day one to be composable, meaning it was built to be used with or without other OpenStack components,” says Collier.
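That composability shows in Swift’s plain HTTP object API, which works the same way whether or not the rest of OpenStack is present. A sketch of the core calls – host, account and container names here are hypothetical – looks like this:

```http
# Create a container (authenticated with a previously obtained token)
PUT /v1/AUTH_myaccount/backups HTTP/1.1
Host: swift.example.com
X-Auth-Token: <token>

# Upload an object into the container
PUT /v1/AUTH_myaccount/backups/db.tar.gz HTTP/1.1
Host: swift.example.com
X-Auth-Token: <token>
Content-Type: application/gzip

# Retrieve it later
GET /v1/AUTH_myaccount/backups/db.tar.gz HTTP/1.1
Host: swift.example.com
X-Auth-Token: <token>
```

Nothing in those requests depends on Nova, Neutron or any other OpenStack service, which is what allows Swift to be run standalone.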
All of which probably has a familiar ring to OpenStack users: “Don’t worry, things will be better in future. Just trust us.”