Getting started is often considered the hardest part of any project, and while that sentiment may hold true for moving to the cloud, it is what comes next that the OpenStack Foundation is intent on tackling.
The community of open source contributors who shape the features and functionality of OpenStack has focused its efforts on making the platform easier to install and deploy, but it is the needs of users a little further along the OpenStack adoption curve that need addressing now.
The focus now is on people who are on day 100 of their cloud journey, suggests OpenStack Foundation CEO Jonathan Bryce, rather than only those who are just starting out.
“You don’t build a cloud to have it for a few weeks – you have it for years,” he tells Computer Weekly during an interview ahead of the OpenStack Summit in Sydney.
This has seen the Foundation and its supporting community of developers pivot towards projects focused on the “lifecycle management” of OpenStack, ensuring users have what they need to scale up their deployments, he continues.
“It’s about answering the question of how you get a cloud up and running, how you monitor and scale it, and make sure it integrates well with your other enterprise systems,” says Bryce.
This is important because the days of people using OpenStack simply to create what he terms “centralised datacentre clouds”, which effectively act as “vending machines for virtual machines”, are drawing to a close, giving way to more sophisticated use cases for the technology, he says.
As proof, Bryce points to the way the OpenStack user community has keenly adopted its bare metal management product, Ironic, over the past year, contributing to it becoming one of the Foundation’s fastest-growing projects.
What is fuelling its growth is user demand to run big data and machine learning workloads whose performance virtualisation could hamper.
“They’re two workloads where you don’t necessarily want a hypervisor in the middle,” he says. “You want to get maximum performance. Or, if you’re doing machine learning, you might have GPUs and they tend to work better if they’re not virtualised,” he adds.
Using OpenStack to stand up edge computing and multi-cloud environments has also emerged as a popular use case for the technology, and is a further sign of how enterprise use of the technology is maturing and growing in sophistication, he claims.
The ability of enterprises to tap into these use cases remains a challenge for some, and is something the Foundation is making a concerted effort to address on a number of fronts.
Addressing the OpenStack adoption pain points
A persistent sticking point for some users is trying to make sense of the wide variety of tools, technologies and projects that fall under the OpenStack umbrella, and what role (if any) these should play in their organisation’s wider digital transformation efforts.
This is a problem partly exacerbated by the Foundation’s adoption of the “Big Tent” governance model in May 2015, which ushered in an overhaul of how OpenStack projects are defined.
Previously, contributors had to – essentially – petition to have their projects included in an integrated OpenStack release before they could start work on them. Under the Big Tent approach, they could toil away on them, provided the projects adhered to certain OpenStack community guidelines.
Dustin Kirkland, vice-president of product at OpenStack distribution provider Canonical, describes the “Big Tent era” as a confusing time, culminating in lots of new projects and add-ons for the platform that were of variable interest to users.
“Canonical has really tried to solidify a very stable core – that’s compute, that’s storage, that’s network, and infrastructure that makes those work [because] that is what our customers really want,” he says.
“All the other things around the periphery [of OpenStack]: you get interest in those, but when it comes to the real-world implementations, it’s really all about that stable core.”
Big Tent projects confuse users
This is a sentiment Bryce appears to share, explaining – during a media session at the Sydney Summit – that the proliferation of Big Tent projects largely served to confuse users about what OpenStack is all about.
“The side effects were we lost some of the specificity around what projects were mature, and which ones were helper tools, so we wanted a focus on the core, and offer more explanation around how these tools fit together,” says Bryce.
To this point, the Foundation showcased a map at the Sydney Summit that shows how the core compute, storage and networking components of its technology stack fit together, and plug into the other peripheral projects its community has created.
Its efforts in this area appear to be landing well with users, with Amy Wheelus from US telco giant AT&T telling Computer Weekly at the show about the challenges she has faced when trying to pin down exactly what constitutes the core of OpenStack.
The company is in the midst of a multi-year network function virtualisation (NFV) effort, termed Domain 2.0, geared towards future-proofing its infrastructure against the exponential growth in mobile data traffic its networks are subjected to each year.
Having worked indirectly with OpenStack in another role at AT&T, Wheelus took on the role of vice-president of cloud and Domain 2.0 platform integration at the firm in June 2017, which required her to deepen her “broad breadth of knowledge” of OpenStack.
“When I sat down with my team, I asked two different direct reports – one works on the design side and the other on the software delivery side – for a list of the core components of OpenStack, because I was learning. It wasn’t the same list. It was very close,” she says.
For an organisation like AT&T, involved in a large-scale OpenStack deployment, this lack of consistency could prove problematic as it seeks to scale up its use of the platform over time.
“And if you’re looking at longevity, scale and how you’re going to scale deployments, there is a certain piece of the OpenStack project that needs to be consistent and needs to get mature,” she adds.
The OpenStack project cull
These simplification efforts have also seen the Foundation take steps to cull projects that are not going anywhere or have failed to capture the imagination of its contributor community, says Bryce.
“It’s not a flippant decision. In general, it is projects that have failed to gather a lot of momentum or projects where the team has shrunk down to a handful of contributors, and it’s not like we’re deleting them from the internet. Some of them continue to work,” he says, during a pre-Summit interview.
“The [discontinued projects] might not have a lot of adoption or contribution, or really gained much traction, and we have to decide is it worth continuing or do we focus on the core project that 90% of the users are working on?”
So far, this process has not resulted in any pushback from contributors, claims Bryce, and – in some instances – has been welcomed by those tasked with propping these projects up.
“There is a fairly high bar in terms of the kind of work you have to do to prove that your [project is] tested, upgradeable, and that you’re meeting security processes and all those kind of things,” he says.
“So when a project gets down to just a couple of developers, sometimes they’re relieved and some have actually requested it.”
Enabling greater OpenStack integration
Another area signposted as hampering the ability of enterprises to tap into OpenStack’s full innovation potential is the interoperability problems many face when trying to integrate it with other open source technologies.
Users often find themselves having to do a lot of heavy lifting to ensure their OpenStack deployment plays nicely with other open source technologies, including Cloud Foundry and Kubernetes, it is claimed. This in turn diverts the time and attention of enterprise IT teams, so they have less time to spend on building services and products for their own customers.
During the Sydney Summit, the Foundation outlined its commitment to tackling users’ technology integration issues by, for example, forging closer ties with other open source communities.
The Foundation also talked about encouraging its contributors to share details of any common integration problems they encounter while using OpenStack to underpin their digital transformation efforts so that stable and repeatable workarounds for these issues can be found.
Without this work, users are likely to encounter roadblocks when using OpenStack to underpin their edge computing or multi-cloud efforts, says Bryce, during a media session at the Sydney Summit.
“This is part of why we think the integration piece is really critical. The things that are enabling multi-cloud are these tools that sit on top of the underlying cloud,” he says.
“So when we talk about integrating with Cloud Foundry or Kubernetes or some other tools that run on top of these environments, we need to make sure we’re doing everything possible in our community to be a really good target for those kind of platforms.”
Critical thinking on cloud
The OpenStack community is set up so that users can easily share feedback about the problems they encounter as they build out their deployments, allowing those issues to be addressed and worked on.
The Foundation’s acknowledgement of these issues is often seized upon as a stick to beat OpenStack with, with detractors citing such admissions as evidence the platform remains complex to use and upgrade.
Alan Clark, chairman of the OpenStack Foundation board of directors and director of industry initiatives at OpenStack distribution provider Suse, says there is some degree of truth to this, where users of the community version of OpenStack are concerned.
“One of the criticisms we continue to see, and have discussions in the community around, is long-term supportability, because OpenStack comes out every six months,” he says.
“If you come in and just try to use a community release, those problems will just exist forever because the community is releasing every six months.”
For enterprise users, there is a tendency to want to hold off upgrading and maintain the release they have for longer periods, which is not always conducive to how the community works.
“The community is releasing every six months, and if you’re involved in the community, your interest as an engineer, naturally, is just to work on the latest and greatest things, rather than sit and try to maintain something for two, three or five years,” he says.
In these types of scenarios, a lot of the aforementioned complexity can be avoided by using third-party distributions from Suse, Red Hat, Canonical and others.
“We have long-term support, we’ve made it very easy to maintain, and we offer rolling upgrades and we’ve mitigated those problems,” he adds.
When it comes to silencing OpenStack’s critics, AT&T’s Wheelus says one only has to look at some of the mission-critical services users are now using the technology to run as proof of its enterprise-readiness.
In AT&T’s case, the telco shared details at the summit of how OpenStack is being used to stand up the FirstNet network, a US nationwide broadband network used by blue light services to host their communications.
“I think it sends an important message to people who may not understand the value of OpenStack, and to the OpenStack community that we’re willing to put our most critical first responder network on this technology,” she says.
“When you’re dealing with life and death situations, and where you have first responders who need to be able to communicate and talk, OpenStack has to be dependable, resilient and have the security features that will protect not just the network itself, but the people using it.”