Cloud Architecture

For hybrid cloud, on-prem storage and containers need a fresh approach


As organizations continue to build out their hybrid cloud infrastructure, the relationship between on-premises storage and cloud-native containers needs to evolve, from how IT teams communicate to how the infrastructure operates.

Legacy storage and containers have each existed for decades, with container platforms such as Docker and orchestration tools such as Kubernetes pushing containers into the mainstream over the last 10 years. But storage was built for a behind-the-firewall, array-dependent enterprise where SAN or NAS devices interact with VMs and servers, not for a cloud-native infrastructure where containers provide flexibility, scalability and portability for application development and deployment.

Regardless, hybrid cloud enterprises need the technologies to overcome their differences and work together, given storage’s deep enterprise roots and the continued pull to containers and the cloud.

“There is a need to strike a balance between the benefits of legacy storage and the advantages offered by container and cloud-native storage solutions,” said Dmitrii Ivashchenko, a lead software engineer and game developer at My.Games.

Storage administrators are a critical component of bringing the tech together, potentially acting as the bridge between two technologies built for different IT eras, he said.

Same problem, new day

Containers bring the promise of application portability, but organizations can run into infrastructure issues, according to Sam Werner, director of offering management for the IBM software-defined storage portfolio.

“If you still depend on storage admins to go get your storage and allocate certain amounts of capacity under an SLA [service-level agreement], you’re going to run into the same roadblocks as always,” Werner said.

Cloud promises infrastructure flexibility, but on-premises storage requires users to wait on the storage admin to provision the infrastructure needed, he said. Rather than using two different, siloed approaches, enterprises and vendors need to move to a different model.
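
To make that model concrete, here is a minimal sketch of self-service provisioning on Kubernetes. It assumes a cluster with a CSI driver and a StorageClass already defined by the storage team; the class name, namespace and claim name are placeholders. Instead of waiting on an admin, the application team creates a PersistentVolumeClaim and the driver carves out the capacity on demand.

# Minimal sketch of dynamic, self-service provisioning, assuming a Kubernetes
# cluster with a CSI driver and a StorageClass named "fast-block" (illustrative).
from kubernetes import client, config

def request_storage(namespace: str, name: str, size_gi: int) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-block",  # assumed class backed by a CSI driver
            resources=client.V1ResourceRequirements(
                requests={"storage": f"{size_gi}Gi"}
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)

if __name__ == "__main__":
    request_storage("dev-team-a", "orders-db-data", 20)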

As more companies look to containerized and cloud-native storage, strategies for both technologies need to be rethought, according to Vikas Kaushik, CEO at TechAhead, a mobile app development company. Containers are more flexible, but come with added data protection issues, for example.

An approach that combines the benefits of legacy storage and containers would be ideal and could mitigate issues around scalability, flexibility, data protection and other enterprise-grade features, Kaushik said.

“This enables businesses to make use of their existing investments while simultaneously adopting more contemporary storage paradigms,” he said.

The increased complexity of combining on-prem and containers is also an issue for storage admins, Ivashchenko said. They understand the intricacies of legacy equipment, but also must learn and adapt to container and cloud-native paradigms.

Diagram: VM vs. container architecture. VMs and containers require different approaches from storage admins to make them work with on-premises storage.

Evolution, again

One analyst believes enterprises have been in this situation before — with the evolution of VMs in the enterprise.

“With VMs, when VMware first got popular, storage vendors had to rework their equipment because the servers had to be treated differently when they were virtualized,” said Dave Raffo, an analyst at Futurum Group. “With containers, it is the same thing.”

Now the enterprise is in the midst of reworking its equipment again to support the use of containers, the basis for modern application development and deployment. With VMs, multiple virtual servers were deployed on one physical server, and the storage software had to reflect this. With containers, the software must adapt once more, this time to an environment that demands scalability, flexibility and rapid change.

But IBM’s Werner said mapping physical resources to virtual ones is more straightforward with VMs than with containers. A VM typically maps to a volume, and volumes can be grouped into consistency groups that enable operations such as snapshots and mirroring.

“When you get to containers, it is much more dynamic,” he said.

Containerized applications can map to thousands of volumes, making it difficult, if not impossible, for storage admins to keep up with data availability demands, Werner said. Storage teams need awareness, and that comes from communication.
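
The scale problem is easiest to see in automation. The sketch below assumes a Kubernetes cluster with the CSI snapshot controller installed and a VolumeSnapshotClass named csi-snapclass (an illustrative name), and requests a snapshot of every claim in a namespace. Note that it only produces per-volume copies, not the application-consistent groups a VM-era admin could rely on, which is exactly the gap Werner describes.

# Hedged sketch: automate per-volume protection when one application owns many
# volumes. Assumes the CSI external-snapshotter is installed and a
# VolumeSnapshotClass named "csi-snapclass" exists (illustrative name).
from kubernetes import client, config

def snapshot_namespace(namespace: str) -> None:
    config.load_kube_config()
    core = client.CoreV1Api()
    crd = client.CustomObjectsApi()
    for pvc in core.list_namespaced_persistent_volume_claim(namespace).items:
        snap = {
            "apiVersion": "snapshot.storage.k8s.io/v1",
            "kind": "VolumeSnapshot",
            "metadata": {"generateName": f"{pvc.metadata.name}-snap-"},
            "spec": {
                "volumeSnapshotClassName": "csi-snapclass",  # assumed class
                "source": {"persistentVolumeClaimName": pvc.metadata.name},
            },
        }
        # Each snapshot is crash-consistent for a single volume only; grouping
        # related volumes for application consistency needs extra coordination.
        crd.create_namespaced_custom_object(
            group="snapshot.storage.k8s.io",
            version="v1",
            namespace=namespace,
            plural="volumesnapshots",
            body=snap,
        )

if __name__ == "__main__":
    snapshot_namespace("orders")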

The people problem

Part of the friction between storage admins and apps teams is not about technology — it’s about communication.

“[Apps teams] decided to build out a DevOps environment, shifting their model for application development to be more agile — a ‘build once, deploy anywhere’ type of methodology,” Werner said. “But they’ve never talked to the infrastructure team.”

The lack of transparency can work in the early stages, but as the application scales, the infrastructure doesn’t flex with the workload and can’t support the goal of deploying anywhere, Werner said. For example, an application’s runtime can fall out of alignment with VMware, making it impossible to back up the application or use vMotion to move data.

That lack of communication leads to a lack of visibility, Raffo said. Storage admins are responsible for maintaining data protection policies and adhering to governance requirements. But if they don’t know what the application teams need, some of that work has to happen after the fact, at a cost to efficiency.

To avoid this, infrastructure and application teams need to meet before the application is deployed. DevOps teams should provide a clear picture of what they need or accept that their projects will be slowed down.

“There has to be a lot of communication,” Raffo said. “[Admins] need to know what policies are needed [beforehand].”
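
One lightweight way to force that conversation is to make requirements machine-readable. The sketch below assumes the teams agree on a hypothetical annotation key, storage.example.com/protection-policy, that application teams attach to every claim; the storage team can then audit which workloads never declared the protection they expect, before cleanup becomes an after-the-fact exercise.

# Sketch of a shared convention, not an established API: application teams
# annotate each PVC with the protection policy they expect, and the storage team
# audits for claims that never declared one. The annotation key is hypothetical.
from kubernetes import client, config

POLICY_KEY = "storage.example.com/protection-policy"  # hypothetical convention

def find_undeclared_claims() -> list[str]:
    """Return PVCs that never told the storage team what protection they need."""
    config.load_kube_config()
    missing = []
    pvcs = client.CoreV1Api().list_persistent_volume_claim_for_all_namespaces()
    for pvc in pvcs.items:
        annotations = pvc.metadata.annotations or {}
        if POLICY_KEY not in annotations:
            missing.append(f"{pvc.metadata.namespace}/{pvc.metadata.name}")
    return missing

if __name__ == "__main__":
    for claim in find_undeclared_claims():
        print(f"No declared protection policy: {claim}")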

It’s good enough for now, but for how long?

While there are container-specific on-premises storage products, such as Portworx, organizations continue to use a combination of their legacy storage and the public cloud for all storage needs, Raffo said.

“People are using cloud for container storage and legacy [for traditional use cases],” he said.

Once the containerized storage needs get high enough, on-premises storage products such as Portworx or other container-specific storage will make more sense, Raffo said.

Some storage companies such as Supermicro and IBM believe software-defined products are the way forward. A software-defined approach allows for the abstraction of any hardware and for the creation of policies to meet SLAs without making changes to the legacy system.

Paul McLeod, senior field application engineer at Supermicro, a maker of hardware and software-defined storage, also believes this approach could help the two technologies meld.

“As the [software-]defined methods become more effective and cost-effective, they allow for more design innovation, [allowing] you to go from a completely self-supported environment to proprietary and vendor software support with hardware support,” McLeod said.
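
As a rough illustration of what that abstraction looks like in practice, the sketch below defines a named class of storage that encodes an SLA as policy rather than as array configuration. The provisioner name and parameters are placeholders that would map to whichever CSI driver sits in front of the hardware.

# Illustrative only: expressing an SLA as a software-defined policy. The
# provisioner name and parameters are placeholders for a real CSI driver's values.
from kubernetes import client, config

def create_gold_tier() -> None:
    config.load_kube_config()
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="gold-tier"),
        provisioner="csi.vendor.example.com",             # assumed CSI driver name
        parameters={"replication": "3", "media": "ssd"},  # assumed driver options
        reclaim_policy="Retain",
        allow_volume_expansion=True,
    )
    client.StorageV1Api().create_storage_class(sc)

if __name__ == "__main__":
    create_gold_tier()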

Regardless of the technology, as the relationship between containers and storage evolves in the enterprise, it will be the storage admins who act as the linchpin, according to Ivashchenko.

“By sharing their experiences [and] insights and rethinking requirements, storage administrators can contribute to the successful integration of these storage paradigms, enabling organizations to leverage the benefits of both legacy storage systems and container/cloud-native architectures,” Ivashchenko said.

Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware and private clouds. He previously worked at StorageReview.com.


