Over the years, the IT industry’s monogamous relationship with on-prem infrastructure became stale, restrictive, and lacked ambition. When the new kid on the block came along, we rebelled and jumped feet first into a love affair with Public Cloud. It was new, exciting and catered to all our desires – and all at a very convenient hourly rate!
The onset of the COVID pandemic, coupled with the resulting global component shortages, has only fuelled our desperate need for an ultra-available resource, at a price point that is just too good to be true. Unsurprisingly, we’ve become hooked, a little obsessed, and totally dependent, with nowhere else to turn.
Now that the novelty of our whirlwind romance has worn off, we’re left penniless, guilt-ridden and asking ourselves, ‘Why?!’. As we walk the lonely data-centre aisles of self-reflection, the moment of realisation finally hits – we need to get back with our trusty, reliable ex! Yes, it’s going to be costly at first, and yes, we are going to have to listen to those around us who will inevitably say, ‘I told you so’, but when all is said and done, we know it’s the right thing to do. But it doesn’t necessarily have to be like it was before…
Second time around, we’re older and wiser. We can establish rules and boundaries (maybe even agree on a safe word?), and every now and again, when the timing and pricing are in line, we can still blow off some steam with Public Cloud.
Here are five reasons why our new relationship with cloud can provide us with everything we need to thrive as we move to a cloud-native world. Spoiler alert: they are not sexy and exciting – they just make sense!
1 On-Premises Private Cloud
New technologies are re-invigorating traditional data centres. Advances in open-source cloud architectures, and the ability to deliver composable infrastructure – highly flexible environments built with minimal coding – are giving Public Cloud a run for its money. These technologies allow organisations to deploy private clouds that offer the benefits of public cloud, but with greater security, compliance and performance, and often at a lower cost.
2 Open Cloud Platforms / APIs
Vendor lock-in is a common problem for adopters of cloud technology. Cloud providers offer a large variety of services, but many of them cannot be extended to other cloud platforms, and migrating workloads from one cloud to another can be challenging. Many organisations start using cloud services and later find it difficult to switch providers when the current one no longer suits their requirements, or when costs start to spiral.
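One well-known way to limit lock-in is to code against a provider-agnostic interface rather than directly against a single vendor’s SDK. The sketch below illustrates the idea in Python; the class and driver names are hypothetical, not a real library:

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """Provider-agnostic interface: application code depends only on this."""

    @abstractmethod
    def create_server(self, name: str, size: str) -> str:
        ...


class PublicCloudDriver(CloudProvider):
    # Hypothetical wrapper around a public cloud vendor's SDK.
    def create_server(self, name: str, size: str) -> str:
        return f"public:{name}:{size}"


class OpenStackDriver(CloudProvider):
    # Hypothetical wrapper around a private OpenStack deployment's API.
    def create_server(self, name: str, size: str) -> str:
        return f"openstack:{name}:{size}"


def provision(provider: CloudProvider) -> str:
    # Application logic is identical whichever driver is injected,
    # so switching providers becomes a configuration change rather
    # than a rewrite.
    return provider.create_server("web-01", "m1.small")


print(provision(PublicCloudDriver()))  # public:web-01:m1.small
print(provision(OpenStackDriver()))    # openstack:web-01:m1.small
```

Real vendor-agnostic libraries such as Apache Libcloud take the same approach: one abstract compute API, many interchangeable drivers.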
3 Data Security & Regulatory Risk
Data security and regulatory risk are associated with the loss, leakage, or unavailability of data. This can cause business interruption, loss of revenue, loss of reputation, or regulatory non-compliance. Just a few weeks ago, the Bank of England called for increased powers to oversee the banking and finance sector’s switch to cloud computing. Its concerns stem from cloud adoption making systems more opaque, and from the concentration of sensitive data in the hands of a select few largely unregulated, US-based tech giants like Amazon, Google and Microsoft. In short, we don’t fully know who we are jumping into bed with just yet, and there are things we can and should be doing to protect ourselves.
4 Easy Migration to and from Any Cloud
If you want a multi-cloud architecture, with the option to migrate your applications between multiple cloud providers, consider one of the full-featured paid tools, such as Hystax. These tools are vendor-agnostic and allow you to migrate your workloads to any Public Cloud, as well as to Private Clouds built on technologies such as OpenStack, VMware or Kubernetes.
5 Machine Driven Workloads
Application mobility and portability are critical here. When training AI workloads, you’ll need to ensure your application can “go” where the data is. With large-scale models, the training phase is often the most demanding part, so ensuring portability is a critical requirement. Employing an open-standards approach here allows organisations to maintain agility and place workloads close to where the data is. The same holds true for inference: taking these trained models and running them close to the end users requires the same characteristics. Open distributed platforms, with interoperable open APIs, provide the foundation for the next generation of federated, distributed, machine-driven workloads.
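The “move the compute to the data” principle can be sketched as a simple placement rule. The site names and byte counts below are invented for illustration, not from any real deployment:

```python
def place_workload(data_locations: dict) -> str:
    """Pick the site holding the most training data, so the job
    moves to the data rather than the data to the job."""
    return max(data_locations, key=data_locations.get)


# Bytes of training data held at each (hypothetical) site.
dataset = {
    "on-prem-dc1": 40_000_000_000,
    "public-eu-west": 12_000_000_000,
    "public-us-east": 3_000_000_000,
}

print(place_workload(dataset))  # on-prem-dc1
```

A production scheduler would of course weigh many more factors (GPU availability, egress cost, latency to end users), but the portability argument is the same: the application must be able to run wherever this rule sends it.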
We’re at the very beginning of the cloud computing era, and Public Cloud will be around for the foreseeable future. Legislation is still far behind where it needs to be, and it isn’t clear where it will end up. Public Cloud has been an excellent resource for driving the rest of the industry towards new models of IT infrastructure delivery, but it’s by no means the most cost-effective or performant way to run infrastructure at scale.
We must resist the temptation of the short-term, instant gratification Public Cloud provides, and look towards a more controlled and reliable future. Today’s organisations need to be cloud native, but that doesn’t just mean using Public Cloud. Carefully considered cloud techniques and strategies should be employed internally, and scaled out under appropriate, cost-controlled parameters when it makes sense (e.g. short-burst workloads, geographically distributed services). We need to protect our businesses and their data for the long term, on what will no doubt be a turbulent road ahead. This will ensure we’re ready for the next wave of AI and cloud-native workloads, and able to build a safe, secure and cost-effective cloud “at home”.
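“Scale out only under cost-controlled parameters” can be expressed as a simple burst policy. The utilisation and price thresholds here are illustrative placeholders, not recommendations:

```python
def should_burst(onprem_util: float, spot_price: float,
                 util_cap: float = 0.85, price_cap: float = 0.05) -> bool:
    """Burst to Public Cloud only when the private cloud is near
    capacity AND the hourly spot price is under our cost-control cap.
    Threshold values are invented examples."""
    return onprem_util > util_cap and spot_price < price_cap


print(should_burst(0.92, 0.03))  # True: on-prem is busy and the price is right
print(should_burst(0.92, 0.08))  # False: price is above our cap, stay home
```

The point is not the thresholds themselves but that the decision is explicit, auditable and under our control, rather than an open-ended monthly bill.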
To discuss your cloud-native strategy with one of our experts, please get in touch.