Vendor lock-in has been an oft-cited risk since the mid-1990s: the fear that by investing too heavily in one vendor, an organization reduces its options in the future.
Was this a valid concern? Is it still today?
Organizations walk a fine line with their technology vendors. Ideally, you select a set of technologies that not only meet your current need but that align with your future vision as well.
This way, as the vendor’s tools mature, they continue to support your business.
The risk is that if you have all of your eggs in one basket, you lose all of the leverage in the relationship with your vendor.
If the vendor changes direction, significantly raises its prices, retires a critical offering, lets product quality slip, or any number of other scenarios play out, you are stuck.
Locking in to one vendor means that the cost of switching to another or changing technologies is prohibitively expensive.
All of these scenarios have happened and will happen again. So it’s natural that organizations are concerned about lock-in.
When the cloud started to rise to prominence, the spectre of vendor lock-in reared its ugly head again. CIOs around the world thought that moving the majority of their infrastructure to AWS, Azure, or Google Cloud would lock them into that vendor for the foreseeable future.
To mitigate this risk, organizations regularly adopt a “cloud neutral” approach, using only the “generic” cloud services that are available from every provider. Often hidden under the guise of a “multi-cloud” strategy, it’s really a hedge so as not to lose position in the vendor/client relationship.
In isolation, that’s a smart move.
But take a step back and look at the bigger picture, and the issues with this approach start to show.
The first issue is that the heavy use of automation in cloud deployments means vendor “lock-in” is not nearly as significant a risk as it was in past decades. The manual effort required to change the vendor behind your storage network used to be monumental.
Now? It’s a couple of API calls and a consumption-based bill adjusted by the megabyte. This pattern is echoed across other resource types.
Automation greatly reduces the cost of switching providers, which reduces the risk of vendor lock-in.
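The “couple of API calls” claim can be sketched in a few lines. The `BucketClient` below is a hypothetical stand-in for a real storage SDK client (boto3 for Amazon S3 and google-cloud-storage for GCS expose similar list/get/put operations); the short copy loop is, in essence, the entire migration:

```python
class BucketClient:
    """Hypothetical minimal storage client; real cloud SDKs expose similar calls."""

    def __init__(self):
        self._objects = {}

    def list_keys(self):
        # Real SDKs paginate here; omitted to keep the sketch short.
        return list(self._objects)

    def get(self, key):
        return self._objects[key]

    def put(self, key, data):
        self._objects[key] = data


def migrate(source, destination):
    """Copy every object from one provider's bucket to another's."""
    for key in source.list_keys():
        destination.put(key, source.get(key))
    return len(destination.list_keys())
```

With the loop automated, the switching cost is dominated by data-transfer fees and time, not by engineering effort.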
When your organization sets a mandate to use only the basic services (server-based compute, databases, networking, etc.) from a cloud service provider, you’re missing out on one of the biggest advantages of moving to the cloud: doing less.
The goal of a cloud migration is to remove all of the undifferentiated heavy lifting from your teams.
You want your teams directly delivering business value as much of the time as possible. One of the most direct routes to this goal is to leverage more and more managed services.
Using AWS as an example, you don’t want to run your own database servers on Amazon EC2, or even standard RDS, if you can help it. Amazon Aurora and DynamoDB generally offer lower operational overhead, higher performance, and lower costs.
When organizations worry about vendor lock-in, they typically miss out on the true value of the cloud: a laser focus on delivering business value.
In this new light, a multi-cloud strategy takes on a different aim. Your teams should be trying to maximize business value (which includes cost, operational burden, development effort, and other aspects) wherever that leads them.
As organizations mature in their cloud usage and use of DevOps philosophies, they generally start to cherry pick managed services from cloud providers that best fit the business problem at hand.
They use automation to reduce the impact if they have to change providers at some point in the future.
This leads to a multi-cloud split that typically falls around 80% in one cloud and roughly 10% in each of two others. The exact split varies by situation, but the premise is the same: organizations that thrive have a primary cloud and use other providers’ services when and where it makes sense.
There are some tools that are more effective when they work in all clouds the organization is using. These tools range from software products (like deployment and security tools) to metrics to operational playbooks.
Following the principles of focusing on delivering business value, you want to actively avoid duplicating a toolset unless it’s absolutely necessary.
The maturity of the tooling in cloud operations has reached the point where it can deliver support to multiple clouds without reducing its effectiveness.
This means automation playbooks can easily support multi-cloud (e.g., Terraform). Security tools can easily support multi-cloud (e.g., Trend Micro Cloud One). Observability tools can easily support multi-cloud (e.g., Honeycomb.io).
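As a sketch of what multi-cloud automation looks like in practice, a single Terraform configuration can manage resources in more than one cloud through multiple provider blocks (the provider and resource types below are real Terraform names; the project ID and bucket names are illustrative placeholders):

```hcl
# Two provider blocks in one Terraform configuration: the same
# plan/apply workflow manages resources in both clouds.
provider "aws" {
  region = "us-east-1"
}

provider "google" {
  project = "example-project" # hypothetical project id
  region  = "us-central1"
}

# An object storage bucket in AWS...
resource "aws_s3_bucket" "assets" {
  bucket = "example-assets-bucket" # hypothetical name
}

# ...and its counterpart in Google Cloud, from the same codebase.
resource "google_storage_bucket" "assets" {
  name     = "example-assets-bucket" # hypothetical name
  location = "US"
}
```

One toolchain, one workflow, two clouds: no duplicated deployment toolset.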
The guiding principle for a multi-cloud strategy is to maximize the amount of business value the team is able to deliver. You accomplish this by becoming more efficient (using the right service and tool at the right time) and by removing work that doesn’t matter to that goal.
In the age of the cloud, vendor lock-in should be far down on your list of concerns. Don’t let a long-standing fear slow down your teams.