With the cloud now a key staple of IT strategies, IT leaders would be wise to keep apprised of offerings and pricing tactics evolving within the cloud market. Here’s the latest CIOs should know.
The cloud market has been a picture of maturity of late.
The pecking order for cloud infrastructure has been relatively stable, with AWS at around 33% market share, Microsoft Azure second at 22%, and Google Cloud a distant third at 11%. (IBM, Oracle, and Salesforce are in the 2-3% range.)
Revenue growth remains solid across the industry, though it is slowing, and none of the Big 3 is outperforming the others enough to materially alter the balance of power. That overall stability has extended to prices, which, with some exceptions, have remained relatively flat. And the market has matured to the point where the major players all have similar offerings.
But the emergence of generative AI changes everything.
The frenzy created by the public release of OpenAI’s ChatGPT has triggered an arms race among hyperscalers to differentiate themselves by developing their own large language models (LLMs), building platforms that enable enterprises to create generative AI applications, and integrating generative AI throughout their portfolios of service offerings.
As cloud computing expert David Linthicum explains, “What’s occurring is that the cloud providers are approaching feature saturation in terms of the services they can provide versus their peers. Thus, these services will begin to commoditize, and with the popularity of multicloud, core services such as storage and computing will be pretty much the same from cloud to cloud.”
He adds, “This is behind the drive to generative AI by the cloud providers. It’s a race to determine who owns this space and the ability to de-commoditize their services with this new technology layered on top of more traditional cloud services.” At this early stage in the gen AI race, there’s no clear leader, but all the players are pouring resources into the fray.
Microsoft, which bankrolled OpenAI to the tune of $10 billion, has embedded ChatGPT features into everything from productivity apps like Word and Excel, to its Edge browser, to a cloud offering aimed at enterprises, the Azure OpenAI Service.
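For CIOs sizing up these offerings, the practical entry point is an API call rather than a new toolchain. The snippet below is a minimal sketch of calling the Azure OpenAI Service with the openai Python SDK; the endpoint, key, API version, and deployment name are placeholders, not details from this article.

```python
# Minimal sketch: calling the Azure OpenAI Service via the openai Python SDK (v1.x).
# Endpoint, key, API version, and deployment name below are illustrative placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",  # your Azure OpenAI resource
    api_key="YOUR_API_KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # name of the model deployment in your Azure resource
    messages=[{"role": "user", "content": "Summarize the key risks in this contract."}],
)

print(response.choices[0].message.content)
```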
Google is racing to build out its gen AI platform; co-founders Sergey Brin and Larry Page have even come out of semi-retirement to jumpstart the company's AI initiative. Google has its own large language model, PaLM, is building its own AI chips (Tensor Processing Units), and is launching new industry-specific AI-based services under the Vertex AI banner. Most recently, the company launched gen AI-based services aimed at healthcare and life sciences organizations.
And AWS recently announced Bedrock, a fully managed service that enables enterprise software developers to embed gen AI functionality into their programs. AWS also builds its own low-cost AI chips (Inferentia and Trainium) in limited volumes; the company uses them internally to power its gen AI capabilities and is making them available to customers as well.
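Because Bedrock is exposed through the standard AWS SDKs, embedding a model call looks much like calling any other AWS service. The sketch below assumes boto3 and uses an Amazon Titan text model purely as an illustration; actual model IDs, request schemas, and regional availability vary by account.

```python
# Minimal sketch: invoking a foundation model through Amazon Bedrock with boto3.
# Model ID and region are illustrative; check which models are enabled in your account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    contentType="application/json",
    body=json.dumps({"inputText": "Draft a two-sentence summary of our Q3 cloud usage report."}),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```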
While generative AI is certainly the hottest trend in the cloud market, there are others CIOs need to be aware of. Here are the top cloud market trends and how they are impacting CIOs' cloud strategies.
The gen AI gold rush — with little clarity on cost
“It’s the year of AI,” declares Forrester Research. “Every hyperscaler, SaaS provider, and startup hopes to use the buzz around AI to their advantage. Cloud providers are pushing AI services to break out of sluggish revenue growth and differentiate themselves from rivals. Enterprise cloud customers are eager to use AI wherever they can for strategic initiatives, but without busting IT budgets already under pressure from multicloud complexity and sprawl.”
The Big 3 hyperscalers aren't the only players offering generative AI-based cloud services to enterprise IT. IBM is stepping up with watsonx, its AI platform built on Red Hat OpenShift. And Nvidia, which supplies the vast majority of the industry's generative AI chips (GPUs), has built its own full-stack cloud offering, DGX Cloud, an AI service that currently runs inside Oracle's cloud and will soon be available on both Azure and Google Cloud.
For CIOs, this means there will be multiple cloud-based options for building generative AI functionality into existing business processes, as well as creating new AI-based applications.
The challenge, says Bernard Golden, executive technical advisor at VMware, is how to make sure sensitive corporate data is protected and kept out of the general pool that makes up LLM databases.
Linthicum adds that generative AI-based apps will be “costly to run,” so “CIOs need to find the proper use cases for this technology.”
And for CIOs looking to make the most of gen AI capabilities built into the cloud offerings they depend on, initial explanations as to how pricing will work have been rather opaque.
Cloud price creep — with leaps thanks to AI
IBM caused quite a stir when it announced price increases for storage services that ranged as high as 26%, as well as smaller price hikes for IaaS and PaaS offerings.
Generally speaking, however, cloud providers have held the line on price increases in order to remain competitive. But the slowdown in growth across the industry will likely put pressure on all cloud vendors to hike prices going forward. As Linthicum says, “We’re entering the phase of technology when they need to harvest value from their investments. I would suspect that prices will creep up over the next several years.”
Of course, one benefit of using cloud services is that customers can select whatever infrastructure configuration suits their needs. Opting for a first-generation processor can yield real savings. But for organizations that need high-performance computing, or that are looking to reap the benefits of AI, selecting a newer-model chip comes at a premium.
For example, choosing to run your workload on an Nvidia H100 chip versus an earlier model A100 will result in a price increase of more than 220%, says Drew Bixby, head of operations and product at Liftr Insights.
And as the hyperscalers add more GPUs (which are many times more expensive than traditional CPUs) to the mix in their own data centers, those costs will likely be passed on to enterprise customers.
Industry clouds ripe for gen AI edge
Industry clouds are on the rise — and will benefit from the emergence of generative AI, says Brian Campbell, principal at Deloitte Consulting, who explains that industry clouds “tend to be at the forefront of both business and technology executives’ agendas.”
Tech execs like the speed, flexibility, and efficiency that industry-specific clouds provide, and business leaders appreciate being able to focus scarce internal resources on the areas that differentiate their business. Early adopters of industry clouds were healthcare, banking, and tech companies, but adoption has since expanded to energy, manufacturing, the public sector, and media.
Campbell adds, “With the recent explosion of gen AI, executives are increasingly looking at how to use gen AI beyond proofs-of-concept, thus turning to the major providers of industry clouds, hyperscalers, independent software vendors, and systems integrators who have been quickly embedding gen AI alongside other technologies in their offerings.”
Line between cloud, on-prem blurs
The old paradigm of a clear line of demarcation between cloud and on-prem no longer holds. Many terms apply to this phenomenon of cloud-style services being deployed across a variety of scenarios at once: hybrid cloud, private cloud, multicloud, edge computing, or, as IDC defines it, dedicated cloud infrastructure as a service (DCIaaS).
IDC analyst Chris Kanaracus says, “We increasingly see the cloud as not about a particular location, but rather a general operating model for IT. You can have the cloud anywhere in terms of attributes such as scalability, elasticity, consumption-based pricing, and so on. The challenge moving forward for CIOs is to stitch it all together in a mixed-vendor environment.”
For example, AWS offers Outposts, a managed service that enables customers to run AWS services on-premises or at the edge. Microsoft offers a similar service called Microsoft Azure Stack. Traditional hardware vendors also have as-a-service offerings that can run in data centers or at the edge: Dell Apex and HPE GreenLake.
Increased interoperability as lock-in loses some luster
Competing cloud vendors aren't particularly incentivized to enable interoperability. The business model for cloud providers is to lock in customers, get them accustomed to that particular vendor's tools, processes, marketplaces, and software development platforms, and keep encouraging them to move more resources to that vendor's cloud.
But enterprise customers have overwhelmingly adopted a multicloud approach, and cloud vendors have been forced to deal with that reality.
For example, Microsoft and Oracle recently launched Oracle Database@Azure, which lets customers run Oracle database services on Oracle Cloud Infrastructure (OCI) hardware deployed inside Microsoft Azure datacenters.
And storage leader NetApp recently announced a fully managed service that enables customers to bring business-critical workloads, across both Windows and Linux environments, to Google Cloud without refactoring code or redesigning processes.
As these barriers to interoperability come down, enterprises will benefit by being able to move storage volumes and applications to the most appropriate cloud platform.
Rise of the citizen developer
There has always been a tension between traditional IT and so-called shadow IT. The emergence of low-code, no-code solutions has made it easier for non-IT staffers to build simple applications. For example, Microsoft’s Power Platform enables the creation of mobile and web apps that can interact with business tools.
But ChatGPT has blown any technical constraints out of the water. For example, with Microsoft’s Copilot, end users can write content and create code with a simple prompt. For IT leaders, this can be a double-edged sword. It’s beneficial to the organization if employees can boost their productivity through the creation of new tools and software programs.
But Golden points out that tools like Copilot are “great until they’re not great.” In other words, these simple, one-off applications created by citizen developers can create security risks, they aren’t built to scale, and they don’t necessarily interoperate with complex business processes.
FinOps rises as cloud bills come due
During the pandemic, there was a “mad dash” of enterprises shifting workloads to the cloud to make them more easily accessible to remote workers. “Now they are getting the big bills,” Linthicum says.
As a result, organizations are adopting FinOps technology to manage and optimize cloud costs. Linthicum says that FinOps enables organizations to reduce technical debt and “drive more cost savings by normalizing the use of cloud resources. In essence, it fixes mistakes that were made in the past, such as the use of the wrong cloud services, too much data movement, etc.”
Forrester researchers concur, noting that “whenever economic headwinds hit, IT cost optimization gains momentum. For cloud cost management, high interest hit in 2018 and once again this year.” The good news for IT is that all of the major cloud providers offer FinOps services, and a slew of third-party software vendors offer cloud cost management tools.
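Much of that cost-management work can also be automated against the providers' own billing APIs. As a minimal sketch (the dates are placeholders, and Cost Explorer must be enabled on the account), a FinOps team could pull a month of AWS spend grouped by service using boto3 and feed it into whatever reporting or chargeback process it already runs.

```python
# Minimal sketch: retrieve one month of AWS spend, grouped by service, via Cost Explorer.
# Dates are placeholders; Cost Explorer must be enabled on the account.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-06-01", "End": "2023-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```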