Why the cloud shows us the future of AI

More than a decade ago, businesses were faced with a new, disruptive technology. This technology promised to cut operational costs, increase productivity, and allow for collaboration from around the world. It also raised concerns about reliability, security, and government regulations.

A decade later, these are the same promises and concerns businesses have about AI, potentially the most disruptive technology in a generation.

We are hearing from customers that they are excited, skeptical, and worried, and each reaction is warranted. We are headed into an uncertain future as AI upends both the business and consumer world, but we are not without clues as to what an AI-powered future might look like or how we might proactively prepare for it.

We only have to look to the lessons learned from the disruptive technology that came before it: cloud computing.

The cloud: a generational tech leap, not just an on-premises alternative

For many businesses, the cloud was initially viewed as an alternative to hosting servers, data, and applications on-premises. It was inexpensive, instantaneous to deploy, and relieved IT from the ongoing maintenance burden.

The reality, however, is that where a company hosts its IT infrastructure is just one small part of the journey we now call digital transformation. By using hosted services in the cloud, companies gained access to computing power that was inexpensive, resilient, and scalable to their changing needs. This produced spillover effects that delivered on many of the cloud’s initial promises, such as increased productivity, better collaboration, and a greater focus on data.

There were also unforeseen costs. Many companies were surprised by data transfer fees, inflated bills caused by overprovisioning, or poor customer experiences caused by underprovisioning. Security breaches and privacy violations involving cloud-hosted services were commonplace, as were outages that affected many customers at once. Few could have predicted these problems, and most IT teams at the time were simply not trained to handle them.

We see a similar situation unfolding with AI. Take software development, where generative AI has shown the potential to greatly increase the speed of writing code. The examples are impressive: code generation, suggested functions, and how-tos for writing scripts in different languages and frameworks.

But building great software is not just about writing code. In fact, developers have told us that it’s how they spend just 25% of their time. It’s one part of an entire process that involves testing, security, monitoring, and more – where generative AI is still in its infancy. When we make drastic changes in one area, such as how we write code, we must proactively anticipate unforeseen side effects elsewhere.

As recent headlines, such as Samsung’s ban on staff using AI tools following a ChatGPT data leak, demonstrate, one of these side effects is that a company’s code can be used to further train large language models that its competitors can then leverage. Our customers are excited to adopt AI across the software development lifecycle, but are justly concerned about what safeguards are in place to protect their private code and intellectual property.

The onus is on those of us building AI into our products and services to show our customers that they can trust and verify AI-generated code, while exploring ways to use AI elsewhere in software development, such as detecting and explaining security vulnerabilities.

Disruptive technologies double as upskilling opportunities

There is no doubt that disruptive technologies generate fear and uncertainty; the same was true of the cloud. At the time, IT departments were often hesitant to hand over the reins of mission-critical hardware and company processes to any outside third party.

There were also legitimate concerns about the future of their jobs. In hindsight, it’s easy to see that although IT teams are no longer primarily focused on managing on-premises hardware, they have not been replaced. If anything, as they have learned new skills such as cloud scripting, security research, and systems design, they have become more critical to a company’s vision than ever before. They are architects, designing the very infrastructure that makes modern software, platform, and infrastructure services possible.

The situation with AI will be similar, giving people the opportunity to bring their ideas to life without needing to be expert coders. At the same time, AI will create upskilling opportunities for those in traditionally high-skill roles to accelerate their careers by applying their existing skills in new ways, just as the cloud did for IT. Reducing the burden of software maintenance will also let organizations focus developers on more strategic work and spread skilled tasks across the team, rather than relying on a single superhero.

To drive real change, responsibility and oversight must lead the way

When I was at Tableau in 2013, I had the title of Head of Cloud Strategy. A few other forward-thinking companies had similar-sounding roles like Chief Cloud Officer. Today, the title sounds silly, but these leaders served an important purpose at the time: they helped businesses wrap their heads around a brand-new framework, evangelizing the benefits of cloud computing, establishing clear guardrails around its adoption, and introducing innovative concepts like infrastructure as code and GitOps.

We are seeing such leaders again: Head of AI, AI Evangelist – even Salesforce has a CEO of AI. They will all champion AI’s possibilities while ensuring their companies adopt it responsibly.

The cloud remains one of the most disruptive technologies of the modern era. Some of the most innovative companies in recent history created their products and services because of cloud computing. Quite a few companies, however, have lost the trust of their customers, and as a result their business, because they failed to adopt the cloud quickly and securely.

AI is poised to be even more disruptive, and although organizations are optimistic about AI, they know that similar failures to think strategically about responsibility could lead to even worse outcomes regarding data privacy, intellectual property, and, worst of all, trust. For example, in an interview with 60 Minutes, Google and Alphabet CEO Sundar Pichai discussed how AI could be used to spread disinformation through videos of anybody saying anything, himself included.

Like in the early days of the cloud, we need to strike the right balance between caution and optimism. AI will not simply change how we code, write, communicate, or any single part of our businesses. It will change everything, and with the right leaders in place, we will be ready.

Ashley Kramer is the Chief Strategy and Marketing Officer at GitLab.
