The AI wars are heating up. This is why we should be worried

Opinion

Updated November 30, 2023 — 10.42am

This week Amazon joined what is fast developing into an arms race among the world’s biggest technology companies to develop artificial intelligence products.

Amazon Q, launched on Wednesday, is designed for corporate clients rather than consumers, with Amazon trying to leverage its leading market share in cloud computing services to make up ground on competitors such as Microsoft, Google and Meta Platforms (Facebook’s parent company).

The OpenAI chaos involving CEO Sam Altman sent a very clear message. Credit: Bloomberg

There has been a perception that Amazon is trailing in the race to develop generative AI products, a race ignited only a year ago when OpenAI released ChatGPT.

It moved to dispel that perception in September when it invested $US4 billion ($6 billion) in Anthropic, a start-up competitor to OpenAI, and has followed that up with the release of Amazon Q, which is focused on helping companies deliver improved customer experiences at lower cost, build chatbots and optimise their supply chains.

Meta released its own chatbot last month, having only established a dedicated generative AI development group in February. Its products mainly target consumers.

Google’s Gemini, announced in May, is a set of large language models developed to directly challenge OpenAI’s leadership in the sector.

It’s been described as “multimodal,” integrating text, images and other data. It exploits Google’s access to vast amounts of data from its search function and ownership of YouTube, Google Books and Google Scholar.

Microsoft has put its investment and a $US13 billion funding commitment to OpenAI behind both consumer and business offerings, and its financial support was a key factor in the outcome of the recent implosion of OpenAI’s novel governance arrangements.

Less than a fortnight ago, OpenAI’s co-founder and chief executive, Sam Altman, was ousted by the board of the non-profit entity that sits above the for-profit company that Microsoft owns 49 per cent of and which commercialises the group’s AI products.

A revolt by the staff – 95 per cent of whom threatened to leave unless Altman was reinstated – and a Microsoft announcement that it would hire Altman and set up a new internal generative AI business led to Altman’s return to OpenAI, and to the replacement of a board charged with prioritising safety over profits by new directors who may or may not share that priority.

Amazon unveiled a new product this week. Credit: AP

Reuters and The Information have reported that the catalyst for Altman’s original sacking was a letter from OpenAI researchers to the board that warned of a breakthrough in AI the company had made.

The project, reportedly known as Q*, apparently has the ability to solve fairly basic mathematical problems, which requires an understanding of maths and abstract concepts, along with an ability to reason logically and make deductions that would be a step beyond what AI models have so far been capable of.

The reason this might have agitated OpenAI researchers is that it would be a step towards artificial general intelligence (AGI), or a level of intelligence approaching, or eventually surpassing, that of humans. Shane Legg, a co-founder of Google’s DeepMind AI lab, has said he believes there is a 50-50 chance that AGI will be achieved by the end of this decade.

There are plenty within the AI research community who have downplayed the implications of Q*, if it is as it has been described, but the outcome of the battle between OpenAI’s not-for-profit directors and Altman and Microsoft sends a clear signal.

If there’s a choice between safety and profit, profit will win.

That’s partly from necessity – developing the large language models that power generative AI devours cash – but also because, to attract executives and developers, the companies need to compete with other tech companies offering big salaries and huge equity upside.

OpenAI has been valued at $US86 billion for the purpose of creating an opportunity for staff in the for-profit entity to cash out some of their equity. When that sort of money is on the table, any qualms about risks to humanity will be relegated to, at best, second-order issues.

The outcome at OpenAI – the removal of those who prioritised safety over profits – says quite strongly that, in their scramble to develop generative AI and keep up with their competitors, the big tech companies can’t be trusted to self-regulate. The legislators will need to do that.

Joe Biden issued an executive order late last month requiring companies to share the results of their safety testing and to develop tools and tests to ensure their models are safe. The Europeans have passed draft laws forcing transparency and restricting some of the applications perceived as more risky.

The pace at which AI developments are moving, however, creates the risk that the legislators will either significantly lag the development of AI or, if they are too heavy-handed, limit what might otherwise be transformative innovation.

It’s not just the big US and, to a much lesser degree, UK and European tech companies that are ploughing vast amounts of capital into AI.

In the first half of this year, China edged ahead of the US in the number of AI start-ups receiving funding. China’s tech giants, Tencent Holdings, Baidu and Alibaba, are investing in start-ups while also pouring capital into their own large language models.

China’s government is also a major funder of AI developers (it has a particular interest in facial recognition and military applications) and, given the authoritarian nature of the state and the sheer size of its population, it has vast amounts of data that can be drawn on to develop models, a critical advantage in a race for AI supremacy that depends on access to exactly that kind of data.

The geopolitical implications of AI development add to the pressure on governments and regulators to ensure that legislation and regulation driven by safety concerns don’t inhibit their developers.

There will inevitably be tensions, and probably controversies, as regulators and the companies themselves wrestle with the conflicts inherent in trying to reconcile the economic and strategic imperatives of being at the leading edge of AI development while keeping humans safe.

Given the nature of the companies leading the race – and the outcome of the brief but brutal boardroom battle for control of OpenAI’s developments – it’s hard not to be pessimistic about the outcomes.
