UK judge rates ChatGPT as ‘jolly useful’ after using it to help write a decision

AI in brief A judge working at the UK’s Court of Appeal has admitted he used ChatGPT to help him write a ruling.

Speaking at a Law Society event, Lord Justice Birss said he turned to ChatGPT to generate a paragraph for court documents in a case related to intellectual property law. Birss said he directly copied and pasted the words into the ruling he wrote, adding that tools like ChatGPT had “great potential,” the UK’s Telegraph reported.

“I think what is of most interest is that you can ask these large language models to summarize information. It is useful and it will be used and I can tell you, I have used it,” he revealed.

Birss’s remarks are thought to be the first reported instance of a British judge admitting to using generative AI software in their work.

The admission is not without controversy, given ChatGPT's propensity for errors. In the US, two lawyers were heavily criticized for relying on the chatbot to defend a client in court after judges realized the software had generated false information.

“I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else. All it did was a task which I was about to do and which I knew the answer to and could recognize as being acceptable,” Birss argued.

Are VCs cooling on AI startups?

AI-centric chip startups hoping to compete with Nvidia are facing an uphill battle to secure funds from venture capital firms.

They have collectively managed to raise $881.4 million this year through the end of August, according to PitchBook data reported by Reuters. That's a decrease of around $908 million – roughly half – compared to the amount raised in the first three quarters of last year.

The data shows that VCs are spending less money backing AI chip startups and making fewer deals: just four startups have received funding this year so far compared to 23 in 2022.

Hardware startups are a riskier proposition than their software-centric cousins because chips can take years to design and build, and established chipmakers enjoy entrenched advantages in scale and software ecosystems.

“Nvidia’s continued dominance has put a really fine point on how hard it is to break into this market,” Greg Reichow, a partner at Eclipse Ventures, said. “This has resulted in a pullback in investment into these companies, or at least into many of them.”

Coca-Cola adds AI flavor

Soft drink giant Coca-Cola has created a limited edition variety of Coke with a flavor profile generated using AI.

The drink – dubbed Coca-Cola Y3000 – is described as being “futuristic flavored” and is part of the carbonated colossus’s “Creations” series of limited edition varieties. It comes in a silver can with pink, blue, and purple bubbles designed using text-to-image tools, and a note at the bottom of the can states that it was “co-created with artificial intelligence.”

A spokesperson for the fizzy titan told CNN that the flavor profile was created with help from machine learning. Coca-Cola collected data to see what tastes people associated with the future and turned to software to create different flavor pairings.

So what does it taste like? Normal Coke, apparently – with a twist.

Oana Vlad, senior director of global brand at Coca-Cola, previously told CNN that the drink vendor never discloses what is inside its recipes. “We’re never really going to answer that question,” she said, at least not in a “straightforward” way. For limited edition varieties, Coke’s “flavor profile is always, we say, 85 to 90 [percent] Coke. And then that 10 to 15 [percent] twist of something unexpected,” she added.

Y3000 will be available in stores starting this week for consumers in the US and Canada.

Uh oh, even The New York Times is getting into AI journalism

The New York Times is hiring a senior editor to bring generative AI tools into its newsroom.

“This editor will be responsible for ensuring that The Times is a leader in GenAI innovation and its applications for journalism,” states the ad for the position. “They will lead our efforts to use GenAI tools in reader-facing ways as well as internally in the newsroom.”

Putting generative AI to work in newsrooms has proven controversial. Early adopters such as Red Ventures’ CNET and G/O Media’s Gizmodo published AI-generated articles containing errors – even with human oversight.

Such mistakes can be hard to spot, since these models generate text that is grammatically correct and often convincing; without deep expertise in a subject, editors may struggle to catch the errors.

The ad states that whoever gets the gig “will also help shape further guidelines for how GenAI is used by journalists throughout the newsroom, in partnership with the Standards department, taking into account the evolving nature of the technology and its risks.”

Despite those risks, the NYT appears to see a role for AI in its newsroom and its famously pedantic fact-checking process.

“The editor’s primary focus will be on producing a steady stream of projects demonstrating high potential and responsible ways to incorporate GenAI tools into Times journalism and workflows,” the ad reads.

The move to incorporate generative AI technology into a top newsroom will no doubt spur others – who don’t want to fall behind – to follow suit. ®
