A few months ago, I was called in at the last minute to participate in an onstage fireside chat at an Authors’ Guild event. (I’m on the nonprofit’s council, but of course I speak here only for myself.) Guild CEO Mary Rasenberger and I spent much of the session exploring the implications of a future where AI robots could create viable literary works. For writers, it’s a terrifying scenario. As we discussed the prospect of a marketplace flooded by books authored by prompting neural nets, I had a revelation that seemed to mitigate some of the anxiety. It may not have been an original thought, and I may have even come up with it myself earlier and forgotten about it. (My ability to retain what’s in my training set falls short of that of ChatGPT or Claude.) But it did frame the situation in a way that transcended issues like copyright and royalties.
I put it to the audience something like this: Let’s say you read a novel that you really loved, something that inspired you. And only after you were done were you told that the author had not been a human being, but an artificial intelligence system … a robot. How many of you would feel cheated?
Almost every hand went up.
The reason for that feeling, I went on, is that when we read—when we take in any piece of art, actually, in any medium—we’re looking for something more than great content. We are seeking a human connection.
This applies even when an author is long dead. If anyone is still reading Chaucer (Has he been canceled yet?), somehow over centuries we can vibe into the mind of some dude who lived in the 14th century and who would have been amazing to talk to over a beer or a goblet of mead. In fact, we get to know him better through reading him, even if we have to struggle a bit with Middle English. (Props to Ann Matonis, my rock star of a Medieval Lit professor at Temple University. Tough grader, though.)
That epiphany about the meaning of human authorship has been my North Star as I work my way through the challenging AI issues that seem to besiege us every day. I thought about it this week when I sat in on a press briefing from Google product managers explaining some new AI features of its large language model–powered chatbot Gemini. (For those not keeping score at home, that’s the bot formerly known as Bard; these companies change names more than spies with safe-deposit boxes full of passports.) The new, enhanced Gemini promises, they said, “to supercharge your productivity and creativity.”
Productivity is a slam dunk win for algorithms. No quibble there. Creativity we have to talk about.
Google provided some illustrative examples. One was organizing snacks for a kids’ soccer team. Gemini could figure out who brings what at which game, send personalized emails to the right people, and even map out the destinations. That seems like a great way to save time on what can be a thankless time suck. Productivity!
A second example involved the creation of “a cute caption” for a picture of the family dog. Gemini provided: “Baxter is the hilltop king! 👑 Look who’s on top of the world!” That’s a reasonably fun caption. But it makes me think about the purpose of posting to social media, which is all about human connections. Sharing a remark pinned to your dog’s picture is part of a conversation. Using a ghostwriter invariably distances you from friends and followers who read the caption. Having a robot provide your part of the conversation seems like outsourcing to the extreme.
Copyright for syndicated content belongs to the linked source: Wired – https://www.wired.com/story/plaintext-the-thing-ai-just-cant-do/