When photo agencies issued a kill notice for a photo of Princess Catherine and her children on Monday — deeming the image to have been manipulated after spotting some inconsistencies — certain sections of the internet took it as proof that the entire picture was an elaborate fake.
But while you can argue about whether the royal family should have been more careful with the images it chose to post, the truth is that “doctored” images are everywhere. In some cases, the default processing applied by a smartphone camera — to zoom the image, improve complexions or sharpen focus — can produce inconsistencies just as egregious, and photos are easily touched up on-device with increasingly powerful AI tools. That makes such images manipulated, but not necessarily “fake”.
People edit images for all kinds of reasons, and have been doing so since before generative AI, or even Photoshop. Credit: Instagram
But even though we know doctored images are everywhere, the problem is we still can’t tell how doctored they’ve been. Even telltale signs of manipulation — visible artefacts, lines that don’t match up, elements that are out of place — don’t often tell us much. Has the photo been edited slightly for personal reasons? Has the smartphone messed up the background blur it applied automatically? Has it been generated entirely by AI?
In the case of an image such as Catherine’s, which is scrutinised to such an impossibly high degree, online pundits were even pointing to elements that to my eye are completely benign — one of the children’s fingers, one of the people’s teeth — as evidence of AI generation. On the one hand, that shows some people really just see what they want to see; on the other, it shows how ill-equipped many of us are to deal with a real concern: that we can’t trust any digital image.
James Berrett, a senior lecturer at Swinburne University of Technology, said it was sensible to become more discerning about the photos you see, rather than taking them at face value, even if they appear completely ordinary.
“We can be fooled easily. It’s very difficult to spot this kind of manipulation unless you’ve got an eye for it. Even then, it’s quite difficult to do,” he said.
Despite app stores being filled with software that claims to be able to detect fake photos, the technology to scan an image and determine definitively whether and how it has been edited, doctored or generated does not yet exist. Some apps and programs that generate or edit images attach special metadata that helps prevent them from being passed off as unedited, but others do not, and many technical analysis methods are highly specialised and can be thrown off by sophisticated fakers.
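As a rough illustration of the kind of metadata mentioned above, here is a minimal Python sketch that prints a few common EXIF fields, including the “Software” tag that many editing programs write. It assumes the Pillow library and a local file named photo.jpg, both of which are choices made for this example rather than anything described in the article, and an absence of such metadata proves nothing on its own.

    # Illustrative sketch only: prints a few common EXIF fields from an image.
    # Assumes the Pillow library is installed and a local file "photo.jpg" exists
    # (both are assumptions for this example). A missing "Software" tag proves
    # nothing, and metadata can be stripped or rewritten by a determined faker.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def describe_exif(path: str) -> None:
        exif = Image.open(path).getexif()
        if not exif:
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, tag_id)
            if name in ("Software", "Make", "Model", "DateTime"):
                print(f"{name}: {value}")

    describe_exif("photo.jpg")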
“There’s a lot of work going on behind the scenes to help identify whether an image is AI-generated or not. But in terms of an app you can download to find out, almost like a reverse image search, it doesn’t really exist yet,” Berrett said.
“That could be a potential way forward. But I think there’s a lot that needs to happen for that to take place.”
Still, being unsure about the provenance of images doesn’t mean we need to leap to assuming that everything is AI-generated. Some recent news stories, particularly around fake images of Taylor Swift, have given the impression that anyone can create realistic-looking photos of anything. But, in truth, current AI image generation tends to produce far more obvious signs of manipulation than old-fashioned editing, and it’s rare to find images created wholly with consumer-level AI models that would fool many people.
Potentially more concerning are the AI editing tools now built into many phones and various pieces of software, which can instantly make small tweaks, such as removing people or objects from a photo or shifting its composition, that look pretty convincing to the untrained eye. Berrett said this put manipulation that used to be the realm of professionals within everyone’s reach. That isn’t necessarily a bad thing, but it’s something to keep in mind.
“It is important to remember that image manipulation has been happening for many years prior to this technology getting to the point it is at today,” he said.
“There’s been plenty of magazine covers and so on, images that have been doctored in some way, and there’s no real way of knowing unless there’s a leak or if we find something strange with it.”
It’s also interesting that this whole saga kicked off because news wires refused to distribute the image, after realising it had been edited. News outlets, of course, have had no issues running certain images in the past that they knew had been touched up, and that surely includes photos of the royals that have been issued for publicity. But today’s technology makes that fraught, especially when there are too many unanswered questions.
Berrett said a way forward for the near term could simply involve platforms and individuals being upfront when posting images, informing viewers of how the picture was created and edited.
“If you’re wanting to build trust with community, and build trust in your platforms, or you’re a source, making the statement that something’s been done to an image may make people question it less,” he said.
“But it’s very difficult to police that sort of thing.”