Apple’s image editing AI model dubbed MGIE (short for MLLM-Guided Image Editing) is making waves. While it is still in early development, the model offers a glimpse into the future of intuitive photo editing.
The skyrocketing popularity and adoption of generative AI technology have paved the way for artistic powerhouses like Microsoft’s Image Creator (formerly Bing Image Creator) and Midjourney. However, this magic has a frustrating limitation: there’s no quick way to edit an already-generated image. Apple is trying to change that.
Google’s experimental image generation tool ImageFX lets you quickly modify your AI-generated concepts by allowing you to alter your prompt using expressive chips. Similarly, Apple researchers have released a new AI image tool that lets you edit photos with just words.
Apple has released MGIE 🍎
An open source model that uses natural language to edit images!
MGIE can crop, resize, rotate, flip, add filters, add and remove objects, change the contrast, brightness, & color of images via text commands pic.twitter.com/ufhUlheh5B
— Subhan Qureshi (@LearnWithSubhan) February 9, 2024
Instead of clicking through menus, you simply tell the MGIE model what you want: resize, flip, crop or even add filters, all via text prompts. It is also worth noting that the MGIE model has earned recognition at a leading AI research conference, the International Conference on Learning Representations (ICLR) 2024.
In the research paper, which was accepted at ICLR 2024, the researchers used a photo of a pepperoni pizza and the prompt “make it more healthy” to implement changes to the photo. The model added vegetables to the pepperoni pizza.
How does MGIE work?
MGIE delivers improved instruction-based image editing with the help of powerful AI models called MLLMs, which are capable of processing both text and images. Despite their remarkable cross-modal understanding and visual-aware response generation capabilities, MLLMs haven’t been widely applied to image editing tasks.
Notably, MGIE uses MLLMs in the image editing process in two ways. First, Apple’s AI image editing model uses an MLLM to derive expressive instructions from the user’s input. These concise, explicit instructions act as guides for the editing process.
[Generative AI from Apple]
MGIE is an image editing AI model announced by Apple. It is open source and lets you edit images using natural language.
It can crop, resize, rotate and flip images, add filters, add and remove objects, and change image contrast, brightness and colour!
Continued >> pic.twitter.com/wXZ7FPRWrx
— 田中義弘 | taziku CEO / AI × Creative (@taziku_co) February 8, 2024
Second, the model uses MLLMs to understand what you want to change and then imagine how the photo would look afterwards. This imagined result guides the tool to make the changes you want, pixel by pixel, to match your photo.
Designed to get better with every edit, MGIE masters understanding, imagining, and editing all at the same time.
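Conceptually, the two-stage flow described above can be sketched as follows. This is a purely illustrative mock-up, not Apple’s actual code: the function names and the expansion table are invented for demonstration, and the real MGIE pipeline uses an MLLM and a diffusion model rather than simple lookups.

```python
# Illustrative sketch of MGIE's two-stage editing flow.
# All names here are hypothetical; the real system uses an MLLM
# and a diffusion model, not lookup tables.

def derive_expressive_instruction(user_prompt: str) -> str:
    """Stage 1: an MLLM rewrites a terse prompt into an explicit edit plan."""
    # Stand-in for the MLLM: a toy mapping, for demonstration only.
    expansions = {
        "make it more healthy": "add fresh vegetable toppings to the pizza",
        "brighten it": "increase overall image brightness moderately",
    }
    return expansions.get(user_prompt, user_prompt)

def edit_image(image: list, instruction: str) -> dict:
    """Stage 2: an editing model applies the instruction pixel by pixel.
    Here we just record what would happen."""
    return {"source": image, "applied_instruction": instruction}

# Example mirroring the pepperoni-pizza prompt from the paper.
pizza = ["pepperoni pizza pixels"]  # placeholder for real pixel data
plan = derive_expressive_instruction("make it more healthy")
result = edit_image(pizza, plan)
print(result["applied_instruction"])  # the expanded, explicit edit plan
```

The key idea the sketch captures is the separation of concerns: the language model turns a vague wish into a concrete, actionable instruction before any pixels are touched.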
Using MGIE
MGIE is currently available as an open-source project on GitHub, where you can find the code, data and pre-trained models. Moreover, there’s a demo notebook that serves as a guide for using MGIE for various editing tasks.
Alternatively, you can try out MGIE online through a web demo hosted on Hugging Face Spaces. In short, MGIE is an accessible and customisable image editing tool from the Cupertino-based tech giant.
After replacing Samsung as the world’s most popular phone maker last month, Apple is now showing interest in joining the AI arms race, with CEO Tim Cook hinting at generative AI features for upcoming iPhone and MacBook models.
Copyright for syndicated content belongs to the linked Source : IBTimes – https://www.ibtimes.co.uk/apple-unveils-mgie-ai-model-that-edits-photos-based-your-text-commands-1723413