Apple Releases Early Stage Version of Image Editing AI Model



Apple has released an early-stage version of its image editing AI model, known as Multimodal Large Language Model-Guided Image Editing (MGIE). As noted by VentureBeat, it is currently accessible through GitHub, and a demo hosted on Hugging Face Spaces is also available if you're interested in trying the tool out. The model leverages MLLMs (multimodal large language models) to interpret textual commands for manipulating images.

According to its project paper, MGIE excels at transforming simple or ambiguous text prompts into precise instructions, facilitating clearer communication with the photo editor. For example, a request to "make a pepperoni pizza more healthy" could prompt the tool to add vegetable toppings. Beyond such alterations, MGIE can also handle fundamental image editing tasks such as cropping, resizing, and rotating, as well as enhancing brightness, contrast, and color balance, all through text commands.
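
For readers curious what driving such a text-instruction editor looks like in practice, here is a minimal sketch that calls a Gradio demo like the one hosted on Hugging Face Spaces using the gradio_client library. The Space identifier, endpoint name, and argument order below are assumptions for illustration only; check the actual MGIE demo page for its real API.

```python
# Minimal sketch: send an image plus a free-form editing instruction to a
# Gradio demo on Hugging Face Spaces. Space name, endpoint, and argument
# order are ASSUMPTIONS, not the confirmed MGIE API.
from gradio_client import Client, handle_file

# Hypothetical Space identifier for the MGIE demo.
client = Client("apple/mgie")

result = client.predict(
    handle_file("pizza.jpg"),                   # input image to edit
    "make this pepperoni pizza more healthy",   # ambiguous prompt MGIE would expand
    api_name="/predict",                        # assumed default Gradio endpoint
)

# The demo would typically return a path or URL to the edited image.
print(result)
```

The point of the sketch is the interaction model the article describes: the user supplies only a vague instruction, and the MLLM-guided editor is responsible for turning it into a concrete edit.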
