not to be left out —
Apple slices its AI image synthesis times in half with new Stable Diffusion fix
Creating AI-generated images on Macs, iPhones, and iPads just got a lot faster.
Benj Edwards – Dec 2, 2022 10:27 pm UTC
[Image: Two examples of Stable Diffusion-generated artwork provided by Apple. Credit: Apple]
On Wednesday, Apple released optimizations that allow the Stable Diffusion AI image generator to run on Apple Silicon using Core ML, Apple’s proprietary framework for machine learning models. The optimizations will allow app developers to use Apple Neural Engine hardware to run Stable Diffusion about twice as fast as previous Mac-based methods.
Stable Diffusion (SD), which launched in August, is an open source AI image synthesis model that generates novel images using text input. For example, typing “astronaut on a dragon” into SD will typically create an image of exactly that.
By releasing the new SD optimizations, available as conversion scripts on GitHub, Apple wants to unlock the full potential of image synthesis on its devices, as it notes on its Apple Research announcement page: “With the growing number of applications of Stable Diffusion, ensuring that developers can leverage this technology effectively is important for creating apps that creatives everywhere will be able to use.”
Apple also mentions privacy and avoiding cloud computing costs as advantages of running an AI generation model locally on a Mac or other Apple device.
“The privacy of the end user is protected because any data the user provided as input to the model stays on the user’s device,” says Apple. “Second, after initial download, users don’t require an internet connection to use the model. Finally, locally deploying this model enables developers to reduce or eliminate their server-related costs.”
Currently, Stable Diffusion generates images fastest on high-end GPUs from Nvidia when run locally on a Windows or Linux PC. For example, generating a 512×512 image at 50 steps on an RTX 3060 takes about 8.7 seconds on our machine.
In comparison, the conventional method of running Stable Diffusion on an Apple Silicon Mac is far slower, taking about 69.8 seconds to generate a 512×512 image at 50 steps using Diffusion Bee in our tests on an M1 Mac Mini.
According to Apple’s benchmarks on GitHub, Apple’s new Core ML SD optimizations can generate a 512×512, 50-step image on an M1 chip in 35 seconds. An M2 does the task in 23 seconds, and Apple’s most powerful Silicon chip, the M1 Ultra, can achieve the same result in only nine seconds. That’s a dramatic improvement, cutting generation time almost in half in the case of the M1.
Apple’s GitHub release is a Python package that converts Stable Diffusion models from PyTorch to Core ML and includes a Swift package for model deployment. The optimizations work for Stable Diffusion 1.4, 1.5, and the newly released 2.0.
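For developers curious what that conversion step looks like in practice, here is a rough sketch based on the repository’s documented workflow; the module and flag names below reflect the project’s README and may change between releases, so treat them as illustrative rather than definitive:

    # From a clone of apple/ml-stable-diffusion: install the Python package,
    # then convert the model's components (UNet, text encoder, VAE decoder,
    # safety checker) from PyTorch checkpoints into Core ML .mlpackage files.
    pip install -e .
    python -m python_coreml_stable_diffusion.torch2coreml \
        --convert-unet --convert-text-encoder --convert-vae-decoder \
        --convert-safety-checker -o ./coreml-models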
At the moment, the experience of setting up Stable Diffusion with Core ML locally on a Mac is aimed at developers and requires some basic command-line skills, but Hugging Face published an in-depth guide to setting up Apple’s Core ML optimizations for those who want to experiment.
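Once the models are converted, the same package can generate images from the command line. The sketch below follows the documented usage with a placeholder prompt and output paths; exact arguments may differ by version:

    # Generate a 512x512 image from a text prompt using the converted
    # Core ML models; --compute-unit ALL lets Core ML schedule work across
    # the CPU, GPU, and Neural Engine.
    python -m python_coreml_stable_diffusion.pipeline \
        --prompt "astronaut on a dragon" \
        -i ./coreml-models -o ./output \
        --compute-unit ALL --seed 93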
For those less technically inclined, the previously mentioned Diffusion Bee app makes it easy to run Stable Diffusion on Apple Silicon, but it does not integrate Apple’s new optimizations yet. You can also run Stable Diffusion on an iPhone or iPad using the Draw Things app.