Convert Stable Diffusion Model Speed on M1, M2 and Pro M2

I am benchmarking three machines: a MacBook Air M1, a MacBook Air M2, and a MacBook Pro M2, using ml-stable-diffusion to convert the Stable Diffusion model DreamShaper XL 1.0 from PyTorch to Core ML. I found the MacBook Air M1 is the fastest. The benchmark table is below. During the "Running MIL default pipeline" stage, the MacBook Pro M2 becomes slower than the M1. What are the differences between the M2 Pro, M2, and M1 chips?
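For context, a conversion like the one described is typically run through the `torch2coreml` entry point in the ml-stable-diffusion repo. A hedged sketch follows; the Hugging Face model ID and output directory are placeholders, and flag names should be checked against the README of your checkout:

```shell
# Sketch of a PyTorch -> Core ML conversion with Apple's ml-stable-diffusion.
# MODEL_ID and the output directory are placeholders, not the exact values used above.
python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version "<MODEL_ID>" \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    -o ./coreml-output
```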


MacBook Pro (M2, 2022)

Posted on Oct 23, 2023 9:01 PM

Question marked as Top-ranking reply

Posted on Oct 25, 2023 11:02 PM

I found it is a Python version effect. With Python 3.9.6 on the M2 Pro vs. Python 3.11.5 on the M1, the M1 runs the conversion faster than the M2 Pro.

With the same Python version on all three machines, the speed ordering becomes M1 < M2 < M2 Pro.
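This is consistent with CPython 3.11 shipping substantial interpreter speedups over 3.9, so it is worth confirming which interpreter actually runs the conversion on each machine before comparing timings. A minimal check:

```python
# Print the interpreter version and machine architecture before benchmarking,
# so timings from different Macs are compared on the same Python.
import platform
import sys

print(f"Python {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}")
print(f"Machine: {platform.machine()}")  # 'arm64' on Apple silicon
```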

4 replies

Oct 24, 2023 10:31 AM in response to TimYao

Same memory configurations across all three? And beyond the potential for memory constraints, I don’t know if that code is using the CPU, GPU, or NPU. If it has offloaded from the CPU, there can be different numbers of units available in different Macs.
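Which units Core ML may use can be pinned when loading a converted model with coremltools. A sketch, assuming the optional coremltools package is installed; the load call is shown only in a comment because the model path would be machine-specific:

```python
# List the Core ML compute-unit options if coremltools is available.
# Loading a model pinned to specific units would look like (path hypothetical):
#   model = ct.models.MLModel("DreamShaper.mlpackage",
#                             compute_units=ct.ComputeUnit.CPU_AND_NE)
try:
    import coremltools as ct
    print([unit.name for unit in ct.ComputeUnit])  # e.g. CPU_ONLY, CPU_AND_GPU, ...
except ImportError:
    print("coremltools not installed; `pip install coremltools` first")
```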


The FAQ lists memory-related command tweaks and related issues when running Python on Apple silicon, particularly around 8 GB systems. Have you reviewed those details?


The benchmarks posted do indicate performance improves on more recent Apple silicon, including some covering conversion of models to Core ML.


If you are encountering issues not addressed in the FAQ and related docs, I’d suggest checking with the app developers. (Yeah, I know it’s Apple code you’re referencing, but the Apple ML developers tend not to reply here.)


This thread has been closed by the system or the community team.
