While the most significant AI news this week was Google’s introduction of its new multimodal model, Gemini, Apple also quietly made an AI announcement of its own: MLX. But before you get too excited, what Apple has launched is not a large language model (LLM) like Gemini. It is an AI framework designed to run on Apple Silicon chips, one that could bring generative AI apps and features to MacBooks (and maybe, eventually, even iPhones).
MLX is a machine learning framework that lets developers build models that run efficiently on Apple Silicon; it is accompanied by MLX Data, a library for loading and processing training data. Both are available as open source through GitHub and PyPI.
“The design of MLX is inspired by frameworks like PyTorch, Jax, and ArrayFire. A notable difference from these frameworks and MLX is the unified memory model. Arrays in MLX live in shared memory. Operations on MLX arrays can be performed on any of the supported device types without performing data copies. Currently supported device types are the CPU and GPU,” Apple posted on GitHub.
Awni Hannun, a machine learning researcher at Apple, also posted about MLX on X, describing it as “an efficient machine learning framework specifically designed for Apple silicon (i.e. your laptop!)”.