Do BLAS and LAPACK functions use Apple Silicon features?

I can use BLAS and LAPACK functions via the Accelerate framework to perform vector and matrix arithmetic and linear algebra calculations. But do these functions take advantage of Apple Silicon features?
Probably.
So the BLAS and LAPACK functions that come with Accelerate are optimized for Apple Silicon?
As I said, probably. I guess I have to be more verbose:
The docs say that these things are "optimized for high performance". We could reasonably infer that they use, for example, NEON vector instructions on ARM. But the docs aren't explicit about this.
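To make this concrete, here's the sort of call we're talking about (a minimal sketch; the matrices and sizes are just made up for illustration):

```swift
import Accelerate

// Multiply two 2 x 2 row-major matrices with BLAS via Accelerate: C = A * B.
// Whether this ends up executing NEON vector instructions is exactly the
// detail that the docs don't spell out.
let a: [Double] = [1, 2,
                   3, 4]
let b: [Double] = [5, 6,
                   7, 8]
var c = [Double](repeating: 0, count: 4)

cblas_dgemm(
    CblasRowMajor, CblasNoTrans, CblasNoTrans,
    2, 2, 2,   // M, N, K
    1.0,       // alpha
    a, 2,      // A and its leading dimension
    b, 2,      // B and its leading dimension
    0.0,       // beta
    &c, 2      // C and its leading dimension
)
// c == [19.0, 22.0, 43.0, 50.0]
```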
If you want to be certain, in the absence of source code you would need to reverse-engineer the implementation. That's definitely not something that I would do, however, since it would violate the terms of the Apple developer agreement and would result in the termination of my developer account.
If you want to be certain that you're using NEON code, it would be better to write it yourself or use an open-source library where you can check exactly what it's doing.
One other thing to note is that these Accelerate functions are not inlined, as far as I can tell. If you're dealing with large matrices that's probably not a concern, but if you're doing, say, 2D/3D geometry with small vectors and matrices, I would expect a measurable improvement when the compiler can inline the implementations.
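If small fixed-size vectors and matrices are what you're working with (that's an assumption on my part about your use case), the simd module is one way to get code the compiler can see and inline:

```swift
import simd

// Small fixed-size 2D/3D work with the simd module. These operations are
// visible to the compiler, so it can inline them and emit vector
// instructions directly, with no call overhead.
let rotation = simd_float3x3(rows: [
    SIMD3<Float>(0, -1, 0),
    SIMD3<Float>(1,  0, 0),
    SIMD3<Float>(0,  0, 1)
])
let point = SIMD3<Float>(1, 2, 3)
let rotated = rotation * point      // 3 x 3 matrix times vector
let distance = simd_length(rotated) // vector length
print(rotated, distance)
```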
It looks like you got a response from Steve (who knows more about this stuff than I ever will!) over on the Swift Forums.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"