I was recently going through some old episodes of Software Engineering Radio when I came across an episode featuring Casey Muratori, in which he discusses some of his thoughts around his February 2023 video, "‘Clean’ Code, Horrible Performance". I was already aware of the video by then, but listening to the episode gave me an itch to see these concepts for myself and experiment with them on my own.
It would be interesting to see whether using an iterator instead of a manual for loop would improve the performance of the base case.
My guess is that it wouldn't, because the compiler should know they are equivalent, but it would be interesting to check anyway.
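For concreteness, a minimal sketch of the two variants being compared might look like this (the `Shape` struct and `area` method here are stand-ins I'm assuming, not the exact code from the video):

```rust
struct Shape {
    width: f32,
    height: f32,
}

impl Shape {
    fn area(&self) -> f32 {
        self.width * self.height
    }
}

// Manual for loop accumulating the total area.
fn total_area_loop(shapes: &[Shape]) -> f32 {
    let mut total = 0.0;
    for shape in shapes {
        total += shape.area();
    }
    total
}

// Iterator version of the same accumulation.
fn total_area_iter(shapes: &[Shape]) -> f32 {
    shapes.iter().map(Shape::area).sum()
}
```

Comparing the optimized assembly for the two (e.g. on Compiler Explorer) would settle whether the compiler really treats them as equivalent.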
I wonder if the compiler checks to see if the calls are pure and are therefore safe to run in parallel. It seems like the kind of thing the Rust compiler should be able to do.
If by parallel you mean across multiple threads in some map-reduce algorithm, the compiler will not do that automatically, since that would be extremely surprising behavior and would, in most cases, make performance worse (it'd be interesting to see just how many shapes you'd need to iterate over before map-reduce starts paying off). If you're referring to vectorization, then the Rust compiler does do that automatically in some cases, and I imagine it depends on how the area is calculated and whether the implementation can be inlined.
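If someone did want to test the threaded map-reduce idea, it has to be opted into explicitly. A sketch using the rayon crate (reusing the assumed `Shape` type from above) could look like:

```rust
use rayon::prelude::*;

// Explicit map-reduce across a thread pool via rayon's parallel iterators.
// Note that the summation order differs from the sequential loop, which can
// change floating-point results slightly, and the thread-pool overhead means
// small inputs will likely be slower than the plain iterator version.
fn total_area_parallel(shapes: &[Shape]) -> f32 {
    shapes.par_iter().map(Shape::area).sum()
}
```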
Removed by mod
I think they meant using `fold` for accumulating, something like this:
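```rust
// A sketch of what I assume was meant, reusing the `Shape` type from above.
fn total_area_fold(shapes: &[Shape]) -> f32 {
    shapes.iter().fold(0.0, |acc, shape| acc + shape.area())
}
```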
Yes. That’s what I meant.
Though I strongly expect the Rust compiler to produce identical assembly for both types of iteration.
Removed by mod
Off-topic, but does that actually work? I would assume OpenAI would just ignore it and you’d have to prove that they did so.
Removed by mod
Maybe I’ll join you. :)
Removed by mod
I’m on Wayland, but I’m sure I can figure something out.
I do most of my lemmy-ing on mobile, so I’ll probably make a bot to auto-edit my posts or something.