• TheAgeOfSuperboredom@lemmy.ca · 1 year ago

    I disagree. Async Rust is fine, but it does have some baggage, not least of which is Pin/Unpin, which I still don’t fully understand. But that aside, I prefer writing async Rust to any other language, because the rest of Rust comes along for the ride!
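
    For anyone curious, here’s a minimal sketch of roughly where Pin shows up in practice: a future has to be pinned before it can be polled, because the compiled state machine may point into itself across .await points. (The futures crate is assumed only for its no-op waker; a real executor does all of this for you.)

    ```rust
    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    use futures::task::noop_waker;

    fn main() {
        // An async block compiles to a state machine that implements Future.
        let fut = async { 1 + 1 };

        // It must be pinned before poll() can be called; Box::pin is the
        // simplest way to satisfy that requirement.
        let mut fut: Pin<Box<dyn Future<Output = i32>>> = Box::pin(fut);

        // A do-nothing waker is enough to poll by hand; a real executor
        // (tokio, embassy, ...) supplies a waker that reschedules the task.
        let waker = noop_waker();
        let mut cx = Context::from_waker(&waker);

        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => println!("ready: {v}"),
            Poll::Pending => println!("still pending"),
        }
    }
    ```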

    It’s actually amazing that I can use the same mental model for async code on a small MCU or a large server.
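
    As a rough illustration of what I mean (the function names are made up, and the futures crate is assumed just to provide a simple executor), the async code itself has no dependency on any particular runtime; on a server you’d hand it to something like tokio, on an MCU to an embedded executor such as embassy, and the function bodies don’t change:

    ```rust
    use futures::executor::block_on;

    // On an MCU this might await a UART read; on a server, a socket read.
    // The calling code looks the same either way.
    async fn fetch(input: u32) -> u32 {
        input + 1
    }

    async fn read_and_process(input: u32) -> u32 {
        let value = fetch(input).await;
        value * 2
    }

    fn main() {
        // On a server this might be #[tokio::main]; on embedded, an executor
        // like embassy's. Here a simple single-threaded block_on is enough.
        let result = block_on(read_and_process(20));
        println!("{result}"); // prints 42
    }
    ```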

    Is Arc really the worst GC? Doesn’t Swift also use reference counting? I did a few minutes of searching but couldn’t find any benchmarks comparing Arc with Swift’s RC or some other GC.

    I feel that async Rust makes a good set of tradeoffs that lets it scale to a lot more than just writing web servers. Rust seems to be pretty good for web servers too, though.

    • Esp@lemmy.blahaj.zone · 1 year ago

      ARC can leak memory if you don’t use weak references where appropriate. You trade GC pauses for having to maintain reference counts. Generally a GC will (raw language performance aside) have higher throughput, because it isn’t spending time updating counts, but it will have more sporadic latency: when the GC runs, your program is basically paused until it’s done.
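
      As a concrete sketch of that first point (the Node type and field names are just made up for illustration), two reference-counted values that point at each other are never freed unless one direction is a weak reference:

      ```rust
      use std::cell::RefCell;
      use std::rc::{Rc, Weak};

      // Rc keeps the example single-threaded; Arc behaves the same way.
      struct Node {
          name: &'static str,
          parent: RefCell<Weak<Node>>,      // weak: a child does not keep its parent alive
          children: RefCell<Vec<Rc<Node>>>, // strong: a parent keeps its children alive
      }

      impl Drop for Node {
          fn drop(&mut self) {
              println!("dropping {}", self.name);
          }
      }

      fn main() {
          let parent = Rc::new(Node {
              name: "parent",
              parent: RefCell::new(Weak::new()),
              children: RefCell::new(Vec::new()),
          });
          let child = Rc::new(Node {
              name: "child",
              parent: RefCell::new(Rc::downgrade(&parent)),
              children: RefCell::new(Vec::new()),
          });
          parent.children.borrow_mut().push(Rc::clone(&child));

          // Because the child only holds a Weak to its parent, both counts reach
          // zero at the end of main and both "dropping ..." lines print. If the
          // child held a strong Rc instead, the cycle would keep both counts
          // above zero and neither Drop would ever run -- a leak.
      }
      ```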

      Comparing them as if one is strictly better than the other feels like painfully missing the point. They’re completely different memory models. And if you’re only making light use of ARC, then switching to a GC is a big jump.

      • TheAgeOfSuperboredom@lemmy.ca · 1 year ago

        I guess that’s my point. The article criticizes reference counting as if it’s strictly worse, but it’s not so simple. Even with a GC, funny things can happen, so it’s worth understanding your language’s memory model.