• brucethemoose@lemmy.world · 5 months ago

    That’s the great thing about open models. Censorship? Once identified, all it takes is one person and a bit of cash to get rid of it, though it seems Perplexity did a particularly good job (unlike some “abliterated” models that are pretty dumbed down).
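    For context, “abliteration” usually means finding a “refusal direction” in the model’s activations and projecting it out of the weights. A rough sketch of the idea, assuming a HuggingFace causal LM (the model id and prompt lists are placeholders, and real implementations edit far more than the output head):

    ```python
    # Rough sketch of refusal-direction ablation ("abliteration").
    # The model id and prompts are placeholders; real implementations
    # ablate the direction from many weight matrices across layers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-open-model"  # placeholder
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    refused_prompts = ["<prompt the model refuses>"]    # placeholder examples
    answered_prompts = ["<prompt the model answers>"]   # placeholder examples

    def mean_last_token_activation(prompts, layer=-1):
        # Average the hidden state of the final token over a set of prompts.
        acts = []
        for p in prompts:
            ids = tok(p, return_tensors="pt")
            with torch.no_grad():
                out = model(**ids, output_hidden_states=True)
            acts.append(out.hidden_states[layer][0, -1])
        return torch.stack(acts).mean(dim=0)

    # The "refusal direction" is the difference between the two mean activations.
    direction = mean_last_token_activation(refused_prompts) - mean_last_token_activation(answered_prompts)
    direction = direction / direction.norm()

    # Project that direction out of the output weights (heavily simplified).
    with torch.no_grad():
        W = model.lm_head.weight               # shape: [vocab, hidden]
        W -= torch.outer(W @ direction, direction)
    ```

    Doing it this bluntly is presumably part of why some abliterated models come out dumbed down; the better uncensors fine-tune afterwards to recover quality.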

    • brucethemoose@lemmy.world · 5 months ago

      It’s honestly not that big a deal: knowing the details of how it was trained (beyond the config) wouldn’t really help you modify it, and it’s still highly modifiable. It’s not like anyone can afford to replicate the training run anyway.

      It would be nice to publish the hyperparameters for research purposes, but… shrug.

      I think a subset of the exact training data/hyperparameters would help with quantization-aware training, maybe, but that’s all I got.

  • biofaust@lemmy.world · 5 months ago

    I’ve been running an uncensored version on my PC for weeks; there are multiple ones on HuggingFace.
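    For anyone who wants to try the same thing, here’s a minimal sketch of loading one from HuggingFace with the transformers library (the model id is a placeholder; pick whichever upload you trust):

    ```python
    # Minimal local-inference sketch; "someuser/some-uncensored-model" is a
    # placeholder, not a real repository. device_map="auto" needs accelerate.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "someuser/some-uncensored-model"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Hello! What can you tell me about yourself?"}]
    ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
    out = model.generate(ids, max_new_tokens=200)
    print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
    ```

    In practice most people run quantized GGUF versions through llama.cpp or Ollama instead, since full-precision weights rarely fit on a typical PC.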

  • ZILtoid1991@lemmy.world · 5 months ago

    IDK, this seems like wankery to me. Just google it if you want to know about it; the AI isn’t an “all-knowing being”, nor “the arbiter of truth”.

    I have a feeling that a new logical fallacy will soon emerge (if it isn’t already widespread in certain corners of the internet): “X is true because the LLM said so”.

    • fruitycoder@sh.itjust.works · 5 months ago

      It’s really an extension of “Would someone really do that? Just lie on the Internet?” But now it’s “Would an AI, which is built to create content like what people post on the Internet, really just lie?”