• EvilBit@lemmy.world · 1 year ago

    As I understand it, one common way AI models are trained is essentially to run them against a detector and keep training until they can reliably defeat it. Even if this were a great detector, all it would really do is teach the next model to beat it.
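
    Roughly what that adversarial loop looks like, as a minimal PyTorch sketch (the generator and the detector here are toy stand-ins I made up, not any real model or detector):

    ```python
    # Toy setup: a frozen "AI detector" and a generator trained to evade it.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))
    detector = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())  # 1 = "AI", 0 = "human"
    for p in detector.parameters():
        p.requires_grad_(False)  # detector is fixed; only the generator learns

    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

    for step in range(1000):
        fake = generator(torch.randn(32, 16))  # candidate outputs from noise
        loss = detector(fake).mean()           # average "this looks AI" score
        opt.zero_grad()
        loss.backward()  # gradients flow back *through* the detector
        opt.step()       # nudge the generator toward "human-looking" outputs
    ```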

    • magic_lobster_party@kbin.social · 1 year ago

      That’s how GANs are trained, but I haven’t seen anything suggesting GPT-4 (or DALL-E) is trained this way. Current generative AI research seems to be moving away from GANs.
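
      For reference, the GAN setup being described, as a rough PyTorch sketch on toy data (everything here is illustrative, not production training code): the discriminator learns to separate real from generated samples while the generator learns to fool it, so each network is the other’s moving target.

      ```python
      # Minimal GAN training loop on a toy 2-D "real" distribution.
      import torch
      import torch.nn as nn

      G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
      D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator (logits)
      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      bce = nn.BCEWithLogitsLoss()

      for step in range(2000):
          real = torch.randn(64, 2) + 3.0   # "real" samples
          fake = G(torch.randn(64, 8))      # generated samples

          # Discriminator step: label real as 1, fake as 0.
          d_loss = (bce(D(real), torch.ones(64, 1))
                    + bce(D(fake.detach()), torch.zeros(64, 1)))
          opt_d.zero_grad()
          d_loss.backward()
          opt_d.step()

          # Generator step: make the discriminator call fakes real.
          g_loss = bce(D(fake), torch.ones(64, 1))
          opt_g.zero_grad()
          g_loss.backward()
          opt_g.step()
      ```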

      • EvilBit@lemmy.world · 1 year ago

        I know it’s intrinsic to GANs, but I think I’ve read that this is a flaw in the entire “detector” approach to LLMs as well. I can’t remember the source, unfortunately.

      • KingRandomGuy@lemmy.world · 1 year ago

        Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you only have inference access to some detector, but not its weights and architecture, you can’t backpropagate through it, and therefore can’t compute the gradients needed to update your generator’s weights.
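
        Concretely, here’s a small PyTorch sketch of that failure mode (the detector API is hypothetical): when the detector is only reachable as an opaque inference call, its output carries no autograd graph, so there is nothing to backpropagate.

        ```python
        # Black-box detector: inference only, no gradients cross the boundary.
        import torch
        import torch.nn as nn

        generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64))

        def blackbox_detector(x: torch.Tensor) -> torch.Tensor:
            """Stand-in for a remote detector API returning plain scores."""
            with torch.no_grad():  # no computation graph is ever recorded
                return torch.sigmoid(x.sum(dim=1, keepdim=True))

        fake = generator(torch.randn(32, 16))
        score = blackbox_detector(fake)

        print(score.requires_grad)  # False: the graph is cut at the API boundary
        # score.mean().backward()   # would raise a RuntimeError (no grad_fn),
                                    # so the generator can never receive gradients
        ```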

        That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.