• citytree@lemmy.ml · 20 points · 1 year ago

    Why would I use this ChatGPT thing when I can self-host Llama 2 or Falcon, which is free and open source?

    • philoko@lemmy.ml · 7 points · 1 year ago

      I’m a bit out of the loop with LLMs but it depends on what you’re doing.

      Last I heard, you’re going to want a 65B or 70B model if you want something that runs as well as GPT 3.5, but good luck getting a GPU with enough VRAM to hold it without breaking the bank. You can offload layers to system RAM or even swap, but that comes with pretty steep performance penalties.
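      A rough back-of-envelope sketch of why VRAM is the bottleneck (weights only; this ignores KV cache and runtime overhead, so real requirements are somewhat higher):

      ```python
      def weight_vram_gib(params_billion: float, bits_per_weight: int) -> float:
          """Approximate GiB needed just to hold the model weights."""
          return params_billion * 1e9 * bits_per_weight / 8 / 2**30

      # A 70B model in fp16 needs ~130 GiB just for weights -- far beyond
      # any single consumer GPU (a 4090 tops out at 24 GiB).
      print(round(weight_vram_gib(70, 16), 1))  # ~130.4

      # 4-bit quantization shrinks that to ~33 GiB: still more than one
      # consumer card, hence offloading some layers to system RAM.
      print(round(weight_vram_gib(70, 4), 1))   # ~32.6
      ```
      
      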

      I haven’t heard of a model that’s comparable to GPT 4 but, like I said, I’m pretty out of the loop. You’d probably have the same VRAM and performance issues, only worse, since bigger models are usually better.

      All that being said, you might not need some huge model, depending on what you’re doing. There are some smaller models that fit on consumer GPUs and can perform surprisingly well in certain situations. There are also uncensored variants of models that won’t give you a moral lecture if you ask for something questionable. Then there’s the privacy aspect: I absolutely would not trust OpenAI with any personal information. I believe there’s a way to opt out of them using your data for training on personal accounts, but you’re still trusting them with whatever information you send them.

    • dan@lemm.ee · 4 points · 1 year ago

      They use data, just not the data from the customers paying them for enterprise licenses.

      Honestly, fear of leaking customer data is the only thing that’s kept my work from throwing every single byte of data we have at some LLM service in a lazy attempt to come up with a product they can sell with minimal effort. They’re gonna love this shit.