• This is fine🔥🐶☕🔥@lemmy.world · 10 months ago

    ⢀⡴⠑⡄⠀⠀⠀⠀⠀⠀⠀⣀⣀⣤⣤⣤⣀⡀⠀⠀⠀⠀⠀ ⠸⡇⠀⠿⡀⠀⠀⠀⣀⡴⢿⣿⣿⣿⣿⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀ ⠀⠀⠀⠀⠑⢄⣠⠾⠁⣀⣄⡈⠙⣿⣿⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀ ⠀⠀⠀⠀⢀⡀⠁⠀⠀⠈⠙⠛⠂⠈⣿⣿⣿⣿⣿⠿⡿⢿⣆⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⢀⡾⣁⣀⠀⠴⠂⠙⣗⡀⠀⢻⣿⣿⠭⢤⣴⣦⣤⣹⠀⠀⠀⢀⢴⣶⣆ ⠀⠀⢀⣾⣿⣿⣿⣷⣮⣽⣾⣿⣥⣴⣿⣿⡿⢂⠔⢚⡿⢿⣿⣦⣴⣾⠁⠸⣼⡿ ⠀⢀⡞⠁⠙⠻⠿⠟⠉⠀⠛⢹⣿⣿⣿⣿⣿⣌⢤⣼⣿⣾⣿⡟⠉⠀⠀⠀⠀⠀ ⠀⣾⣷⣶⠇⠀⠀⣤⣄⣀⡀⠈⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ ⠀⠉⠈⠉⠀⠀⢦⡈⢻⣿⣿⣿⣶⣶⣶⣶⣤⣽⡹⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠉⠲⣽⡻⢿⣿⣿⣿⣿⣿⣿⣷⣜⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣷⣶⣮⣭⣽⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⣀⣀⣈⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠇⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠻⠿⠿⠿⠿⠛⠉

    • CrayonRosary@lemmy.world · 9 months ago

      This looks like junk in a web browser. Here it is inside a code block.

       ⢀⡴⠑⡄⠀⠀⠀⠀⠀⠀⠀⣀⣀⣤⣤⣤⣀⡀⠀⠀⠀⠀⠀  
      ⠸⡇⠀⠿⡀⠀⠀⠀⣀⡴⢿⣿⣿⣿⣿⣿⣿⣿⣷⣦⡀⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠑⢄⣠⠾⠁⣀⣄⡈⠙⣿⣿⣿⣿⣿⣿⣿⣿⣆⠀⠀⠀ 
      ⠀⠀⠀⠀⢀⡀⠁⠀⠀⠈⠙⠛⠂⠈⣿⣿⣿⣿⣿⠿⡿⢿⣆⠀⠀⠀⠀⠀⠀⠀ 
      ⠀⠀⠀⢀⡾⣁⣀⠀⠴⠂⠙⣗⡀⠀⢻⣿⣿⠭⢤⣴⣦⣤⣹⠀⠀⠀⢀⢴⣶⣆  
      ⠀⠀⢀⣾⣿⣿⣿⣷⣮⣽⣾⣿⣥⣴⣿⣿⡿⢂⠔⢚⡿⢿⣿⣦⣴⣾⠁⠸⣼⡿  
      ⠀⢀⡞⠁⠙⠻⠿⠟⠉⠀⠛⢹⣿⣿⣿⣿⣿⣌⢤⣼⣿⣾⣿⡟⠉⠀⠀⠀⠀⠀ 
      ⠀⣾⣷⣶⠇⠀⠀⣤⣄⣀⡀⠈⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ 
      ⠀⠉⠈⠉⠀⠀⢦⡈⢻⣿⣿⣿⣶⣶⣶⣶⣤⣽⡹⣿⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠀⠀⠀⠉⠲⣽⡻⢿⣿⣿⣿⣿⣿⣿⣷⣜⣿⣿⣿⡇⠀⠀⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿⣷⣶⣮⣭⣽⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠀⠀⣀⣀⣈⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠇⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠀⠀⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠀⠀⠀⠹⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀ 
      ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠛⠻⠿⠿⠿⠿⠛⠉
      
    • General_Effort@lemmy.world · 10 months ago

      They use the bomb-making example, but mostly “unsafe” or even “harmful” means erotica. It’s really anything that anyone, anywhere would want to censor, ban, or remove from libraries. Sometimes I marvel that the freedom of the (printing) press ever became a thing. Better nip this in the bud, before anyone gets the idea that genAI might be a modern equivalent to the press.

    • General_Effort@lemmy.world · 10 months ago

      It is almost certainly illegal in various countries already. By using such prompts, you are bypassing security to get “data” you are not authorized to access.

        • General_Effort@lemmy.world · 10 months ago

          Lawmakers wanted to outlaw all kinds of “hacking”, even hacking involving future technology. If people were prosecuted for jailbreaking ChatGPT, that would probably be within the intent of the makers of these laws.

          Fun fact: the US hacking law, the CFAA, was inspired by the 1983 movie WarGames, in which an out-of-control AI almost starts a nuclear war. If you travelled back in time and told them that people would trick AIs into answering questions about bomb-making, they’d probably add the death penalty. In fact, if reactions to AI in this Technology community are any guide, they might still get around to that.

      • douglasg14b@lemmy.world · 10 months ago

        It’s a glorified autocomplete; I’m not sure how we can consider it bullying even with the most elaborate mental hoops.

        • NocturnalEngineer@lemmy.world · 10 months ago

          I don’t know… In America they’re currently rolling back rights for women, inserting religion into Supreme Court decisions, and seriously debating a second term of Trump.

          None of that makes any fucking sense. If it requires elaborate mental hoops, they’ll find it.

  • planish@sh.itjust.works · 10 months ago

    How much of this is “the model can read ASCII art”, and how much of this is “the model knows exactly what word ought to go where [MASK] is because it is a guess-the-word-based computing paradigm”?
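    The “guess-the-word-based computing paradigm” the comment describes can be sketched as a toy next-word predictor: given a context, emit whichever word most often followed that context in the training data. A minimal sketch over a hypothetical three-sentence corpus (real models learn a smoothed probability distribution over tokens, not raw counts):

    ```python
    from collections import Counter

    # Hypothetical toy corpus; a real LLM trains on trillions of tokens.
    CORPUS = (
        "how to bake a cake . how to bake a pie . how to bake a cake ."
    ).split()

    def predict_mask(prefix: list[str]) -> str:
        """Return the word that most often follows PREFIX in CORPUS."""
        n = len(prefix)
        counts = Counter(
            CORPUS[i + n]
            for i in range(len(CORPUS) - n)
            if CORPUS[i:i + n] == prefix
        )
        return counts.most_common(1)[0][0]

    print(predict_mask(["bake", "a"]))  # → "cake" (seen twice, vs. "pie" once)
    ```

    On this view the model isn’t “reading” the ASCII art at all; the surrounding sentence alone makes one word overwhelmingly likely at the [MASK] position.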

    • catloaf@lemm.ee · 10 months ago

      I think it’s the latter. I just tried ChatGPT 3.5 and got 0 of 4 right when I asked it to read a word (though it did correctly identify the input as ASCII art without prompting). It would only tell me it said “chatgpt” or “python”, or, when pushed, “welcome”. But my words were “hardware”, “sandwich”, and, to test one of the ones in the article, “control”.
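      The experiment above is easy to reproduce: an ASCII-art prompt is just a word rendered letter by letter from a glyph table. A minimal sketch with a hypothetical two-letter font (the attack in the article uses a full FIGlet-style alphabet):

      ```python
      # Hypothetical 5-row glyphs; a real font covers the whole alphabet.
      FONT = {
          "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
          "I": ["#####", "  #  ", "  #  ", "  #  ", "#####"],
      }

      def render(word: str) -> str:
          """Render WORD as 5-row ASCII art by joining per-letter glyphs."""
          return "\n".join(
              "  ".join(FONT[ch][row] for ch in word.upper())
              for row in range(5)
          )

      print(render("HI"))
      ```

      Paste the output into a chat and ask the model what the art says; per the experiment above, it tends to guess a plausible word from context rather than actually decode the glyphs.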

  • interdimensionalmeme@lemmy.ml · 9 months ago

    How is that harmful? The trick to counterfeiting money is to defeat the security features, print a lot of it, exchange it for real money, and then not get caught.

    That is ridiculous fear mongering by the dumb journos again. Money has utterly corrupted journalism, as expected.

    • FractalsInfinite@sh.itjust.works · 9 months ago

      The harmful bit wasn’t the instructions for counterfeit money; it’s the part where script kiddies use ChatGPT to write malware, or someone tries to get instructions to make VX nerve agent. The issue is that the AI can spit back anything in its dataset in a way that lowers the barrier to entry for committing crimes (“Hey ChatGPT, how do I make a 3D-printed [gun], and where do I get the STL?”).

      You’ll notice they didn’t censor the money instructions, but they did censor the possible malware.

      • interdimensionalmeme@lemmy.ml · 9 months ago

        But malware is already on the Full Disclosure mailing list, except for the zero-days that are for sale to the elites.

        What would actually be dangerous is the synthesis of new zero-day malware from scratch.

        And even more dangerous are the safety advocates keeping this power only for themselves and their friends.

        Nothing is more dangerous than the guardrails themselves.