Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and then some time on kbin.social.

  • 0 Posts
  • 739 Comments
Joined 9 months ago
Cake day: March 3rd, 2024



  • The “how will we know if it’s real” question has the same answer as it always has. Check if the source is reputable and find multiple reputable sources to see if they agree.

    “Is there a photo of the thing” has never been a particularly great way of judging whether something is accurately described in the news. This is just people finding out something they should have already known.

    If the concern is over the verifiability of the photos themselves, there are technical solutions that can be used for that problem.
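One of the simplest such technical solutions is publishing a cryptographic digest of the image alongside it, so anyone can check that the copy they're looking at hasn't been altered. A minimal sketch (assuming the publisher announces a SHA-256 digest; real provenance systems like C2PA go much further, with signed edit histories):

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of raw image bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published_digest(data: bytes, published_hex: str) -> bool:
    """True when the bytes hash to the digest the publisher announced."""
    # compare_digest avoids timing side-channels when comparing digests
    return hmac.compare_digest(sha256_hex(data), published_hex)
```

Any single flipped bit in the image produces a completely different digest, so a match is strong evidence you have the same bytes the source published; it says nothing about whether the original photo was genuine, which is where signed-provenance schemes come in.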



  • FaceDeer@fedia.io to Comic Strips@lemmy.world · The algorithm · 7 days ago

    As recent advances in AI have shown, humans are really quite predictable when you throw enough data and compute at the problem. At some point the algorithm will be sophisticated enough that it’ll be able to get to know you better than you know yourself, and will be able to provide you with things you had no idea were what you really wanted.

    Interesting times.




  • I’ve got a spray bottle filled with windshield wiper fluid that I sometimes use to “pre-treat” an icy windshield before I get to scraping. It’s often able to loosen the ice’s grip on the glass so the scraper can just lift it off. Simpler and more controllable than relying on the built-in windshield sprayers.

    A one-handed garden pick is a nice tool to have handy if you find your car’s wheels stuck in some hard-packed snow or ice. Don’t spin your wheels fruitlessly; the friction just makes the ice slicker and harder. Use the garden pick to dig the wheels out instead, creating a rough surface to get some initial traction on. There are also traction plates or mats that you can stick in there to help get moving, though you need to be able to move the car far enough to get them caught under the wheels before they can work.

    Make sure your car battery is in good condition. Cold weather reduces its power output, so if your car is ever going to fail to start, it’ll be in the dead of winter. For peace of mind I bought one of those battery booster packs you can use to jump-start a car, and I really like it: it has a built-in air pump, USB charger, and light as well, and I’ve used it for all of those things now and then. It wasn’t very expensive.

    Stash a warm hat and a pair of warm mittens in the car somewhere. If you end up stranded on a roadside, you won’t have had any warning, so you might not have adequate clothing with you. A flashlight, too; in northern latitudes there’s a lot of darkness during winter time.







  • My point is that if we turn up our gibberish dial now, then at least our LLMs will be learning the wrong thing and we have some control.

    We’d be covering ourselves in poop to prevent people from sitting next to us on the train. Sure, people will avoid sitting next to us, but in the meantime we’ll be covered in poop.

    And then other people will learn the trick, cover themselves in poop too, and now everyone’s poopy and the trick stops working.

    There is still a lot of understanding that we do automatically that an LLM will never do.

    Are you willing to bet the convenience of comprehensible online discourse on that? “Automatically understanding stuff” is basically the one job of LLMs.

    LLMs model language, and coming up with some kind of “gibberish” filter is simply inventing a new language. If there’s semantic meaning in it the LLMs will figure it out just like any other language, and if there isn’t semantic meaning then we’ve lost the ability to communicate entirely. I see no upside.



  • I’m not talking about a summarizer, I’m talking about a classifier. It just needs to identify which parts of the page are advertising and which are not.

    The point of such a tool is that it would read the web page in exactly the same way that a human would, so using trickery like pre-rendered images of text or funky Unicode wouldn’t really change anything. If a human can read it then so can the AI.
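The shape of such a classifier can be sketched with a toy stand-in. A real version would run a trained model over the rendered text of each page segment; the keyword heuristic below is purely illustrative, and the signal phrases are made up for the example:

```python
# Toy sketch of a page-segment classifier. A real tool would use a trained
# ML model over the visible rendered text; this heuristic just shows the
# interface: label each segment, then drop the ones labeled as ads.
AD_SIGNALS = {"sponsored", "buy now", "limited offer", "subscribe today"}

def classify_segment(text: str) -> str:
    """Label a block of visible page text as 'ad' or 'content'."""
    lowered = text.lower()
    return "ad" if any(phrase in lowered for phrase in AD_SIGNALS) else "content"

def strip_ads(segments: list[str]) -> list[str]:
    """Keep only the segments the classifier labels as content."""
    return [s for s in segments if classify_segment(s) == "content"]
```

Because the classifier operates on what the reader would actually see, serving the ad as an image or with odd glyphs only changes the extraction step (OCR, Unicode normalization), not the classification itself.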


  • Well, the “at least for now” part is my point - if people start using “gibberish” to communicate or to hide their communication, that provides training material for LLMs to let them figure out how to use it too.

    LLMs learn how to communicate based on existing examples of communication. As long as humans are communicating with each other somehow then LLMs will be able to train how to do that too. They have the same communication capabilities that we do at this point, so there’s not really any way we can make a secret clubhouse that they can’t figure out how to infiltrate.

    Personally, I think there are two main routes we can go to deal with this. Either we can simply accept that there’s no way to be 100% sure we’re talking to a human any more and evaluate the value of our conversation based on the content of the words spoken rather than the composition of the entity generating them, or we could come up with some kind of “proof of personhood” system to allow people to label the text they write as coming from them.

    The latter is extremely hard to do, of course, both from a technical and cultural perspective. And such a system would likely still allow someone’s “person token” to be sneakily used by AI, either by voluntarily delegating it (I could very well be retyping all of this out of a ChatGPT window) or through hackery.

    So I’m inclined toward the former. If I’m chatting with someone and I’m having a good time doing it, and then later I find out it was a bot, why should that change how much fun I had?
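The “person token” idea can be sketched in a few lines. A real proof-of-personhood scheme would use public-key signatures (e.g. Ed25519) issued through some identity authority; this stdlib-only stand-in uses a shared secret per person, held by a hypothetical trusted registry, just to show the sign/verify shape:

```python
import hashlib
import hmac

# Illustrative only: real systems would use public-key signatures so the
# verifier never needs the signer's secret. Here a trusted registry is
# assumed to hold each person's shared secret.

def sign_text(secret: bytes, text: str) -> str:
    """Produce a person-token attesting that the secret-holder wrote `text`."""
    return hmac.new(secret, text.encode(), hashlib.sha256).hexdigest()

def verify_text(secret: bytes, text: str, token: str) -> bool:
    """Check that `token` was produced over `text` by the holder of `secret`."""
    return hmac.compare_digest(sign_text(secret, text), token)
```

Note that this only binds text to a credential, not to a human at a keyboard: exactly as described above, nothing stops the credential-holder from feeding an AI’s output through their own signing key.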


  • I don’t see how that would be practical. People who aren’t “in on the joke”, as it were, will call out the gibberish and downvote it. If enough people are “in on the joke” then the whole forum becomes useless and some other forum will be created to fill the role of the original. The AI will train off of that one.

    Basically, if you don’t want an AI training on your content, then don’t post your content in public where an AI will see it. The Fediverse is the last place you should be posting since its very nature is about openly broadcasting your content to whoever wants to see it.