Then the lemmy title is misleading, no? Isn’t that against the rules?
You wouldn’t steal a car…
Hmm, what if the shadowbanning is ‘soft’? Like if bot comments are locked at a low negative number and hidden by default, that would take away most exposure but let them keep rambling away.
Trap them?
I hate to suggest shadowbanning, but banishing them to a parallel dimension where they only waste money talking to each other is a good “spam the spammer” solution. Bonus points if another bot tries to engage with them, lol.
Do these bots check themselves for shadowbanning? I wonder if there’s a way around that…
This. I’m surprised Lemmy hasn’t already done this, as it’s such a huge, glaring issue on Reddit (one they don’t care about, because bots are engagement…)
GPT-4o
It’s kind of hilarious that they’re using American APIs to do this. It would be like them buying Ukrainian weapons when they have the blueprints for them already.
Oh, and as for benchmarks, check the Hugging Face Open LLM Leaderboard. The new one.
But take it with a LARGE grain of salt. Some models game their scores in different ways.
There are more niche benchmarks floating around, such as RULER for long context performance. Amazon ran a good array of models to test their mistral finetune: https://huggingface.co/aws-prototyping/MegaBeam-Mistral-7B-512k
Honestly I would get away from ollama. I don’t like it for a number of reasons, including:
Suboptimal quants
Suboptimal settings
Limited model selection (as opposed to just browsing Hugging Face)
Sometimes suboptimal performance compared to kobold.cpp, especially if you are quantizing cache, double especially if you are not on a Mac
Frankly, a lot of attention squatting/riding off llama.cpp’s development without contributing a ton back.
Rumblings of a closed source project.
I could go on and on, including some behavior I just didn’t like from the devs, but I think I’ll stop, as it’s really not that bad.
Jokes aside (and this whole AI search results thing is a joke), this seems like an artifact of sampling and tokenization.
I wouldn’t be surprised if the Gemini tokens for XTX are “XT” and “X” or something like that, so it’s got quite a chance of mixing them up after it writes out XT. Add in sampling (literally randomizing the token outputs a little), and I’m surprised it gets any of it right.
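If you’re curious, you can poke at this with any open tokenizer. Here’s a minimal sketch using the GPT-2 tokenizer from Hugging Face transformers as a stand-in; Gemini’s actual tokenizer isn’t public, so the exact splits are just an assumption for illustration:

```python
# Rough illustration of how a subword tokenizer can split GPU product names.
# GPT-2's tokenizer is used as a stand-in; Gemini's tokenizer is not public,
# so the exact splits here are only an assumption.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

for name in ["RX 7900 XT", "RX 7900 XTX"]:
    print(name, "->", tok.tokenize(name))

# If "XTX" comes out as something like ["XT", "X"], then once the model has
# emitted "XT" it only takes one slightly-off sampled token to turn an XTX
# into an XT (or the reverse). Sampling temperature adds randomness on top.
```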
The plan is to monetize the AI results with ads.
I’m not even sure how that works, but I don’t like it.
Honestly I am not sold on Petals; it leaves so many technical innovations behind, and it’s just not really taking off like it needs to.
IMO a much cooler project is the AI Horde: A swarm of hosts, but no splitting. Already with a boatload of actual users.
And (no offense), but there are much better models to use than Llama 8B through ollama, and which ones completely depends on how much RAM your Mac has. They get better and better the more you have, all the way out to 192GB (where you can squeeze in the very amazing DeepSeek Coder V2).
RAM capacity and bandwidth.
Those are basically the only two things that matter for local LLM performance, as it has to read the entire model from memory for every token (roughly half a word). And for the same money, a “higher end” M2 (like an M2 Max or Ultra) will just have more of both than an equivalently priced M3 or (probably) M4.
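To put a rough number on that, here’s a back-of-the-envelope sketch; the bandwidth and model-size figures below are illustrative assumptions, not measured specs:

```python
# Back-of-the-envelope token rate: every generated token has to stream the
# whole (quantized) model through memory once, so roughly
#   tokens/sec ≈ memory bandwidth / model size.
# Figures below are illustrative assumptions, not benchmarks.

def rough_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

configs = {
    "M2 Max (~400 GB/s), 40 GB quant": (400, 40),
    "M2 Ultra (~800 GB/s), 40 GB quant": (800, 40),
    "Dual-channel DDR5 desktop (~80 GB/s), 40 GB quant": (80, 40),
}

for label, (bw, size) in configs.items():
    print(f"{label}: ~{rough_tokens_per_sec(bw, size):.0f} tokens/sec upper bound")
```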
but what am I realistically looking at being able to run locally that won’t go above like 60-75% usage so I can still eventually get a couple game servers, network storage, and Jellyfin working?
Honestly, not much. Llama 8B, but very slowly, or maybe DeepSeek V2 chat, with prompt processing on the 270 via Vulkan but mostly running on the CPU. And I guess just limit it to 6 threads? I’d host it with kobold.cpp’s Vulkan backend, or maybe the llama.cpp server if there will be multiple users.
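If you end up scripting it instead of launching the kobold.cpp or llama.cpp binaries directly, here’s a minimal sketch of the thread-capping idea using the llama-cpp-python bindings; the model path, context size, and prompt are placeholder assumptions:

```python
# Sketch of capping CPU threads so game servers / Jellyfin keep headroom.
# Uses the llama-cpp-python bindings; the model path, context size, and
# prompt are placeholder assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_threads=6,      # leave the remaining cores for other services
    n_ctx=4096,       # modest context to keep RAM usage down
    n_gpu_layers=0,   # mostly-CPU setup as described above
)

out = llm("Why does memory bandwidth matter for local LLMs?", max_tokens=128)
print(out["choices"][0]["text"])
```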
You can try them to see if they feel OK, but LLMs are just not something that likes old hardware. An RTX 3060 (or a Mac, or a 12GB+ AMD GPU) is considered the bare minimum in the community, with a 3090 or 7900 XTX as the standard.
I dunno. My experience on Reddit is that even bringing up the word “AI” in discussions outside of it will almost get me doxxed. I asked a TV fandom if cleaning up a bad release with diffusion models and some “non AI” filters sounded interesting, and I felt like I had triggered Godwin’s law.
I did bring this up in AskLemmy, and got a mostly positive response, but I also felt like it was a tiny subset of the community.
On my G14, I just use the ROG utility to disable turbo and make some kernel tweaks. I’ve used ryzenadj before, but it’s been a while. And yes, I measured battery drain in the terminal (but again, it’s been a while).
Also, throttling often produces the opposite result in terms of extended battery life, as it likely takes more time in the higher states to do the same amount of work; whereas when running at a faster clock speed, the work is completed faster and the CPU returns to a lower, less energy-hungry state sooner and resides there more of the time.
“Race to sleep” is true to some extent, but after a certain point the extra voltage one needs for higher clocks dramatically outweighs the benefit of the CPU sleeping longer. Modern CPUs turbo to ridiculously inefficient frequencies by default before they thermally throttle themselves.
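To put rough numbers on it (the voltage/frequency points below are made-up assumptions, since the real V/f curve depends on the chip): dynamic power scales roughly with C·V²·f, so the energy for a fixed job scales with V², and the turbo point can lose even though it finishes sooner.

```python
# Rough "race to sleep" energy comparison for a fixed amount of work.
# Dynamic power ~ C * V^2 * f and job time ~ work / f, so job energy
# ~ C * V^2 * work: frequency cancels and voltage dominates.
# (This ignores static/idle power, which is what racing to sleep saves,
# but it shows why extreme turbo voltages stop paying off.)
# Voltage/frequency points are made-up illustrative assumptions.

def job_energy(voltage: float, freq_ghz: float, work_cycles: float = 1e9,
               capacitance: float = 1.0) -> float:
    power = capacitance * voltage**2 * freq_ghz       # arbitrary units
    time_s = work_cycles / (freq_ghz * 1e9)           # seconds for the job
    return power * time_s

sustained = job_energy(voltage=0.85, freq_ghz=3.0)    # efficient sustained clock
turbo = job_energy(voltage=1.25, freq_ghz=4.8)        # aggressive boost clock

print(f"sustained: {sustained:.2f}, turbo: {turbo:.2f} (arbitrary energy units)")
print(f"turbo finishes sooner but costs ~{turbo / sustained:.1f}x the energy")
```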
OK, so the reaction here seems pretty positive.
But when I bring this up in other threads (or even on Reddit, in the few subreddits I still use) the reaction is overwhelmingly negative. Like, I briefly mentioned fixing the video quality issues of an old show in another fandom with diffusion models, and I felt like I was going to get banned and doxxed.
I see it a lot here too, in any thread about OpenAI or whatever.
Agreed. This is how a lot of people use them; I sometimes use it as a pseudo-therapist too.
Obviously there’s a risk of it going off the rails, but I think if you’re cognizant enough to research the LLM, pick it, and figure out how to run it and change sampling settings, it gives you an “awareness” of how it can go wrong and just how fallible it is.
What RAM capacity?
Honestly, if LLMs are your focus, you should just upgrade to a used M2 Max (or Ultra) when the M4 comes out, lol. Basically the only things that matter are RAM capacity and bandwidth, and the M2 is just going to be faster and better than a similarly priced M4.
Or better yet, upgrade to an AMD Strix Halo. This will buy you into Linux and the CUDA ecosystem (through AMD ROCm), which is going to open a lot of doors and save headaches (while admittedly creating other headaches).
TBH this is a great space for modding and local LLM/LLM “hordes”