So, simply receiving “aim bot” as a whisper (private) message was enough to get permabanned. FUCKING JEE-NIUS ANTICHEAT, GREAT JOB, GUYS!!!
That kernel level anti-cheat is really working out well, eh?
Kernel level isn’t about stopping cheaters, it’s about gaining system access
Naw, it’s about pretending to stop cheaters. It’s security theatre, same as the TSA
And about putting a buzzword on your game that makes people think they’re safe from cheaters
To what end?
Any mention of data collection in the ToS?
Source?
Their source comes from it giving system access and that is what they want.
Why would they want that? Are there any cases of it being abused?
Yeah why would any mega conglomerate corporate entity want the most valuable and easy to harvest resource on earth
Computer usage statistics? Score!
Of course, no unfathomably demonic corporate entity would ever choose to track users and target them like that, to, for instance, sell products with zero repercussions because a lobbying hate group bribed lawmakers into making it legal
It has system access yet doesn’t prevent cheating.
Nothing is perfect bruh.
But you know, according to EA, Linux is worse than guys like this deliberately causing service disruptions for legit players.
Vizor explained that Ricochet uses a list of hardcoded strings of text to detect cheaters and that they then exploited this to ban innocent players by simply sending one of these strings via an in-game whisper. To test the exploit the day they found it, they sent an in-game message containing one of these strings to themselves and promptly got banned.
Vizor elaborates, “I realized that Ricochet anti-cheat was likely scanning players’ devices for strings to determine who was a cheater or not. This is fairly normal to do but scanning this much memory space with just an ASCII string and banning off of that is extremely prone to false positives.”
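To make the false-positive problem concrete, here’s a toy sketch (nothing to do with Ricochet’s actual code; the signature list and names are invented) of why banning on a raw ASCII substring match over scanned memory is so easy to weaponize: any buffer that happens to contain the string, like a chat buffer holding a whispered message, trips the detector just as hard as a cheat binary does.

```python
# Toy illustration (not Ricochet's actual code) of string-signature scanning.
SIGNATURE_STRINGS = [b"aim bot", b"unlock all tool"]  # hypothetical list

def scan_for_signatures(memory: bytes) -> list[bytes]:
    """Return every signature found anywhere in the scanned bytes."""
    return [sig for sig in SIGNATURE_STRINGS if sig in memory]

# A legit client's memory naturally contains every chat message it receives,
# so a whispered "aim bot" ends up in the same address space being scanned.
incoming_whisper = "hey nice shot, aim bot much?"
legit_client_memory = b"player_state;chat_buffer=" + incoming_whisper.encode()

hits = scan_for_signatures(legit_client_memory)
if hits:
    print(f"BAN issued, matched: {hits}")  # false positive from a chat string
```

The detector has no way to tell a cheat binary’s strings apart from a chat buffer that happens to contain the same bytes.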
This is insane: they had an automatic script to connect to games and ban random people on loop, so they could do it while away
a list of hardcoded strings
Violating a core programming tenet right off the bat. I wonder how much money Activision paid for this software…
We and the hacker have no idea if this list is config driven or truly “hard coded” i.e. a const in the source code. It’s hardly an indicator of violating a core programming tenet.
Is stopping cheaters in most video games even possible?
It requires the server to verify all inputs. It’s doable; Path of Exile does it, and so do most ARPGs. But it leaves them very open to lag and desync issues, so most games will prioritize a smooth experience instead.
CoD is peer to peer, I think: clients host the server, which is very cheap for the company. But obviously you then need to give the clients a lot more information.
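For anyone curious what “the server verifies all inputs” looks like in practice, here’s a minimal sketch of the server-authoritative idea; the tick rate, speed cap, and names are made up for illustration, not taken from any particular game.

```python
# Minimal sketch of server-authoritative movement: clients send intents,
# the server clamps them to what the rules allow. Numbers are invented.
from dataclasses import dataclass

MAX_SPEED = 7.0    # units per second the simulation allows
TICK = 1.0 / 30    # server tick length in seconds

@dataclass
class Player:
    x: float = 0.0
    y: float = 0.0

def apply_move(player: Player, dx: float, dy: float) -> None:
    """Clamp the requested movement vector instead of trusting a
    client-reported position."""
    dist = (dx * dx + dy * dy) ** 0.5
    limit = MAX_SPEED * TICK
    if dist > limit:              # speedhack or bad packet: clamp it
        scale = limit / dist
        dx, dy = dx * scale, dy * scale
    player.x += dx
    player.y += dy

p = Player()
apply_move(p, 100.0, 0.0)   # a teleport attempt only moves one tick's worth
print(p)                    # Player(x=0.233..., y=0.0)
```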
I doubt COD is peer to peer anymore. Maybe like 10 years ago. No way they are giving up that much control over the game
What about things that only need to be done client side, like wallhacks?
In order to wallhack, your client must be receiving information about enemy positions that are supposed to be hidden in fog of war.
When I was at my peak effectiveness in Urban Terror, I could hold my own against them…
Heh. GTA V used to be fun even when a cheater would show up, because you could just use a rocket launcher on them to keep them ragdolled forever so they couldn’t use their cheat menu (or any menu). They’d have to alt-F4 to quit, since being ragdolled closes any open menus.
Yes, but not through standard methods. Even an AI aimbot can be filtered, but the amount of R&D required is likely too much for a single studio to bear alone. I believe we are more likely to see neural-network-trained bots largely replacing real players, using an off-the-shelf model. Just a guess, not an expert.
There is already a solution using relatively simple analytics and building a profile of the player. It becomes very easy to find cheaters because it is easy to analyze how fast, and in what directions, they aim. It is obvious when someone is using macros, for instance, or an aimbot.
The catch is that this approach does not require intrusive programs that are essentially spyware for your OS, and that access is what attracts the big studios to those solutions, not their effectiveness.
There is a workable solution, but let’s be honest: cheaters are often whales who spend a lot of time and money on the game. It’s bad for engagement to send them away.
Big studios already recognize this. So, to be blunt, they allow a certain amount of cheating because they don’t really want to solve the problem.
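A rough, back-of-the-envelope version of that aim-profiling idea (thresholds invented; a real system would learn them per player from telemetry and flag for review rather than auto-ban):

```python
# Toy aim-telemetry check: flag frames where the crosshair jumps faster
# than a human plausibly flicks and then stops dead on the next frame.
def snap_score(yaw_samples_deg: list[float], tick_s: float = 1 / 60) -> float:
    """Fraction of frame pairs showing an inhuman flick followed by a freeze."""
    deltas = [abs(b - a) / tick_s
              for a, b in zip(yaw_samples_deg, yaw_samples_deg[1:])]
    suspicious = 0
    for prev, nxt in zip(deltas, deltas[1:]):
        if prev > 2000 and nxt < 10:   # deg/s: teleporting aim, then frozen
            suspicious += 1
    return suspicious / max(len(deltas) - 1, 1)

human  = [0, 3, 8, 15, 26, 40, 52, 60, 63, 64]        # smooth tracking
botted = [0, 0, 90, 90, 90, 12, 12, 150, 150, 150]    # snapping between targets

print(snap_score(human))    # 0.0
print(snap_score(botted))   # noticeably higher; flag for review
```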
I was speaking to the long term, 5-10 years in the future. Analytics is a current solution and, as far as I know, works well. I was just talking vaguely about long-term problems and solutions.
I think the best thing I’ve heard for long term solutions is to fix a lot of the cheating using server side solutions. In a game like CoD, that means the server doesn’t send you player positions unless you absolutely need to know them.
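A sketch of what “don’t send positions the player doesn’t need” looks like, i.e. interest management / visibility culling on the server. Real shooters raycast against level geometry and add latency margins; this just uses distance plus a line-of-sight stub to show the shape of it.

```python
# Server-side visibility culling sketch: only replicate enemies this
# client is allowed to know about. A wallhack can't render state that
# was never sent to the client.
import math

def has_line_of_sight(a, b) -> bool:
    # Placeholder: a real server raycasts against level geometry here.
    return True

def visible_enemies(me, enemies, max_range=80.0):
    """Return only the enemy states to replicate to this client."""
    out = []
    for e in enemies:
        dist = math.dist(me["pos"], e["pos"])
        if dist <= max_range and has_line_of_sight(me, e):
            out.append(e)
    return out

me = {"pos": (0.0, 0.0)}
enemies = [{"id": 1, "pos": (30.0, 10.0)}, {"id": 2, "pos": (500.0, 0.0)}]
print(visible_enemies(me, enemies))   # enemy 2 is simply never replicated
```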
The other thing, honestly, is just increasing the investment required to cheat. That could mean that in order to play competitive game modes, you need to have signed in and played the game over at least 4 straight weeks. Or you need to be a certain level. Issue hardware bans and IP bans. Require phone number verification.
What those barriers actually do is increase the potency of current detection methods. This should also carry over to accounts. I’m not sure why Steam’s VAC ban system isn’t more popular. As in, accounts need to be flagged as a whole when cheating in just one game is found.
There are many solutions but it’s just not a big deal for companies as the prior person said. Plenty could be done to at least make cheating harder and cost more time/money. But that won’t happen
I’m not sure why Steam’s VAC ban system isn’t more popular. As in, accounts need to be flagged as a whole when cheating in just one game is found.
Presumably because this opens players to significantly damaging abuse from server operators. Players aren’t the only ones who fuck around.
I don’t mean individual servers. What I meant more was: let’s say a game uses a standardized anti-cheat, like EasyAntiCheat or BattlEye or similar, and whoever runs your game service (Steam, PSN, Xbox) can vet these anti-cheat programs and allow them to create a record of cheating on your account.
And obviously these things get false flags, so you account for that: give people strikes and allow appeals. Games would then have the option of banning you for having too many strikes total, or for tripping one specific anti-cheat X times, or of ignoring this system entirely except to put extra suspicion and resources on accounts that already have strikes.
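One possible shape for that shared strike record, purely illustrative; the field names, thresholds, and policy values are hypothetical, not any platform’s actual API:

```python
# Hypothetical cross-game strike record plus a per-game admission policy.
from dataclasses import dataclass, field

@dataclass
class CheatStrike:
    game_id: str
    anticheat: str        # e.g. "EasyAntiCheat", "BattlEye"
    appealed: bool = False
    upheld: bool = True   # set False once an appeal succeeds

@dataclass
class AccountRecord:
    account_id: str
    strikes: list[CheatStrike] = field(default_factory=list)

    def active_strikes(self) -> int:
        return sum(1 for s in self.strikes if s.upheld)

def admission_policy(record: AccountRecord, game_id: str) -> str:
    """Each game applies its own thresholds against the shared record."""
    per_game = sum(1 for s in record.strikes if s.upheld and s.game_id == game_id)
    if per_game >= 2 or record.active_strikes() >= 3:
        return "ban"
    if record.active_strikes() >= 1:
        return "allow_with_extra_scrutiny"
    return "allow"

acct = AccountRecord("user#123", [CheatStrike("gameA", "EasyAntiCheat")])
print(admission_policy(acct, "gameB"))   # allow_with_extra_scrutiny
```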
Also, having an account tied to hardware is a no-brainer, and I’m surprised this doesn’t get employed more often. I know hardware IDs can be spoofed, but that’s another barrier, potentially.
CoD is peer to peer. Clients host the game server.
They use a hybrid system now and only use peer to peer when dedicated servers aren’t enough, so they could just swap to purely dedicated servers.
However, ignoring that, even a peer-to-peer system can pull similar tricks if you don’t rely on just one machine as the host. That can be done by spot-checking with a company-owned server: you use the server as a verification peer and keep it as a backup host to the assigned peer. If your verification peer gets different RAM values or whatnot, you shut the match down at the very least and put that peer on a suspicion list.
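A sketch of that spot-check idea, assuming the verification peer replays the same inputs and compares state digests with the player-hosted peer (everything here is illustrative, not how CoD actually works):

```python
# Verification-peer spot check: compare state digests between the
# player-hosted peer and a company-owned replay of the same inputs.
import hashlib
import json

def state_digest(state: dict) -> str:
    """Stable hash of the match state for a given tick."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def spot_check(host_state: dict, verifier_state: dict, peer_id: str,
               suspicion_list: set) -> bool:
    """If the host's state diverges from the verifier's, flag the host;
    the caller shuts the lobby down or migrates the host."""
    if state_digest(host_state) != state_digest(verifier_state):
        suspicion_list.add(peer_id)
        return False
    return True

suspects = set()
verifier = {"tick": 512, "players": {"p1": {"hp": 100}, "p2": {"hp": 35}}}
host     = {"tick": 512, "players": {"p1": {"hp": 100}, "p2": {"hp": 999}}}  # tampered
print(spot_check(host, verifier, "host-peer-7", suspects), suspects)
```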
But even if they went the cheap route, just distribute the peer network. Say you have a game of 12 people. You could make it so that each peer is only assigned a certain part of the simulation and a subset of the players (with overlap between assignments) and cannot track the entire simulation. It’s more complicated than a single server hiding info from you, but they could at least make it so you’d need multiple compromised peers to take over a lobby.
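And a sketch of that overlapping-assignment idea; the round-robin scheme is made up, it’s just to show that every slice of the simulation ends up with more than one peer vouching for it:

```python
# Split the simulation into slices and give each slice to several peers,
# offset so a player's own machine never simulates their own slice.
def assign_slices(peers: list[str], replication: int = 2) -> dict[str, list[str]]:
    """Round-robin assignment with overlap: each slice gets `replication`
    distinct peers that must agree on its state."""
    n = len(peers)
    return {f"slice-{i}": [peers[(i + 1 + k) % n] for k in range(replication)]
            for i in range(n)}

peers = [f"peer-{i}" for i in range(12)]       # a 12-player lobby
for slice_id, owners in assign_slices(peers).items():
    print(slice_id, "->", owners)
# slice-0 -> ['peer-1', 'peer-2'], slice-1 -> ['peer-2', 'peer-3'], ...
# A cheater would need to control every peer assigned to a slice to lie
# about it unchallenged.
```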
I think you were spot on about training a neural network with player data. It is already happening without a doubt.