GothamChess vs Dewa Kipas
March 15, 2021 1:09 PM

Wired Magazine recaps some recent online chess drama

The success of the popular Netflix show The Queen's Gambit has led to a rise in online chess and in interest in watching people stream chess online. One popular chess Twitch streamer, IM Levy Rozman, aka "GothamChess," was live streaming and playing online chess when he complained to his fans that his opponent, Dadang Subur (Dewa_Kipas), appeared to be cheating with the aid of a computer program (referred to as an "engine").

Subur was reported to chess.com's Fair Play team, which ultimately banned his account for violating their fair play policies.

None of this is unusual. In fact, there have been many stories about the frequency of cheating in chess, such as this recent one or this one.

Chess.com generally refrains from calling people "cheaters" and instead simply says the user violated their "Fair Play policy," as a means of limiting their exposure to defamation claims. Chess.com also keeps its mechanisms for cheat detection a closely guarded secret, in order to make it harder for cheaters to implement countermeasures to detection. They did give a committee of experts from the US Chess Federation access under an NDA, and the committee issued an endorsement of Fair Play.

Subur's son launched a social media blitz in their native country of Indonesia to clear his father's name. The story went viral and was picked up by a popular Indonesian podcaster and multiple news outlets. A fierce backlash started against Rozman, with hate tweets and death threats against him and his family. It got so bad that he temporarily left social media and blocked his YouTube channel from Indonesia.

Gradually, though, Subur's defense has fallen apart. His claims, such as being a former chess champion in Indonesia, were refuted by the Indonesian chess association (PERCASI). Danny Rensch, the Chief Chess Officer at Chess.com, has said that this was a clear-cut case of cheating and that the number of reports about a player does not affect their Fair Play decisions. Finally, one of Indonesia's largest daily papers has weighed in and said that Subur was cheating.
posted by interogative mood (52 comments total) 11 users marked this as a favorite
 
I got a message from chess.com a couple of weeks ago saying they had detected that my opponent in one of the games I played violated their FairPlay policy, and they adjusted my rating up a bit.

I'd like to know how they determine that. I'd long ago heard that "computers don't play the way humans play" but I'd like to know exactly what they mean. Surely some humans have played some games exactly the way a computer might play them. Perhaps they do some aggregate across all a person's games and compare it to how an engine would play them.

I usually play quick, 10 minute games and any time my opponent takes a long time to move, especially in the beginning, I assume they're putting the moves into a computer chess game they have open in another tab.
posted by bondcliff at 1:33 PM on March 15, 2021 [4 favorites]


social media blitz

I saw a post on the r/chess subreddit about how the Real Story exonerating Dadang Subur is coming together on Twitter: it had something like 4000 upvotes, about ten times as many as other popular posts. So they are not slacking on the brigading, it seems.
posted by thelonius at 1:38 PM on March 15, 2021 [1 favorite]


All else aside, I found this bit confusing:

Rensch says Dewa_Kipas was an “absolute, absolute certain case” of cheating. Over dozens of games, Chess.com determined that Dewa_Kipas’ moves in chess games matched a chess engine at a rate that is “not reasonably possible for a human player,” higher even than the top-ranked Indonesian chess player, grandmaster Susanto Megaranto: 95.3 to 94.4.

How does it bolster the argument that engine-like play indicates cheating to say that Subur's was very slightly higher than that of an extremely good chess player? The implication elsewhere in the article is that deviation from the play of known engines is a good sign, and that much follows, but the fact that in turn a known-excellent player also very narrowly deviates seems to undercut the utility of that. And if Susanto Megaranto is, besides being a grandmaster, also an outlier there, why bring them up?
posted by cortex at 1:46 PM on March 15, 2021 [2 favorites]


Well first of all, a jump of .9 percent in engine move matching is not trivial at all, once we are talking about matching more than 90% of engine moves. It's huge.
posted by thelonius at 2:02 PM on March 15, 2021 [1 favorite]


How does it bolster the argument that engine-like play indicates cheating to say that Subur's was very slightly higher than that of an extremely good chess player?

Probably because if you are playing like the best person in your country of over a quarter billion people, maybe you'd have made more of a name for yourself beyond playing randos on chess.com
posted by sideshow at 2:04 PM on March 15, 2021 [1 favorite]


I'm not sure about the mathematical specifics so idk how powerful each .1% is when comparing to engine moves. It definitely looks like this wasn't the only metric by which they determined things, though. Apparently the cheater's account also gained 1k points in like 2 weeks which is crazy. The time management was also super suspect, with the cheater taking 10 seconds / move on moves that were 'obvious' or in some cases required based on the strategy at that point.
posted by lazaruslong at 2:17 PM on March 15, 2021 [4 favorites]


Just making some numbers up ... suppose a brand new player will agree with the chess engine 40-60% of the time, and a decent player will agree with the chess engine 75-85% of the time, and then as players get more advanced the correlation gets tighter -- maybe a player with rating 2350 is likely to agree with the engine 90.2 to 90.7% of the time, with 99.9% likelihood across 100 games.

Those are made up numbers! But it would make sense if the statistics worked out that way, because it would make sense if both your rating and your engine-agreement rate were roughly equivalent to your rate of mistakes. If you make mistakes 9.3 to 9.8% of the time, then you'll tend to lose to people who make fewer mistakes than that and tend to win against people who make more mistakes than that, and you'll tend to end up with a very predictable rating.

So if we had strong statistical confidence in a connection between engine-agreement rate and rating, then we could look at the function of engine-agreement-rate -> rating distribution and say, if you really do make mistakes only 4/5 as often as grandmaster Susanto Megaranto, then the lowest reasonable chess rating you could have is impossible for a human.
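
As a toy version of that statistics (still made-up numbers, and nothing like whatever chess.com really runs), here's the binomial tail probability of matching the engine at least 95.3% of the time when your true per-move agreement rate is 94.4%. Mostly it shows that how damning a 0.9-point gap is depends on how many moves you're averaging over:

```python
from math import exp, lgamma, log

def log_binom_pmf(n, k, p):
    """Log of the Binomial(n, p) probability of exactly k successes."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def tail_prob(n_moves, k_matches, p):
    """P(X >= k_matches) for X ~ Binomial(n_moves, p): the chance that a player whose
    true per-move engine-agreement rate is p matches at least k_matches of n_moves moves."""
    return sum(exp(log_binom_pmf(n_moves, k, p)) for k in range(k_matches, n_moves + 1))

# Made-up inputs: true agreement rate 94.4%, observed 95.3% (the two figures in the article).
print(tail_prob(1_000, round(0.953 * 1_000), 0.944))  # ~0.12 over 1,000 moves: suggestive, not damning
print(tail_prob(4_000, round(0.953 * 4_000), 0.944))  # well under 0.01 over 4,000 moves
```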

One question then is how they detect play that isn't superhuman, but is just better than you usually are -- like what if you usually play unassisted but you occasionally break out the engine in tricky spots, bumping you up a few notches? I assume that stuff is catchable as well, but we'd need some way to detect "tricky spots" or other cheat-prone classes of moves and see if your engine-agreement rate was weird for those in particular.
posted by john hadron collider at 2:18 PM on March 15, 2021


Well first of all, a jump of .9 percent in engine move matching is not trivial at all, once we are talking about matching more than 90% of engine moves. It's huge.

I can see that being the case (I'm totally on the outside on this as far as chess fandom goes); it's more that the article cites Rozman's 76 to Subur's 94 in the instigating match earlier on and implicitly establishes a baseline, and then later puts Megaranto at a 94.4 average as a reassuringly human value. My complaint, I think, is with the writing choosing to try and do both those things without bothering to either reconcile them or put Megaranto's own high similarity value in more context, not with the usefulness of a similarity value as part of the Fair Play evaluation.
posted by cortex at 2:20 PM on March 15, 2021 [4 favorites]


There's an interesting thread that doesn't get explored in the story, about how that accuracy rating is calculated. Presumably it's an aggregation of scores among a variety of representative chess engines. How does a chess engine get into that selection? How do the engines compare, as in what's their accuracy with respect to each other? And are the engines converging on something like "perfect play"?
posted by fatbird at 2:24 PM on March 15, 2021


I'd like to know how they determine that. I'd long ago heard that "computers don't play the way humans play" but I'd like to know exactly what they mean. Surely some humans have played some games exactly the way a computer might play them. Perhaps they do some aggregate across all a person's games and compare it to how an engine would play them.

Yeah, they'd have to be looking at the entirety of someone's record, not just a single game. A couple of weeks ago, I had occasion to report one of my opponents, a sub-1000 rated player who had won every single one of their games with an accuracy over 98%. There was zero chance that they weren't cheating, and their account was closed within the hour.
posted by Crane Shot at 2:42 PM on March 15, 2021


- Human moves have a story to them, while computer moves can be mysterious and break principles for no clear reason.
- Humans look for positions that are "easier to play"; computers are fine playing messy positions where it's only even on both sides if you make ten perfect moves in a row.
- Humans calculate lines in advance, and therefore often will ponder one move for a long period of time and then play the next four moves automatically; a cheater using a computer will spend ten seconds on every move, including the most obvious recaptures.
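
That last point about move times is concrete enough to sketch. A toy heuristic (thresholds invented for illustration; I have no idea what chess.com actually runs) that flags a game whose per-move times are suspiciously uniform might look like:

```python
import statistics

def uniform_timing_flag(move_times_s, min_moves=20, cv_threshold=0.35):
    """Heuristic only: flag a game whose per-move times are suspiciously uniform.
    Human move times vary wildly (instant recaptures, long thinks); someone relaying
    engine moves tends to spend roughly the same few seconds on everything."""
    if len(move_times_s) < min_moves:
        return False
    mean = statistics.mean(move_times_s)
    cv = statistics.stdev(move_times_s) / mean  # coefficient of variation
    return mean > 3.0 and cv < cv_threshold

# A plausible human blitz game (instant recaptures, a couple of long thinks)...
print(uniform_timing_flag([1, 0.5, 45, 2, 0.3, 30, 1, 12, 0.4, 60,
                           3, 1, 0.2, 25, 2, 8, 0.5, 40, 1, 15]))   # False
# ...versus hypothetical "ten seconds on every move, including recaptures" play:
print(uniform_timing_flag([9, 11, 10, 12, 9, 10, 11, 10, 9, 12,
                           10, 11, 9, 10, 12, 11, 10, 9, 11, 10]))  # True
```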

The rise of online chess over the past year has been amazing, though it's really putting the cheat detection systems to the test. This is pushing serious players more and more into shorter formats, where an engine becomes less practical. It's a particularly fun genre of chess videos where a grandmaster realizes that someone is cheating, and their strategy becomes 100% about "flagging" their opponent -- survive long enough for the game to turn into a time scramble, and the cheater's game crumbles.
posted by fishhouses at 3:11 PM on March 15, 2021 [18 favorites]


How do the engines compare, as in what's their accuracy with respect to each other? And are the engines converging on something like "perfect play"?

That's a great question, and I'm not an expert, but if you watch AlphaZero play Stockfish you see two very different styles of play. AlphaZero, trained via reinforcement learning, has this sort of alien, deep strategic understanding, while Stockfish is much more of a number cruncher (and shows a weakness to long-term plans).
posted by fishhouses at 3:14 PM on March 15, 2021 [3 favorites]


an underlying subtext of all this is, "you are weird and different and therefore wrong," with chess trappings
posted by glonous keming at 3:45 PM on March 15, 2021


Chess.com has its own seven-person Fair Play team, less like moderators and more like data scientists ... They close thousands of Chess.com accounts every day.

If just seven people are closing thousands of accounts every day, they can't be spending more than about two minutes per account. Which sounds like there's really no investigation going on beyond their algorithm. And algorithms always have edge cases.
posted by rikschell at 3:55 PM on March 15, 2021 [2 favorites]


I assume the review is triaged. If a new player comes on and gets the top engine move every single time in 20 games, I doubt there is much review necessary. More subtle cheating, or cheating by long term players, probably gets more attention. The bulk of the cheating is probably incredibly blatant.

An interesting note is that the amount of cheating has really increased dramatically in the last year. Look at March 2020, when they had:
  • 9,843 accounts closed for fair play (including 7 titled players).
  • 22,234 accounts closed for abuse.

In February 2021, that increased to:
  • 32,337 accounts closed for fair play (including 8 titled players).
  • 49,621 accounts closed for abuse.
That's still not 1000 per hour though (which would be 730,000 per month) so I'm not sure if that's accurate.
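
Quick back-of-the-envelope arithmetic on those February figures (assuming a 28-day month and the seven-person Fair Play team quoted above), which at least squares with "thousands of accounts every day" and with the couple-of-minutes-per-account estimate:

```python
# Back-of-the-envelope check on the February 2021 figures quoted above.
fair_play, abuse = 32_337, 49_621
per_day = (fair_play + abuse) / 28            # ~2,900 accounts/day, i.e. "thousands every day"
per_person_per_day = per_day / 7              # ~420 closures per Fair Play team member
minutes_each = 8 * 60 / per_person_per_day    # ~1.1 minutes each in an 8-hour workday
print(round(per_day), round(per_person_per_day), round(minutes_each, 1))
```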
posted by demiurge at 4:27 PM on March 15, 2021 [2 favorites]


an underlying subtext of all this is, "you are weird and different and therefore wrong," with chess trappings

If you’re implying that Dadang Subur is being unfairly scrutinized over being a “weird” player, I think you are underestimating how much he matches established patterns of frauds and cheaters. It’s a delicate issue because, no, you can’t 100 percent prove that somebody is cheating algorithmically. But that he’s considered suspicious is more Occam’s Razor than anything.

If just seven people are closing thousands of accounts every day, they can't be spending more than about two minutes per account. Which sounds like there's really no investigation going on beyond their algorithm. And algorithms always have edge cases.

I’d expect the majority of cases are fairly low-effort and trivially detected, though.
posted by atoxyl at 4:33 PM on March 15, 2021 [2 favorites]


I don't know about this specific case nor the particular forensics that were used to determine cheating violations of the Fairplay policies, but I do know that one of the key academics who've looked into cheating in chess (specifically players using engines) is Ken Regan of U. Buffalo. This page has a brief summary and links to academic papers on the subject. While I don't see Regan cited in the Wired article, I'd be very surprised if the methodology used by chess.com wasn't heavily informed/influenced by his work.
posted by mhum at 4:40 PM on March 15, 2021 [1 favorite]


My understanding is that chess.com's cheat detection uses four things to make determinations about players.

The first is statistical measures of a player's results on the site in terms of wins, losses, move complexity, and move selection vs. the computer. Humans who are trying to mask their behavior will often be revealed because their patterns of wins/losses and best-move selection over time will not be random (a toy illustration of that last point is sketched below).

The second is behavioral data collected and classified from the web browser and apps. For example, computer cheaters often make moves at a regular interval as they wait a specific amount of time for the computer to find the best move, or they are focused on copying their opponent's moves into another chess program and then playing the computer's move.

The third is human review by chess experts. Expert-level players on the fair play team look at the games and make a judgement about the humanness of the moves. This is very subjective and is used only in combination with the behavioral and statistical analysis, and only when the previous analysis isn't clear. Based on talking to chess.com employees, my impression is that the fair play team mostly focuses on a handful of non-obvious cases and does in-depth reviews when a titled player is accused.

Finally, they have been increasingly hiring machine learning folks to build better algorithms out of the datasets above.
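
Here is the toy illustration I mentioned for the win/loss point: a Wald-Wolfowitz runs test, which is just a textbook statistic I'm using as an example (I have no idea whether chess.com uses anything like it). The idea is that someone who switches the engine on and off tends to produce streakier results than independent games would:

```python
from math import erf, sqrt

def runs_test(results):
    """Wald-Wolfowitz runs test on a win/loss sequence (True = win, False = loss).
    Returns (z, two-sided p). A strongly negative z means far fewer runs -- i.e. much
    streakier results -- than you'd expect if the games were independent."""
    n_wins, n_losses = results.count(True), results.count(False)
    if min(n_wins, n_losses) == 0:
        raise ValueError("need at least one win and one loss")
    runs = 1 + sum(a != b for a, b in zip(results, results[1:]))
    mu = 2 * n_wins * n_losses / (n_wins + n_losses) + 1
    var = (mu - 1) * (mu - 2) / (n_wins + n_losses - 1)
    z = (runs - mu) / sqrt(var)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approximation
    return z, p

# Hypothetical player who quietly turned the engine on for a stretch of games:
print(runs_test([True] * 10 + [False] * 10))  # z ~ -4.1, p ~ 0.00004: far too streaky
print(runs_test([True, False] * 10))          # z ~ +4.1: suspiciously regular the other way
```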

It is also rumored that they pair suspected cheaters against simulated opponents and see if the games look like two computers playing each other. They may also use identified cheaters (without their knowledge) to test other suspects for similar comparisons before they ban the player, thus getting a bit of free labor out of the cheater.
posted by interogative mood at 4:43 PM on March 15, 2021 [12 favorites]


Their main business is selling people accounts with more features (for example: game analysis, instructional videos, streams) than the free account has, so this really is an existential threat for them. If players feel that they will either lose to cheaters too much, or have too great a chance of being banned in error for cheating, they will not be buying any more platinum accounts. It seems like this has in fact occurred to their leadership team, and that they are putting a lot of resources in.
posted by thelonius at 4:53 PM on March 15, 2021 [1 favorite]


A couple of things here are interesting to me. I have a chess.com account, but I've only played one match there, and it was against a guy I knew from OTB chess. But I do go there almost every day for the [free] puzzles. My USCF rating is high 1700's, but my puzzle rating is 2130 (and falling), while my game rating is only 1562, based on only one game. I'm not sure how much my puzzle rating has to do with my playing rating because it's a different sort of thing.

I mainly play on redhotpawn.com (since this MetaFilter comment in 2006), where my current rating is 1807 (also falling). OTB, I almost always win against under 1700 players and usually lose to players over 1800, so I don't consider myself an 'A' player, but here I am. In my 20's, I preferred to play 5-minute chess over tournaments, but I've never played speed online. I like the chance to play over lines as in postal chess, and I think that accounts for the higher rating than normal.

I was curious about the 'accuracy' thing, and I just happened to have next to me a game I played online against my neighbor that I had subsequently run through computer analysis. I just have a 20-year-old program called Chess Assistant and it used Tiger 15.0 (which I think was hot shit in 2003 but is probably way behind Stockfish, et al). I am always amazed at how many ?'s I get when I run my games. This one showed 9 out of 34 moves that it rated as inferior, including 4 ?'s. (Although it did give me 2 !'s- indicating that it liked my move better than its first impression) We were in a standard opening for a while and then he got way behind, so I'd say the moves to look at were 6-30, where all the inferior moves were, and that would give me an accuracy of under 60%. I think that's fairly normal for me (and depressing). The idea of getting 95% 'correct' is ridiculous for me. And it would probably be a lot worse run through Stockfish.
Online, I usually am playing in a 1 or 3 day/move game with a 3-7 day timebank, so the idea that you could tell much from how long it takes me to move doesn't work. Was I asleep? Was I on vacation?

Human moves have a story to them
In my case the story is that I wasn't paying attention, or was overoptimistic.
posted by MtDewd at 5:35 PM on March 15, 2021 [2 favorites]


I've been playing Carrom with a popular phone app. There's a hacked version of the app going around, and it provides an aiming line, enabling perfect aim. Spotting cheats is simple, because the algorithm that matches players is very good at matching you with players of similar skill. So my non-cheating win rate has trended steadily to 50%, as it should. Any player with a reasonable number of games played, and whose win percentage is above about 54%, is almost guaranteed to be a cheat; you can see how they try out all kinds of crazy angles when setting up a shot, something only someone with an aiming line would do. Unfortunately you can't decline a match without forfeiting the game. In theory, with something like 50% of players now cheating, they ought to be mostly matched against each other, pulling the cheats back towards the 50% area. Yet somehow the algorithm is matching them with me, so clearly win rate isn't the main matching factor. Most of the cheats are at around 60-65% - presumably that's some sort of equilibrium for them, given the ratio of cheats to non-cheats.
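
For what it's worth, that "above about 54%" rule of thumb depends a lot on how many games you've seen. A quick binomial sketch (assuming the matchmaker really does hold every honest player to coin-flip odds, which is the premise above):

```python
from math import comb

def p_at_least(wins, games, p=0.5):
    """P(X >= wins) for X ~ Binomial(games, p): the chance of a win rate this high or
    higher by luck alone, if the matchmaker really holds every honest player to 50/50."""
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(wins, games + 1))

print(p_at_least(27, 50))    # 54% over 50 games: ~0.34, happens all the time
print(p_at_least(270, 500))  # 54% over 500 games: ~0.04, now it starts to look odd
```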

I've become too involved in beanplating this. A sensible person would have moved on to a different Carrom app, rather than living with this for two years.
posted by pipeski at 5:58 PM on March 15, 2021 [1 favorite]


I think the story here is less about the way in which this cheater was identified than about the absolutely brutal campaign of harassment that GothamChess was subjected to. Which, to be fair, was in the OP. Outing a cheater should not result in a sustained campaign of death threats.
posted by Justinian at 6:06 PM on March 15, 2021 [15 favorites]


I'm going to read thru the whole thing but uh, welcome to the high energy levels of indonesian twitter.
posted by cendawanita at 6:29 PM on March 15, 2021


One of the things I heard was that the story and hate mob was amplified by the Indonesian equivalent of Joe Rogan. How depressing that Joe Rogan has metastasized and spread to imitators around the world.
posted by interogative mood at 6:37 PM on March 15, 2021


I kind of feel like this whole thing is terrible. Playing known cheaters to humiliate them is terrible. Inciting harassment mobs is terrible. The most plausible explanation is that the man cheated, but didn't want to tell his son about it, and that's why the father wasn't angry but the son was. This just seems ugly all the way around.
posted by corb at 6:49 PM on March 15, 2021


An interesting one for sure. I was born in 1982, grew up in what was on paper a neutral country during the end of the Cold War, obsessed with chess, with maths, with programming, with the alternate planet that was the USSR and the stream of geniuses that it seemed to produce. I read the David Levy book about his bet that a computer couldn't beat him in the next 10 years when I was 7. That Christmas I asked Santa for a chess computer and he obliged.

I think that for the less obvious cases, it's something that reveals itself over several matches. People's personalities leak out - I'm a conservative mid-level José Mourinho type that chips away taking pieces and waiting for the opposition to slip up, and running over them if they do. You had actual players like Mikhail Tal, or his opposite Tigran Petrosian, and you could recognise them in matches just on the moves they made.

Playing a bot, or someone copying a bot's moves, feels completely different. It's like the T-1000 in Terminator 2. They run with their mouth closed.
posted by kersplunk at 7:00 PM on March 15, 2021 [11 favorites]


Playing known cheaters to humiliate them is terrible.

That isn't what happened here. Players queue up to be paired against a random opponent of ostensibly near-equal skill. Then it was discovered that this opponent was almost certainly a cheater.

At that point, one can have the patience of a saint and still say in one's commentary, "I'm pretty sure this person is cheating."

Notably, most video game report features (including this one) look at the actual gameplay data and not the volume of reports. Dewa_Kipas wasn't banned because of a mob of GothamChess's viewers. He was banned because he was in fact cheating.
posted by explosion at 7:13 PM on March 15, 2021 [16 favorites]


Perhaps this analogy will work for the non-chess players. Computers pick chess moves like the Waze app does directions. That's how we experts can tell. No human is going to drive that route without the confidence of GPS and real-time turn-by-turn directions. Chess engines find those kinds of routes through the chess board.
posted by interogative mood at 8:18 PM on March 15, 2021 [7 favorites]


In a couple of years people will stop playing chess online altogether. Video chat only.
posted by bleep at 9:13 PM on March 15, 2021 [1 favorite]


And if Susanto Megaranto is, besides being a grandmaster, also an outlier there, why bring them up?

I mean, being a grandmaster is what makes Megaranto an outlier. There really aren't that many GMs worldwide. Even some historical players who've had opening variations named after them aren't GMs. And while I'm sure Subur might claim that matches against engines suffice, it's generally agreed you just don't reach GM skill level without playing against GM opposition.

The point being that if Subur can play with GM accuracy over an extended run of games, then he would either already be a GM himself or already be known to be a non-GM playing near GM level (i.e. through matches against other GMs, not picking fights with IMs).
posted by juv3nal at 10:12 PM on March 15, 2021


non-GM playing near GM level

Even rarer, but it is a thing, especially in blitz chess. Genrikh Chepukaitis, for example, won the Leningrad blitz championship multiple times, over players like Korchnoi and Stein.
posted by thelonius at 10:19 PM on March 15, 2021


Like the period when humans could beat computers with some effort, the current period when algorithms can readily distinguish computers from humans is going to be brief. Some Kaggle kid will eventually throw a GAN at it trained on millions of human games with the twin objectives of playing well and being indistinguishable from a human to the other side of the GAN, and then it's over. Cheaters will just pick some random set of parameters in human play personality space and be that "person" forever after. It won't be impossible to catch them, but it will basically just be another game played between computers.
posted by chortly at 10:23 PM on March 15, 2021 [3 favorites]


Since chess.com pairs you with opponents close to your own rating, wouldn't these undetectably human-like engines just end up playing each other at some astronomical rating like 4000, leaving the rest of us to keep slogging away at the "people" end of the scale?
posted by Crane Shot at 10:55 PM on March 15, 2021 [1 favorite]


Oh this is a non sequitur, but for the lulz, Magnus and Hikaru both played a bongcloud and drew by threefold repetition in the Magnus Carlsen Invitational currently going on (neither needed more points to move on to the next round). (youtube)
posted by juv3nal at 11:14 PM on March 15, 2021 [2 favorites]


I think the story here is less about the way in which this cheater was identified than about the absolutely brutal campaign of harassment that GothamChess was subjected to. Which, to be fair, was in the OP. Outing a cheater should not result in a sustained campaign of death threats.

We made it to the 23rd comment on this post before this was even mentioned.

Are orchestrated hate campaigns on social media just a normal part of everyday life now?
posted by Cardinal Fang at 1:17 AM on March 16, 2021 [1 favorite]


^ It seems the default response now is to call out people on social media because, I dunno, it's the only way issues will get dealt with? When there's enough traction on a tweet/post/image/comment, the [original] platform [product/service] involved is compelled to respond? “Praise in public; criticize in private”? Doesn't seem to apply in this brave new world of social media noise. Of course the problem is that when you do this the peanut gallery piles on and the hate campaigns start, and it's fucking awful.

Related on the cheating issue - I started playing Carcassonne on boardgamearena during the lockdown and it's incredibly frustrating how many players collude to boost their ratings. This is against the rules of the site, but people will do it anyway. It only appears to happen in games amongst "stronger" players (according to the ELO, which is something people get far too attached to) but if you get to that point by cheating then what's the point of it?
posted by lawrencium at 2:08 AM on March 16, 2021


In a couple of years people will stop playing chess online altogether.

I'll still be on lichess if (a) it exists and (b) anybody else is still up for correspondence games at two days max per move.
posted by flabdablet at 2:57 AM on March 16, 2021 [2 favorites]


Re. cheat detection, I think false positives are pretty low. Apparently Dewa Kipas had over 99% accuracy in a couple of matches. Combined with never taking less than 5-10 seconds per move, this is ironclad (I'm assuming his timing must have shown this pattern).

Good players will usually pre-move or make instant moves to save time when a move is obvious, and when time is low they make multiple moves per second. It's hard to even follow what's going on when both players play this fast. For cheaters this kind of play is impossible, as they have to check every single move with the computer. Daniel Naroditsky often beats cheaters simply by making fast moves and surviving until the cheater runs out of time. If you play with 95%+ accuracy but take five seconds or more for super-obvious moves and never play quickly, or play a lot worse when you play quickly, it's a pretty safe bet you're a cheater.

Someone on Reddit also said the Chess.com web app can detect frequent tab switching, which sounds plausible.
posted by mokey at 3:00 AM on March 16, 2021


The point being that if Subur can play with GM accuracy over an extended run of games, then he would either already be a GM himself or already be known to be a non-GM playing near GM level (i.e. through matches against other GMs, not picking fights with IMs).
My first exposure to computer cheating was in 1967, watching Mission: Impossible. Even as a kid, I thought two things seemed unlikely: one was that a grandmaster would not see that he was in a mate-in-one position, and the other was that Martin Landau's character showed up unknown at an international tournament and no one was suspicious that he was doing so well.
posted by MtDewd at 4:48 AM on March 16, 2021 [1 favorite]


ill come back having read the whole article and any of the other comments but am i the only person devastated to learn that Dewa Kipas was not a pun/play on the name Dua Lipa but referencing a yarmulke/kip(p)a?
posted by Exceptional_Hubris at 5:04 AM on March 16, 2021


but referencing a yarmulke/kip(p)a?

Oh how does that figure? To my malay tongue (and not particularly fluent in vernacular indonesian) it's just literally a God of Fan(s). And that itself could be a pun on the literal translation of 'fans' (of the fandom kind versus the literal kind).
posted by cendawanita at 5:44 AM on March 16, 2021 [2 favorites]


Obviously i don't understand chess, but i looked up 'dewa kipas' and got a couple more relevant hits (unfortunately all in indonesian, which means this didn't even burst through the english reporting of the region).

This one reports that Irena Kharisma Sukandar, one of the top players in Indonesia who was quoted in the news article above, posted an open letter asking Deddy Corbuzier to reconsider setting the record straight, because he intended to have DK on his podcast.

The following, btw, is not a defence, but Wired truly underplayed how vehemently the SEAsian internet, of which Indonesia is a big part, can react when they feel wronged by a westerner. It doesn't even need to be a celebrity. Usually it's quite warranted because, like Thailand, Indonesians have a long-standing ambivalence about how the west treats them as an exotic tourist locale etc, and the other thing is, much like elsewhere in SEA, mob justice for petty things is pretty much the only thing that seems to extract any kind of immediate attention. That it devolved into death threats is truly unsurprising, and completely unnecessary.

This reminds me of that time earlier this year when Black Twitter and SEAsian Twitter faced off over Kristen Gray (and that needs its own post!).
posted by cendawanita at 6:03 AM on March 16, 2021 [6 favorites]



but referencing a yarmulke/kip(p)a?

Oh how does that figure?


step 1 was me writing the comment before beginning caffeinating for the day - the second step was probably that my brain works like british rhyming slang but with even more tenuous connections between what i sometimes consider to be related thoughts. There is no actual basis for this connection whatsoever - perhaps the grammys being on sunday (not that i watched) had me primed to think about Dua Lipa?
posted by Exceptional_Hubris at 9:06 AM on March 16, 2021




It only appears to happen in games amongst "stronger" players (according to the ELO, which is something people get far too attached to) but if you get to that point by cheating then what's the point of it?

I've come to believe that for some people cheating is the real game they are playing. The specific things they cheat at - be it chess, video games, running a marathon, being in a relationship, the law, or their taxes - are just the different arenas where they perform their recreational cheating.
posted by srboisvert at 12:20 PM on March 16, 2021 [3 favorites]


It isn’t the bongwater attack, it’s the Bongcloud attack.
posted by interogative mood at 9:48 PM on March 16, 2021 [3 favorites]


Yeah. The Bongwater attack is where you make your opponent's pieces smell so bad they don't want to pick them up and they run out of time before making a move.
posted by flabdablet at 4:45 AM on March 17, 2021


Whoever wrote the Wired article didn't seem like a chess player to me. Unless it was badly edited. E.g.
"He likes the Caro-Kann Defense, the Sicilian Defense, and Gambit."
posted by Obscure Reference at 12:25 PM on March 17, 2021 [2 favorites]


The over-the-board matches against IM Irene Sukandar just took place. English coverage is a little thin at the moment, but I believe she won every match. Agadmator covered one of them here.

If you plug the moves into the chess.com analysis engine, it says he played with 27.7% accuracy (though some in the YT comments are saying 36%? maybe down to different analysis engines).
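
If anyone wants to reproduce that kind of raw number themselves, here's a minimal sketch using the python-chess library and a local Stockfish binary (the "some_game.pgn" path is just a placeholder). Note this only measures top-engine-move agreement for one side of one game, not chess.com's actual "accuracy" metric, which weights moves by how much they lose, so the figures won't line up exactly:

```python
import chess
import chess.engine
import chess.pgn

def engine_match_rate(pgn_path, color=chess.WHITE, stockfish_path="stockfish", depth=15):
    """Rough top-move agreement rate for one side of a PGN game, using python-chess and
    a local Stockfish binary (assumed to be on PATH as "stockfish"). This is NOT the
    same as chess.com's "accuracy" score, which weights moves by how much they lose."""
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    matches = total = 0
    board = game.board()
    for move in game.mainline_moves():
        if board.turn == color:
            best = engine.play(board, chess.engine.Limit(depth=depth)).move
            matches += (move == best)
            total += 1
        board.push(move)
    engine.quit()
    return matches / total if total else 0.0

# e.g. engine_match_rate("some_game.pgn", color=chess.BLACK)
```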
posted by juv3nal at 3:31 AM on March 22, 2021 [3 favorites]


lol. well that oughta put the question to rest. what an asshole.
posted by lazaruslong at 10:32 AM on March 22, 2021 [2 favorites]


...and Levy's recap.
posted by juv3nal at 7:21 PM on March 22, 2021 [1 favorite]


Came back here to see if people were still talking about this - that was not close! At least Sukandar made a quick buck off it I guess.

He likes the Caro-Kann Defense, the Sicilian Defense, and Gambit

He means the X-Man!
posted by atoxyl at 11:21 PM on March 22, 2021 [1 favorite]



