Self Linking Considered Harmful
October 22, 2006 2:09 AM

CSRF (Cross Site Request Forgery) is starting to become a real issue for many web forums. While the vulnerability has been around for a while, recently it has become more interesting. Luckily the policy against against self linking and some recent fixes should protect readers here.
posted by mock (69 comments total) 29 users marked this as a favorite
 
I'm kidding about the last sentence. IMG tags are probably an issue, and music.metafilter.com could be problematic as well.
posted by mock at 2:11 AM on October 22, 2006


Developers have been crazily complacent about CSRF; it'll take some high-profile victims to wake people up.

The MeFi logout link is certainly open to simple CSRF (and should be a form button anyway). If the Preferences page isn't protected then hopefully mathowie will get onto it before someone wreaks havoc...
posted by malevolent at 3:04 AM on October 22, 2006


For those of you too lazy to read the articles, here's a simple example of CSRF. I embed an image tag in this MeFi post pointing to http://www.example.com/delete/my/records. Your web browser does a GET on the image, example.com authenticates you via your cookie, and then your records are deleted. This example is a bit contrived: image tags forcing GETs are the simple version, but with Javascript it's just as easy to force any GET or POST with any content you want.

I first learned about this vulnerability two or three years ago. I remember thinking "wait, no, that can't be". The flaw is just so fundamental, and most every site using cookie authentication is vulnerable. It's bizarre that there hasn't been more work on solutions. The only one I know of involves polluting the URLs with nonces.

At least now I understand why so many sites require a second login or authentication step before committing a financial transaction.
posted by Nelson at 4:11 AM on October 22, 2006


The nasty thing about CSRF is that it doesn't have to be this web site that is vulnerable. A 302 redirect on another site, or an RSS feed where users from this site are expected to go (for example, an fpp), can use cached credentials (the cookie) to perform GET requests. Making forms POST-only will not save you either, and neither will relying on the referer, because thanks to ActionScript 3 and Flash, pretty much the entire browser is spoofable once a crossdomain.xml can be put on the site. Which is what makes this so dangerous in combination with this.

(Also somehow I managed to slip an extra against into my post... I blame staring at too much Adobe documentation)
posted by mock at 4:12 AM on October 22, 2006


Ah, I see from one of the articles above there are two other recommended solutions. Add a nonce as a hidden form input variable. Or be careful to check referers. I think the second one is fragile, but the first seems workable if a royal PITA.
posted by Nelson at 4:17 AM on October 22, 2006


Oh, and while I'm on the topic of evil things one could do: by putting an IMG tag in the fpp, with some creative use of 302 redirect headers, one could cause every one of MeFi's users to beat some poor server to death with TCP connections. Web servers are usually resistant to this, but mail, ssh, and dns will fall over with just a few thousand simultaneous TCP connections.
posted by mock at 4:19 AM on October 22, 2006


Does something that reports each outbound request (like Little Snitch for the Mac) at least give you a heads up that something is going on, or are these requests invisible to such software?
posted by Brandon Blatcher at 5:10 AM on October 22, 2006


It depends. Since it's being done by your browser, if you have your outbound firewall set to ignore stuff done by the browser process, then it will be ignored. If, on the other hand, you're watching for any network activity to ports other than 80, you might notice my above evil DDoS. However, most of the time it's just going to be yet more GET and POST traffic, which you probably won't notice unless you make a point of watching every http connection.

BTW, it appears that firefox will not allow connections to low ports (or at least, it doesn't seem to work going to port 25 when I test it). It does seem to work for IE.
posted by mock at 5:22 AM on October 22, 2006


The problem could be fixed in several ways, but for all practical purposes, it's a responsibility of the server's programming personnel. And it is yet further confirmation that a mish-mash of legacy protocols and methods such as support the Web, never truly and coherently designed for security, interactivity, or transaction activity, should be replaced by protocols which are better suited to such ends, much as http itself replaced the older gopher protocol.

And yet, as we see from such foundering war horses as SMTP, as long as a thing can be driven at all, even with its gears grinding and its engine smoking from abuse, it will keep being driven. I'm convinced spam will eventually take down SMTP, but until Joe Sixpack User is routinely getting fleeced on Web services and transactions, the momentum of the Web will be its own worst enemy. Because in a "system" with so many subtle vulnerabilities, and so many self-interested players, it is likely to take a long time for the smoking engine and grinding gears to grind completely to a halt. And in the meantime, really invasive "solutions" like DRM will get implemented, as "improvements" on the Tower of Babel that is modern Web design and programming.
posted by paulsc at 6:37 AM on October 22, 2006


This comment should be popular.
posted by Rhomboid at 6:43 AM on October 22, 2006 [128 favorites]


Ouch, you didn't have to do that to make the point, Rhomboid.
posted by malevolent at 7:09 AM on October 22, 2006


Consider it the most benign POC I could think of.
posted by Rhomboid at 7:17 AM on October 22, 2006 [1 favorite]


*wags finger at Rhomboid*

Very interesting stuff, thanks for the post mock. It's incredible how little you hear about this, given how very simple the exploit is, and that it's presumably as old as dynamic webpages.
posted by MetaMonkey at 7:18 AM on October 22, 2006


Another variation of this that would be especially hard to spot would be to host a script that returns a 1x1 transparent pixel GIF 99 out of 100 times, but once in a hundred it redirects to the target URL. It could also discriminate on IP address or browser OS, for example, only redirecting if the user is in Canada and uses Firefox on Windows... Thus an admin in the US running Mac OS would never see it... hypothetically.
posted by Rhomboid at 7:22 AM on October 22, 2006
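[The selective-redirect trick Rhomboid describes can be sketched as a tiny request handler. Python is used purely for illustration; the names (serve_image, TARGET_URL) and the user-agent matching are invented for this sketch, not taken from any real attack.]

```python
import random

# Hypothetical state-changing URL the attacker wants victims to hit.
TARGET_URL = "http://www.example.com/delete/my/records"

# An innocuous 1x1 transparent GIF, served most of the time.
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00"
             b"\x00\x00\x00!\xf9\x04\x01\x00\x00\x00\x00,"
             b"\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

def serve_image(user_agent, rng=random):
    """Return (status, headers, body) for a request to the 'image' URL.

    99 times out of 100, or whenever the visitor doesn't match the
    targeted browser/OS, this is an innocent transparent pixel.
    Otherwise it 302-redirects the victim's browser at the real target,
    so an admin on a different platform never sees anything suspicious.
    """
    targeted = "Windows" in user_agent and "Firefox" in user_agent
    if targeted and rng.randrange(100) == 0:
        return 302, {"Location": TARGET_URL}, b""
    return 200, {"Content-Type": "image/gif"}, PIXEL_GIF
```

The same discrimination could key off the client IP (for geography) instead of the user-agent; the structure is identical.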


From what I gather, there is a potentially big and fairly imminent problem facing anyone whose browser uses cookies. I think I also understand that someone could fake a page that would take advantage of the fact that I am logged into MetaFilter to steal my money and post weird stuff on my own page, but it doesn't really make all that much sense to me.

The links provided seem to be aimed at people with a good and extensive knowledge of systems. Is there any chance someone could explain it to me in layman's terms?
posted by micayetoca at 7:23 AM on October 22, 2006


also, why is music.metafilter.com particularly problematic?
posted by micayetoca at 7:26 AM on October 22, 2006


micayetoca, that is the potential outcome of traditional XSS attacks. Any site that does not effectively filter out javascript from untrusted users risks having those users' cookies stolen, among all sorts of other malfeasance. But XSS is a well-known and well-understood threat, so there's nothing new there.
posted by Rhomboid at 7:28 AM on October 22, 2006


There are several ways of requesting web pages, and GET is the main one. It's usually used to grab images and HTML pages; any time you put a URL in the address bar, you are GETting a web page.

The problem is, people use GET for things they really shouldn't. Like above, the way Metafilter allows for a person to favorite a link is just a plain URL:
http://www.metafilter.com/contribute/add_favorite.mefi?sitetype_id=2&link_ID=1470320&parent_id=55720&author_id=26222

Like so.

Rhomboid took that url, and used it as the url for an img. When a person browses this page, the web browser attempts to grab the image. This request GETs the above url, which favorites the post. Which is why his post has 50-some-odd favorites now.

If, for example, a website you use, like PayPal, had a GET-type link for giving someone money, like:

http://www.paypal.com/givemoney.php?userid=Rhomboid&amount=1000000

and Rhomboid had included that link, he would have been much richer.
posted by zabuni at 7:31 AM on October 22, 2006 [1 favorite]
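[The anti-pattern zabuni describes can be reduced to a toy handler. This is a sketch in Python for illustration only; the path mimics, and does not reproduce, MetaFilter's actual code. The browser attaches the site's cookie to any request for that host, including one triggered by an img tag on an attacker's page, so the server cannot tell the two apart.]

```python
# Server-side state touched by the vulnerable endpoint.
favorites = set()

def handle_get(path, query, cookie):
    """Toy request handler demonstrating a side-effecting GET.

    `cookie` stands in for whatever the browser sends automatically.
    Cookie-based "authentication" passes whether the GET came from the
    user clicking a link or from an <img> tag the user never noticed.
    """
    user = cookie.get("user")
    if user is None:
        return 403                    # not logged in
    if path == "/contribute/add_favorite.mefi":
        favorites.add((user, query["link_ID"]))   # side effect on a GET!
        return 200
    return 404
```

Calling `handle_get("/contribute/add_favorite.mefi", {"link_ID": "1470320"}, {"user": "someone"})` succeeds and mutates `favorites`, which is exactly what a hostile img src achieves against a logged-in visitor.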



Oh, I see now, thx Rhomboid and Zabuni. So, the threat would be that someone takes the actual link that, say, Paypal has and replaces it with a link to send money to himself. Right?

Two last questions, a) how is being logged into MetaFilter a factor into the potential theft at other site? and b) why is music.metafilter.com particularly problematic?
posted by micayetoca at 7:38 AM on October 22, 2006


Is there any chance someone could explain it to me in layman's terms?

1. you are logged in to site X

2. you visit site Y

3. site Y has a piece of code that triggers action Z on site X.

Because you are logged in to site X, and the request is coming from your computer, site X has no reason to believe it is actually site Y which made the action happen. To site X, it is just as if you had done the action on the site itself, it cannot tell the difference.

The approach to stop this problem is to require a token variable with each submitted form, generated randomly for each form on your site. External sites cannot spoof or discover this token, so this defeats the attack.
posted by MetaMonkey at 7:42 AM on October 22, 2006
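[The token defense MetaMonkey describes, sketched in modern Python. The function names are invented, and `secrets`/`hmac.compare_digest` are today's idiom rather than 2006's; the idea, a random per-form value the external site cannot know, is the same.]

```python
import hmac
import secrets

def issue_token(session):
    """Generate an unguessable token, remember it server-side in the
    user's session, and return it for embedding as a hidden form field."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def check_token(session, submitted):
    """Accept a form submission only if it carries the token we issued.
    compare_digest does a constant-time comparison to avoid timing leaks."""
    expected = session.get("csrf_token")
    return expected is not None and hmac.compare_digest(expected, submitted)
```

An external site can make the victim's browser submit the form, but it cannot read the victim's copy of the page to learn the token, so `check_token` fails for forged requests.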


how is being logged into MetaFilter a factor into the potential theft at other site

It isn't; the point is that any site you are logged on to could potentially have actions done, as if by you, from external sites. So being logged on to mefi means some on some other website you visit could post as you to mefi, or do pretty much anything you could do.
posted by MetaMonkey at 7:44 AM on October 22, 2006


some on
posted by MetaMonkey at 7:45 AM on October 22, 2006


Right. I get it now, and everybody's comments make a lot more sense. I guess the people writing the articles don't really have to consider people like me among their audience, but it wouldn't hurt if they explained it in simpler terms.

Thx y'all for the explanation.
posted by micayetoca at 7:53 AM on October 22, 2006


I think zabuni confuses the issue a little, because it isn't really about using GET instead of POST.

The GET/POST thing is only an issue in Rhomboid's demonstration because he would be unable to make a POST from mefi (because Matt excludes javascript). If you visited Rhomboid's site, he could happily POST as you to mefi with the requisite javascript code.
posted by MetaMonkey at 7:54 AM on October 22, 2006


True enough. The GET is especially grievous, though, because most places, even ones that restrict users' ability to add content, allow images, and thus are susceptible.

I wouldn't trust Rhomboid's site, but most people trust metafilter.
posted by zabuni at 8:03 AM on October 22, 2006


and the problem with Music?
posted by micayetoca at 8:03 AM on October 22, 2006


I mean, why Music in particular, and not all of the site?
posted by micayetoca at 8:04 AM on October 22, 2006


It appears Rhomboid's little trick up there works even on jessamyn. Imagine the possible fun.

<img src="http://www.metafilter.com/admin/banuser.mefi?user_ID=26222" width=1 height=1>
posted by grouse at 8:15 AM on October 22, 2006


It uses flash, and thereby might be vulnerable to things mentioned in mock's post.
posted by zabuni at 8:16 AM on October 22, 2006


Right, the problem with doing it from an external site is that you'd have to find two sites X and Y such that there would be a high probability that visitors to X were also members of Y (and were currently logged in), with the attacker having administrative control of (or had found an XSS exploit in) X.

Regarding music.mefi, I don't know of anything in particular that would make it vulnerable, only that being able to script Flash with ActionScript widens the possibility of attacks, and music.mefi uses Flash. However, I don't see a way that third-party Flash could be injected (unless mathowie was linking to a third-party .swf player applet), so offhand I don't understand the implication.
posted by Rhomboid at 8:18 AM on October 22, 2006


Oh and the paypal example would only be valid if paypal was naive enough to allow a transaction to occur with a single GET, which they almost certainly (one hopes!) are not. I know that they allow a simple GET to initiate a transaction but it requires an additional POST or confirmation of some sort so you can't just link to that URL in an offsite IMG or something.
posted by Rhomboid at 8:20 AM on October 22, 2006


Right, I spend most of my time there and I couldn't remember that the player is a flash player. I get it now. Does that link grouse put there bans jessamyn?
posted by micayetoca at 8:24 AM on October 22, 2006


Just clear your cookies often. Or always deny cookies from your financial institutions and other sites where you have accounts you care about.

Someone above said it was the site programmers responsibility to make their site secure. While this is true to an extent, it is YOUR responsibility to know what your computer is doing. Pay attention.

Clear your cookies before you go into the Internet ghetto.
posted by jeffamaphone at 8:26 AM on October 22, 2006


Does that link grouse put there bans jessamyn?

Only if he guessed the right URL. Assuming there is a URL at all, and that jessamyn can run it.
posted by smackfu at 8:46 AM on October 22, 2006


I used Rhomboid's user ID. Turnabout is fair play ;)
posted by grouse at 8:50 AM on October 22, 2006


For those wishing to defavoritize Rhomboid's comment (as smashingly brilliant as it may be): click.

Of course, if you ever reload this page...

Now, if I had put that in an IMG tag, I'd guess most browsers would load it after Rhomboid's, so it would make his comment be each visitor's favorite just for a split second as they load the page. (But I wouldn't want to do that, because hey, maybe some of you really do like the comment!)
posted by whatnotever at 9:01 AM on October 22, 2006


There is no GET action that can ban someone, so no worries there. I'll remove the img tag for a few days, until I can rewrite some of the GETs into POSTs.
posted by mathowie at 9:05 AM on October 22, 2006 [1 favorite]


Oh dear. My link seems to redirect back to this page... But I think my browser cached Rhomboid's "image" (thus not reloading it), because I haven't refavorited it.
posted by whatnotever at 9:06 AM on October 22, 2006


I just want to point out that I have genuinely favorited Rhomboid's comment. Very clever, and an excellent demonstration of the problem's seriousness.
posted by Optimus Chyme at 9:36 AM on October 22, 2006


Thanks for asking the right questions, micayetoca, as well as zabuni, Rhomboid, MetaMonkey, et al. for explaining. There is no reason this stuff should be hard for an average user to understand!
posted by Chuckles at 10:15 AM on October 22, 2006


For Matt and anyone else reading: GET vs. POST isn't the issue here. It's easier to get a browser to do a GET, because you can embed an image. But with Javascript you can force a browser to do a POST to any site you want with any content you want. If you're really worried about this attack, POST isn't enough.

Yeah, I thought Rhomboid was brilliant.
posted by Nelson at 10:19 AM on October 22, 2006


Excellent post and discussion. Thanks.
posted by maxwelton at 10:33 AM on October 22, 2006


I've thought for a while that what Rhomboid did could be used to game Digg and equivalent popularity-based aggregators. If you embed "Digg this" action code in a page, then if people who are logged into Digg visit you they add votes without realizing it.

I've seen quite a few pages recently where the bottom of every post is a sequence of icons, each of which is a "vote" button for a different Digg-style aggregator. Someone who was unscrupulous could just unwrap all the icons and embed the code directly in their page.
posted by Steven C. Den Beste at 11:01 AM on October 22, 2006


SDB, I'm pretty sure someone exploited this about a year ago on digg and they took appropriate steps to block it.
posted by mathowie at 11:28 AM on October 22, 2006


Something similar to this was the subject of a The Daily WTF post a while back, where Google's spider inadvertently deleted an entire wiki-esque web site due to very bad authentication coding and lots of 'DeletePage' links...
posted by nielm at 11:31 AM on October 22, 2006


Good grief. Okay, if making me seem to vote for the Republi, er, the popularity of your "this comment should be popular" posting above with that hidden code (some of its guts here)
...
action="http://www.metafilter.com/....
onclick="javascript:comment1470318flag.submit...
title="Flag this comment" class="flag"...
...

Agreeing that was the least bad demo, what's the worst you could have done to us in this thread, and what can we do about it?
posted by hank at 1:14 PM on October 22, 2006


Just so folks know, the reason why music.metafilter.com might have a problem is because it allows uploading of files. It turns out that almost any uploaded file can be crafted to double as a crossdomain.xml file. For example, here's a version in a gif. This becomes a real problem for two reasons: 1) you can do anything that the browser does in ActionScript, which makes nonces essentially broken; 2) Adobe added a binary socket operation to ActionScript 3, which means you can use Flash to attack other services on the host.
posted by mock at 1:15 PM on October 22, 2006
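[One crude mitigation the GIF trick suggests: scan uploads for cross-domain policy markup before serving them from your domain. This is a sketch under obvious assumptions (the function name is invented, and the markup can be obfuscated in countless ways, so this is by no means a complete defense).]

```python
# Byte patterns that mark Flash cross-domain policy markup. Flash will
# parse a policy file out of bytes that are *also* a valid image, so a
# file's extension or magic number tells you nothing on its own.
POLICY_MARKERS = (b"<cross-domain-policy", b"<allow-access-from")

def looks_like_policy_file(data: bytes) -> bool:
    """Return True if the uploaded bytes contain policy-file markup
    anywhere, regardless of what file type they claim to be."""
    lowered = data.lower()
    return any(marker in lowered for marker in POLICY_MARKERS)
```

A site could reject (or serve from a separate, cookieless domain) any upload for which this returns True; serving user uploads from a different hostname than the one that holds sessions is the more robust fix.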


Blimey mock, that Flash stuff is horrific; what on earth was Adobe thinking?! I didn't realise they were pursuing such a risky policy model.

I suppose any site allowing uploads is now going to have to check carefully for policy file markup obfuscated in countless ways.
posted by malevolent at 1:37 PM on October 22, 2006


I'm surprised it's taken this long for this to come out, it's so deceptively simple.
posted by bonaldi at 2:21 PM on October 22, 2006


Hmmm in retrospect, I should have given this a 'crossdomain.xml' tag. Too bad adding tags seems to be broken right now
posted by mock at 2:24 PM on October 22, 2006


Nevermind, that's probably not too safe.
posted by mock at 2:31 PM on October 22, 2006


So, if I understand this correctly, one could avoid personal vulnerability by logging off important websites immediately after using them, yes?
posted by solotoro at 3:42 PM on October 22, 2006


You need to make sure that all credentials are cleared. Which means cookies and basic auth (and any client certs). Most websites will let you log off, clearing the session cookie, and if you also clear cache and history (or just close the browser and reopen) you should be fine. Sometimes a site will cache your credentials in a cookie permanently ("save my login and password"). In this case you need to delete the cookie yourself to be safe.
posted by mock at 3:49 PM on October 22, 2006


Also, you might want to look into NoScript if you're using Firefox. It blocks javascript and flash by default, so anything malicious has to be specifically allowed by you before it can run in your browser.
posted by mock at 3:59 PM on October 22, 2006


So to "fix" this, what can be done from the point of view of the browser and protocol?

The idea of adding unique IDs that can't be guessed by remote sites is all fine, but this would require a rewrite of every bit of GET and POST request handling code on the web, wouldn't it?

Disabling cookies, clearing history. All fine. But cookies make the web usable for me. If I had to log out of Metafilter every time I surfed away from it, then log in again when I came back, I don't know if I'd bother with it. Multiply that by every site I visit.

So how can new versions of browsers fix this problem? What can be changed in the HTTP spec to protect against this? Do we need some new method of authentication, rather than relying on old fashioned login forms and cookies? At the end of the day, this is the only way a solution can be found. Everything else is just a dirty hack.
posted by Jimbob at 4:53 PM on October 22, 2006


The safest way to protect yourself is to kill all browser windows after using an important password-protected site (e.g. your bank). And count to three, and wait for all disk activity to stop.

Then open a new one for whatever you want to browse next.

Jimbob, owners of critical sites can protect against this by requiring revalidation for critical operations.
posted by Steven C. Den Beste at 4:57 PM on October 22, 2006


Hank, the code you're looking at is the normal Metafilter code supporting those functions. The bogus code that Rhomboid created has been deleted long since.
posted by Steven C. Den Beste at 4:59 PM on October 22, 2006


Martin Johns, who will be speaking at pacsec (disclosure, I am a pacsec organizer) in about a month, has a paper on some mitigation techniques for CSRF on the client side which may prove useful.
posted by mock at 6:20 PM on October 22, 2006


I actually use different browsers for different purposes. You could extend this idea by installing, say, Opera, if you don't use it already, and using Opera and only Opera to access secure, sensitive sites that you trust, such as your bank or PayPal, and NEVER using it for anything else. Use your favorite browser such as Firefox or IE for everything else and NEVER use it for the critical sites.

In this way your vulnerability will be limited to the possibility of malicious code served by a secure site that you trust, and if that happens you were fucked a long time ago anyway.
posted by George_Spiggott at 6:43 PM on October 22, 2006


Jimbob: What browsers need to do is disable cross-site POSTs. This, coupled with sites making sure GETs are side-effect free, would fix this vulnerability.

Requiring all web applications in existence to be fixed is unrealistic. Having everybody in the world update their web browsers is only slightly less unrealistic, but at least then savvy users will be able to protect themselves from this exploit instead of having to rely on each site they visit being properly armored.
posted by Khalad at 7:28 PM on October 22, 2006


What browsers need to do is disable cross-site POSTs.

Now I'm a bit ignorant about this, since I haven't been a web developer in about five years now, but bear with me. I'd appreciate an explanation if anyone can help me out.

Anyway... aren't cross-site POSTs some kinda important part of this whole new-fangled Web 2.0 thingy? For instance, there are some sites out there that use the Flickr API. You go to these services, and they "check" your Flickr account to see who you are and to authenticate you. I may be totally wrong about this, but I always assumed this involved the site sending a request to Flickr that then relied on you being logged in and your cookie being present, to authenticate you back on the third-party site. Wouldn't disabling cross-site requests like this break a lot of the new, integrated API-based web?
posted by Jimbob at 8:29 PM on October 22, 2006


Jimbob: Yep, and that's the problem. You could fight this by creating intermediate pages that require a captcha to stop an automated request.

It would kill mashups that actually change things cross-site. Or at least make them more annoying.
posted by zabuni at 10:18 PM on October 22, 2006


Well, theoretically, all those requests should go through REST or SOAP APIs, which have their own authentication built in. But should the mashup be susceptible, the vulnerability propagates out to the API provider.
posted by Freen at 7:12 AM on October 23, 2006


JimBob (and others) -- this isn't a new problem at all, and I don't really even see how it's anything but a special case of regular XSS vulnerabilities. In 2000, I had some issues of this variety with Scoop, so we added "form keys," which is just a one-time semi-random hash that's added to a user's session info when they load a form, and then is checked for when that user submits the form. So, even though Scoop doesn't distinguish between GET and POST, all actions have to include a valid formkey to be permitted. It's not a very large or difficult piece of programming. Mainly it's just important to remember to check for the key in whatever code handles actions. Setting and checking formkeys is all bundled away in a library, so it's just one line to include it in the form, and another to fail the action if it isn't valid.

So that was all well and good, but ironically the teeth of this formkey stuff got accidentally stripped out of Scoop not very long after it was originally written, and it was basically for show only for the next five years or so. Someone finally discovered this a few months ago, and we had a brief rash of excitement while it was exploited. But it was pretty quick to figure out why the form keys weren't protecting us and fix them again.

So, the upshot is this is an old problem, and does not require captchas on every form. But it can be easily solved with hidden one-time form nonces, and if those are terribly difficult to implement in your vulnerable web app then there's something dreadfully wrong with your web app to begin with, and you should probably give up this programming business and go raise goats in Montana.
posted by rusty at 9:37 AM on October 23, 2006 [1 favorite]


What can be done to protect against this from the perspective of a site owner/maintainer?

E.g., what can metafilter do to stop someone creating a random page with some JS code to do something malicious (e.g. posting nasty comments/messing up their profile/removing all favourites etc) and then posting that as an 'interesting' link?

As far as I can see, nothing! Referrer logging is often turned off (I turn it off myself, unless sites insist on it!), and all re-confirmation that can be done via HTML can be bypassed by intelligent scripting...
posted by nielm at 10:29 AM on October 23, 2006


rusty: can a clever script not first request the form (via XmlHttpRequest), then submit its nastiness with the nice shiny new formkey that it just got?

It makes things a little more difficult for the exploiter, but it's not impossible...
posted by nielm at 10:32 AM on October 23, 2006


I think having a page like we do (where you can self-link all you want) helps a great deal.
posted by JPowers at 5:19 PM on October 23, 2006


Clear your cookies before you go into the Internet ghetto. - jeffamaphone

Like, for instance, mistyping the name of a popular website in your address bar? I imagine ebya.com & metafitler.com & paylap.com etc get a lot of hits by accident and could very well be the 'internet ghetto'.

Or maybe I'm the only person left that goes to favourite sites by typing the url in?
posted by raedyn at 2:15 PM on October 25, 2006


nielm; it looks like rusty's suggestion would be the best way to handle this from mathowie's perspective:

When I ask for this thread's page, the server calculates a small random 5-digit number and embeds it as part of the form at the bottom, where I type my comment.

The server remembers this value, and associates it with my username in the database, temporarily.

When I write a comment and press Submit, the server first pulls up the random number from the database, and checks it against the one sent along with my comment text. If they match, the comment is posted.

If instead of actually writing a comment, I had clicked on someone's link to a comment-writing script, that script would not have access to the random number, and so when it submits a fake form with a fake comment, the metafilter server will not be able to validate it with the stored random value, and the fake comment posting will fail.
posted by odinsdream at 1:39 PM on October 26, 2006
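[odinsdream's steps can be played out as a runnable toy. This is a sketch, not Scoop's or MetaFilter's code: the function names and the use of `secrets` are this sketch's inventions, and keys here are single-use (rusty's formkey style), which is slightly stricter than the per-user number described above.]

```python
import secrets

sessions = {}   # session_id -> set of outstanding form keys (server side)
comments = []   # where accepted comments land

def render_form(session_id):
    """Steps 1-2: generate a random key when serving the form and
    remember it in the user's session. The return value is what would
    go into <input type="hidden" name="formkey" value="...">."""
    key = secrets.token_hex(8)
    sessions.setdefault(session_id, set()).add(key)
    return key

def post_comment(session_id, formkey, text):
    """Steps 3-4: accept the comment only if it carries a key we
    actually issued to this session. Discarding the key on use makes
    each one single-use, so a captured key can't be replayed."""
    pending = sessions.get(session_id, set())
    if formkey in pending:
        pending.discard(formkey)
        comments.append(text)
        return True
    return False   # a forged cross-site submission ends up here
```

The forged-submission case is the whole point: a hostile page can make your browser call the equivalent of `post_comment`, but it has no way to learn a valid `formkey`, so the action fails.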




This thread has been archived and is closed to new comments