NYTimes.com has low security

October 13, 2000 4:17 PM   Subscribe

NYTimes.com has low security
Even I, a casual passerby, could access secret documents about the mysterious "partners" while trying to avoid accepting a cookie. Heh, "channel", "partners", the number 10. They're all related somehow? PS: "channel.nytimes.com" doesn't give access to pages without logging in. Any ideas?
posted by rschram (8 comments total)
 
http://channel.nytimes.com/partners
they just basically forgot to turn off directory browsing...

http://www.netcraft.com/whats/?host=channel.nytimes.com
i'd hope that feature was available under this server...
posted by pnevares at 5:06 PM on October 13, 2000
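
Directory browsing, for context, means the server answers a bare directory URL with an auto-generated listing of the files inside it rather than an ordinary page. Here is a minimal sketch of how you might spot that from the outside, written in Python with a placeholder URL and an Apache-style "Index of /" heuristic (both are assumptions, not details from the thread):

    # Probe a URL and guess whether the response is an auto-generated
    # directory index rather than an ordinary page. The URL below is a
    # placeholder, and the "Index of /" check only matches Apache-style
    # listings.
    import urllib.request

    def looks_like_directory_listing(url):
        with urllib.request.urlopen(url) as resp:
            body = resp.read(65536).decode("utf-8", errors="replace")
        # Apache's auto-index titles its listings "Index of /path"
        return "Index of /" in body

    if __name__ == "__main__":
        print(looks_like_directory_listing("http://example.com/partners/"))

On the server side the usual fix is simply to switch the auto-index feature off (in Apache, for instance, with Options -Indexes) or to drop an index page into the directory.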


Okay, after just having fun digging through the vast levels of old/unused/demo files found within the above URL (I love being able to directory browse...), I found at least one fairly cool thing: a directory of XML feed partners, like CNET, TheStreet, PalmPilot, IVillage, etc. (http://199.97.97.184/nytimes-partners)

Yeah, it's just a series of links to full stories in NYTimes, but it's live, and if you just wanted to slap together your own private headline homepage based on the XML files, it's there.


posted by kokogiak at 5:49 PM on October 13, 2000
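
For anyone who wants to picture the "private headline homepage" idea, here is a rough sketch in Python that pulls one of the partner XML files and dumps its headlines as a bare-bones HTML page. The file name is a placeholder and the item/title/link element names are an assumption (RSS-style); the real partner files may well use different tags.

    # Fetch a partner XML feed and render its headlines as a minimal HTML list.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://199.97.97.184/nytimes-partners/example.xml"  # placeholder file name

    def fetch_headlines(url):
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):          # assumed element name
            title = item.findtext("title", "")  # assumed element name
            link = item.findtext("link", "")    # assumed element name
            yield title.strip(), link.strip()

    def render_page(headlines):
        rows = ['<li><a href="%s">%s</a></li>' % (link, title)
                for title, link in headlines]
        return "<html><body><ul>%s</ul></body></html>" % "\n".join(rows)

    if __name__ == "__main__":
        print(render_page(fetch_headlines(FEED_URL)))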



And proving that the Times was on top of things from the start... the xml directory was created June 15, 1998.

But I thought XML was supposed to provide the data in a common format, and let the customer parse it how they wished. So why exactly does the Times provide 40 different versions of their XML?
posted by smackfu at 6:11 PM on October 13, 2000


think it's illegal to use those xml files on your page?
probably
posted by starduck at 7:11 PM on October 13, 2000


I was just talking about a personal-use headline page based on those files, nothing for resale or reuse - but that still might be illegal.
posted by kokogiak at 12:41 AM on October 14, 2000


Illegal to use these XML feeds? I doubt it; they're not secret, they're available to anyone over HTTP. They send them to you when you ask.

Of course they're welcome to restrict access.

>I thought XML was supposed to provide the
> data in a common format

Which files are you referring to?


posted by holloway at 10:23 PM on October 14, 2000


Hmm, so (holloway), what you're suggesting is that if something is freely available via HTTP, and is unrestricted, then it's free for all to use? I wonder what the definition of "fair use" would be in this case? I think a line would be crossed somewhere between 'using it for my own page', 'using it for my own public website', and 'using it for a for-profit website'. I'm not a legal scholar, but this is interesting territory.
posted by kokogiak at 10:51 PM on October 14, 2000


What I mean is that I doubt anyone could call this a backdoor, a private area, when it's publicly available to anyone and they could take measures to restrict access. If I have a URL of "Metafilter.com" and people keep being ever-so-sneaky and downloading the HTML, I wouldn't have a legal leg to stand on. Hopefully.

Sites regularly trawl others' HTML for news headlines; I don't see the difference. I would prefer people grab my minimal XML rather than relatively bloated HTML full of superfluous markup for advertisements, logins, etc... bandwidth being something you want to minimise.

I believe American judges have ruled that deep-linking is legal, which is why I would say the host wouldn't have any legal defence against you requesting files and them sending the files to you.

PS: No, I don't think everything available via HTTP should be free for all to use; the example of copyrighted music comes to mind. I haven't seen any copyright notice on these XML files, though.
posted by holloway at 6:58 AM on October 15, 2000
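
To illustrate the contrast holloway is drawing between grabbing a small XML feed and trawling a full HTML page: scraping headlines out of HTML means writing a site-specific rule to dig past ads, navigation and login markup. A crude scraper might look like the Python sketch below, where the "headlines live in links inside h3 tags" rule is an invented assumption; every site needs its own such rule, which is exactly the fragility a dedicated XML feed avoids.

    # Pull the text of links that appear inside <h3> tags out of an HTML page.
    from html.parser import HTMLParser

    class HeadlineScraper(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_h3 = False
            self.in_link = False
            self.current = []
            self.headlines = []

        def handle_starttag(self, tag, attrs):
            if tag == "h3":
                self.in_h3 = True
            elif tag == "a" and self.in_h3:
                self.in_link = True

        def handle_endtag(self, tag):
            if tag == "h3":
                self.in_h3 = False
            elif tag == "a" and self.in_link:
                self.in_link = False
                self.headlines.append("".join(self.current).strip())
                self.current = []

        def handle_data(self, data):
            if self.in_link:
                self.current.append(data)

    if __name__ == "__main__":
        scraper = HeadlineScraper()
        scraper.feed('<h3><a href="/story1">Example headline</a></h3>')
        print(scraper.headlines)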



