Chowned
December 21, 2009 9:22 AM   Subscribe

While many Linux users cite the system's security against malware, the appearance of malware disguised as a screensaver reminded everyone that no system is 100% safe. Ubuntu users were quick to identify the virus, identify the perpetrators, and create a fix, but this isn't the first time this has happened, and will in all likelihood not be the last. The criticism in the community is directed squarely at the user base: "In general the lesson to be learned is if you want a secure system, don't download any software outside the official package sources without at least looking at the source code first."
posted by Marisa Stole the Precious Thing (97 comments total) 5 users marked this as a favorite
 
But what would we look for? I'm an ubuntu linux user (it came with my Dell Mini 9) and I have no idea about computer code.
posted by Hypnotic Chick at 9:30 AM on December 21, 2009 [4 favorites]


Huh. So if Joe Casual User is running an Ubuntu box that his grandson set up for him because he was tired of fixing Grandpa's Windows box every month, would it have been possible for Joe to install this malware? I can't imagine a novice even knowing how to look at the source code, let alone figure out it was malicious.
posted by mr_crash_davis mark II: Jazz Odyssey at 9:30 AM on December 21, 2009


Substitute "Hypnotic Chick" for "Joe Casual User", I guess.
posted by mr_crash_davis mark II: Jazz Odyssey at 9:30 AM on December 21, 2009


But what would we look for?

If it's not in the official repositories, don't touch it. All the x-look.org sites and such. This is probably a bigger problem for KDE4 since there's such a big emphasis on the "community desktop" or whatever it's called. I think it's a bad idea and I hope the KDE4 people take note of this.
posted by fuq at 9:32 AM on December 21, 2009 [3 favorites]


" quick to identify the virus"

What virus?

"malware disguised as a screensaver "

Oh, a trojan? Yeah, people make trojans for all kinds of platforms, popular Linux distributions included, right alongside the millions of trojan-compromised Windows PCs. There was an OSX trojan not too long back, last year or so. Fortunately for everyone, most trojans are not viruses, and this particular trojan was about as slick as tossing half a telephone pole in the swimming pool.

In general the lesson to be learned is if you perform privileged system operations, know what will happen, or at least know what the risks are. Otherwise do not elevate your privileges.
posted by majick at 9:34 AM on December 21, 2009 [4 favorites]


But what would we look for? I'm an ubuntu linux user (it came with my Dell Mini 9) and I have no idea about computer code.

The screensaver was not approved by Gnome, and was outside the official package list. That's generally your first hint to use caution. Additionally, you can open screensaver packages as you would any archive file. Look inside for an .sh file - these are shell scripts, made to issue commands, and can be opened like text files. If nothing else, if it seems dodgy, post about it in the forums and ask the experts.
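(For the curious, here's roughly how to peek inside a downloaded .deb without installing it - the filename is made up, and this is a sketch, not gospel:)

# poke around a package without installing it
dpkg-deb --info suspicious-screensaver.deb       # description, dependencies, maintainer
dpkg-deb -e suspicious-screensaver.deb control   # extract the maintainer scripts (postinst etc.) into ./control
dpkg-deb -x suspicious-screensaver.deb contents  # unpack the files it would install into ./contents
less control/postinst                            # maintainer scripts are a common hiding place for this sort of payload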

What virus?

Argh! Good eye. Yes, this screensaver will not replicate itself onto other machines or anything.
posted by Marisa Stole the Precious Thing at 9:36 AM on December 21, 2009


In general the lesson to be learned is if you perform privileged system operations, know what will happen, or at least know what the risks are. Otherwise do not elevate your privileges.

Even if something doesn't require escalated privilege, it can do plenty of damage. Getting your personal documents and photos wiped out is probably more devastating for somebody than anything else a trojan or virus could do. Not to mention local escalation exploits that are far more common than remote exploits.

Really, just stick to the official repos unless you really know what you're doing.

And for people using Windows, you don't need that fucking cute cursor or sparkly clip art. Seriously, you don't.
posted by kmz at 9:48 AM on December 21, 2009 [1 favorite]


There's a weird "i lolled you so" tone to this post. Pretty sure no Linux user has ever claimed that any computer system is 100% safe.
posted by DU at 9:48 AM on December 21, 2009 [7 favorites]


While it's disheartening to hear any OS community blaming the victims for not "looking at" (by which is presumably meant "comprehending") the source code for a screensaver, I think "don't install software from sources you don't trust" applies universally, to all computing platforms. Of course, what a trusted source looks like for a Linux distro is different from, say, Microsoft or Apple.
posted by GameDesignerBen at 9:53 AM on December 21, 2009


It's actually quite simple, really. Anybody can do it. First, you'll need to open a shell and navigate to /var/wtf/buried/neophyte.fu and decompile the code using luser.blowme. Then you'll have to read cryptic penguin pages online (you do have another computer, don't you?), which tell you how easy it is, just open the file (neglecting to tell you how to get to the file, what program to use to open it, how to even open it, etc.). Then you buy a Mac, because Macs don't ever get viruses and need no protection...and your friend, who runs Windows, just shrugs because they've done it a dozen times themselves.

I love platform flame wars!
posted by Chuffy at 9:58 AM on December 21, 2009 [3 favorites]


There's a weird "i lolled you so" tone to this post. Pretty sure no Linux user has ever claimed that any computer system is 100% safe.

I've been using Linux for three years now. No, I've never heard anyone claim Linux is 100% safe against exploits, nor did I say anyone had, but the security aspect is really often touted when it comes to this or that distro's advantage over Windows. This gives a false sense of security, I think. It's important to understand that with all the advantages, there are still risks entailed. The "tone of the post" is not meant to be lulzy, just to include a little levity.
posted by Marisa Stole the Precious Thing at 9:59 AM on December 21, 2009 [1 favorite]


If Joe Casual User's grandson didn't give Joe root (i.e., admin-level) access, Joe couldn't install this. But the grandson would then need to be doing all system upgrades and administrative tasks.

Pretty sure no Linux user has ever claimed that any computer system is 100% safe.

There's a heck of a lot of smug LOLlery and overenthusiastic claims in various Linux forums whenever Windows security is discussed.

But, yeah, for any platform, don't install code from random websites.
posted by Zed at 10:04 AM on December 21, 2009 [1 favorite]


I can just see me explaining this to my Mom:

Hey Mom, you did what? Oh no. Well, I found the answer online, all you have to do is run this command, you ready?

sudo rm -f /usr/bin/Auto.bash /usr/bin/run.bash /etc/profile.d/gnome.sh /usr/bin/index.php /usr/bin/run.bash && sudo dpkg -r app5552

45 minutes later...

OK, Mom, that's it. Didn't you read that copy of Hacker's Quarterly I sent you last month? I know, but you never know when you're going to need to know how to compromise a pay phone...
posted by Chuffy at 10:07 AM on December 21, 2009 [3 favorites]


I wonder how many generations into utter dependence on computers we'll be before "just look at the code" is no longer regarded as a valid response to security issues.

"Not sure if that front door lock you just bought is secure? Take it apart and just look at the works!"
posted by lodurr at 10:17 AM on December 21, 2009 [4 favorites]


It really shouldn't matter where code came from, nor if it was "approved" or not... the real question is why doesn't the OS offer a way to run any code without trusting it with everything?

AppArmor is the first step I've seen taken towards improving the situation on the Linux front. In my opinion, going the rest of the way to a complete implementation of the principle of least privilege would really do a lot to let us use computers as general purpose tools again.
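(For what it's worth, the apparmor-utils tools already let you build a confinement profile for a single program - the path below is hypothetical, and this is a sketch rather than a recipe:)

# generate and enforce an AppArmor profile for one untrusted program
sudo apt-get install apparmor-utils
sudo aa-genprof /usr/local/bin/fancy-screensaver   # interactive: exercise the program, then allow/deny what it tried
sudo aa-enforce /usr/local/bin/fancy-screensaver   # flip the resulting profile from complain to enforce mode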
posted by MikeWarot at 10:21 AM on December 21, 2009 [3 favorites]


but the security aspect is really often touted when it comes to this or that distro's advantage over Windows.

There's a heck of a lot of smug LOLlery and overenthusiastic claims in various Linux forums whenever Windows security is discussed.

Indeed, but this post does nothing to prove any of that wrong. When there's a plane crash, do the stories start with "While many airplane advocates cite the method's safety, the crash of Flight 12345 reminded everyone that no system is 100% safe"?
posted by DU at 10:23 AM on December 21, 2009 [1 favorite]


The "look at the source" advice needs some perspective here: What is important is that you CAN look at the source. Even if you personally don't, someone you trust can: your distro maintainer, your IT department or your whiz kid nephew. With closed systems, you wouldn't be able to look at the source even if you wanted to.
posted by Dr Dracator at 10:32 AM on December 21, 2009 [1 favorite]


I don't think there's anything that the Ubuntu people could have done differently to prevent this, since the repository was hosted by an outside entity. Users had to add that repository, and then explicitly choose to either run unsigned code or trust a master key from that repo. There's nothing that Ubuntu can do to stop this, unless they lock the system down and prevent you from adding programs you want to your OS. Blaming "Linux" for this problem is wrongheaded, in exactly the same way that blaming Microsoft or Apple would be... this is the fault of the repo in question, not the operating system that's the target. Everything that was under the Ubuntu team's control was done properly, as far as I can see.

The Debian code base (which is what Ubuntu starts from) supports the idea of signed packages, and it will alert you in Most Stern Tones if you try to install code that's not signed... but of course it allows you to do so if you wish. Anytime you add new code to your machine, you're taking a risk. Signed packages mitigate that risk by a fairly large amount, because the signatures should be traceable, at least under ordinary circumstances, to specific people. If you trust the original repository to set up your computing environment, it's generally safe to also trust the people they trust to make new packages for it. But there are no guarantees.
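(A hedged illustration of what that signing chain looks like on a Debian/Ubuntu box - the release name and mirror URL are just examples:)

# the keys apt currently trusts to sign package indexes
apt-key list

# manually checking a repository's signed index against that keyring
wget http://archive.ubuntu.com/ubuntu/dists/karmic/Release
wget http://archive.ubuntu.com/ubuntu/dists/karmic/Release.gpg
gpg --no-default-keyring --keyring /etc/apt/trusted.gpg --verify Release.gpg Release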

For instance, even if you're running only signed code, it's certainly possible for a given dev machine to be hacked. That happened last year to a Red Hat development machine, which caused the distribution of some trojaned binaries. This was found fairly quickly, but they never released a full analysis of what happened, so the impact of the compromise was never fully disclosed. (NOT a good thing at all.) Apparently they were using some kind of hardware signature engine, which is really good and shows a strong commitment to security, but they were still compromised enough for at least a few bogus packages to get signed and distributed. They never told the community the full story, which is _extremely_ poor form.

But, unlike Red Hat, Ubuntu didn't blow it. There wasn't anything they could do to stop this.

Good security is hard. Linux in general is quite robust, but stupid user tricks can hijack any security layer. In general, don't run code that isn't either signed by someone you trust, or else is simple enough that you can visually inspect it and be certain it's safe. This is true of ANY operating system.
posted by Malor at 10:33 AM on December 21, 2009 [6 favorites]


"Not sure if that front door lock you just bought is secure? Take it apart and just look at the works!"

How about don't buy locks from that shifty looking dude on the corner of the street?
posted by kmz at 10:37 AM on December 21, 2009 [5 favorites]


The key operative phrase is, "outside of the official package sources." Open-source security depends on having communities that engage in peer review of software packages before they are "published." Microsoft and Adobe do the same thing to ensure the security of their releases, albeit through an entirely private auditing system. If you go outside of this auditing system, caveat emptor.

To me, it seems that the system is working as intended: reviewers identified a security exploit before the software was approved for inclusion within a stable Ubuntu branch.
posted by KirkJobSluder at 10:37 AM on December 21, 2009 [1 favorite]


Yah, i wouldn't say "the lesson is, look at the source!"

It's more like "the lesson is, if you're going to add random software you found on a web page somewhere instead of choosing from the umpteen thousand programs we make available to you in an easy to use, trustworthy repository, well, good luck."

The thing with Ubuntu is they have damn near everything in trustworthy, officially or community supported packages.

ALL THAT BEING SAID, MikeWarot is entirely right that it'd be possible to make things a lot safer at the level of the OS, and it's starting to happen, slowly, in the Linux world. But that kind of security flies in the face of a lot of unix tradition and assumptions, so it's not easy to roll out.
posted by edheil at 10:40 AM on December 21, 2009 [1 favorite]


dr. dracator: The "look at the source" advice needs some perspective here: What is important is that you CAN look at the source.

But it's not nearly as important as people think it is. Because if you're not in a corporate environment or don't want to piss away every advantage of automation, you will not have people 'looking at the source' on your behalf, either. (Yes, I know that's what "the community" is supposed to do. Now, who is that community again? How do I know them?)

Malor's point is highly germane. He's describing the beginning of a way of ensuring that the code is sound, without having the strange expectation that "the community" will judge that by reviewing everything.
posted by lodurr at 10:41 AM on December 21, 2009 [1 favorite]


But what would we look for? I'm an ubuntu linux user (it came with my Dell Mini 9) and I have no idea about computer code.

If you just don't add outside code repositories, and don't install packages that give you warnings about missing or invalid signatures, only a hacker that can compromise one of Canonical's signing keys can exploit your system by this particular avenue.
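(If you're ever unsure which repositories you've got enabled, or where a given package actually came from, apt will tell you - the package name below is just an example:)

# list configured repositories, then see which one a package was pulled from
grep -h ^deb /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null
apt-cache policy gnome-screensaver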

Ultimately, all operating systems are about trust, because nobody can inspect all the code they're running anymore. It's just not physically possible. Running only signed packages means that someone with access to the signing key (or signing hardware, if they're really doing it properly) is asserting that they believe the software is safe. As long as they can guard their keys well enough, and you can likely assume they'll do a good job, your system should stay yours.
posted by Malor at 10:42 AM on December 21, 2009


the real question is why doesn't the OS offer a way to run any code without trusting it with everything?

With either a chroot jail (within which you su to an unprivileged user) or a virtual machine, you can run things in sufficient isolation as not to put your data or the rest of your system at risk. But these would make them so isolated that for a lot of things their utility would be compromised. (And, if you allow the environment net access, then you potentially have something attacking your network from inside your firewall.)
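(A very rough sketch of the chroot route - the release name, paths and username are just examples, and a plain chroot is containment for poking around, not a hardened security boundary:)

# build a throwaway Ubuntu environment and inspect the untrusted thing from inside it
sudo apt-get install debootstrap
sudo debootstrap karmic /srv/jail http://archive.ubuntu.com/ubuntu
sudo chroot /srv/jail adduser jailuser
sudo cp suspicious-screensaver.deb /srv/jail/home/jailuser/
sudo chroot /srv/jail su - jailuser   # now you're an unprivileged user inside the jail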

Yup, security is hard.
posted by Zed at 10:45 AM on December 21, 2009


lodurr: Actually, this is part of what Palladium/Trusted Computing is about.... running a cryptographic code chain from first boot, ensuring that only trusted code is allowed to execute on your system. Unfortunately, so far, this is only being used as a weapon AGAINST you, instead of as a tool FOR you.... Microsoft is using it, for instance, to guarantee their DRM system.

Trusted Computing, as it's currently designed, means MICROSOFT can trust your computer, not you. But with user access to the root keys, it could also mean that YOU can trust your computer.

Whether that will actually happen, I don't know. User benefit isn't especially high on Microsoft's list of priorities. If it does happen, I suspect it'll be via Linux first.
posted by Malor at 10:47 AM on December 21, 2009 [2 favorites]


lodurr: But it's not nearly as important as people think it is. Because if you're not in a corporate environment or don't want to piss away every advantage of automation, you will not have people 'looking at the source' on your behalf, either. (Yes, I know that's what "the community" is supposed to do. Now, who is that community again? How do I know them?)

Well, this strikes me as pretty much FUD. For pretty much every piece of software, security is backed by some sort of community that does code review to identify potential exploits. The question becomes what sort of communities do you trust to do that code review for you. I'm not especially convinced by the argument that a group of people bound by NDAs keeping things secret until the problem is patched is necessarily better than more open processes.
posted by KirkJobSluder at 10:48 AM on December 21, 2009 [1 favorite]


Indeed, but this post does nothing to prove any of that wrong. When there's a plane crash, do the stories start with "While many airplane advocates cite the method's safety, the crash of Flight 12345 reminded everyone that no system is 100% safe"?

Most of the general public is aware plane crashes happen. The hyperbole used to describe security in the Linux world, I believe, gives many newcomers a false sense of security that they are impervious to trojans. Hence there is a need to remind people that no system is 100% secure - stick with trusted sources. "Looking at the source" is a solution only if you know what you're doing, of course, but fortunately the community forums are generally loaded with people who do.

I can just see me explaining this to my Mom:

"Open the Terminal, copy this text, paste it into the Terminal, and press Enter" seems a sight less complicated than what I had to deal with when I ended up with SpyAxe on my XP machine, but then I used to run crap on my machine just to see what would happen.
posted by Marisa Stole the Precious Thing at 10:49 AM on December 21, 2009


The hyperbole used to describe security in the Linux world..

And so we are back to my original question: Who is out there claiming that Linux is 100% secure?
posted by DU at 10:53 AM on December 21, 2009


Well, this strikes me as pretty much FUD. For pretty much every piece of software, security is backed by some sort of community that does code review to identify potential exploits.

FUD wasn't my intention; my intention was to point out that a source-review "community" is a concept that doesn't make sense to people. They don't know jack shit about no community. When presented with that kind of non-response (from their linguistic frame), they are left with no basis for judgement.
posted by lodurr at 10:55 AM on December 21, 2009


Heck, not only is no system 100% secure, no system is 100% stable.
After having to nuke a friend's XP system for the second time due to viruses/God-knows-what, I decided to have him try Ubuntu. All he did was type notes, watch movies and use the internet. I got it all working (with less driver trouble than XP) and gave it to him in hopes that it would be more stable than his continually wobbly Windows setups.

Within fifteen minutes of getting it up and running he had caused the system to lock up. I stood there, looking at the screen and said, "Dude, you just crashed a Linux system in less than 15min. You are SPECIAL." He demanded XP back.
posted by cimbrog at 10:58 AM on December 21, 2009


The hyperbole used to describe security in the Linux world, I believe, gives many newcomers a false sense of security that they are impervious to trojans.

I dunno, Marisa, I don't think I've ever seen any claims to that effect. I just don't see hyperbolic claims about Linux security. It's pretty good, and a lot of thought has gone into it, but the kernel itself is an ongoing weak point, because they jam code into it so fast.

Users ARE safe from most of the routine drive-by exploits of Flash and Firefox and such. This exploit obviously bit a few people, but there are millions of Windows boxes out there that have been hijacked by one avenue or another. So purely on that basis, you're cutting down your real exposure quite a lot.

But are you actually immune? Of course not. It's like not being susceptible to the common flu viruses. This is a real benefit, but it doesn't make you immune to AIDS.
posted by Malor at 10:58 AM on December 21, 2009


And so we are back to my original question: Who is out there claiming that Linux is 100% secure?

OK, let me try it this way. You're new to Linux, and have just installed your first distro. You go to the forums of said distro and ask what kind of anti-virus software you need. Cue derisive laughter, and assurances that you don't need anti-virus software, mixed in with a few voices saying that yes, it might be possible to exploit your machine, but you really have nothing to worry about. No one has contended that you are bulletproof, but the combined voices create a false sense of security. Hence the reminder that no system is 100% secure. Hope that helps! Beyond that, consider your point taken.
posted by Marisa Stole the Precious Thing at 10:59 AM on December 21, 2009


I dunno, Marisa, I don't think I've ever seen any claims to that effect. I just don't see hyperbolic claims about Linux security.

Hey, I'm not making shit up here. I'm speaking from experience. But again, point taken. This seems to be a sticking point about the post's language for at least one person here, so consider it noted and dropped.
posted by Marisa Stole the Precious Thing at 11:02 AM on December 21, 2009


"Dude, you just crashed a Linux system in less than 15min. You are SPECIAL."

Not that special. My experience has been that as a desktop OS, Linux is no more stable than Windows at its worst (which would be Vista, in my experience to date), and considerably less stable than late-generation XP or Win2K. I've always given up on Linux after two or three months due to a combination of factors: difficulty in doing simple things is usually the biggie, but the system stability was never what I'd hoped it would be, and the negative results of a system crash always seemed to be much greater than on Windows or OS X.

Now, as a server OS, Linux rocks. And my Asus netbook has never crashed, even once, so I know it can be stable as a desktop OS. But I've just never seen this on the distros I've tried.

I've tried 4 times to switch to Linux, on 4 different computers. Mandrake twice, SuSe once, Ubuntu once. I'm not even counting trying to use Ubuntu on my PowerBook and 1st gen Mini.
posted by lodurr at 11:08 AM on December 21, 2009


My experience has been that as a desktop OS, Linux is no more stable than Windows at its worst (which would be Vista, in my experience to date), and considerably less stable than late-generation XP or Win2K. I've always given up on Linux after two or three months due to a combination of factors: difficulty in doing simple things is usually the biggie, but the system stability was never what I'd hoped it would be, and the negative results of a system crash always seemed to be much greater than on Windows or OS X.

My experience is pretty much the exact opposite of this. My Linux desktops have uptimes of months. I reboot for kernel upgrades more often than any other single reason. Periodically I try Windows (on a kid machine, say) and eventually give it up because out of the box Windows does almost nothing. As for system crashes screwing up the machine....Windows has a journaling filesystem now?
posted by DU at 11:14 AM on December 21, 2009 [1 favorite]


In Marisa's story, the advice that the neophyte gets is correct: there is no antivirus software for Linux, and no real market for any. The response to a virus in Windowsland is for the antivirus companies to update their databases; the response to a virus in Linuxland is to fix the hole that allowed the infection.

But this story isn't about a virus, it's about malware that the user installed deliberately. Even on windows, an antivirus program will not prevent you from doing this. There are antimalware programs that will warn you, if and only if the malware is well enough established as to be in the vendor's database. Millions of trojans get installed on Windows machines anyway. The best practice is not to be installing obscure unmanaged packages as root in Linux unless you have a pretty good idea where the software came from. It's actually the same best practice as in Windows.
posted by George_Spiggott at 11:20 AM on December 21, 2009 [2 favorites]


Windows has a journaling filesystem now?

I'm not sure why you think that rhetorical question has any weight. If I tell you that Linux system crashes have often left my system unusable, and Windows system crashes haven't done that in ten years, what difference does it make if I'm using a journaled *nix FS or NTFS? Or we could look at the Mac: Panther used to crash on me at least once a week, but I never had a scenario where it resulted in corrupted data. No journaling.
posted by lodurr at 11:21 AM on December 21, 2009


I think in the past 3 years the only reason why I ever bricked a laptop was because I went ahead and allowed upstream updates that my distro's team hadn't yet tested. But keeping things partitioned so that my file system and data are separate has made even this not really a big deal.
posted by Marisa Stole the Precious Thing at 11:21 AM on December 21, 2009


Actually, my personal experience with Linux has been good, especially with Arch Linux. However, my same friend managed to lock THAT up about a year ago...

Come to think of it, he did both the same way - logging onto Yahoo's fantasy football pages. Maybe I should report this...
posted by cimbrog at 11:25 AM on December 21, 2009


A while ago it was pointed out that the launchers for desktop junk provided by KDE and GNOME are potential vectors for trojans. They really need to knock that cruft off, regardless of how neat it seems to download and run plasmoids or whatever.
posted by a robot made out of meat at 11:26 AM on December 21, 2009


Marisa: Hey, I'm not making shit up here. I'm speaking from experience.

I'm not saying you are! It sounds like you may hang out with people that aren't that technical and think Linux is some kind of panacea, and those just don't exist. As much as you can, stop them from spreading that kind of nonsense, because those unrealistic expectations encourage stupid user tricks.

Any computer that allows you to create and run new code can be compromised, full stop. No OS can fully protect you from "evil" software without preventing you from running ANY software. If you see people claiming otherwise, jump down their throats about it. A false sense of security is the worst possible scenario, because it encourages users to take dumb risks.

But again, point taken. This seems to be a sticking point about the post's language for at least one person here, so consider it noted and dropped.

Well, I'm not too uptight about it, but I do feel like you're blaming the distro for "blaming the users", when it was indeed the users at fault. The Ubuntu team appears to have done everything in their power to prevent this problem; going further would have required that they lock down the system and prevent outside repositories from working. Most users I know would be pretty pissed about that idea.

Another way of putting it: what could they have done differently? Short of outright refusal to install, about all I can think of is big red flashing letters on the warning about adding unsigned packages. And if the user has a false sense of security, it doesn't matter how many red flashing signs you put up. :)

lodurr: My experience has been that as a desktop OS, Linux is no more stable than Windows at its worst (which would be Vista, in my experience to date), and considerably less stable than late-generation XP or Win2K.

It kind of depends on which version you get, and what your hardware is. Some setups can be incredibly robust, others not so much. X in particular has been in very heavy flux of late, and driver and video problems are pretty common. Plus, KDE's 4.0 release was a freaking disaster, and if you were trying to run KDE in the last 18 months, that's probably most of the problem right there. There's also a big push to move away from proprietary drivers for graphic cards, which will probably add instability next year, because the open drivers aren't very good yet.

Personally, I had very good luck with Ubuntu 8.10. I installed 8.04 on my Inspiron laptop to play with it. I didn't like it quite as well as XP, but it never annoyed me enough to make me switch back. It worked well enough that the hassle of the reinstall plus the Great Driver Hunt didn't seem worthwhile. Then 8.10 came out, and that fixed a couple of nagging problems, and I ended up quite happy with it. This year, I've done most of my Unixy stuff on a Mac, but I'd be fine with switching back if I needed to.

It also seems to go through these long cycles... back around 2001 or so, Mandrake was just awful, to the point that I desperately wanted to put XP on my main work machine, but really couldn't. So I toughed it out, and it eventually stabilized, but it sucked for a long time.
posted by Malor at 11:48 AM on December 21, 2009 [1 favorite]


Well, I'm not too uptight about it, but I do feel like you're blaming the distro for "blaming the users", when it was indeed the users at fault.

Actually, I'm not really trying to blame anyone. What I was trying to do was shed light on a fairly topical subject that might help clear up a common misconception among many new users, and that I felt computer users of all systems might have found interesting. I agree completely that the distro does all it can. If you go outside the official repos, you can't really fault the distro.

It sounds like you may hang out with people that aren't that technical and think Linux is some kind of panacea, and those just don't exist.

I don't hang out on those forums so much anymore, although for the first 2 years I certainly did. If I have an issue I can't resolve on my own, I use their search function. Now and then on other tech forums I'll run into that same attitude of the code's near omnipotence, but this is becoming less the case these days.

Plus, KDE's 4.0 release was a freaking disaster, and if you were trying to run KDE in the last 18 months, that's probably most of the problem right there.


Oh wow is this ever true. I gave up on KDE after maybe a month. What a bloaty mess. Thing is, I'm always reluctant to criticize the system for what it cannot do, as it's sometimes a matter of hardware compatibility. On the other hand, systems should be able to run on as many different machines as possible. My 3-year-old HP has been pretty trusty in being able to run everything from Mint to Parsix to Mandriva to Gentoo.
posted by Marisa Stole the Precious Thing at 12:04 PM on December 21, 2009


The screensaver was not approved by Gnome, and was outside the official package list. That's generally your first hint to use caution. Additionally, you can open screensaver packages as you would any archive file. Look inside for an .sh file

A screensaver is a regular program, so it doesn't need to include a shell script at all.

The real problem is running code privileged. If a screen saver needs to run as root, that's a clue that you should probably not give it access.

Even in multi-user systems that protect the key system settings, I think there ought to be more emphasis placed on preventing programs from screwing with files created by other programs. So for example, users should get a warning if a program wants to access photos or something like that.

Also, systems should be set up to easily roll back any changes that do get made. Windows Vista has its 'previous versions' system, so if you accidentally delete something you can get it back.

Hey Mom, you did what? Oh no. Well, I found the answer online, all you have to do is run this command, you ready?

sudo rm -f /usr/bin/...

Or you could log in remotely to run the command yourself. But I found with my luddite mom that it's actually much easier to explain command lines to people than it is to explain what to do with a GUI. I mean you just say "type this, then type that". With a GUI you have to describe the icon, describe everything they need to click on, and invariably there are a lot more clicks than there are commands to run.
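(Something like this, say - the hostname is made up, and -t just gives sudo a terminal to ask for the password:)

# run the cleanup on Mom's machine yourself over ssh
ssh -t mom@moms-desktop.example.com 'sudo dpkg -r app5552'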

Most likely these people would have been familiar with a typewriter, so they understood the concept of 'typing'.

But that was back in the '90s and my mom eventually learned how to use a computer properly.

---

As far as claiming their systems are secure goes, Apple is a lot worse. They actually advertise that "you don't need to worry about worms and malware". Or whatever.

It sounds like you had to jump through a lot of hoops just to get this code (installing extra repositories on your machine to get access to it), so an ordinary user wouldn't have had anything to worry about.
posted by delmoi at 12:05 PM on December 21, 2009 [1 favorite]


But, yeah, for any platform, don't install code from random websites.
posted by Zed at 1:04 PM on December 21


Good advice. I stick with official Microsoft and Adobe software installed from the Pirate Bay.
posted by Pastabagel at 12:06 PM on December 21, 2009


delmoi: Even in multi-user systems that protect the key system settings, I think there ought to be more emphasis placed on preventing programs from screwing with files created by other programs. So for example, users should get a warning if a program wants to access photos or something like that.

Oh please no. I really don't want to go through a warning every time InDesign or LaTeX pulls a linked image file into my document. And on plain text files, I might use any combination of editors and batch processing scripts to do what I need.
posted by KirkJobSluder at 12:11 PM on December 21, 2009


Substitute "Hypnotic Chick" for "Joe Casual User", I guess.

That definitely works better in the film version.
posted by rokusan at 12:15 PM on December 21, 2009


Not sure if that front door lock you just bought is secure? Take it apart and just look at the works!

Been there. Done that. Learned how to get around mushroom, spool and serrated pins.

I <3 lockpicking.
posted by Splunge at 12:33 PM on December 21, 2009 [1 favorite]


The real problem is running code privileged. If a screen saver needs to run as root, that's a clue that you should probably not give it access.

That's true, but root access isn't required to do a hell of a lot of damage. A user-level process, at present, can wipe out all your data, send all your bank account info overseas, and function very nicely in a botnet. As a malware author, you gain some functionality as root, like being able to sniff network traffic, but the biggest thing is the ability to tamper with the kernel to hide your presence in the system. Running as a user means you can do anything with data, but you're visible and can't destroy or alter system files. In this era of single-user machines, those aren't terribly meaningful restrictions.

The solution is probably to expand the SELinux policies to cover more desktop stuff; using that (very very very complex) system, you can say "programs labeled screensaver can only touch these files and change these specific desktop settings". With careful partitioning, it should be possible to allow you to download and run almost any code with a fair degree of safety, which is something that's not even theoretically possible with Windows or Mac OS. SELinux, as far as I know, is completely unique in the OS space.

The problem, of course, is that it's fiendishly complex, and getting the policies both correct and still flexible enough to allow you to do real work is probably going to take several more years of development time. And that, of course, is assuming that anyone wants to even do it, because it's going to be one hellacious slog.
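(A taste of where that's headed: newer policycoreutils ships a "sandbox" wrapper that runs one program under a throwaway SELinux type with its own home and tmp directories. Very much a sketch - flag names and types vary by version, and the binary name here is made up:)

# run an untrusted X program in a disposable SELinux sandbox (Fedora 12-era tooling)
sandbox -X -H ~/sandbox-home -T ~/sandbox-tmp ./untrusted-screensaver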
posted by Malor at 12:34 PM on December 21, 2009 [1 favorite]


There's a weird "i lolled you so" tone to this post. Pretty sure no Linux user has ever claimed that any computer system is 100% safe.

Well... not Linux, but technically Macs are Unix these days...
posted by Artw at 12:56 PM on December 21, 2009


And I have to say as a newish Ubuntu user, if you try and Google how to do just about anything you'll generally get a shitload of hits on forums which offer advice on a "look at the source" level of uselessness. Bad old user-hostile Linux nerdery is clearly not buried too far under the new neophyte friendly veneer.
posted by Artw at 1:04 PM on December 21, 2009


I sincerely hope that the linux dev community isn't trying to "blame" this on anybody. Improving user awareness is always a good thing, and good on them for making that a topic. But at the end of the day, Linux becoming a casual user desktop isn't a dream anymore, it's a straight up reality. And with that comes the obligation to consider your standard non-knowledgable user. That the overwhelming majority of new linux users will stick to official packages is true, but that anyone who wants to learn more about linux will start by playing with stuff from wherever they can find it is true, too. they're simply going to have to start baby proofing some things, and blaming the user is without question a security-poor stance on the issue. on top of that, it's antagonistic, and views the user as an enemy who is "stupid" and shouldn't be touching their precious code. you'll always have that contingent in any dev culture, but it cannot be the position of a community or a business. it simply won't help.

but again, user awareness is good, so if that's all they're trying to say, then good on them, and I wish them all the luck in the world in their attempts to navigate the treacherous territory of helping and hindering their user base.
posted by shmegegge at 1:14 PM on December 21, 2009


The sneering is policed a lot better these days, though. I've seen the eye-rollers, the baby-talkers and the perpetually flustered and bothered in such forums get repeatedly asked to treat newcomers with respect. Whereas these people might have been tolerated even a couple years ago, they generally get told to dial it down now.
posted by Marisa Stole the Precious Thing at 1:15 PM on December 21, 2009 [2 favorites]


I've seen the eye-rollers, the baby-talkers and the perpetually flustered and bothered in such forums get repeatedly asked to treat newcomers with respect.

it's been years since I really played with linux, but my experience with actual knowledge and help style linux boards has always always been positive. slashdot is another beast, and boards like it. but when it comes to actual "help i'm lost" type sites, the community has always come out in spades. linux dudes are generally some very solid individuals, and they're why it's such a growing community.
posted by shmegegge at 1:18 PM on December 21, 2009


it's been years since I really played with linux, but my experience with actual knowledge and help style linux boards has always always been positive.

My experience has mostly been positive, definitely. I don't think scorn and ridicule of new users is the norm by any means; just that the types who might've been given a pass to do this a couple years ago get warnings these days.
posted by Marisa Stole the Precious Thing at 1:23 PM on December 21, 2009


Certainly I think that greater care can be taken to sandbox applications in such a way to prevent intentional or unintentional security vulnerabilities. But it seems that most malware these days relies on social engineering attacks, which makes end-user education a necessity unless you completely lock-down systems in ways that are unusable.
posted by KirkJobSluder at 1:26 PM on December 21, 2009


KirkJobSluder: " lock-down systems in ways that are unusable"

There is the crux of the problem... a system which has default permissions to do anything is expected to be the norm. It's possible to build completely usable systems in which the default permissions don't exist and you explicitly supply resources to a program when you run it, thus automatically limiting the amount of change it can cause to your system.
posted by MikeWarot at 1:30 PM on December 21, 2009


MikeWarot: How do you do that in a way that 1) prevents the user from explicitly giving permission to run malware, 2) isn't so intrusive that the user effectively shuts that protection off?

We know two things about users, they won't use strong passwords, and they won't put up with extra and redundant dialogs. Any security scheme that depends on either is snake oil.
posted by KirkJobSluder at 1:37 PM on December 21, 2009


Certainly my experience with my netbook/Ubuntu combo so far has been overwhelmingly positive - installation in particular was a super slick dream that took next to no time or effort, and the result has been a very snappy machine that does all the things I want from it (mainly word processing and websurfing) with very little overhead or cruft. But I haven't really gone beyond the managed packages.

Oh, I had some trouble with permissions when setting it up as a LAMP server, but that may have been my fault for leaping into things. Also I'm not really sure I want a LAMP server on my netbook, I just thought it would be cool to have one.
posted by Artw at 1:40 PM on December 21, 2009


KirkJobSluder: You do it by making the equivalent of a Valet key for software, one that is VERY easy to use, and prevents the software from doing any permanent changes to the system.

You have to have a secure OS to be able to do this properly... which means a microkernel, and a capabilities system baked into the environment. You can't add it on later and expect it to work.

However (and in contradiction to my previous paragraph)... a way to mimic this is to make it easy to sandbox an application, such as Sandboxie, which does this for Windows applications.
posted by MikeWarot at 1:59 PM on December 21, 2009


My Ubuntu install was pwn3d by Ubuntu a few weeks ago. I saw that there were system updates available, I chose to install them (hoping there might be a new driver for my 3G dongle that wasn't working as it should), and I restarted my system to find that the updates had overwritten GRUB with a new version that didn't support the existing configuration file, and I was stuck at a GRUB prompt, unable to boot.

I'm sure there's a fix, but I haven't bothered to try yet. I love Linux to bits, but I haven't had an "unable to boot PC after system update" problem since Windows 95.
posted by Jimbob at 2:07 PM on December 21, 2009


OK, let me try it this way. You're new to Linux, and have just installed your first distro. You go to the forums of said distro and ask what kind of anti-virus software you need. Cue derisive laughter, and assurances that you don't need anti-virus software, mixed in with a few voices saying that yes, it might be possible to exploit your machine, but you really have nothing to worry about.

I don't go to Linux boards, but I certainly know the attitude. I resisted getting a Mac for years and years because of those attitudes. Because a) it's smug and annoying but mostly

b) If you insist everything about the thing you love is the best ever, how can I possibly separate the real benefits of switching from the fanboy bullshit?

I like my Mac because I can run Final Cut and address my 8 gigs of RAM. But if you're going to sit there and tell me that it "never crashes," or that the mouse with no right button or the keyboard with the absurdly skinny keys are worth using, how can I possibly believe anything at all you have to say about this computer?
posted by drjimmy11 at 2:17 PM on December 21, 2009


MikeWarot: You do it by making the equivalent of a Valet key for software, one that is VERY easy to use, and prevents the software from doing any permanent changes to the system.

It's "easy to use" that strikes me as little more than handwavium combined with "explicitly supply resources to a program when you run it." If you give the user the power to explicitly supply resources to a program, you give that user the power to explicitly run malware it politely asks.

We demand that users make permanent changes to the system, both to expand its functionality and every time we have a security update that we insist is mandatory.

Of course a central issue is that it's difficult with current software design to sandbox anything that scatters pieces of itself across the filesystem and demands access to arbitrary files. (Is Firefox under Linux still opened with a bash script?) And then you have issues of concurrency. I can't think of a good way to do web development that doesn't require multiple applications with real-time access to the same filesystem.
posted by KirkJobSluder at 2:31 PM on December 21, 2009


So, "official repository" is like nerd speak for "approved app store"?
I can feel the freedom of Linux freeing my unfree unopen soul.
posted by mr.marx at 3:03 PM on December 21, 2009 [1 favorite]


I'm sure there's a fix, but I haven't bothered to try yet. I love Linux to bits, but I haven't had an "unable to boot PC after system update" problem since Windows 95.

I had a very similar problem when I was experimenting with upgrading from grub to grub2 on Debian. It blew up spectacularly, and I ended up booting off a rescue CD and completely removing all traces of grub2.

It seems to work okay, though, if you do a fresh install. Grub2 on a brand-new system seems fine. But the upgrade process is just.... bad. Really bad. So I don't do upgrades anymore.

Grub2 will be important, as PCs are getting a lot more complex, and it should eventually allow things like booting off RAID volumes and esoteric filesystems, but the upgrade wizard as presently shipping is not in good shape.

Heh, and you guys talking about 'blame the user', oh my god, you have no idea how much better it's gotten. Not so long ago, this was how it usually played out.

If:
  • You want to do something;
  • That something is straightforward in other OSes, and
  • That thing is hard in Linux, then:
  • You are stupid to want that, you loser.
I saw this over and over and over.

Back in the days of yore, along about, oh, 98 or so, I was on Slashdot (this was when it was cool :) ) and talking about using Linux in business, and one of the things I talked about was that the ext2 filesystem was really weak and prone to data loss, and that until Linux got a better filesystem, it wasn't ready for serious business use. I talked about a specific problem I'd had with a DNS server, running an early Red Hat, where it had lost power for some reason. The filesystem took an enormous amount of damage, and it took me a long time to fix the system afterward.

I got back multiple replies that all blamed ME; it was my fault. One guy blamed me for not running the server on a UPS -- this was when they were $1500, and this was a $400 recycled dev box, which was all we had budget for at the time. (we were pretty shoestring early on. :) ) Another one blamed me for not manually hex editing my filesystem. He told me with a perfectly straight online face that I was an idiot because I didn't know how to use a disk editor to manually copy a backup superblock into the right place on the disk. I'm dead serious, he REALLY told me that. The filesystem blew up, but because I didn't know the precise layout of the actual sectors on disk, it was all my fault.

Not ONE PERSON would admit that there was anything wrong with ext2. Not ONE.

Well, some time later, an early journaled filesystem shipped; I think it might have been reiserfs. After experimenting with it for a month or two, I posted that I thought Linux was now ready for the enterprise. (That was quite the buzzword back then, "ready for the enterprise".) This time around, when I mentioned how bad ext2 had been for me, I got a huge number of posts in agreement, all saying ext2 totally wasn't ready for business use, and that journaling was the only way to go. It wouldn't surprise me if it was the same people, although those early threads seem to have been lost to the mists of time -- I think Slashdot must have had some significant data loss. They're on MySQL, I don't suppose that's terribly surprising. :)

Fortunately, as the system has gotten more solid, you see a lot less of that kind of doublethink. You can usually do what you need to with Linux, although perhaps not quite as elegantly as with commercial packages on Windows. And, interestingly, it seems like people have gotten a lot more honest about its limitations, and a lot less dismissive of the user base.
posted by Malor at 3:10 PM on December 21, 2009 [1 favorite]


So, "official repository" is like nerd speak for "approved app store"?
I can feel the freedom of Linux freeing my unfree unopen soul.


Well, sort of, but it's entirely for your benefit. Apple's garden is walled; they try very very hard to prevent you from leaving, even if you want to, because they want to keep their fingers in your wallet. Those walls are for Apple's benefit, not yours.

Ubuntu's got, like, a chalk line on the ground with a warning sign that A) it can be dangerous to cross this line, and B) okay, if you really want to, here's exactly how to do it.
posted by Malor at 3:14 PM on December 21, 2009


Would now be a good time to bring up the old topic of the network computer? There's a huge category of people who don't need a full-function desktop and would probably be a lot happier without it; remote applications and local storage will do everything they want. All that's needed is a decent amount of reliable bandwidth and quality professional-grade apps in the cloud. You don't install compromised software because you don't install software at all; you run it from a well known and trusted source.

Obviously that's not for everyone; those of us who like administering our own machines would of course continue to do so. But it would eliminate the problem of what kind of machine to set up for your parents, or even businesses: I always find it slightly ridiculous when I see workstations in professional offices running local XP installations. What's the point in all that overhead? You don't need a dozen PCs in a dentist's office, you need one application server and 18 thin clients.
posted by George_Spiggott at 3:26 PM on December 21, 2009


SELinux, as far as I know, is completely unique in the OS space.

The problem, of course, is that it's fiendishly complex...


Isn't it just? I haven't yet done an install where I haven't eventually just turned it off for one reason or another. Something, somewhere along the line, just won't work with it turned on (even in 'report only' mode).

Another way of putting it: what could they have done differently? Short of outright refusal to install, about all I can think of is big red flashing letters on the warning about adding unsigned packages.

Fedora 12 had an interesting approach to this which was widely criticised at the time, but may be worth considering for the single user desktop scenario: users can install signed packages from the repository with no further authentication. I do think this is a bad idea for a few other reasons, but from the point of view of differentiating a nice, safe install from a trusted repo compared to a scary install of a random unsigned RPM off a website it would be quite good. At the moment, on my Fedora 11 laptop, an unsigned package install has exactly one extra password dialogue to go through compared to the 'safe' install.
posted by robertc at 3:54 PM on December 21, 2009


Ubuntu's got, like, a chalk line on the ground with a warning sign that A) it can be dangerous to cross this line, and B) okay, if you really want to, here's exactly how to do it.

This is pretty much what Mint is doing. Their update app classifies updates on five levels:

1: Certified packages - directly maintained by the distro developers.
2: Recommended packages - tested and approved by the Mint team.
3: Safe packages - not tested but believed safe (as they usually come from trustworthy sources, e.g., Firefox)
4: Unsafe packages - could potentially affect system stability.
5: Dangerous packages - known to affect system stability depending on your hardware.

By default, levels 4 and 5 will not appear unless you configure the update manager to show them. Dist upgrades also need to be specifically requested. The newest version allows you to remove packages from the update list, in case you have and want to keep the older versions of certain apps (as I do for WINE).

This, to me, is about as fair and stable an update system as I've seen. Their package manager already specifies where your apps come from, and whether or not they are from a trusted source. The added benefit of an update manager that allows for a second filter, while still giving the freedom to more experienced users to go ahead and update whatever they please, is quite refreshing.
posted by Marisa Stole the Precious Thing at 4:27 PM on December 21, 2009


"Not sure if that front door lock you just bought is secure? Take it apart and just look at the works!"

How about don't buy locks from that shifty looking dude on the corner of the street?

The problem with software is that the picture you put on your nightstand is as much a security concern as the lock you put on your front door.
posted by Phredward at 6:18 PM on December 21, 2009


"Not sure if that front door lock you just bought is secure? Take it apart and just look at the works!"

How about don't buy locks from that shifty looking dude on the corner of the street?


This story seems more like: 'Not sure if your Official Gnome lock is secure? How about you don't buy decorations from that shifty looking dude on the corner of the street?'
posted by jacalata at 6:39 PM on December 21, 2009


Dude, you just crashed a Linux system in less than 15min. You are SPECIAL.

puh! I'll crash a linux system in ones of minutes. No problem. It's easy because I have the root password and a livecd and a Saturday. Linux is particularly easy for a user to decimate because it allows you (some would say it gives you the freedom) to change fundamental aspects of the OS. Hell, I always crash linuxes several times on any computer before I'm done installing linuxes on it.

Then again, I AM special because I've set up linux boxes for computer labs in housing projects and for hood kids and they've run for the past two years and have only stopped working because of hardware failure (kicked to death in both instances).

Xubuntu 8.10 running for two years for 1) a public access computer in a community center and 2) a personal computer for someone who knows NO GOD DAMN THING about computers. Both: Motherboard failure due to young children. Pre-1999 BIOS.
posted by fuq at 6:40 PM on December 21, 2009


Damn, really late to this party. Stupid Christmas errands. I guess I'll just pretend none of the above conversation happened:

1) There have been cases where people promote Linux (and Ubuntu) as if it were somehow safer than Windows. "No viruses! No spyware! Switch today!" For the most part we're just less popular. And yet I see people insisting it's more secure, or even just "secure". These people tend not to notice that the primary author behind AppArmor quit and works for Microsoft now.

2) It's 2009; every monitor sold today supports EnergyStar commands like putting the display to sleep, so why do we even have options for screensavers beyond "put display to sleep after X minutes of inactivity"? I wonder how much Windows spyware comes from downloadable screensavers and cursor themes...

3) We shouldn't be promoting the activity of downloading and installing random crap from the internet. This means in particular, getdeb and to some extent, Medibuntu. Medibuntu in particular contains binary code we don't have source to, and hence sits outside the Ubuntu repos. It also bears less scrutiny, so it's feasible for someone to join and do a lot of damage. Like say, inject malicious software and conflict with a medibuntu sources.d.list file to prevent apt from finding patches.

4) Historical UNIX security got one some things right, but a lot of things wrong. Yes, running things as root is bad. I'm glad we've solved that. But most of the files root has access to are system copies. By definition, I have write permission on my personal data, which is far harder to rescue from attack. Most amusingly, "setuid nobody" could perhaps allow programs to drop permissions, but requires substantial configuration to allow this. As far as UNIX concerned, "nobody" is just another user.

5) Requiring people to inspect programs for safety is insane. We can't make computers do it right; how the hell does anyone else have a chance? The obvious fix is to prevent untrusted software from running. Unfortunately, the technology that would allow Trusted Computing is too easily corrupted towards existing big players. Other technology that reins in Turing's overpowered machine, like NX, helps, but any system that admits Sovereign Computing admits people as a potential weak link.
posted by pwnguin at 6:44 PM on December 21, 2009 [2 favorites]


An update manager that adds a second layer of filtering, while still leaving more experienced users free to go ahead and update whatever they please, is quite refreshing.

I've long been expecting dodgy third-party additions that do something like sneak http://security.welcometomybotnet.com/ubuntu into sources.list and start doing their own updates and installs under the pretense of installing dependencies.

Running Linux is interesting. I do (did -- I'm sufficiently scared now) download and install stuff from sites like gnome-look, thinking "Linux is safe. Download whatever you want!"

The idea that linux users think their systems are invulnerable is pretty much a straw man argument. Nobody's really claiming that. I think the linux community has been more secure largely because it hasn't been common and its user base has been made up mostly of the geekier types who already know what to look out for. Prepare to see problems as that user base expands to include more and more everyday users.

That being said, even the geeks don't carefully check all the code before installing; they may just have a little more of a 'radar' for something that looks fishy. "Hey, my new world-clock desktop applet shouldn't need to install postgres and be listening on all those ports. That's strange..."
posted by Avelwood at 7:13 PM on December 21, 2009


"setuid nobody"

I'm not going to stick around to play the "who's got worse security?" game but I saw this pair of words together and need to intervene:

DO NOT UNDER ANY CIRCUMSTANCES EVER ASSIGN OWNERSHIP OF FILES TO nobody, EVER AT ALL

The entire purpose of the "nobody" user is that its processes are guaranteed to have the least filesystem privilege. Assigning binaries to "nobody" violates that constraint. Assigning suid binaries to "nobody" is worse. Assigning binaries that you actually expect to invoke regularly to "nobody" is like trying to sail a boat made out of Swiss cheese.

I'm sorry to say, the setuid mechanism is a bit too primitive for least-privilege delegation.
posted by majick at 8:04 PM on December 21, 2009 [2 favorites]


George_Spiggott: Would now be a good time to bring up the old topic of the network computer?

That actually appears to be the direction the Net as a whole is headed, that cloud computing metaphor. I'm not real comfortable with it myself, because I don't like my data in other people's control.

It would make sense for small offices to run terminals, and in fact Unix supports that very, very well, but terminals are usually ridiculously expensive. It seems like the dumber they are, the more they cost. :) But you can set up many modern PCs to netboot over Ethernet, and they make nice cheap little terminals without the exorbitant cost of the real thing. You run just enough local OS to drive the screen, keyboard, and mouse, and run all your programs remotely from a central server. This can be very comfortable and very easy to administer... you have one machine to patch instead of twenty.

And it shouldn't even take a very large machine to run twenty users, depending on what you're doing... typically RAM is the big limitation, and RAM is very cheap. A Dell workstation-class computer is usually 3 or 4 thousand dollars, and i7-flavor Xeons will usually have enough memory slots to run 192 gigs of RAM, which should easily support fifty to a hundred users that aren't doing CPU-intensive stuff (word processing, data entry, that kind of thing). The RAM would cost a pile, but you can expand slowly as you add people.

A modern expensive PC-class machine is really stunningly powerful.... eight 3Ghz processors with 192 gigs of RAM is serious horsepower.

robertc: but may be worth considering for the single user desktop scenario: users can install signed packages from the repository with no further authentication

Well, for a desktop, yes that could work. But even so, you'll still have services running under different user names, and theoretically they're not supposed to be able to make system changes. They could differentiate this by putting the initial user in an 'admin' group, and then requiring admin group membership to install software, but in managed environments, sysadmins are pretty horrified at the idea of users installing new software. I think the furor was more about the change than the idea...doing that kind of thing unannounced can open holes that sysadmins don't realize are there.

I think they could probably roll out that feature pretty easily, and even end up with it being popular, but it needs to be announced very loudly and mentioned in the first couple of paragraphs in the README.

Marisa: Their update app classifies updates on five levels:

That's a nice setup. Most folks would never go past level 3, but simply exposing them to the idea that levels 4 and 5 exist is very smart. It reminds them that not everyone shipping code for Linux has their best interests at heart. Almost everyone does, but the bad eggs could really mess up your life.

The newest version allows you to remove packages from the update list, in case you have and want to keep the older versions of certain apps (as I do for WINE).

This is actually a feature of the underlying Debian, but it's not well explained... I found out about this through word of mouth. If you do a dpkg --get-selections, you'll get a full list of all your installed software, with the name of the software and then 'install'. If you then echo "packagename hold" | dpkg --set-selections, apt will hold that version in place if at all possible during upgrades. I'm not sure how well the GUI utilities in Ubuntu and Mint support that, but I've found it to be damn handy for managing servers from the command line. Debian is designed around remote management over ssh.

Avelwood: I think the linux community has been more secure largely because it hasn't been common and it's user base has been made up mostly of the geekier types who already know what to look out for.

Another point is that there are a lot of different distros, and thus many different possible attack surface combinations; a Firefox exploit that nails Ubuntu might or might not work on Debian, and Fedora and Gentoo are sufficiently different that the chance of success with the same exploit package is fairly low. And if you run on a different chip altogether, like ARM or PowerPC, you're fairly hackproof... it's not that you can't be exploited, it's just that nobody will bother attacking the five hundred of you in the world. :)

But ALL flavors of Unix are susceptible to targeted exploits, like repository corruption. And if the compromise is deep enough in the code, it could happily be ported to multiple systems with nobody noticing. Who's to say that all the bugs that get found all the time are accidental, yanno?

All that said, they do tend to think about security in Linux. I think it's probably about the same security level as Windows, and better than OS X, which has little in the way of defensive coding strategies. This harmless exploit that attacked relatively few people is BIG NEWS, where Windows trojans and viruses have to exploit tens of thousands of machines to even get noticed by the mainstream.

And you're right, of course, that the technical people that tend to run Linux are much more likely to notice exploits, which in and of itself makes it a less attractive target.

More generally, not aimed at you specifically: if you run Linux, you are safer, perhaps MUCH safer, but you are not safe. As long as you understand the difference, you should be fine. Mostly, remember that the community as a whole is really trying to help you, trying to give you software that gives you true ownership of your computer, but not all of them actually have hearts of gold. As a whole, they're some of the most ethical people I know, but no group is perfect. You can trust them, I certainly do, but requiring that software be either signed, or distributed by a known corporate entity, is a very good idea.

pwnguin: majick is very insistent there about never assigning ownership of binaries to 'nobody', but doesn't really explain why. If binaries are owned by nobody, and forced to run as nobody by setuid, they can replace themselves with evil versions. You want all your system binaries owned by root. They should manually drop privileges to nobody with special system calls; typically they'll have a configuration file where you tell the program what user to run as. Programs should never have permission to overwrite themselves, and the nobody user shouldn't have write permission to anything on your system, period.

I'm not aware of any distros that do this, so I don't think it's something you have to WATCH for, just don't do it yourself. :)
posted by Malor at 9:05 PM on December 21, 2009 [1 favorite]


I guess the problem is that the setuid implementation doesn't allow the file's owner and the identity the program runs as to be separate, a crucial detail. Thanks for the clear explanation of why it's no good as long as that holds true.

I've tried the 'programs should do this themselves with special system calls' approach, it's not pretty:

DESCRIPTION
       seteuid() sets the effective user ID of the calling process.
       Unprivileged user processes may only set the effective user ID to
       the real user ID, the effective user ID or the saved set-user-ID.


So basically, you can step down from root, but there's not even a concept of "further down." Which supports my point that UNIX doesn't really let an unprivileged process drop privileges any further. If I understand this correctly, the only workable approach is to run your program as root and immediately drop down.

I originally looked at this when considering how the Google Chrome security model might work on Linux, and the answer seems to be 'unlikely'. As best I can tell, you'd have to make the program setuid root and then de-escalate immediately, except you also want to make some named pipes and network connections. I suppose something like PolicyKit or SELinux might make things happen, but really I just let the experts do the doing.
posted by pwnguin at 9:46 PM on December 21, 2009


George_Spiggot: All that's needed is a decent amount of reliable bandwidth and quality professional-grade apps in the cloud.

The former is a first-world luxury, and the latter doesn't exist outside of a limited handful of applications. And of course, you just shift the problem space from the operating system to your virtual machine.

pwnguin: We shouldn't be promoting the activity of downloading and installing random crap from the internet.

The problem here of course is that while the official community and repository reflects the needs of most users, it doesn't necessarily reflect the needs of all users. So you always have some esoteric package that's going to lurk just outside the edge of a review process that has limited manpower and resources, because how many people are really interested in social network analysis APIs for R anyway?

pwnguin: Requiring people to inspect programs for safety is insane. We can't make computers do it right, how the hell does anyone else have a chance?

That's nice. However, exactly no one is arguing individual inspections of programs should be a routine practice. What is suggested is that people be aware of the risks in going outside existing peer review practices, which isn't that much different from the way shrinkwrap software publishers point out the risks of warez.

Malor: That actually appears to be the direction the Net as a whole is headed, that cloud computing metaphor.

Actually, the Net as a whole seems to be headed in the opposite direction, with smarter and smarter clients. ECMAScript and XMLHttpRequest pretty much make the dumb client dead in the water. Which overall is a good thing.
posted by KirkJobSluder at 5:28 AM on December 22, 2009


"you'd have to make the program setuid root and then de-escalate immediately"

That's currently how things are done, yes, and it's non-ideal. All of this stems from the fact that in the standard UNIX security model there is one and only one privileged user, and everything else is uniformly unprivileged. SELinux and other capabilities-based approaches mitigate this; Solaris has a very powerful mechanism for providing fine-grained control over capabilities, although it's very, very Sun-esque.

The owner/group/world system can and does work well for the most part, but there are some common-sense things that were left out of the design by the simplification of MULTICS and all that came before: a process dropping its own privileges is just one of those things. To its credit, though, we're talking about a design that is nigh on 40 years old and we're only now starting to really outgrow it.

ACLs and nested identity containers are one of the handful of things Microsoft got really, really right in NT. It's too bad that so much widely available software barely works with said Windows security features when they're employed; a lot of it is written on the assumption that the user can write to privileged bits of the filesystem or registry, and the workarounds tend to be things like "run as Administrator" or "make it Everyone:Full Control" rather than stabbing the vendor in the eye.
posted by majick at 5:31 AM on December 22, 2009


And while it may be true that persons over the age of 60 can be considered late adopters of information technology, if my family is any indication, that just means they are now adopting image and video editing, music composition, videoconferencing, and document scanning into their daily practice.
posted by KirkJobSluder at 5:47 AM on December 22, 2009


pwnguin: So basically, you can step down from root, but there's not even a concept of "further down." Which supports my point that UNIX doesn't allow priv dropping. If I understand this correctly, the only workable approach is to run your program as root and immediately drop down.

Yeah, actually, it occurred to me later as I was falling asleep that this approach means that if you want to isolate particular code from being able to affect your system, you must first give it carte blanche access to ANYTHING on the computer. NOT a good idea for trying to sandbox code. :)

Where it's actually useful is for services that are exposed to the outside world, where they start as root, open the files and sockets they need to work, and then drop privileges to nobody. That means, if they get exploited, they should only have the same access that "nobody" does. That's why you make sure "nobody" can't write to your system. :)
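
To make that concrete, here's a rough C sketch of the start-as-root-then-drop pattern. It's illustrative only, not code from any particular daemon; the choice of port 80 and the "nobody" account are just examples, and a real service would do much more (error recovery, chroot, logging, and so on):

    /* Sketch: bind a privileged port as root, then drop to "nobody".
       Port 80 and the "nobody" account are illustrative choices only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <pwd.h>
    #include <grp.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void) {
        /* 1. While still root, grab the privileged resource. */
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(80);          /* ports below 1024 need root */
        if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        /* 2. Look up the unprivileged account to become. */
        struct passwd *pw = getpwnam("nobody");
        if (!pw) { fprintf(stderr, "no such user\n"); return 1; }

        /* 3. Drop privileges: supplementary groups, then gid, then uid.
           The order matters; once the uid is gone, the rest would fail. */
        if (setgroups(0, NULL) < 0) { perror("setgroups"); return 1; }
        if (setgid(pw->pw_gid) < 0) { perror("setgid"); return 1; }
        if (setuid(pw->pw_uid) < 0) { perror("setuid"); return 1; }

        /* 4. Paranoia: regaining root must now be impossible. */
        if (setuid(0) == 0) { fprintf(stderr, "drop failed\n"); return 1; }

        /* The process keeps the socket it already opened, but on the
           filesystem it now has only nobody's (minimal) rights. */
        printf("listening as uid %d\n", (int)getuid());
        /* listen()/accept() loop would go here */
        close(sock);
        return 0;
    }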

Your overall observation that the Unix security model is too simple is completely correct, and you're also right that NT did a fantastic job with it. NTFS ACLs are awesome. NT and later can be highly secure, as it has a truly excellent and fine-grained permissions system, but the problem is that it was always turned off. Microsoft got smart and turned it on in Vista, but then butchered it back into near-uselessness with Win7. Idjits.
posted by Malor at 8:31 AM on December 22, 2009


While it's true that there are better and better implementations of code signing, and other measures to prevent untrusted applications from running with privileged access, those steps offer incremental improvements but fail to address the underlying structural problem.

If retail worked the way our current OS models work, this would be a typical retail transaction:

1. You go to pay for something at checkout.
2. You hand your wallet (and the keys to all of your worldly possessions) to the clerk (who is trusted by the store, etc)
3. The clerk takes your wallet into another room to remove the appropriate amount of cash... and eventually returns (you hope)
4. The clerk gives you your wallet back.


There is no way to limit the actions of the clerk... they could copy your house keys, walk away with everything, or put something bad into your wallet (a trojan horse). They could have been ordered (or tricked) into doing any of these things, just like any software is likely to contain a bug or two.


A cleaner way is to use a non-permissive model, which we all know and love...

1. You go to pay for something at checkout.
2. The clerk tells you how much you owe.
3. You hand that amount (or the next larger convenient amount) to the clerk.
4. You optionally receive change.

The maximum damage is that the clerk could fail to give change... but they don't have permission to root around in your wallet, copy your ID, house keys, etc. In this scenario, the money you hand over determines the limits of what the clerk can do, regardless of trust.

I hope this helps to show the problem with a default permissive environment (one where the clerk is allowed to do anything they want to all of your worldly possessions).
posted by MikeWarot at 8:58 AM on December 22, 2009


If retail worked the way our current OS models work...

Except that model quickly becomes unwieldy when applied to every task. Do I have to explicitly negotiate trust and permissions when I sit in my desk chair? Can I rummage through my kitchen for a banana or granola without a request-response framework? Add to that the issue that applications are not monolithic entities by any means. We have to give broad permissions for our web browser to load its "chrome" if we want to have a user interface more advanced than lynx.

The problem here isn't one of permissions. This particular exploit, like most malware, required a user to grant explicit permission for its operation. Most malware is already excluded by the default security status of the dominant operating systems. Sandboxing individual applications is a great idea, but you need some way to deal with the fact that most applications have their utility because they can open arbitrary files tagged/sorted/filed in arbitrary ways.
posted by KirkJobSluder at 9:35 AM on December 22, 2009


I hope this helps to show the problem with a default permissive environment (one where the clerk is allowed to do anything they want to all of your worldly possessions)

What you're saying is true, but getting to a default-deny paradigm from here will take a long time... certainly years, probably decades. There is a huge amount of software that assumes default allow, and rewriting or replacing all that software will take a very long time. Plus, we don't have the OS infrastructure yet to support default deny. We have the beginnings of these things in NT and Linux, but they both have a long way to go.

And, as KirkJobSluder is pointing out, if the user has control over his or her environment, he or she can bypass any security layer available. Only by removing control from the user can you prevent social engineering attacks, but most commercial entities that remove control from the user then use that power against the user, not for him or her.

Freedom means risk; it means the ability to make mistakes. If you want to be free, you don't get to be safe. You can improve safety levels by removing freedom, but you can never perfect them.

This is true for more than just computers.
posted by Malor at 9:49 AM on December 22, 2009 [1 favorite]


Just as an example, I have six applications with concurrent access to work in progress in a single file hierarchy. That's the sort of problem space that we need to figure out in regards to making more finer-grained permissions usable.
posted by KirkJobSluder at 9:52 AM on December 22, 2009


The problem isn't that the user will override security measures we put in place, but rather that we don't provide adequate tools to do the job, which means users ignore the tools, because they don't help in a meaningful way.

If you give the user the ability to somehow specify that they don't trust a program, and make it easy for them to run it in a sandbox, they will take that option, and might even pay for it to be available. The current systems don't make it easy... any permissions-based sandbox is going to have holes, lots of them.

The only way to sandbox things these days in a relatively safe manner is to run them inside a virtual machine, which is one of the smaller factors driving the adoption of virtualization.

Don't blame the users because the tools suck; it's rude, and it offends them.
posted by MikeWarot at 10:26 AM on December 22, 2009


KirkJobSluder: "The problem here of course is that while the official community and repository reflects the needs of most users, it doesn't necessarily reflect the needs of all users."

Debian and Ubuntu are completely capable of reflecting the needs of users. I admit there are times when accessibility for the visually impaired could be improved, and so on, but generally speaking, it should at least be possible to customize a system for the impaired within the Debian community framework. If you need DFSG-free software, there's an RFP and ITP process that can bring the software you need into Debian. If you want SNA, contact the Debian R Team and let them know you'd want it. For non-developers, I can see not knowing the process, or it taking too long or requiring more effort than they'd like. But especially in this case, developers should know better. If you want it, file an RFP. Even if nobody jumps on it, you've created historical evidence that someone does in fact want it; when the next person goes looking for this, they can find that request and realize they aren't alone in wanting it.

The challenge as I see it is the languages that reinvent packaging poorly. Ruby Gems in particular seem to be in constant conflict with Debian Policy, so most people skip the official Debian repos altogether. I'm not sure how PEAR and CPAN work, but they seem to manage. One might consider PEAR, CPAN and CRAN an ad hoc community capable of peer review across distributions.

KirkJobSluder: "That's nice. However, exactly no one is arguing individual inspections of programs should be a routine practice."

No one except the last link in the post?
posted by pwnguin at 10:33 AM on December 22, 2009


pwnguin: The key operative phrase is "outside the official package sources."

Debian and Ubuntu are completely capable of reflecting the needs of users.

Oh please. Users are not obligated to put projects on hold for weeks or months to push a package of last month's release through Debian's notoriously political, conservative, and sluggish process (*) of bringing software into the stable branch. Sometimes people make use of systems like CPAN, CRAN (R), and CTAN (TeX). And sometimes, people might need to work from SVN or CVS development branches. A person might even want to install non-free software like Acrobat Reader or Opera.

Of course, this entails some risk, but it's a risk that should be up to the user. Because gawsh darn it, developers and testers are users too.

(*) One of the initial inspirations for Ubuntu actually.

MikeWarot: Don't blame the users because the tools suck, it's rude and offends the users.

I'm not. I'm blaming proposals that offer nothing more than handwavium as a solution for what are real conflicts and trade-offs between safety and usability. Any proposal for improving the security of software needs to deal with the basic facts that people won't use strong passwords, and will default to broad permissions to avoid repeated nagging from software.
posted by KirkJobSluder at 12:08 PM on December 22, 2009


Exactly. Like it or not, MikeWarot, most users don't understand implications of the permissions they grant. There are some clearly very educated and smart people in this very thread that didn't really piece together that using repositories outside the ones provided by the distro carries some extra risk.... how is anyone supposed to explain that to someone who barely understands what a mouse does?

Computers are very hard to use unless you have a very specific kind of intelligence. When the tools are there to take full control over the computer, and you give them to someone who isn't really technical, it's like handing a chainsaw to a blind person. They may be able to cut the wood they intended, but there's a pretty good chance there will be collateral damage.
posted by Malor at 12:27 PM on December 22, 2009


Dammit, I hit post instead of preview. Continuing....


Yes, you can blame the chainsaw for the user being blind, and you can argue that chainsaws should be re-engineered to support blind people, but in so doing, you will make them substantially less useful for the sighted.

Powerful tools are dangerous, and that's another truism that applies to both computers and real life.
posted by Malor at 12:30 PM on December 22, 2009


And IMNSHO, the Debian community process shouldn't change, because it delivers well-tested full systems. But that doesn't mean everyone should stay within the universe provided by debian-stable.
posted by KirkJobSluder at 12:43 PM on December 22, 2009


Kirk,
I believe it's possible to make a system both usable and safe. What we have to do is make it very transparent to all involved what the possible side effects are of taking an action with a tool.

The default setup imposes no limits on side effects.

It would be nice to offer a right-click and "try this in a sandbox" tool for users. A more sophisticated setup would allow you to add other things, like somewhere to permanently store changes in that sandbox, if the user decided they liked the application.
posted by MikeWarot at 12:58 PM on December 22, 2009


What we have to do is make it very transparent to all involved what the possible side effects are of taking an action with a tool.

As Kirk said up above, that's "handwavium" (great word, by the way, I plan to steal it!). Even experts don't know all the possible side effects of taking an action with a tool. Modern operating systems are incredibly complex, and we keep finding whole new classes of security problems and stability bugs, like the "format string vulnerabilities" that were so big a few years back. It had never occurred to anyone that the C library function to decode and print strings could be hijacked in strange ways, so there was a whole BUNCH of side effects that were suddenly discovered, almost all of which were unpleasant. Suddenly, your perfectly safe program X became dangerous program X. The program itself didn't change, just the knowledge of unexpected interactions with other software.
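
For the curious, the format string class is easy to illustrate with a short C sketch. This is a generic textbook example, not code from any program discussed here:

    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;

        /* Dangerous: the user controls the format string. An argument like
           "%x %x %x" leaks stack contents, and "%n" writes to memory. */
        printf(argv[1]);
        printf("\n");

        /* Safe: the format string is fixed, so the input is only ever data. */
        printf("%s\n", argv[1]);
        return 0;
    }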

It's not like all these security problems we keep having are deliberate. If experts truly understood all the side effects, we wouldn't have security holes.

The sandbox idea is a good one, but remember that a program that's perfectly safe in a sandbox can instantly become dangerous as soon as it leaves that sandbox, because now it can interact in new ways with new software. Or might just carry a latent vulnerability that doesn't spring up until some future, as-yet-unwritten software gets installed.
posted by Malor at 4:31 PM on December 22, 2009


A capability-based system defaults to NO permissions on anything. In effect, every single program ever run is in a sandbox by itself, with the exception of resources you give to it explicitly. This means you never have to worry about side effects, unless you give the same resource to multiple programs, and even then, the system is still safe.

In a capability based system, you could still have word processors, etc... the only difference is that the dialog boxes used to acquire and store resources would be under the sole control of the user, and would only return the necessary "capability" (file handle) to read or write the file the user selected.
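
As a toy illustration of the file-handle-as-capability idea, here's a C sketch. The file path, the use of fork(), and the absence of any real sandboxing are stand-ins for the trusted-chooser/untrusted-consumer split, not a working capability system:

    /* Toy sketch: the trusted side picks the file; the untrusted side
       only ever receives an open descriptor (the "capability").
       The path and the use of fork() are purely illustrative. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static void untrusted_work(int fd) {
        /* This code never sees a path; in a real capability system it
           would have no way to open one on its own. */
        char buf[256];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
    }

    int main(void) {
        /* Think of this open() as the user's file-selection dialog. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {
            /* The child plays the sandboxed program: the descriptor is
               the only resource it was granted. */
            untrusted_work(fd);
            _exit(0);
        }
        close(fd);
        waitpid(pid, NULL, 0);
        return 0;
    }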

It's completely feasible, but it requires a lot of detail work and teamwork to pull all the parts together. I think it will eventually happen, but I'd like to help it happen sooner.
posted by MikeWarot at 6:44 PM on December 22, 2009


Sure, and that's kind of what SELinux is, but because we're trying to retrofit existing systems, this is an immensely complex undertaking. It's going to take a very long time. It would be MUCH easier if the OS and system software could be written from scratch, but with the sheer immensity of the computing universe these days, that's just not feasible anymore... retrofit is the only realistic path to getting capabilities deployed on any kind of broad scale.

If you're serious about wanting to help, RedHat is more or less leading the charge on SELinux on desktops. Getting involved in that project will help accelerate the process of getting the required capabilities documented and encoded into system policies. It's going to be painstaking and terribly boring work, and it's going to take a whole heck of a lot of it. Once RedHat has it fairly solid, it should spread into Linux systems everywhere, which would improve security for end users quite substantially.

Even that won't be perfect, but an SELinux-based desktop with good policies should be much, much more difficult to subvert. It still won't fix the problem of users granting too much permission, but once the capability system is in place, it should be possible to translate what they mean into something resembling English for normal people, so at least they'll have a better chance of understanding what's being asked for.
posted by Malor at 8:58 PM on December 22, 2009


Ran across this neat unix shell that attempts to do sandboxing; haven't tried it yet, but it sounds a fair deal smarter than my original 'suid nobody' idea.
posted by pwnguin at 11:22 PM on January 5, 2010


See also SELinux Sandbox, isolate.
posted by Zed at 8:17 AM on January 6, 2010


Yeah, I was originally going to mention isolate, but then I looked at the revision history. Thirteen commits total and not touched in a year, until an LWN article on it, and the extra attention apparently led to its first vulnerability patch. First of many?
posted by pwnguin at 11:55 AM on January 6, 2010

