Espionage
November 1, 2011 11:34 PM   Subscribe

There is a growing realization that U.S. cyberwar efforts resemble all its other 'war' rhetoric in being a boondoggle aimed primarily at limiting its own citizens' civil rights. In particular, China's breathlessly vaunted capabilities are "fairly rudimentary" (pdf, campus, previously).
posted by jeffburdges (108 comments total) 14 users marked this as a favorite
 
I firmly believe that computer security can be solved, which would mean the end of virus scanners, firewalls, and many other layers of crap that really don't quite work. The key to all of this is something called POLA, the Principle of Least Authority.

The way it works is to assume that the user doesn't want to give any resource to a program they are about to run unless they explicitly grant it at run time.

This allows the user to decide what they're willing to risk, and makes the possible implications of a given action far more transparent.
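A minimal sketch of what such a run-time, least-authority grant could look like, in Python pseudocode (every name here is invented for illustration; no shipping OS exposes exactly this API):

```python
# Hypothetical capability-style grant: the program receives ONLY the
# handles the user chose to give it, with fixed rights, and has no
# ambient authority to reach anything else.

class Capability:
    """An unforgeable handle to one resource with an explicit rights set."""
    def __init__(self, resource, rights):
        self.resource = resource
        self.rights = frozenset(rights)

    def read(self):
        if "read" not in self.rights:
            raise PermissionError("capability lacks 'read'")
        return self.resource.read()

def run_program(program, grants):
    """OS side: launch the program with the user's explicit grants only."""
    return program(grants)

# The user decides, per run, exactly what the program may touch:
with open("taxes-2011.csv") as f:
    report = run_program(lambda caps: caps[0].read(),
                         [Capability(f, rights={"read"})])
```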

Making this the default instead of the way it is now is an Apollo scale project, but needs to be done.

If we do get it, suddenly we would find that we can trust users after all. We would also enjoy much faster and more reliable computing in general.
posted by MikeWarot at 11:44 PM on November 1, 2011 [2 favorites]


You can never trust users.
posted by iamabot at 11:45 PM on November 1, 2011 [46 favorites]


You would just train the user to click "OK" on all dialog boxes as soon as they appear. A naive user (which is most of them) doesn't have any idea if it's reasonable for any given program to have access to the file system, registry, Internet connection or anything else.
posted by Harald74 at 11:47 PM on November 1, 2011 [18 favorites]


Clearly I'm not making my point clear...

Instead of asking "do you really want to do this?" or the like, it's the user who decides what they want to hand to a program... and the OS enforces that choice.

There's no reason an accounting program should have access to your email contacts, or to a web site in Bulgaria.

When you hand someone money, you don't hand them your wallet with instructions to give it back minus the proper amount...

But that's exactly what we do with computers... we're forced to TRUST each and every program we ever run to behave without mistake or corruption. We should never have to trust programs, only the OS.

Does that clarify it some?
posted by MikeWarot at 11:54 PM on November 1, 2011


I think the point you're missing, Mike, is that you're stating you can trust users to make the right decision about what can be trusted.

This is demonstrably false.
posted by coriolisdave at 12:02 AM on November 2, 2011 [27 favorites]


This conversation is weird, because you're treating a whole field of study as if it can be solved by some odd trust principle applied to what I think is desktop users, which I'm sure you think solves challenges in information security but is so tinfoilly I don't even know where to begin.

Information security, and how it ties to the post's subject, relates to propaganda and the erosion of freedom when it comes to interacting and corresponding with digital assets. The post is about how various governments view it as advantageous to hype up an information war to gain traction in funding and enable surveillance of their own populace.
posted by iamabot at 12:05 AM on November 2, 2011 [4 favorites]


Let me restate it like this.... imagine we treated our wallets like we treat computer programs... in a bizarro world like this..


"Ok... that coke will be $1.99"

Here's my wallet... [Loud safety voice comes on...]
HALT. We haven't verified the identity of this store or the salesman.
Do you want to pay anyway?
You then say yes... and it lets you hand your wallet to the salesman...

The salesman then goes in the back room (with your wallet) for a few hours, and eventually returns. (After all, computers are thousands of times quicker than we are)

If you're lucky... you get your wallet back with exactly the right amount of money.

If you're not... you find that your wallet was used to sell your car and home, and you now owe hundreds of thousands in credit card debt that isn't really yours.

... end of scene...

Now, in this world, we would end up blaming the person if they got ripped off because they didn't use the right store, right checkout person, etc.

Instead of blaming the user, why not give them a sane choice instead? Let them decide how to pay, and only give access to the least amount of resources possible to the checkout person (instead of your wallet, and 4 hours of waiting).


Chiding the person for handing over their wallet is what we're doing... just because we always have to hand it over doesn't mean it's a sane choice.
posted by MikeWarot at 12:05 AM on November 2, 2011 [1 favorite]


China is a useful boogeyman for the military-industrial complex in other regards as well. China refits one small ex-Soviet aircraft carrier for limited operations, and there's much wailing and gnashing of teeth in US naval circles, despite the US Navy's fleet of 11 supercarriers, each with a complement of aircraft rivalling most smaller countries' entire air force.

And some grainy photos of the prototype of the Chengdu J-20 stealth fighter have been used as an excuse to further the aims of the extremely expensive US fighter aircraft programs, despite most of the rest of the PLAAF comprising obsolescent aircraft.
posted by Harald74 at 12:05 AM on November 2, 2011 [8 favorites]


I'd agree that computer security can be vastly improved, mostly by outlawing closed source software.

I suppose all the advertising, finance, etc. companies could keep their analysis applications secret, but all public browser apps must be open source too; heck, you'd even outlaw obfuscated JavaScript.

At minimum, closed-source software should not be granted copyright protection, because software is a utilitarian item like clothing. Open-source software should, however, be granted copyright protection in exchange for making the source code public. In other words, the source code has a copyright as an artistic work which should not extend to the utilitarian item of compiled code unless the source code has been made available.
posted by jeffburdges at 12:07 AM on November 2, 2011 [3 favorites]


I'm not stating that the users can make the right decision about what programs to trust...

Nobody can make that decision right 100% of the time, and it only takes 1 time for things to get wiped out.

Isn't it far more sane to NEVER TRUST a program? This eliminates entire classes of security problems in one fell swoop (albeit a very hard to accomplish swoop)

Does that make sense?
posted by MikeWarot at 12:08 AM on November 2, 2011


At the hazard of feeding the troll, you have such a fundamentally immature understanding of information security that we're talking directly past each other.
posted by iamabot at 12:08 AM on November 2, 2011 [15 favorites]


Great, a bot-troll fight! Bring it on!
posted by marienbad at 12:10 AM on November 2, 2011 [10 favorites]


The problem is that the set of capabilities a program can ask for is going to be so complex and dependency-laden that an average user won't be able to make good decisions.

Also, this is only one aspect of computer security. A much bigger problem is making sure the implementations of the models are actually correct; your fancy POLA capability system is useless if it has a bug that I can exploit to give myself full access.
posted by destrius at 12:10 AM on November 2, 2011 [2 favorites]


I realize I left the US Army out of my last comment. They of course want a big piece of the pie as well, regardless of shrinking commitments in Iraq and Afghanistan:

Wired.com: Army’s Vision of the Future: Mostly Doom, Some Idiocy

The US taxpayer is of course going to pay for all of this: Big Army, big Air Force, big Navy and now big Cybersecturitywhatchagonnacallit.
posted by Harald74 at 12:15 AM on November 2, 2011 [3 favorites]


MikeWarot: Isn't it far more sane to NEVER TRUST a program? This eliminates entire classes of security problems in one fell swoop (albeit a very hard to accomplish swoop)

Does that make sense?


If you don't trust a program with any data at all, it can't do anything. So you have to give it some things it asks for, right? Will an ordinary user know full well the implications of giving the requested data to the program? Android has a permission system that sort of works in this way, but in the end most users happily give apps whatever permissions they ask for. Granted, the Android permissions are fairly coarse-grained, which leads to many apps asking for more than they need, but if it were too fine-grained, I think users wouldn't be able to comprehend what was being asked.

Anyway this is really tangential to the topic at hand.
posted by destrius at 12:15 AM on November 2, 2011 [3 favorites]


So, I just put my money in this slot on the side of my computer?
posted by obscurator at 12:18 AM on November 2, 2011 [1 favorite]


Iamabot... I'm not a troll... I'm just a very tired IT guy waiting for some very important backups to finish....

The fundamental problem is the unsecured nodes at the ends of the internet, our home computers, tablets, phones, etc. The underlying security model was ok when it was created for the first interactive time sharing systems in the 1970s, but is totally inappropriate for the modern era.

Capability-based security (using POLA everywhere) gives you a system that requires you to trust only the OS. You never have to trust an application again.

This then makes it possible for home users to have secure computing, which pulls the threat of cyber-armageddon off the table and leaves many big brother programs to go find a new excuse.

As I said, I believe that computer security can be solved; it requires massive amounts of coding because we have to migrate to a new OS, one based on a microkernel (to have the absolute minimum amount of "trusted" code possible).

All the apps have to be ported, because they aren't designed to fit in such a system. Think about opening a document in a text editor, for example: the editor assumes that IT is the one that determines the file name, instead of only receiving handles to resources.

So, I know the implications in terms of scale, and the layers of things that have to get shifted... I realize the layers of justifications that are being slowly accreted right now for ever more intrusive filtering, and licensing of internet users and computers.

If this goes on long enough, you could end up with the government choosing your OS for you, to make computing safe. (Of course that would be insane)
posted by MikeWarot at 12:19 AM on November 2, 2011


I note the OP doesn't quote the sentence in the link before the "fairly rudimentary" quote. It reads: "China had carried out a number of high-profile and successful hacks, denial of service attacks and website defacements in recent years." These include attacks like "Titan Rain", if anyone wants to look further.

More generally, why do you assume that only the American government has agency in this world, as if any other government or people are merely cardboard cutouts in your paranoid shadow play? What about the concerted Kremlin-sponsored 2007 cyber attacks against Estonia? Were they really carried out by Donald Rumsfeld under his bedcovers?

Just a couple of days ago the head of Britain's GCHQ, Iain Lobban, warned of a "disturbing" increase in attacks that could jeopardise Britain's "economic well-being", citing a "significant" attack on the Foreign Office this summer. Britain's Foreign Secretary William Hague also pointed to an "exponential rise" in the number of incidents.

It doesn't matter if an attack is 'fairly rudimentary'. A car bomb is 'fairly rudimentary', but that doesn't stop it from blowing people up. Governments have a clear duty to protect their country's interests and citizens from military, industrial and criminal cyber attacks, and pretending such things are mere delusions is simply delusional itself.
posted by joannemullen at 12:22 AM on November 2, 2011 [3 favorites]


Fine-grained access to money isn't a problem, and people seem to be able to pick the right files to open and edit, etc... so why should this make things harder for users?

If the OS handles the file open/save-as implementation (using a PowerBox), then you really don't have to trust the programs with anything other than the few files you let them access.
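A toy sketch of that powerbox idea (hypothetical names throughout; a real powerbox would be a trusted GUI component, not input()):

```python
# The file dialog belongs to the trusted shell, not the application.
# The app can ask for "a file of the user's choosing" but never names
# or browses the filesystem itself; it only receives the open handle.

def powerbox_open(prompt):
    """Trusted-shell side: let the user pick a file, open it with the
    user's authority, and hand back only the handle."""
    path = input(f"{prompt} -- file to open: ")  # stand-in for a real dialog
    return open(path, "r")

def untrusted_editor():
    """Application side: no path access, just the granted handle."""
    handle = powerbox_open("Editor requests a document")
    print(handle.read())
```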
posted by MikeWarot at 12:22 AM on November 2, 2011


This discussion is sort of useless without revisiting Stuxnet and some of the targeted SCADA attacks, which appear to be nation-state driven and very much targeted. Not to mention the RSA compromise, reasonably recently attributed to spear phishing and APT; that compromise alone could be (and probably is, though I have not reread the papers on it recently) regarded as a nation-state effort.

I believe there is some part of governments around the world interested in eroding the independence of the internet, but much of that ship sailed years ago. The larger problems facing governments today are breaches of air gaps, poor user training, complexity risk, etc. The internet is a vehicle of commerce now and becomes less about education and access daily.
posted by iamabot at 12:29 AM on November 2, 2011 [1 favorite]


Hang on, we're worried about money being stolen out of our bank accounts by the Chinese and not Wall Street?

(yes, I realize the problem is nationstate espionage)
posted by arcticseal at 12:38 AM on November 2, 2011


Mod note: MikeWarot, you appear to be wanting to have some other conversation than what this post is about. Maybe we can rerail this?
posted by taz (staff) at 12:39 AM on November 2, 2011 [14 favorites]


Isn't it far more sane to NEVER TRUST a program?

Well, it's been said that the only secure computer is one that's unplugged from the outlet, in a safe, at the bottom of the ocean, and even then it might be crackable. "NEVER TRUST" also means "NEVER USE", which probably isn't quite what you had in mind.

Perhaps what you're missing is that bits are just bits. There's nothing magic about any particular set of bits, it's just information. Computers massage bits, changing them from one form to another... that's pretty much all they do. There's nothing inherent about your Social Security number that makes it different, bitwise, than a picture of your dog. There's no fundamental way of making some bits secure and some bits not secure. In the end, it's all just bits.

So we make up rules about which programs can access which bits at what times. We create a security model, which is an abstraction of real life, and it's a grossly imperfect one. An incorrect security model is a form of bug, and to some degree, all security models are incorrect. For instance, you might keep your Social Security number only on encrypted storage that you briefly unlock when you need access, but that doesn't mean that anyone you transmit it to will do the same. They could be very sloppy with it. You trust them, but you can't know what their security measures are. And even if they TRY to do the same thing you're doing, there's no guarantee they're doing it RIGHT. Even if you came up with a protocol that stated "you must treat this data with this set of protections", and even if they did their damnedest to comply, they could make a mistake or have a bug in their model somewhere.

It is extremely easy to leak bits to programs you don't want to leak them to. It is really, really hard to keep them partitioned, both logically and physically. As Bruce Schneier has said, trying to make bits not copyable is like trying to make water not wet. In a way, every security model in the world is dependent on keeping water dry. It can be achieved in limited ways, but the tiniest mistake and voila, the water is wet again. And we humans are very poor at achieving perfection.
posted by Malor at 12:44 AM on November 2, 2011 [2 favorites]


taz: Cyber-war is hyped about as far as it can go... but the reality is that most machines on the internet can be compromised. There are more and more industrial control systems hooked to the internet every day. This makes a big target, but not in the sense the military is used to thinking in.

Cyber-war won't happen like 3rd generation warfare (tanks vs tanks, etc)... when it does happen, we might not even notice it for a while. Because we've been building layers of our civilization on top of loose sand, instead of a solid foundation... we're going to be vulnerable.

Nobody really thought in terms of airplanes as bombs before 9/11.... and then it changed. I'm sure there are lots of subtle yet powerful shifts that can be used to leverage our increased dependence on computing to take out some nodes that prove to be critical to things, but in a different way.

To tie it all together again... if we get rid of the soft underbelly that makes this possible, we solve lots of issues that layer on top of it.
posted by MikeWarot at 12:50 AM on November 2, 2011 [1 favorite]


MikeWarot: Fine-grained access to money isn't a problem, and people seem to be able to pick the right files to open and edit, etc... so why should this make things harder for users?

People understand the value of money very well. Comparatively, they don't really understand the complex dependencies underlying their digital data very well at all. One example I like to use, which people sometimes overlook, is Facebook's privacy permissions. One of the options is to share your data only with "friends of friends".

On the surface, this seems quite simple to understand. But really, when you enable this option you are delegating authorization control of your data to your friends; since they are the ones who decide who their own friends are, they can thus decide who sees your data. That's a higher level of trust assigned to your friends than most people might imagine.
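A toy illustration of how transitive that delegation is (hypothetical data): the visible audience is everyone within two hops, so each friend's own friending decisions silently widen who can see your posts.

```python
friends = {
    "you":     {"alice", "bob"},
    "alice":   {"you", "carol"},
    "bob":     {"you", "mallory"},  # bob just friended a stranger...
    "carol":   {"alice"},
    "mallory": {"bob"},
}

def audience(user):
    """Everyone within two hops of `user` -- the 'friends of friends' set."""
    one_hop = friends[user]
    two_hops = set().union(*(friends[f] for f in one_hop))
    return (one_hop | two_hops) - {user}

print(audience("you"))  # {'alice', 'bob', 'carol', 'mallory'}
```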

MikeWarot: Microkernel based systems reduce the attack surface by orders of magnitude. Running device drivers in user mode, and not trusting them helps as well. If we can get secure computing into the hands of users, this greatly reduces the number of already compromised or vulnerable targets for use as leverage.

Such systems make it harder to break in, but they don't make it impossible. If there's a vulnerability somewhere, it can be exploited. If multiple layers of security exist, you need to break through them one by one. But as long as your software isn't 100% bug-free, eventually somebody can get full access. Look at iOS; apps are locked down, the OS has many protection mechanisms, code signing is in place. People still manage to write jailbreaks for it, even remote jailbreaks like JailBreakMe.

And even with 100% bug-free code, there might still be hardware bugs or trojans, or side-channel attacks, or even weirder things we've not heard about yet...
posted by destrius at 12:51 AM on November 2, 2011 [2 favorites]


A lot of very interesting links here, I'd love it if Mike took his grandfather clocks debate to the grandfather clocks discussion forum...
posted by Meatbomb at 12:53 AM on November 2, 2011 [6 favorites]


I mean, theoretically, computer security can be "solved"; it's just a matter of coming up with full verification of all hardware and software from the transistors up, and specifications with no logical flaws at all. But practically, that's pretty much impossible except for certain special (and expensive) cases.

POLA might help to limit the damage, in a defence-in-depth way, by increasing the number of hoops an attacker must jump through to access your data. It definitely is an improvement over current authorization systems, but by itself it doesn't prevent attacks; it just makes them more difficult. And ultimately you still have to face the usability question: how do you make sure users don't just give full access to that random game they downloaded because they want to play it really badly?
posted by destrius at 12:55 AM on November 2, 2011 [1 favorite]


For anyone who knows anything about computers and security the whole "Cyberwar" thing seemed like a joke.

I mean for anyone who wasn't trying to make money off the thing, that is.

The hilarious thing is people talking about a "Cyber Pearl Harbor" nowadays, which I guess means an 'attack' by China or Iran that would, what? Annoy and inconvenience us into submission?

That said, once 'real' weapons are fully automated, then there are real risks at play.
The way it works is to assume that the user doesn't want to give any resource to a program they are about to run unless they explicitly grant it at run time.

This allows the user to decide what they're willing to risk, and makes the possible implications of a given action far more transparent.
That's how it works on Android, except you grant access rights at install time, not run time.

Anyway, that will never work because most users are too dumb even to handle that. Look what happened with Windows Vista and the UAC. Everyone hated it and turned it off.

It also depends on the underlying OS being secure. If there are vulnerabilities in the OS then the UI doesn't matter at all.

Look at Stuxnet, the only "cyberweapon" ever created. Not only did it work on machines that had never even been connected to the internet, the user never ran a program at all. It was totally silent and transparent to the user, and took advantage of OS vulnerabilities.

But more importantly, the kinds of vulnerabilities people are worried about in "cyberwar" are not the same as the risks to your own systems. It's corporate and government systems where the "user" isn't the same as the "stakeholder".

So I don't want to give hackers, or anyone else, access to all the pictures on my phone. If an app asks for access to my photos, I can choose not to install it, or not run it, or whatever.

But what if the "user" is a developer at some hypothetical cloud storage company that your cellphone is pre-configured to use? Let's say he installs a program on his desktop that uses a keylogger to get his password. Even if the spyware asks "permission" to get the keystrokes, he may click "OK" because he just happens to be drunk at the time.

Now the attacker has the password and can use it to get into your pictures, even though you didn't authorize anything; the developer did. But he was drunk.

That's what we're talking about. You can't trust "users" because the "users" aren't the ones who have to worry about this stuff.

The problem with your 'wallet' example is that it's not the user's wallet. It's someone else's wallet. The actual 'owner' of the wallet may be an amorphous group of people or all of society. It's not clear cut at all.
posted by delmoi at 12:56 AM on November 2, 2011 [1 favorite]


Nobody really thought in terms of airplanes as bombs before 9/11.... and then it changed.
Lots of people thought about it. It even happened in a Tom Clancy novel.
posted by delmoi at 12:58 AM on November 2, 2011 [8 favorites]


Isn't that exactly the sort of hyperbole Reason and Techdirt are talking about, joannemullen?

It's dumb and dangerous to recast simple industrial espionage as warfare, à la the "War on Drugs", "War on Terror", etc. Internet espionage cannot kill people like car bombs often do.

Russia and China have developed some domestic hacking capability largely by ignoring their own hacker culture. I suppose they focus upon murdering and imprisoning journalists, respectively, which reduces how much they care about what gets hacked internally.

America, OTOH, focuses upon pumping up some aging derelicts with DoD funds while suppressing the wider greyhat and cypherpunk culture which actually trains corporate security professionals. And GCHQ has zero credibility complaining while Britain bends over backwards to extradite people like Gary McKinnon.

posted by jeffburdges at 1:18 AM on November 2, 2011 [2 favorites]


Look at iOS; apps are locked down, the OS has many protection mechanisms, code signing is in place. People still manage to write jailbreaks for it, even remote jailbreaks like JailBreakMe.

And really, thank god for that. When system security started being, sometimes, hackers and owners against the manufacturers, we started heading down a dark path.
posted by JHarris at 1:52 AM on November 2, 2011 [2 favorites]


MikeWarot: "The way it works is to assume that the user doesn't want to give any resource to a program that they are about to run, unless it is explicitly stated at run time."

"Given a choice between dancing pigs and security, users will pick dancing pigs every time."
posted by vanar sena at 2:24 AM on November 2, 2011 [14 favorites]


There's no reason an accounting program should have access to your email contacts, or to a web site in Bulgaria.

How would a user know if such access isn't necessary? Especially as we move everything back to, essentially, dumb terminals and the godalmighty Cloud™, the normal user really has no idea whether the request for access is legitimate or not. A request to a Bulgarian URL? Well... it's possible that might be a Google Docs server. They're a global corporation, right? I'll approve it...

And, quite frankly, app developers have a bad habit of believing their creation is the special snowflake among apps, and it absolutely needs access to your email contacts, or Facebook account, or whatever.

I occasionally run Little Snitch on my Mac, and the types of arcane connection requests made by various programs are actually pretty staggering. I can't imagine the average user being able to parse the requests and choose wisely. It's a scenario where you're essentially training the user to either approve all requests, or deny all requests.
posted by Thorzdad at 3:14 AM on November 2, 2011 [4 favorites]


Boy, we really have derailed ourselves very thoroughly from the real topic here, haven't we?

Note that, like all the other Wars, they end up being waged on US. Not some enemy out there, but citizens. Us. Americans.

Isn't it time we stopped making war on ourselves?
posted by Malor at 3:24 AM on November 2, 2011 [4 favorites]


Nobody really thought in terms of airplanes as bombs before 9/11.... and then it changed.

The Running Man was first published in 1982.
posted by Mister Moofoo at 4:01 AM on November 2, 2011 [3 favorites]


Nobody really thought in terms of airplanes as bombs before 9/11.... and then it changed.

ummm...
posted by Thorzdad at 4:15 AM on November 2, 2011 [9 favorites]


Nobody really thought in terms of airplanes as bombs before 9/11.... and then it changed.

I'm pretty sure the Japanese kamikaze pilots would disagree with you...
posted by KGMoney at 4:15 AM on November 2, 2011 [2 favorites]


See, this is what happens when I preview first; somebody else posts the same thing 5 seconds sooner.
posted by KGMoney at 4:16 AM on November 2, 2011


I firmly believe that computer security can be solved, which would mean the end of virus scanners, firewalls, and many other layers of crap that really don't quite work.

If computer security could be solved, then biological security (i.e. real viruses, etc) could be solved. 4.5 billion years of research has not found that solution.
posted by DU at 4:18 AM on November 2, 2011 [9 favorites]


Android has a permission system that sort of works in this way, but in the end most users happily give apps whatever permissions they ask for. Granted, the Android permissions are fairly coarse-grained, which leads to many apps asking for more than they need, but if it were too fine-grained, I think users wouldn't be able to comprehend what was being asked.

I was just going to type this. Android asks me to grant permissions all the time whenever I try to install something new or an update to an existing app already on my phone. Most of the time I have no idea if the app is requesting just the minimum level of access it needs to get its tasks done or is sending way more information than is absolutely necessary back to its developers for data-mining (or other) purposes. I go by reviews most of the time in the hopes that apps that request too much info will be dinged by users who actually know more than I do about whether the requested permissions are appropriate or not. I appreciate Google's attempt to give the end user more control over what apps are doing on their phones, but I can assure you that the vast majority of users are just guessing (or don't care at all) when they approve permissions. You'll need to solve the problem of "naive" users first before you expect people to be adequately equipped to make the decisions your "ask first" system requires.
posted by longdaysjourney at 4:34 AM on November 2, 2011 [3 favorites]


The name of the game is Advanced Persistent Threat. The US is great at very showy cyberwarfare that's really just really, really good SIGINT - your Soviet-made anti-aircraft system doesn't seem to be able to see Israeli jets all of a sudden - and less so at APT.

Totalitarian regimes with deep pocketbooks are good at intelligence gathering and collation. The threat isn't that they'll blow up a nuclear reactor, but that they'll slowly learn the names of all the scientists on a top-secret research project involving a next-generation reactor, and use that intel to gain access to the plans and engineering documents, and then beat the US company to market. They use custom-crafted viruses, trojans and botnets to gather endless trickles of trivial data into vast volumes of hard intelligence. They know who to target and how to target them, because that's the primary goal - actually compromising high value targets or shutting them down with DOS is way down the list of priorities.

For now.

After all, we're currently at peace with the major players - tho the EU and Iran seem to be going after each other with subtle attacks designed to cause embarrassment and financial expense. I'll take that over cruise missiles and airline hijackings every day and twice on Sundays.

This is a pretty contentious field - there are those who believe that Russia is the biggest danger, to the point where they unrealistically downplay China's capability; others believe domestic organized crime is the most serious player; and still more believe that because we've caught China red-handed so many times, they're overrated, and it's the people we're not catching in the act at all we should be worried about - and they have a point.

But on the flipside, how many of these are distractions and false fronts to disguise actual capability? There's some evidence that they're playing rope-a-dope, and they're not even trying to disguise the fact that PLA-owned corporations are part-owners of top-tier software and hardware vendors and suppliers.
posted by Slap*Happy at 4:34 AM on November 2, 2011 [5 favorites]


If you think this problem is solvable then you need to do more research. If there were some magic design paradigm that solved all vulnerabilities, we would all have adopted it decades ago.
posted by humanfont at 4:57 AM on November 2, 2011 [1 favorite]


...but the reality is that most machines on the internet can be compromised.

Umm. No. That's mostly science fiction. There are weak points, but they are systemic and specific rather than universal.

Computer and network security is a very broad, very old field with roots in bunko and espionage going back millennia. There is no single factor that can be corrected or taken into account - as long as there's a system, there will be errors in the system, and hence an opportunity (or risk, depending on which side you're on) for attack.
posted by Slap*Happy at 5:00 AM on November 2, 2011 [1 favorite]


The two aren't mutually exclusive - there isn't one universal weakness that would allow someone to become the villain of Die Hard 4, but that doesn't mean most computers don't have a weakness somewhere. It just means they're not all the same weakness.
posted by Holy Zarquon's Singing Fish at 5:36 AM on November 2, 2011


"More generally, why do you assume that only the American government has agency in this world, as if any other government or people are merely cardboard cutouts in your paranoid shadow play? What about the concerted Kremlin sponsored 2007 cyber attacks against Estonia, were they really carried out by Don Rumsfelt under his bedcovers?"

They were never shown to be Kremlin-sponsored, regardless of Estonia's fairly lurid claims. You know the Kuwaitis also talked about babies pulled out of incubators, tovarich?
posted by jaduncan at 5:47 AM on November 2, 2011 [1 favorite]


You don't think people in the real world "hand over their wallet?" Is that why I've been having so much trouble getting my 2.7 million USD out of Nigeria without your help?
posted by Obscure Reference at 6:27 AM on November 2, 2011


Internet espionage cannot kill people like car bombs often do.

Given that Stuxnet was designed to work on SCADA software, which doesn't just control those Siemens machines the Iranians were using for uranium enrichment but also is used in control systems for a lot of industrial and infrastructure processes, I'd say that's a large overstatement. For instance, if a hacker (spy or terrorist) decided to bring down a petroleum pipeline, people could die. We know this because people already die occasionally in pipeline accidents. It hasn't happened yet and I certainly hope it never does, but saying it couldn't happen is an overstatement. Stuxnet was extremely targeted but as a proof of concept, it's pretty scary.

(And the security issue here is not one that operators can be expected to confront; it's "do I trust the readings on this instrument panel and screen that were designed and programmed by other people?" So individual user decisions don't even come into it.)
posted by immlass at 6:32 AM on November 2, 2011 [3 favorites]


Hey guys, another IT Security person here, and I've got an even better idea.

The evil bit. See, all packets that are bad will have to use an evil bit in their header. For people who want to not be hacked, you just have your firewall drop those packets. Problem solved!
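For the record, the evil bit really was specified, in the April Fools RFC 3514, as the reserved high-order bit of the IPv4 flags field. A sketch of the world's easiest firewall rule (stdlib only; header parsing simplified for illustration):

```python
import struct

def is_evil(ipv4_header: bytes) -> bool:
    """Bytes 6-7 of the IPv4 header hold flags (3 bits) plus fragment
    offset (13 bits); RFC 3514's evil bit is the reserved top bit."""
    flags_frag = struct.unpack("!H", ipv4_header[6:8])[0]
    return bool(flags_frag & 0x8000)

def firewall(packet: bytes) -> str:
    return "DROP" if is_evil(packet[:20]) else "ACCEPT"  # problem solved!
```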
posted by Threeway Handshake at 6:58 AM on November 2, 2011 [11 favorites]


Evil is never black and white, Threeway Handshake. You'd want to use an evil byte, so that you could properly represent grayscales.
posted by Malor at 7:04 AM on November 2, 2011 [1 favorite]


(Meaning, of course, that the least significant part of the byte would be the naughty bits.)
posted by Malor at 7:07 AM on November 2, 2011 [9 favorites]


Computer security is quite like biological security and users can understand it much more easily when it is framed that way because we're hard-wired for "disgust" of unclean sources.

Don't click attachments = Don't eat that, you don't know where it's been.

Don't click on ads on sketchy websites = It's a great place to visit but don't drink the water.

Don't leave your firewall down when traveling = Don't take your clothes off in a bus terminal.

Don't download stuff from unreliable sources = Don't eat food from unlicensed restaurants

Basically, if you can get them to understand that data and programs are just as significant as food and water, they can correlate disease, infection and taint much more easily.

"Sure those dancing pigs might be fun to look at, but consider that in order to show them to you, your computer is going to have to eat that pork and you don't know if it was properly cooked."
posted by seanmpuckett at 7:08 AM on November 2, 2011 [4 favorites]


Permission systems are great in theory, but fail in practice. Documentation is usually weak and developers are lazy, which means everyone just asks for their app to have all permissions, and users click OK. The result is a lot of overhead with zero benefit.
posted by humanfont at 7:19 AM on November 2, 2011


There's no reason an accounting program should have access to your email contacts

Uh, then how does it send out my invoices?

The evil bit is specified in RFC 3514, by the way.
posted by These Premises Are Alarmed at 7:21 AM on November 2, 2011 [1 favorite]


And speaking of the evil bit, at work we've been getting beaten up lately about scheduling our attack and penetration activities through the company's change management system. A reasonable request, one with which I'm sure the hackers will comply...
posted by These Premises Are Alarmed at 7:22 AM on November 2, 2011


aimed primarily at limiting its own citizens civil rights

Jeff, you're being so uncharitable. Surely the aim of lining the pockets of the defense industry (public and private) counts as a key one.
posted by grobstein at 7:30 AM on November 2, 2011


Permission systems are great in theory, but fail in practice.

They're great when they're ruthlessly enforced. For political and practical reasons, they almost never are.

It's going to be a stone soup solution, where everything adds a little flavor - Firewalls are getting smarter and more specific about what they're protecting and how it's being attacked, network and host profiling and its attendant compliance tools have come a long way and work nicely with other players, devs and system architects are getting religion about only allowing encrypted traffic, and helpdesks are getting better about supergluing plugs into USB ports, because DLP still sucks.

I'm beginning to think that DLP needs to be baked-in at the database and filesystem level and turned on by default... client side solutions aren't making it.
posted by Slap*Happy at 7:32 AM on November 2, 2011


One of the more interesting and less-appreciated talks at Defcon this year was "Balancing the Pwn Trade Deficit". It presented some great statistical groupings indicating the number and relative sophistication of the foreign (mostly but not completely) Chinese groups attacking the Taiwanese government and education sectors. An anecdote I recall is that one professor had a meeting in the morning and by mid-afternoon had received an email appearing to be meeting minutes, actually containing a malicious PDF.

Massive technical sophistication (a la having a thousand Mark Dowds in a bunker somewhere) isn't the only or best way to make a 'cyber' attack. It's not hard (for a nation-state) to have a handful of 0-days, the success comes from having some human intelligence to make targeted attacks using the attack du jour combined with the humint.

Cyber-war or APT is, without a doubt, the new hotness in getting your project or startup funded. Every fucking time I hear 'Cyber', of course, I think about masturbation.
posted by These Premises Are Alarmed at 7:39 AM on November 2, 2011 [3 favorites]


Internet espionage cannot kill people like car bombs often do.

I've never heard of internet espionage actually killing anyone, but I can think of some ways that it could. Messing with insulin pumps, for instance.

I suppose one might not consider this "espionage" exactly, but spies can be assassins too.
posted by Winnemac at 7:47 AM on November 2, 2011


MikeWarot: I firmly believe that computer security can be solved, which would mean the end of virus scanners, firewalls, and many other layers of crap that really don't quite work. The key to all of this is something called POLA, the Principle of Least Authority.

Even if you could fix the social aspect of it (dancing pigs, etc), this is a massive technical challenge. How do you specify your access policies? Can you handle delegation ("Bob is authorized to use the company credit card"), or confidentiality ("Alice can't read section three of the document"), or group membership ("the board of directors is a subset of shareholders")? Are programs also principals? ("Charlie using Firefox may access the web site, but Alice cannot unless she's using IE.") By the time you have an authorization logic sufficient to describe rich security policies, there's a good chance it's equivalent to first-order logic, and therefore undecidable, meaning that a computer could stall out forever trying to answer basic security questions.
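A toy taste of such an authorization logic, with only direct grants and transitive group membership (hypothetical data; real policy languages add conditions and quantifiers, which is where the decidability trouble starts):

```python
grants = {("board", "sign-contracts")}            # principal -> permission
member_of = {("alice", "board"), ("board", "shareholders")}

def authorized(principal, perm, seen=frozenset()):
    """Direct grant, or inherited through (cycle-checked) membership."""
    if (principal, perm) in grants:
        return True
    return any(
        authorized(group, perm, seen | {group})
        for (p, group) in member_of
        if p == principal and group not in seen
    )

print(authorized("alice", "sign-contracts"))  # True, via board membership
```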

And then there's the technical aspect: It's incredibly hard to write a bug-free computer program any more complicated than "hello world". Practically impossible. This applies to the operating system and/or virtual machine monitor that you are relying on to enforce your access policies. The security breaches you read about in the news are often caused by something like a stack buffer overflow followed by escalation of privileges by exploiting a flaw in the operating system or even the CPU.

There are software engineering and programming-language techniques to make this better: use a strongly typed language, pay attention to specifications and test thoroughly, run existing analysis tools to search for memory leaks and other security problems. But these aren't foolproof. The holy grail is program verification: you write a formal specification of a program's security properties, and then you analyze the program to mathematically prove that it complies with the specification. Unfortunately, this kind of analysis takes so much computing power that it cannot currently be run on something as complicated as a web server or an OS kernel. And even if it could be run, it turns out to be incredibly hard to write the security specification correctly for a complex program...
posted by qxntpqbbbqxl at 7:48 AM on November 2, 2011 [1 favorite]


It's not hard (for a nation-state) to have a handful of 0-days, the success comes from having some human intelligence to make targeted attacks using the attack du jour combined with the humint.

And that's what's going on now - Stuxnet was just a warm-up. Duqu was a targeted attack using a Windows kernel zero-day. For all the talk about training users up-thread, nothing they could have done would have prevented this sort of attack.

Internet espionage cannot kill people like car bombs often do.

The US Air Force attack drone center was hit by a virus. This time it appeared to be just a keylogger designed to steal Mafia Wars passwords (or so they would have you believe), but give the attackers some time.
posted by anti social order at 7:53 AM on November 2, 2011 [1 favorite]


Picked up a trojan a couple weeks back and, trying to act quickly, fingered an active, CPU-intensive process called "trustedinstaller.exe". Those wags, I thought. What I did: terminate the process and cage the file for inspection. What many, many other people did: delete it. Those people are now looking for copies of "trustedinstaller.exe", required by Windows 7. Whoops.

Most software is designed to be safe to use by expressly limiting the user's ability to muck things up. But, you know, some of us work at it.
posted by Durn Bronzefist at 7:55 AM on November 2, 2011


Also, the more I read about stuxnet, the more I learn (starting with the wiki). Fascinating stuff.
posted by Durn Bronzefist at 7:56 AM on November 2, 2011


Instead of asking "do you really want to do this?" or the like, it's the user who decides what they want to hand to a program... and the OS enforces that choice.

The problem: We've very carefully taught users to not care. Every web page on the net, it seems, has a "share" button that sends info to, well, everywhere. They post everything on FB/G+. Helpdesks tell them to click OK to the scary warning when they're doing something legitimate, and thus, they learn that it is okay to click OK when that scary warning comes up.

Worse: Nobody gets paid to be secure. Everyone gets paid to ship features faster. As long as software companies can completely disclaim warranty and fitness for purpose, they have no motivation to actually write things securely.

And then there's the technical aspect: It's incredibly hard to write a bug-free computer program any more complicated than "hello world". Practically impossible.

Even worse, even if you code perfectly, the current realm of abstraction upon abstraction means you are also counting on everyone who's touched the system or any library you are using to also have written perfectly secure code.

Programmers are so divorced from the actual instructions being executed that it's basically impossible to program securely -- and of course, if you don't, you have legions of fall guys to blame.
posted by eriko at 7:58 AM on November 2, 2011


Ruthless enforcement practices also fail and create a shitty user experience, which then leads to zero technology adoption, or to people just emailing shit around because they can't get the ultra-secure site to work.
posted by humanfont at 8:04 AM on November 2, 2011


I agree, TPAA, we need more parallels with cybersex to mock cyberwar.

I'm happy the derail suggested an intuitive parallel between insecure activities and gross activities, like say categorizing the relative dangers of webpages according to images commonly forwarded around the internet. We must carry forward such bold advancements in human-computer interfaces in memory of Steve Jobs!
posted by jeffburdges at 8:14 AM on November 2, 2011 [1 favorite]


Programmers are so divorced from the actual instructions being executed that it's basically impossible to program securely -- and of course, if you don't, you have legions of fall guys to blame.

While I agree that the social problems are probably unsolvable, the technical problems are not insurmountable. HiStar sandboxes code and eliminates the notion of superuser without much actual code (15k LoC). The sandbox escape component is 110 lines. Your existing code base can run because they wrap all the UNIX syscalls with taint checks.
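The flavor of that taint checking, in a toy Python sketch (this is NOT HiStar's actual label algebra, just the general shape): reading labeled data contaminates the process, and contaminated processes lose the network.

```python
class Process:
    def __init__(self):
        self.taint = set()

    def read(self, data, labels):
        self.taint |= labels            # reading contaminates the reader
        return data

    def send_to_network(self, data):
        if self.taint:
            raise PermissionError(f"tainted with {self.taint}; network denied")
        return f"sent: {data}"

p = Process()
secret = p.read("ssn=078-05-1120", labels={"user-private"})
try:
    p.send_to_network(secret)
except PermissionError as e:
    print(e)                            # the exfiltration path is closed
```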

What this gets you is primarily process isolation, though. Your browser is big and important enough that it would need to be rewritten to be closer to the Chrome design, which frankly needs to be done anyway. And the paper relies on a virus scanner to declare code safe to read/write to the network or files, which is questionable.

To bring this back on topic, I figure though, that this sort of stuff just makes the various National Security Agencies' job harder.
posted by pwnguin at 8:22 AM on November 2, 2011


So how is it that Macs and iPads aren't being owned all over the place without running any virus scanners? It can't be because there aren't enough of them out there any more. Apple owns a big and growing chunk of the laptop market and 90% of the tablet market.
posted by empath at 8:24 AM on November 2, 2011


Lol empath
posted by CautionToTheWind at 8:27 AM on November 2, 2011


How many OSX and IOS viruses are out there in the wild right now?
posted by empath at 8:28 AM on November 2, 2011


Lol empath don't derail the thread
posted by CautionToTheWind at 8:33 AM on November 2, 2011 [3 favorites]


So how is it that Macs and iPads aren't being owned all over the place without running any virus scanners?

Macs have virus scanners and our policy requires they be installed and running. Even without the threat of any existing major worms, having anti-virus gives you a way to respond to new threats. By the time the malware rolls around, it's too late.

The same policy also requires Linux boxes to run ClamAV, which to my knowledge only scans mail queues, because committees are idiots. This is a pure ass-covering move: when my Ubuntu desktop gets pwned they can blame me for not running ClamAV, and if it is running, they can point to proactive measures like that as proof that they're taking the problem seriously.
posted by pwnguin at 8:35 AM on November 2, 2011


I'm torn regarding Macs. I run mine at home without AV. If you install AV on your Mac, you're adding another program with deep system access that could be vulnerable to exploitation - you're increasing your attack surface. I'm not sure I'd install it today if I were comfortable with the other controls in place.
posted by These Premises Are Alarmed at 8:39 AM on November 2, 2011


Yeah, don't make me regret suggesting that Steve Jobs would've wanted us using shock sites for security warnings, empath, but: I'd assume virus authors earn the most money by writing more or better Windows viruses, rather than expanding to Mac OS X's small market share with all the associated retooling costs. Ditto Android vs. iOS.
posted by jeffburdges at 8:40 AM on November 2, 2011


OpenBSD 5.0, the BSL-4 of operating systems, was released yesterday.
Only two remote holes in the default install, in a heck of a long time!
posted by SyntacticSugar at 8:41 AM on November 2, 2011


Empath, Macs have built-in anti-malware maintained by Apple themselves. It's rough and not really complete, but it has been sufficient to quash most malware that has shown up since 10.6.

iPads aren't getting owned all over the place because iOS is a closed ecosystem: Apple has to allow you to run the applications, unless you've totally circumvented their security model with a jailbreak. Of course, if you've broken that model you're far more open to malware, because you'll no longer be within the walled garden.
posted by iamabot at 8:41 AM on November 2, 2011 [1 favorite]


In related [Cyber] War on Terror, the Metropolitan Police have purchased their own cell-phone surveillance system. No doubt, currently deployed to capture Chinese Fifth Columnists camped outside St. Paul's Cathedral.
posted by SyntacticSugar at 8:44 AM on November 2, 2011


Yeah, don't make me regret suggesting that Steve Jobs would've wanted us using shock sites for security warnings, empath, but: I'd assume virus authors earn the most money by writing more or better Windows viruses, rather than expanding to Mac OS X's small market share with all the associated retooling costs. Ditto Android vs. iOS.

It's also occurred to me that most malware development seems to be done on small budgets in poorer countries -- they may not have easy access to Mac dev platforms.
posted by grobstein at 8:46 AM on November 2, 2011


It's also occurred to me that most malware development seems to be done on small budgets in poorer countries -- they may not have easy access to Mac dev platforms.

A Mac as a dev platform is simply a VMware Fusion instance away now with Lion.
posted by iamabot at 8:47 AM on November 2, 2011 [1 favorite]


Oops, you don't need Fusion to run the Lion virtual machine.
posted by iamabot at 8:48 AM on November 2, 2011


So how is it that Macs and iPads aren't being owned all over the place without running any virus scanners?

Apple, and really, almost any OS dev these days, selects sensible defaults, implements security checks at a very low level, and makes the user and developer base put up with them. Microsoft, to placate an installed base that insists on all manner of insane configurations, does not.

Macs are not immune from phishing attacks. Macs are not immune to online services like google being compromised. Sites hosted on macs are not immune to malicious javascript and other web-server attacks. Macs are not immune from trojans.

Be careful - just because your platform fares well against remote exploits, rogue PDFs and viruses, doesn't mean other attack vectors are covered as well.

In related [Cyber] War on Terror, the Metropolitan Police have purchased their own cell-phone surveillance system. No doubt, currently deployed to capture Chinese Fifth Columnists camped outside St. Paul's Cathedral.

Cell phones are popular remote control systems for explosives. London has had first hand experience with that.

Sometimes anti-terrorism technology is actually used to combat terrorism. It's important to learn the difference, as you can then get really ticked when some sheriff in Texas buys a weaponized UAV with homeland security money... money that should have gone toward cellphone monitoring tech in vulnerable urban areas.
posted by Slap*Happy at 9:01 AM on November 2, 2011 [1 favorite]


Cell phones are popular remote control systems for explosives
So is a Dead Man's Switch.
posted by SyntacticSugar at 9:06 AM on November 2, 2011


I find the data breach report Verizon publishes annually to be an excellent read:

http://www.verizonbusiness.com/resources/reports/rp_data-breach-investigations-report-2011_en_xg.pdf
posted by iamabot at 9:07 AM on November 2, 2011 [1 favorite]


And the Met are probably more interested in the possibilities of the system as an IMSI-Catcher, rather than some improbable '24' style scenario.
posted by SyntacticSugar at 9:12 AM on November 2, 2011


I heard something last night on the BBC regarding "cybersecurity" -- some sort of conference in the UK(?) It was a discussion of the potential threats from cyberwar issues (and IP issues of course).

They discussed it with some British guy and they were saying "you say that *some countries* are clearly engaged in cyberwar - is that Russia and China?" and he wouldn't fall for their trick -- but yeah, obviously the underlying sentiment was there. I wish I could recall more. But it bothers me to hear this kind of talk. I'm not saying there isn't a real issue, but... I don't know.

Also - it bothers me that military installations can have easy access to the outside network and get hit. It seems that they'd have a better system in place than just tossing it into the wider civvy internet. I'm sure there's all kinds of controls, but should a supposedly secure command infrastructure really be able to be hit by a virus? I suppose if you have any sort of need for communication with the outside world, that opens a door. Hrmm. :\
posted by symbioid at 9:18 AM on November 2, 2011


Also - it bothers me that military installations can have easy access to the outside network and get hit. It seems that they'd have a better system in place than just tossing it into the wider civvy internet. I'm sure there's all kinds of controls, but should a supposedly secure command infrastructure really be able to be hit by a virus? I suppose if you have any sort of need for communication with the outside world, that opens a door. Hrmm. :\

Secure networks are supposed to be airgapped, most of them are, but an airgap is generally only as good as the users who bridge it....and users cannot be trusted.
posted by iamabot at 9:21 AM on November 2, 2011 [2 favorites]


Lots to reply to... could write a book of them.... but off the top of my head...

Yes... computers are like biological systems in terms of infections, for the same reasons... they both trust mobile code (machine code vs. DNA/RNA). Unlike cells, we can build operating systems that don't evolve, and don't trust mobile code, but still function well.

Application programmers shouldn't have to worry about their code being "secure"; they should have to worry about functionality and meeting the needs of the users. It's the OS that is supposed to enforce security.

Most computers on the internet ARE vulnerable... that's why they are hidden behind firewalls, virus scanners, etc... yet zero day exploits, like prime numbers, are infinite in supply if you have the resources to find them.

Being able to subvert industrial control technology can kill people, both immediately, and by disrupting supply chains, don't kid yourself into believing otherwise.

Capabilities based security/POLA isn't a magic bullet, but it would fix a lot of things if it actually got used properly. The fact that it hasn't does not prove it won't work.... it wasn't used in the past because the conditions then made it unnecessary... things have changed in the last 40 years.

This is one of the first good conversations about computer security I've had in a while.... we stayed away from blaming the users and religious wars about operating systems, for the most part. Thanks everyone!
posted by MikeWarot at 9:49 AM on November 2, 2011


I am not at all a fan of the "cyberwar" term, and nobody I know who works in information security likes or uses it. With maybe, possibly the exception of Stuxnet, most stuff we have seen so far (Aurora, Titan Rain, Ghostnet, the DDOS on Georgia, etc.) can't really be called acts of war. Espionage/sabotage, or preparing for sabotage, sure. But not really an act of war that require a kinetic response, no matter what DoD says.

That doesn't mean that there aren't cyber attacks on a regular basis (I've talked before about the sort of stuff I see where I work). But Slap*Happy is correct that it's the intelligence they are after, and it's a process of slowly gathering data from soft targets that can then be used to later go after more juicy targets like defense contractors and the U.S. government. It really doesn't matter that China's cyber capabilities may be "fairly rudimentary" when all it takes is a well-crafted email payload that ultimately drops a common Taidoor or PoisonIvy backdoor into the targeted system, opening it up for them to siphon data at their leisure. If they can social-engineer their way into someone's computer, why on earth would they waste their "good" exploits and 0-day vulnerabilities, if they have them?

Educating users is a major line of defense. But it only takes one slip-up, which is highly likely when they just... keep... coming... The "Persistent" part of APT is what makes it so insidious, not necessarily the sophistication of the malware they are using.
posted by gemmy at 10:10 AM on November 2, 2011 [1 favorite]


Unlike cells, we can build operating systems that don't evolve, and don't trust mobile code, but still function well.

Alright everyone out of the digital pool! (This is why we can't have nice things.)
posted by mek at 11:28 AM on November 2, 2011 [1 favorite]


So how is it that Macs and iPads aren't being owned all over the place without running any virus scanners? It can't be because there aren't enough of them out there any more. Apple owns a big and growing chunk of the laptop market and 90% of the tablet market.

Malware developers are like other software programmers. Once they learn a particular platform, they continue to write more stuff for it because they know the tricks. As long as malware developers can continue to make money cracking Windows XP boxes, why would they switch?
posted by humanfont at 12:48 PM on November 2, 2011 [1 favorite]


it requires massive amounts of coding because we have to migrate to a new OS, one based on a microkernel (to have the absolute minimum amount of "trusted" code possible).

I wish I'd been here for the earlier part of this, but anyhow...

We had this conversation just last week. There is precisely one formally verified microkernel out there (seL4), and the team that checked it managed to verify about 250 lines of code per man-year. Since failures probably don't live in a single line of code but in the interactions between lines, that number is only going to get worse as the OS actually becomes good for something.

Also, the principle of least access gets used all the time in the business world. The way it works is that admins give users just enough capabilities to not quite do their job, so the users end up developing workarounds that typically leave the system more exposed than it would have been if the users had sufficient access to begin with. In the system I described in the previous thread, our local admins had to make everyone an admin so they could actually save their data.
posted by Kid Charlemagne at 6:18 PM on November 2, 2011 [1 favorite]


Kid Charlemagne - The whole point of a microkernel is that it doesn't need to be any bigger, nor does it need to trust anything else. Drivers and all programs run in user mode; the kernel protects itself and manages memory.

You're correct in your assertion about least access being used in the business world. However, none of the current crop of OS offerings give the user a way to decide at run-time which of their access rights they wish to delegate to a program. That's traditionally an administrator function.

The idea of users being admins gets close to the idea of users being better able to control things, but it's still a default-permit environment, which is like having a trigger guard on a sledgehammer.

The user should be able to pick a file, folder, email address, web URL, or any number of other items, drop it into a program, and have the program operate on those resources. If the program needs to read or write a file on behalf of the user, it would request a file dialog, which would run completely outside the control of the program and would then pass the new resources to the program in question.

The program would not be able to just randomly open some file or folder, ever.
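
In code, the shape of it would be something like this Python sketch (trusted_file_dialog is a made-up stand-in for a chooser that would really run in the trusted part of the OS, outside the program's reach):

    def trusted_file_dialog(prompt):
        # In a capability OS this chooser runs in the trusted computing
        # base, not in the application; here the user just types a name.
        path = input(prompt + " -- choose a file: ")
        return open(path, "r")  # only the chooser ever sees raw paths

    def word_processor(document):
        # The program works on the handle it was given and nothing else;
        # there is no call here that could open an arbitrary file.
        return document.read().upper()

    doc = trusted_file_dialog("Open document")
    print(word_processor(doc))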
posted by MikeWarot at 7:09 PM on November 2, 2011


My favorite computer security ghost story is that someone actually compromised some early branch in the compiler family tree decades ago and leveraged it so that by now every compiler (and decompiler, and operating system, et cetera) is irrevocably and undetectably tainted (essentially Ken Thompson's "Reflections on Trusting Trust" scenario made real). The best part is that it's completely unfalsifiable.

I guess the point is that as a system's complexity exceeds our capacity to understand it, magical thinking invariably and necessarily fills the gap.

Cyberwarfare is still a particularly embarrassing farce though.
posted by Ictus at 8:22 PM on November 2, 2011


If the paid (and presumably trained) specialists in a DJ30 company cock it up so badly that everyone had to have local admin rights to do their job, what chance does your great aunt Tilly have of consistently getting it right? Particularly when phishing is all about convincing you to give thing B access to resources you would only give to thing A if it were obvious to you what the hell you were doing.

But the real issue such an OS faces is that it would have to either be backwards compatible with everything, or we'd have to redo all the coding (and more importantly, code patching) that humanity has cranked out since FLOW-MATIC. Given the number of companies that spent huge sums to patch their decrepit COBOL code prior to Y2K, let's just say... *shakes Magic 8-Ball*... ALL SIGNS POINT TO NO.
posted by Kid Charlemagne at 8:33 PM on November 2, 2011


Kid Charlemagne - THAT is exactly why capability-based security hasn't taken off... it's not because it's a bad idea, nor because it's complicated, hard, or any of that... it's the simple inertia of everything already out there.

I'm going to have to start writing code, demos, etc... to get this and some other ideas off the ground. The good news is that I can use the source code from a lot of good tools, and tweak them just enough to deal with dialog boxes, etc.

There's nothing stopping someone from building a sandbox that implements capabilities, and using it to run another (different) set of applications inside it. Java does this, albeit with poor security, as does Flash.
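
As a taste of what such a sandbox would pass across its boundary, here's a toy attenuated capability in Python (AppendOnlyLog is my own invention, and real isolation would need OS help, since Python alone can't contain genuinely hostile code):

    class AppendOnlyLog:
        """Capability: the holder may append lines and do nothing else."""
        def __init__(self, path):
            self._path = path

        def append(self, line):
            with open(self._path, "a") as f:
                f.write(line.rstrip("\n") + "\n")

    def untrusted_plugin(log):
        # The plugin was handed append access to one log file -- no read,
        # no delete, no other files -- so the worst a malicious version
        # could do through this interface is write noisy entries.
        log.append("plugin ran")

    untrusted_plugin(AppendOnlyLog("plugin.log"))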

Again, thanks to all for a great discussion!
posted by MikeWarot at 11:34 PM on November 2, 2011


Typical Linux-based desktop systems are less liable to drive-by and trojan malware infections than any other desktop system for a number of reasons:

1: Market share. There's not much bang for the development buck for malware authors in attacking such an unpopular system. Some people think this is all there is to it, and I've been reprimanded for encouraging people to migrate from Windows to Linux when they're sick of dodging malware on the grounds that if enough people do this then Linux will lose its present immunity. I don't take these reprimands seriously, as I think Microsoft's vendor-lock-in juggernaut is more than capable of seeing off any such threat.

1a: Diversity. Linux ain't Linux, and something coded to get hooks deep into Ubuntu might well have trouble with Red Hat or Gentoo or Arch or Debian and vice versa and so forth.

2: POLA culture. Most Unix-based application software never requires write access to files outside the user's own home folder. As a consequence, Unix users are not clicker-trained to turn on superuser access just because some random app has asked for it. This is a cultural POLA of the crudest, coarsest-grained kind imaginable, but by and large it works.

By contrast, even in 2011, Windows is weighed down by an overwhelming mass of useful application software acculturated to DOS, a system with no meaningful form of access control at all. MS has tried to work around this legacy with assorted horrible sandboxing hacks (User Account Control, virtual files, IE Protected Mode and so forth) but these are mostly hamstrung by the marketing imperative to make every new version of Windows compatible with everything that has ever run on any MS OS since before Windows even existed. There will always remain enough of this shameful software in common use that most Windows users will reflexively allow just about anything past their OS's otherwise perfectly competent security barriers on request.

3: Centralized software repositories. This is a big one, in my view. The normal method for installing new software on a typical Linux box involves pulling a signed package from a central (though widely mirrored) repository maintained by the Linux distributor. This is in sharp contrast to the commercial world, where installing new software involves grabbing a .dmg (or, worse, an executable installer requiring superuser privileges) from some third-party website. It's way, way easier for a Trojan author to make an installer pass as legit when the user (a) has been trained by years of ignoring indecipherable EULAs not to read anything inside an installer dialog and (b) doesn't have any expectations about how a software installation is going to go beyond clicking Next -> Next -> Next -> Next -> Done. (A toy sketch of the verification step appears just below the list.)

3a: Centralized update mechanisms. These come as part of the typical Linux desktop's centralized package management system, and are generally the only kind of updater the typical Linux user will encounter (extension updates for Firefox are a notable exception). On Windows, every vendor (and from some vendors, every app) has its own update mechanism; they all look different, and learning which "updaters" are trustworthy is non-trivial.
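
To illustrate the trust chain in point 3, here's a toy Python version of the verification step (the index contents are placeholders, and a real package manager checks a GPG signature on the index first, which I'm waving away here):

    import hashlib

    # Pretend this arrived in a signed index file from the distro's repo.
    # (Placeholder value: the SHA-256 of the string "test".)
    SIGNED_INDEX = {
        "editor_1.0.deb":
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def verify_package(name, data):
        digest = hashlib.sha256(data).hexdigest()
        if SIGNED_INDEX.get(name) != digest:
            raise ValueError("refusing to install %s: hash mismatch" % name)

    verify_package("editor_1.0.deb", b"test")  # passes; any tampering fails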

It seems to me that if the US government took the same attitude toward widescale replacement of Windows-based desktop systems as various European governments have done, it would probably achieve a far greater reduction in malware damage risk than any amount of spending on new and/or stronger TLAs. It also seems to me that this is massively unlikely to happen (see: MS vendor-lock-in juggernaut).
posted by flabdablet at 3:44 AM on November 3, 2011 [2 favorites]


Central repositories are a Good Thing (tm), but they're the clunkiest, most centralized version of an approach described as enumerating goodness. That's the opposite of how modern anti-virus software works, where, by definition, if someone writes a new exploit, my anti-virus software has zero chance of catching it.

There are maybe 150 pieces of software on my computer. Of those, 75 are some component of Windows (about ten of which I use regularly and 50 of which I've never used), 50 are games (mostly from Steam, mostly picked up on sale, because $2.00? Why the hell not!), and 25 or so are other things like LibreOffice and Abstract Spoon's "To Do List". You could pretty much go online and find a review of every piece of software on my computer (sometimes in animated form where a guy talks fast and with imps, but I digress).

If somebody out there had a big list of all the trustworthy software (along with some hashes, so I couldn't just call my malware Microsoft Word), that would go a long way toward doing what centralized repositories do, but without requiring the physical infrastructure (servers, et al.) of an actual centralized repository.

Oh, and changing the default autorun settings on USB ports, CDs and so on from "Please beat me savagely and then sodomize my corpse" to "Off" would probably help.
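
A first cut at that list could be as dumb as this Python sketch (the hash and names are placeholders I made up, and a real tool would fetch a signed list rather than hard-code one):

    import hashlib
    import pathlib

    # Maps SHA-256 -> friendly name. The hash is what's trusted, not the
    # filename, so I couldn't just call my malware "Microsoft Word".
    # (Placeholder value: the SHA-256 of the string "hello".)
    TRUSTED = {
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824":
            "hello tool",
    }

    def audit(folder):
        # Enumerating goodness: anything NOT on the list gets flagged,
        # the inverse of anti-virus software enumerating badness.
        for exe in pathlib.Path(folder).rglob("*.exe"):
            digest = hashlib.sha256(exe.read_bytes()).hexdigest()
            print(exe, "->", TRUSTED.get(digest, "UNKNOWN - not on the list"))

    # audit(r"C:\Program Files")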
posted by Kid Charlemagne at 4:50 AM on November 3, 2011


I will go so far as to agree that if the Windows ecosystem were not horribly broken in countless interlocking ways there would be no compelling reason to switch to something that works better :-)
posted by flabdablet at 5:33 AM on November 3, 2011 [1 favorite]


There are presently 4,033 EXE and 21,933 DLL files on the C drive of the Windows 7 box I'm using to type this reply. A flaw in any one of them could be used to compromise this system. I only have 30 items in my entire Start menu, which reflects how new the machine is.
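
If you want the same tally for your own box, a quick Python sketch along these lines will do it (os.walk silently skips folders it can't read, so run it as administrator to get full totals):

    import os

    exe_count = dll_count = 0
    for _root, _dirs, files in os.walk("C:/"):
        for name in files:
            lower = name.lower()
            if lower.endswith(".exe"):
                exe_count += 1
            elif lower.endswith(".dll"):
                dll_count += 1
    print(exe_count, "EXE files,", dll_count, "DLL files")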
posted by MikeWarot at 2:26 PM on November 3, 2011




Mac vs. Windows? Really? Windows XP is still more targeted than Vista or Windows 7, from what I understand, and there is malware for the Mac now.

As far as actual hackers hacking things, as opposed to 'mass' attacks by malware, worms and stuff like that, historically OS X has been easier, rather than more difficult, to hack.
posted by delmoi at 8:24 PM on November 3, 2011


Mac OS has long had a far greater reputation than it deserves on purely technical grounds as The System Without Malware. That very reputation has, in this instance, an actual protective effect. The threat posed by Mac malware that presents itself as a malware removal tool strikes me as being a bit like fairies: only real if you believe in it, which I think Mac users will in general be far less likely to do than Windows users.

It further seems to me that systems like Ubuntu and Debian, where the normal software installation procedure involves the inbuilt Software Center or other locally-accessible package manager rather than the explicit use of random stuff downloaded from the Web, act to train their users in such a way as to make that kind of social engineering even harder. The Apple app store strikes me as a clear attempt to get the same kind of culture going for iOS, but the iron-fisted control that Apple exercises over whose stuff gets in there and the consequent incentive to jailbreak your iThing and install who-knows-what-from-wherever spoils it somewhat.
posted by flabdablet at 1:38 AM on November 4, 2011 [1 favorite]


We've started discussing Apple's Mac App Store in a patent war thread, flabdablet. Ain't looking good.

I'm hopeful that Apple's sandbox requirements effectively keep useful software like Growl out of their App Store, forcing those developers into traditional distribution channels, or even better, MacPorts and Fink.
posted by jeffburdges at 6:33 AM on November 4, 2011 [1 favorite]




delmoi: "Mac vs. Windows? Really? Windows XP is still more targeted than Vista or Windows 7, from what I understand, and there is malware for the Mac now.

As far as actual hackers hacking things, as opposed to 'mass' attacks by malware, worms and stuff like that, historically OS X has been easier, rather than more difficult, to hack.
"

I just had my first malware attempt at work today - this was on a Win 7 system w/MSE installed. It kind of scares me. It came from a search for an innocuous term and one of the sites (which I had opened in a tab -- hrmm.. I wonder, now, whether opening a tab shouldn't pre-load the page -- it might prevent tabnabbing, or whatever the term is)... I have had some nasty shit happen due to, ahem... "adult" surfing, at home on an XP system (with MSE installed - which has usually been the best tool I've used for such detection)... But never before have I experienced an attack on either an XP or a Win7 system with MSE installed. I don't know if this means the Win7 system was targeted or what. It did come from a Java vector, that's all I know. I'm too drunk to post any more. HAVE A GOOD NIGHT Y'ALL!
posted by symbioid at 4:47 PM on November 4, 2011


I was working on a customer's computer in his home this afternoon when his phone rang. I picked it up, handed it over and went on working while he talked to his caller. Then I heard him say "Oh, that's handy; we've got the computer man over here working on it right now. I'll put him on."

By the time he'd finished handing me the phone, the caller had mysteriously hung up. Pity. I'd have liked to hear exactly how "Microsoft" found out about his "errors" and what they proposed he should do about them.

The really scary part is that had I not happened to be there at the time, he would quite likely have fallen for the scam, and there would now be who the hell knows what trojan crap running on his PC. The machine had in fact been unresponsive to the point of uncontrollability for the last few days, and even though that fault had spontaneously resolved itself before I got there, it was still running very, very slowly.

All the best attacks involve social engineering. It seems to me that an anti-cyber-warfare department should be a branch of the education department rather than the military.
posted by flabdablet at 7:33 AM on November 5, 2011 [5 favorites]


There is a Bloomberg article on Palantir Technologies, who were involved with the HBGary attempt on WikiLeaks.
posted by jeffburdges at 12:56 PM on November 26, 2011



