Microsoft wastes electricity.
September 24, 2000 4:22 PM   Subscribe

Microsoft wastes electricity. OSOpinion has an interesting article discussing how the increase of home computer usage has put a dent in the overall power available to America. Who's to blame? Guess.
posted by cCranium (18 comments total)
 
Despite the overriding stench of anti-Microsoft sentiment apparent in this article, the author raises some interesting points.

Before hardware became as cheap as it is, programmers fine-tuned their code to use every bit efficiently. With the massive availability of resources, there's been a fairly massive amount of bloat in software. One of the prime offenders is Microsoft, which, the author argues, is more concerned with features and options than with efficiency.

Not a new argument. What is, though, is that computer manufacturers in general are now straining a completely different kind of resource: electricity.

I pretty much love anything that argues for clean, efficient code, and this is a great argument.
posted by cCranium at 4:26 PM on September 24, 2000


Well, since I'm in the mood for a good rant, I'll take this out to the bleeding edge... The fault is US. The world has been in a state of blind infatuation regarding what we call computers for 30 years. Since the mid-1980's, business and industry have spent untold billions buying, upgrading and maintaining them. They are everywhere, on every desk, tasked with often inane, make-work processes; they are always crashing, munging data and being unreliable. They are credited with a huge, unmeasured and likely non-existent increase in productivity, while in the experience of the average joe, they are always causing a problem. At times I think that the late 20th century "computer" craze will ultimately be recognized as the greatest episode of mass mania ever. It's as though we never really stopped wishing we could be George Jetson or embark on the Enterprise. The "dot.com" frenzy is just a recent example of the consensual loss of rationality computers invoke in us. Ch00 think?
posted by quonsar at 4:38 PM on September 24, 2000


You wouldn't like what happened if everyone optimized for "clean, efficient code" -- 7 year development cycles, prices five times what they are, other side effects.

The reason programmers have gotten sloppy and software has bloated is that programmers are getting more and more expensive and RAM is getting cheaper and cheaper. It simply makes more economic sense to use more memory and less programmer time.

I know all about this because I come from an industry where memory is dear and we do have to work to create "clean efficient code", and I know how much extra work we have to put into our code to make it fit. If you tried to develop standard commercial apps the same way, the result would take too long and cost too much. You can basically figure that "clean efficient code" costs about ten times as much per function to develop as "bloated sloppy code" and takes about five times as long.

Code used to be written "clean and efficient" for PCs, too, back in the bad old days, but it was only because resources were tight. As soon as resources loosened up, the economic pressure of fast product release cycles and terribly expensive programmers (like me) caused an irreversible switch to the other model. Nostalgia is fine, but don't expect anyone to listen to you and actually do anything about it.
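Steven's ratios can be put into back-of-envelope arithmetic. This sketch is purely illustrative: none of the dollar figures, seat counts, or RAM prices below come from the thread; only the ~10x cost and ~5x schedule multipliers are his.

```python
# Back-of-envelope comparison of "clean, efficient" vs "bloated, sloppy"
# development economics. All specific figures are illustrative assumptions.

def project_cost(functions, cost_per_function, months_per_release):
    """Return (total dollars, calendar months) for one release."""
    return functions * cost_per_function, months_per_release

# Sloppy-but-fast baseline: say $1,000 per function, a 12-month cycle.
bloat_cost, bloat_months = project_cost(500, 1_000, 12)

# Steven's ratios: ~10x the cost per function, ~5x the calendar time.
clean_cost, clean_months = project_cost(500, 10_000, 60)

# The memory "saved" by clean code is cheap by comparison: even if the
# bloated build wastes 32 MB per seat across 10,000 seats at $1/MB (a
# generous year-2000 RAM price), that's only $320,000.
ram_penalty = 32 * 10_000 * 1

print(bloat_cost + ram_penalty)   # bloated code plus extra RAM: 820000
print(clean_cost)                 # clean code: 5000000
```

Under these assumed numbers the bloated product plus the RAM to run it costs a fraction of the clean one, which is the economic pressure Steven describes.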


posted by Steven Den Beste at 4:41 PM on September 24, 2000


Doesn't it depend on what you mean by "clean and efficient"? Surely a program designed and written by a small number of geniuses will be better than a program written by a huge number of semi-literate nerds who think that being able to hack C++ or Java makes them smart?
posted by davidgentle at 5:04 PM on September 24, 2000



You wouldn't like what happened if everyone optimized for "clean, efficient code" -- 7 year development cycles, prices five times what they are, other side effects.


I think there's a bit of a difference between a bunch of assembler gurus wringing out every last cycle and spending more time on design to avoid gratuitous waste. Most large applications I see are heavily bloated because nobody cares enough to spend even a little time on tuning; if it were communicated that this was important, application vendors could trim out a significant portion of the bloat without much of an impact.

As an example on an extreme case, RISKS Digest subscribers might recall the degree of bloat demonstrated in Microsoft's RegClean utility, where something like 75% of the EXE size was used by resources and code which was never used.

This would fit closely with the idea of encouraging companies to produce reliable software and better interfaces. Nobody *wants* frequent application releases; what they want is to be able to get something useful done. If a vendor spent an extra 2 months on better planning, optimizing and testing their application over the life of the project, the delay would almost certainly be offset by the advantage of having a better product. It's widely acknowledged that most shops don't spend as much time on application design as they should; more time there translates into tighter, faster, more reliable code for all future releases, which again is almost always a much larger return than a faster time to market.
posted by adamsc at 7:59 PM on September 24, 2000


Well, like cCranium, I felt this article had some good points that were tainted by the extreme anti-Microsoft slant.

Whenever I set up default profiles, for example, I make sure that they're on Energy Saver mode. A very relaxed one for a desktop machine, of course, but one that at least will drop power consumption overnight if it's left on -- as, increasingly, PCs are. Yes, Microsoft's sometimes insane startup times can share some of the blame here, but I'm not certain that there's that much difference. How many Linuxheads leave their machines on constantly? I'll bet it's quite a bit.

It's the prisoner's dilemma. If the marginal cost to each individual is almost negligible, but the larger cost to society is substantial, good luck in getting the individuals to cooperate.

Some PCs come from the vendor with power-save modes turned on. Laptops make it easy. But I know a lot of people who turn OFF power-save mode because they consider even the modest spin-up times an annoyance.
posted by dhartung at 8:17 PM on September 24, 2000


There is a company called Newdeal (you can visit them at www.newdealinc.com); they have written an OS and an office suite entirely in assembly code. Although the office suite doesn't work on my wonderful new PC running Windows 2000, it is only 5MB in size. I was thoroughly impressed when I saw this. Remember: greater size doesn't necessarily mean greater performance.
posted by Zool at 10:30 PM on September 24, 2000


Oh, I don't know. I think that 90%+ of the desktop market being a badly programmed, bloated piece of tripe is good for hardware.

Does shoddy software that requires fast hardware to make it bearable ensure that fast hardware gets cheaper (economies of scale)? Or would hardware developers continue at their current development speed regardless of the software that runs atop it?

posted by holloway at 12:59 AM on September 25, 2000


Steven: adamsc summed up my viewpoint quite well.

I agree with you: a certain amount of slap-and-patch coding is necessary for quick software cycles. And quick software cycles are necessary to get and keep desktop share.

I don't, however, feel that an overly bloated program is necessary. IMs, for instance. I used to use ICQ religiously (hell, I've got a 5-digit ICQ number! I'm supah-cool!) but it got too damn big and bloated, and now I use Yahoo! messenger, with everything turned off.

Kludged functionality that bloats a program just makes me cringe. The fact that the kludged functionality makes for a weaker all-around product (heavy disk and memory usage, less stability, etc.) makes me just want to scream profanities for a few hours. :-)
posted by cCranium at 6:07 AM on September 25, 2000


There will come a point when CPUs can't run at a substantially higher clock rate with current technology variants. Heat dissipation is a function of the number of transistors and how fast/often you're requiring them to switch.

As an aside, clock rates are already approaching microwave wavelengths which scares the bejeezus out of me.

I believe that at the point when CPUs are in the range of diminishing returns for clock rate increases, work will commence to drastically reduce power consumption to compensate for having to have 4 CPUs in the machine.
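The relationship plinth is describing, power tracking transistor count and switching rate, is the standard CMOS dynamic-power relation P ≈ α·C·V²·f. A quick sketch, with the capacitance, voltage and frequency values being assumptions chosen for illustration, not measurements of any real chip:

```python
# CMOS dynamic (switching) power: P ~= alpha * C * V^2 * f, where
#   alpha = activity factor (fraction of capacitance switched per cycle)
#   C     = total switched capacitance, in farads
#   V     = supply voltage, in volts
#   f     = clock frequency, in Hz
# The specific numbers below are assumptions for illustration only.

def dynamic_power(alpha, capacitance_f, voltage_v, freq_hz):
    return alpha * capacitance_f * voltage_v ** 2 * freq_hz

# A year-2000-ish desktop CPU: ~20 nF effective capacitance, 1.7 V, 1 GHz.
desktop = dynamic_power(0.2, 20e-9, 1.7, 1e9)

# Halving both voltage and frequency cuts power 8x, because voltage
# enters the formula squared -- which is why low-power parts chase
# voltage reductions harder than clock reductions.
throttled = dynamic_power(0.2, 20e-9, 0.85, 0.5e9)

print(desktop / throttled)   # 8x reduction
```

This is also why fewer transistors (smaller C) and fewer forced switches (smaller α·f) translate directly into less heat, as plinth says.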

Here's the deal: at the base level, all current computers can do is this:
1) move memory from point a to point b
2) perform limited arithmetic and boolean logic on memory
3) based on the outcome of 1 and 2 (or unconditionally), change the control path of the program
4) tickle IO devices (although this can be done by 1)
5) be tickled by IO devices (this can also be done by 1, but it's typically more efficient to let it happen on an interrupt basis rather than by polling)
This is not a whole lot. What gets expensive is trying to make these things run more efficiently (or trying to keep backward compatibility).
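Those primitives really are a complete machine. Here's a toy interpreter covering the first three (the IO primitives are omitted); the opcode names and program format are invented for illustration:

```python
# A toy machine built from plinth's primitives 1-3. Opcodes are invented.

def run(program, mem):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "MOV":            # 1) move memory from point a to point b
            a, b = args
            mem[b] = mem[a]
        elif op == "ADD":          # 2) limited arithmetic on memory
            a, b, dst = args
            mem[dst] = mem[a] + mem[b]
        elif op == "JNZ":          # 3) conditionally change the control path
            a, target = args
            if mem[a] != 0:
                pc = target
                continue
        pc += 1
    return mem

# Sum mem[0] down to zero into mem[1]: a whole loop from three primitives.
mem = run([
    ("ADD", 0, 1, 1),   # mem[1] += mem[0]
    ("ADD", 0, 2, 0),   # mem[0] += mem[2]  (mem[2] holds -1)
    ("JNZ", 0, 0),      # repeat while mem[0] != 0
], {0: 4, 1: 0, 2: -1})

print(mem[1])   # 4 + 3 + 2 + 1 = 10
```

Everything else, caches, pipelines, the whole instruction-set zoo, is machinery for doing these few things faster.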

Within this is the biggest argument for RISC processors: if you strive to do just those 5 things well, then the effort can be put into pipelining and caching which give you a huge win in performance. It's easier to pipeline and cache on a RISC machine than a CISC machine which means fewer transistors, which means less power.

The thing is, the most popular computing devices are based on a model which includes dozens of questionably necessary variations on these 5 operations. Although the 68K variant which is in the Palm/Handspring is somewhat cleaner than x86, it's still firmly in CISC, especially with respect to the address/data register separation. At least the variant in the Palm is low-power.

If you want to save power today, turn your machinery off. I used to run at least one machine 24/7 and my power bill dropped by 25% by only running the machines when I used them. It's a habit that you can choose to adopt into your work style. Leaving work--even just for lunch? Shut your machine off. Go get coffee while it boots. Heading to a meeting? Power down the monitor at least.
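plinth's 25% figure can be sanity-checked with quick arithmetic. The wattage, electricity rate and bill size below are assumptions for illustration, not his actual numbers:

```python
# Rough check: what does running one PC 24/7 versus ~8 hours a day
# do to a monthly power bill? All figures below are assumptions.

WATTS = 150              # assumed draw for a year-2000 PC plus CRT monitor
RATE = 0.10              # assumed electricity price, $/kWh
HOURS_ALWAYS_ON = 30 * 24
HOURS_WORKDAY = 30 * 8

always_on = WATTS / 1000 * HOURS_ALWAYS_ON * RATE
work_hours = WATTS / 1000 * HOURS_WORKDAY * RATE

print(round(always_on, 2))    # about $10.80/month
print(round(work_hours, 2))   # about $3.60/month
# Against an (assumed) $30 monthly bill, the ~$7 difference is in the
# neighborhood of the 25% drop plinth reports.
```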

posted by plinth at 6:12 AM on September 25, 2000


[craydrygu] It means things like not having a 3D graphics engine built into your spreadsheet program (the famous Excel easter egg). It means your email program doesn't have the ability to write and render HTML, and run ActiveX programs (Outlook/Outlook Express). It means that, instead of a kludgy OS (Win98) built on a buggy OS (Win95) built to be compatible with an obsolete OS (Win3.1) that was just a shell for a stolen, inadequate OS (DOS),

Wow, such vitriol! You realize, of course, that the features you mention as "built-in" to the applications are actually services provided by the operating system? Microsoft's component architecture makes such things possible. Such an architecture has advantages and disadvantages, just like the monolithic application paradigm.

As for the backwards compatibility of Windows 95/98/ME all the way back to "inadequate" DOS, a lot of customers perceive this as a huge value-add. To be able to easily run nearly any program written for the IBM-compatible PC in the last twenty years in the same operating environment is a good thing. It adds a bunch of overhead to the OS, and for some people that's a bad thing, but if backwards-compatibility is important, Windows 9x can't be beat!

[plinth] Within this is the biggest argument for RISC processors: if you strive to do just those 5 things well, then the effort can be put into pipelining and caching which give you a huge win in performance. It's easier to pipeline and cache on a RISC machine than a CISC machine which means fewer transistors, which means less power.

You make some good points in favor of RISC chips. But again, the advantages come down to software. RISC architecture requires compilers to be a lot more exact and fastidious about the way they create machine code in order to take advantage of the RISC architecture. In other words, to get the most out of RISC, your software (the compiler) has to be highly efficient and well-refined. That takes more time and effort than producing a similar compiler for a CISC architecture. It's a trade-off, as all decisions are. CISC isn't inherently bad, it's just different.
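One concrete example of the work daveadams says RISC pushes onto the compiler: on a simple in-order pipeline, the instruction right after a load can't use the loaded value without stalling, so the compiler tries to hoist an independent instruction into that slot. A minimal sketch, with an invented three-address instruction format:

```python
# Sketch of compile-time instruction scheduling for a simple in-order
# RISC pipeline: fill the "load-use" delay slot with an independent
# instruction. The (op, dest, sources) format is invented for illustration.

def schedule(instrs):
    """instrs: list of (op, dest, [sources]). Returns a reordered copy."""
    out = list(instrs)
    i = 0
    while i < len(out) - 1:
        op, dest, srcs = out[i]
        nxt_op, nxt_dest, nxt_srcs = out[i + 1]
        if op == "load" and dest in nxt_srcs:
            # Look ahead for an instruction safe to move into the slot.
            for j in range(i + 2, len(out)):
                cand_op, cand_dest, cand_srcs = out[j]
                independent = (dest not in cand_srcs        # no use of load
                               and cand_dest != dest        # no clobber
                               and cand_dest not in nxt_srcs
                               and nxt_dest not in cand_srcs)
                if independent:
                    out.insert(i + 1, out.pop(j))
                    break
        i += 1
    return out

before = [
    ("load", "r1", ["mem0"]),
    ("add",  "r2", ["r1", "r1"]),   # would stall: uses r1 immediately
    ("load", "r3", ["mem1"]),       # independent -- can fill the slot
]
after = schedule(before)
print([op for op, _, _ in after])   # ['load', 'load', 'add']
```

A real compiler does this across whole basic blocks with full dependence analysis, but the principle is the same: the more the hardware leaves unscheduled, the more exact the compiler has to be.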

BTW, the new VLIW architectures, such as Intel's EPIC (to be used in the IA-64 Itanium processor), are similar to RISC in that they require the compiler to do a lot of work in order to take full advantage of the processor's capabilities. I think ultimately architectures like RISC and VLIW will finally take over from CISC, if only because as processors get faster and smaller, RISC and VLIW make for cheaper chips.
posted by daveadams at 7:29 AM on September 25, 2000


Toss in the fact that power-saver drivers are available mostly for Wintel architecture, not OS/2 or Linux, so Tom's foot goes deeper into his mouth.

Um, start chewing those toenails.

The real ecological problem, though, is that chip production is probably one of the most power-hungry modern manufacturing processes. Add to that the luxury -- and to Europeans, it is a luxury -- of unmetered access, turning consumer desktops into 24-hour servers, and there's yer blip.
posted by holgate at 7:33 AM on September 25, 2000



Just because it's possible for my word processor to surf the web, or my email client to be an HTML editor, doesn't mean they should


Alan Cooper expounds upon this point in "The Inmates Are Running the Asylum" and points out that while developers tend to see new features as free if you have the programmer time, they really aren't. Just adding a bunch of menu items will confuse people, and every new feature is something that takes up resources and must be tested rigorously. Consider how many people use more than a small subset of the functionality in Word or Excel - I've seen more than a few who are intimidated by the number of choices, even if they don't need to use them. Reducing bloat is good from a usability standpoint, let alone security, stability and resource conservation.

A real component-based system would allow vendors to provide versions with the functionality the user wants and priced accordingly. Most people buy Word so they can read files other people send them, even though their word-processing needs are rarely more than changing the font or adding a bulleted list. Does anyone want to bet that this wouldn't also be easier to learn?

As far as power consumption goes, Windows compares poorly just because of the software (we'll ignore Win9x's little HLT deficiency; almost everything supports APM). If nothing else, pushing all those pixels around unnecessarily is wasteful (esp. on servers); because the platform has a history of bloated, inefficient programs, most developers follow business as usual and produce more bloat. We're reaching the point where Rambus *memory* has heatsinks and power-management software. If the software weren't so bloated, most people could get by with an efficient system built around a P200-level processor that could probably draw 1-10W (obviously gamers, 3D modellers and the like would continue to need/want significantly more capable equipment). In most cases, they'd probably never notice the difference.

Skallas - being "able to easily run nearly any program written for the IBM-compatible PC in the last twenty years" isn't worth much to most people. How many people do you know who still need to run non-Win32 programs? I'd venture to say that it's well under 5% of the Win32 install-base these days; it'd make more sense for Microsoft to support Linux or MacOS binaries. Besides, wasn't everyone downplaying the value of this back when Microsoft was worried about how OS/2 did a better job of it?
posted by adamsc at 10:07 AM on September 25, 2000


Skallas - being "able to easily run nearly any program written for the IBM-compatible PC in the last twenty years" isn't worth much to most people. How many people do you know who still need to run non-Win32 programs?

I think you were addressing my comments, not Skallas's. Anyway, not many users still have to rely on pre-Win32 programs any more, thankfully, but when Microsoft introduced Windows 95 that was a huge concern. Now that most of that older stuff has been replaced by Win32 programs, the move is on to migrate users to the NT/2000 OS base. Could most users have migrated straight from DOS/Win3.1 to NT right away in 1993? I doubt it. If Microsoft had just forgotten about backwards-compatibility, they would have lost tremendous market share. All the old legacy stuff couldn't be replaced that fast. Sure the instability of the 9x series is probably due to the efforts to build in backwards compatibility, but without that transitional period, who could or would have stayed with Microsoft? A lot fewer people. Not that that's a good thing, but it is good for Microsoft.

I'd venture to say that it's well under 5% of the Win32 install-base these days; it'd make more sense for Microsoft to support Linux or MacOS binaries.

Again, that's the case now, but it wasn't when Win95 was released, obviously. As for supporting Linux or Mac binaries, obviously it would be a lot more difficult to build in support for platforms that are structured completely differently than to evolve their current platform.

Besides, wasn't everyone downplaying the value of this back when Microsoft was worried about how OS/2 did a better job of it?

I don't think I was. But what does that matter? My ultimate point was that Microsoft's OS direction did have some value. Maybe not to you in the present context, but to a lot of people and corporations that had the power to decide Microsoft's fate way back in the mid 90s, that direction was key to keeping their support and moving Windows forward. I'm sure there were better ways of approaching it, but that's not the point.

Apple did something similar with the move from 68k to PowerPC, as I recall. But wouldn't it have made more sense for them to support Windows binaries? :-)
posted by daveadams at 11:31 AM on September 25, 2000


Just FYI, the compiler-design argument is also specious. It takes a lot of effort to write a good optimizing compiler, period. I just deleted a huge paragraph as to why, as I figure that's only interesting to gearheads like me. To sum up: each processor provides its own challenges. That aside, code efficiency won't save power (that's what we're talking about, right?). Making machines that use less power will help, and that's where RISC will win hands-down. Fewer transistors that get beaten on less often will use less power.

Or simply beating on your existing processors less by, say, turning them off.
posted by plinth at 12:38 PM on September 25, 2000


While the using-less-power bit is a major part of the discussion, I also like hearing people's views on efficient coding practices in general.

And I'd have loved to have read that paragraph plinth. :-) (Don't feel you have to retype it though)


posted by cCranium at 1:10 PM on September 25, 2000


plinth: add my vote to those who'd be interested in reading That Missing Paragraph.

-Mars, former compiler hacker
posted by Mars Saxman at 4:37 PM on September 25, 2000


I'd like to hear it too, plinth. Anyway, the way I understand the situation is that RISC chips don't build in any optimization logic or figuring-out-which-instructions-can-be-executed-simultaneously logic. All those decisions are left up to the compiler. Intel's architecture attempts to do a lot of that on chip, instead of requiring the compiler to put instructions in a particular order or use particular registers. But your point is taken that writing an optimizing compiler is difficult on any platform, and perhaps the work involved to efficiently utilize a RISC platform versus something like Intel's architecture is trivial compared to that. I don't know, because I've never written a compiler for a RISC system.

I'll tell you one thing, though. Writing assembler is a lot more fun on CISC because you've got tons of instructions to play with and some fun figuring out how best to utilize the four main general purpose registers. RISC just gives you arithmetic and memory operations and a whole bunch of registers. Boooring! :)
posted by daveadams at 5:38 AM on September 26, 2000

