WinFS - what it could have been, and why it failed
December 14, 2013 8:41 PM

Hal Berenson discusses the motivation, technical details, and death of WinFS.
Longhorn itself turned out to be too aggressive an effort and have too many dependencies. For example, if the new Windows Shell was built on WinFS and the .NET CLR, and WinFS itself was built on the CLR, and the CLR was a new technology itself that needed a lot of work to function “inside” Windows, then how could you develop all three concurrently? One story I heard was that when it became clear that Longhorn was failing and they initiated the reset it started with removing CLR. Then everyone was told to take a look at the impact of that move and what they could deliver without CLR by a specified date. WinFS had bet so heavily on CLR that it couldn’t rewrite around its removal in time and so WinFS was dropped from Longhorn as well.
posted by a snickering nuthatch (52 comments total) 29 users marked this as a favorite
 
I've only chewed my way through the second post, but it's very interesting to see how MS was effectively trying to redefine the file system. My view of this level is pretty narrow, but I do touch on filesystems upon occasion, and the thought process behind how they're laid out is always interesting. Thanks.
posted by KGMoney at 9:09 PM on December 14, 2013


never trust a company that doesn't trust itself.
posted by gorestainedrunes at 9:39 PM on December 14, 2013 [4 favorites]


On part II as well, and I’m reminded of Apple’s resource fork concept from the earliest days of the Mac. My impression is that filesystems tend to move away from metadata and invisible extras, toward simple trees of directories and files and dumb bags of bits at the other end. I don’t think this is a problem—it’s a simple model to reason about, and compatible with every POS thumb drive and digital camera in the universe. The only path forward I see is connected to the way I use GDocs, and in GDocs findability is a nightmare unless you know some hint or fact about the document you’re searching for.
posted by migurski at 9:50 PM on December 14, 2013 [1 favorite]


Reading the motivation article, I couldn't arrive at an understanding of what WinFS actually was before my eyes glazed over. From what I managed to glean, it seems like it would have been one of those overdesigned technologies (like OLE) that doesn't really solve a problem that people have, and would require such detailed metadata management that it would be far from useful in any scenario.

Probably a failure of imagination on my part, but if Microsoft added tagging to NTFS, they would have just about everything I would ever want out of a file system.
posted by zixyer at 10:06 PM on December 14, 2013 [8 favorites]


By comparison, MS Research created Singularity, an operating system built upon the design principles for managed code that found more common expression at MS in the whole .NET initiative. You couldn't call it .NET OS, but in some sense it was: the lowest-level interrupt code was C and assembly, and then a microkernel system was built on top of that using managed C++ and Sing#, which is an extended version of C#, compiled to the .NET VM's CIL. The extensions made for, among other things, perfectly isolated processes and strong static analysis of code for safety reasons. Basically, they created a vastly more secure and potentially more performant and reliable OS.

It's a little too much to say that politics kills all good things at MS, but certainly the size and inter-departmental competition do serve to neuter anything that's, in itself, straightforwardly good. And sadly, they create a lot of things that are, by themselves, straightforwardly good.
posted by fatbird at 10:08 PM on December 14, 2013 [5 favorites]


Why is there all this effort put into creating or sustaining proprietary filesystems when there are perfectly good open-sourced ones? Is it a matter of pride that NTFS and HFS are aged pieces of crap?
posted by pashdown at 10:12 PM on December 14, 2013 [6 favorites]


Conceptually, when you combine a filesystem with rich metadata in a common framework, you get something where it's a lot easier to share data across applications. The working model, in small-scale, is Office, where you can embed spreadsheets in a Word doc that's pulling columns of data from an Access database, to be emailed to everyone in your Outlook contacts list.

How well that worked, and the fact that several generations of security issues and genera of viruses exist because of it, kind of argues in favor of open source filesystems that are much clearer about what they do and don't do, but you can see how a software architect might get all wet extending the Office metaphor to the whole OS.
posted by fatbird at 10:30 PM on December 14, 2013 [1 favorite]


The idea of it isn't bad at all -- a database-like file system makes a lot of sense. They already have to hack on indexing and replication, so you might as well do it right.

I was really looking forward to WinFS primarily because it may have led to proper sandboxing. In order to run legacy apps, you could create a small subsystem that was properly isolated from the rest of the OS.

Is it a matter of pride that NTFS and HFS are aged pieces of crap?

No, it's to keep customers locked into their ecosystem. Same reason their API has so many "reserved" spots that are actually proprietary loopholes to make it difficult to replicate the interface.
posted by spiderskull at 10:32 PM on December 14, 2013 [1 favorite]


Also, this from Part III is a good, general-purpose observation:
One customer was using (XML) in an insurance claims processing app to address an age old problem. The claims processing guys were evolving their application extremely rapidly, much more rapidly than the Database Administration department could evolve the corporate schema. So what they would do is store new artifacts as XML in a BLOB they’d gotten the DBA’s to give them and have their apps work on the XML. As soon as the DBA’s formalized the storage for an artifact in the corporate schema they would migrate that part out of the XML. This way they could move as fast as they wanted to meet business needs, but still be good corporate citizens (and share data corporate-wide) when the rest of the organization was ready.
This speed/schema tradeoff is something that a lot of the developers I work with come up against. They want to use tools like MongoDB for free-form document storage, but I think it’s important to treat it as a milestone on the way to a more-structured database.
posted by migurski at 10:35 PM on December 14, 2013 [7 favorites]
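
The schema-evolution pattern in that quote is easy to sketch. Below is a minimal illustration in Python, with sqlite3 and JSON standing in for SQL Server and XML; the claims table and its fields are invented for the example, not taken from the article.

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE claims (
        id       INTEGER PRIMARY KEY,
        claimant TEXT,   -- already formalized in the corporate schema
        extras   TEXT    -- free-form blob for fields the DBAs haven't blessed yet
    )
""")

# The fast-moving app stores new artifacts inside the blob...
extras = {"fraud_score": 0.12, "drone_photos": ["img1.jpg"]}
conn.execute("INSERT INTO claims (claimant, extras) VALUES (?, ?)",
             ("Jane Doe", json.dumps(extras)))

# ...and once the DBAs formalize a field, it migrates out of the blob
# into a real column that the rest of the organization can query.
conn.execute("ALTER TABLE claims ADD COLUMN fraud_score REAL")
for row_id, blob in conn.execute("SELECT id, extras FROM claims").fetchall():
    doc = json.loads(blob)
    if "fraud_score" in doc:
        conn.execute("UPDATE claims SET fraud_score = ?, extras = ? WHERE id = ?",
                     (doc.pop("fraud_score"), json.dumps(doc), row_id))
conn.commit()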


Why is there all this effort put into creating or sustaining proprietary filesystems when there are perfectly good open-sourced ones? Is it a matter of pride that NTFS and HFS are aged pieces of crap?
You will not find a more crusty critic of Microsoft than I, but even I get that there really needs to be a new way to organize files on computer systems. There are NOT any good ways to do it, nor will there be for the foreseeable future.

If you want a ton of files in folders, yes... ZFS is awesome, but it doesn't fix the metadata issues, nor do any other systems, no matter how open/closed.

This is only one of a class of big problems in computers that isn't likely to be solved anytime soon. Security is another one.
posted by MikeWarot at 10:35 PM on December 14, 2013 [4 favorites]


While I love the idea of contextual, descriptive user-defined metadata tags, thinking about the mistagged mp3 nightmare of Limewire circa 2001 extended to the entire filesystem is like a glass of cold water to the face. I have a hard enough time fitting other people's current file naming schemes (or lack thereof) into some semblance of a system that's compatible with my own preferences, and when you throw in descriptive tags too... yeesh.
posted by jason_steakums at 10:50 PM on December 14, 2013 [11 favorites]


MetaFilter: dumb bags of bits at the other end
posted by mwhybark at 11:30 PM on December 14, 2013 [3 favorites]


Reading the article, I feel like Microsoft just got far too ambitious with the filesystem redesign. They essentially tried to solve every problem, create some new ones, and maintain backwards compatibility. No wonder it failed not once but four times.

I had to chortle a bit when he talked about the performance penalties of going from user space to kernel space, back to user space, back to kernel space, and then reversing the trip. This is exactly what happens with almost every FUSE module, yet FUSE is still extremely useful. Sometimes performance isn't the most important thing! In the era of multicore systems with 8GB+ RAM and 50Mbps home internet, MS should take another look at FUSE-type concepts.

Why is there all this effort put into creating or sustaining proprietary filesystems when there are perfectly good open-sourced ones? Is it a matter of pride that NTFS and HFS are aged pieces of crap?

IMHO there is nothing wrong or shitty with NTFS other than the fact that it lacks an open spec. HFS+ on the other hand is crap and should have been replaced years ago.

I'm not sure what better open source FS you think is out there. ZFS is technically great, but it's a FS for sysadmins, not end users. Integrating it with an established operating system is a challenge, as it redefines many of the common concepts and includes complete rewrites of almost all of the related kernel subsystems (from the block layer to multidisk support to volume management to the end-user filesystem). There's a reason Linux snubbed it and went their own way (btrfs).

I personally love XFS on my Linux machines but I wouldn't recommend replacing NTFS with it. It's solid, well designed for concurrency and reliability, but it's old and offers no benefits over other modern filesystems. ext2/3/4 are on their way out. reiserfs lost its last fanboys years ago. What's left in the open source world?
posted by sbutler at 11:32 PM on December 14, 2013 [5 favorites]


My impression is that filesystems tend to move away from metadata and invisible extras, toward simple trees of directories and files and dumb bags of bits at the other end.

To some extent, clients of Amazon S3 file services show what happens when a filesystem goes nearly as simple as possible — to a flat key-value pairing that uses a key naming scheme to name folders and point to files.
posted by Blazecock Pileon at 11:37 PM on December 14, 2013 [1 favorite]
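
For the curious, here is roughly what that looks like in practice through boto3, the AWS SDK for Python (the bucket name is made up). The "folders" exist only in how clients slice the keys:

import boto3

s3 = boto3.client("s3")

# There are no real directories: "photos/2013/cat.jpg" is one opaque key,
# and the "/" only matters to clients that treat it as a separator.
s3.put_object(Bucket="example-bucket", Key="photos/2013/cat.jpg", Body=b"...")

# Prefix + Delimiter make the server group keys so the listing *looks*
# like a folder containing files and subfolders.
resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="photos/", Delimiter="/")
for sub in resp.get("CommonPrefixes", []):   # the "subdirectories"
    print(sub["Prefix"])
for obj in resp.get("Contents", []):         # the "files" at this level
    print(obj["Key"], obj["Size"])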


Now do Monad! Longhorn is going to be so great.
posted by thelonius at 12:03 AM on December 15, 2013


Why is there all this effort put into creating or sustaining proprietary filesystems when there are perfectly good open-sourced ones? Is it a matter of pride that NTFS and HFS are aged pieces of crap?

Not Invented Here is a much more powerful force than you might think.
posted by Tell Me No Lies at 12:39 AM on December 15, 2013 [3 favorites]


I think that it's probably relevant that most of WinFS's features are in fact delivered and in common use today, running on TCP/IP. Local filesystems are getting smaller, not bigger, if you count all the little Unix boxes in everyone's pockets.
posted by jenkinsEar at 4:23 AM on December 15, 2013 [2 favorites]


It does seem like the project was quixotically over-ambitious. They wanted a single means of storing data that would include everything now scattered among files, databases, emails, etc., where each item of data has its own metadata, tags, and privacy settings.

It would have been a wonderful thing if it had worked though. Today the user problems he talks about seem well on their way to being solved... but at the cost of abandoning all privacy. You put your documents and databases in "The Cloud", you let Google index everything, and everything is now easily searchable in one place. But everything is now outside your control. The cloud provider knows everything about you, and may deliberately or accidentally disclose it to third parties you don't necessarily approve of.

If it had worked, you would have had easily searchable, accessible data entirely under your control on your hardware, and your privacy would be intact. Today you have to choose between privacy and convenience.
posted by TheophileEscargot at 4:39 AM on December 15, 2013 [2 favorites]



Digital Asset Management (DAM) systems are the main way corporations are working out new ways to organize files. These systems use metadata-heavy records systems with complex referencing, workflows, and parent/child relationships. In the DAM world, proprietary records structures are frowned on, as this makes system migration difficult. Therefore, software vendors depend on licensing fees and SaaS (Software as a Service) contracts. SaaS gives you a help desk to call whenever, and is like having an outsourced IT team just for the DAM.

DAMs have become so popular that Microsoft and Adobe labeled two of their products (Sharepoint and CQ5, respectively) as DAMs in their marketing materials; neither are true DAMs. Sharepoint is a file transfer and workflow tool with the file structure issues unresolved and many other key DAM features missing; Adobe's product is just a WCM (web content manager). They're both awful and I worry many will think that this is what DAMs are like, b/c big name brands say so.

File structures can be built any way you like with a DAM. The really good ones are primarily XML based.
posted by EinAtlanta at 5:01 AM on December 15, 2013 [5 favorites]


Try to build a system that does everything, and it will do nothing well.
posted by localroger at 5:36 AM on December 15, 2013 [1 favorite]




Now do Monad! Longhorn is going to be so great.

It could have been. It should have been.

I always thought MS was screwed by its own success. And the fact that way too many MBAs had positions that let them kill products that would disrupt other products in their arsenal.

Well, that and they should have voluntarily split themselves into a few different companies in the mid 90s. (OS/Office/Server/Developer/Consumer)
posted by DigDoug at 6:30 AM on December 15, 2013


Heh. I'm not completely getting why Sharepoint *isn't* a DAM - on the other hand it's a hole of no return so if you like the idea I can see why you'd insist it isn't one.
posted by Artw at 7:16 AM on December 15, 2013


My God, it's full of buzzwords.
posted by localroger at 7:16 AM on December 15, 2013 [2 favorites]


I work in a MongoDB-heavy environment. I do enjoy saying heretical things like "SQL probably wouldn't have died on its ass doing that".

(SQL would have died on its ass doing a different thing, to be fair)
posted by Artw at 7:20 AM on December 15, 2013 [3 favorites]


If it had worked, you would have had easily searchable, accessible data entirely under your control on your hardware, and your privacy would be intact.

I doubt that things would be any different. There would still be the issue of high-availability local storage and all of the associated firewall traversal and security issues of connecting to it from other devices and from the Internet. The Cloud is as much a byproduct of IPv4, asymmetric bandwidth, and network security as of a lack of good search on the desktop, if not more so.

Granted, you might have been able to run a nicely indexed WinFS-enabled NAS, but like any given Internet-enabled home thermostat, it would probably still be accessed indirectly through a Cloud-based service because it's just easier and more secure for most users.
posted by RonButNotStupid at 7:29 AM on December 15, 2013 [1 favorite]


> DAMs have become so popular that Microsoft and Adobe labeled two of their products (Sharepoint and CQ5, respectively) as DAMs in their marketing materials; neither are true DAMs.

I see a lot of businesses asking for contractors to graft DAMs into their CQ and Sharepoint environments. I haven't heard of any asking to implement CQ as a DAM. Sharepoint gets used as a DAM in intranet environments but that's a product of accretion. Users are instructed, "UPLOAD EVERY PROJECT FILE TO SHAREPOINT!" so they do, but the platform wasn't implemented in a DAM-ish way, so it becomes an unnavigable black hole within a year.
posted by at by at 7:33 AM on December 15, 2013


There is no way this would have been anything but a complete disaster if MS had actually tried to roll it out. It would have made the Windows 8 fiasco look like a fart in a phone booth by comparison.

I am completely boggled at how this went down at nearly every level. The institutional flippancy at something which, if it shipped, would touch nearly everything else is astonishing. The total lack of concern over what might possibly go wrong in the process of gluing everything together with these data-expensive, previously unrelated technologies. The utter cluelessness about the role of an operating system in the support of new and unpredicted applications.

I always wondered what genius decided it was a good idea to make Windows' native search unable to find simple text strings in files whose extensions it doesn't recognize. Now it's clear.

How did these idiots not notice that two of the most popular data formats they were dealing with, mp3 and jpg, didn't even exist a decade earlier? The philosophy behind this makes everything dependent on the OS recognizing every data type it will ever touch, which pretty well screws you over if you want to deploy an application that uses a new data type MS isn't familiar with. There is a reason the OS guys he talks about in part 1 want to regard files as bags of bits which are handled as quickly as possible, and that's because the purpose of the OS is to get those bags of bits to applications that know how to deal with them even if those applications didn't exist when the OS was written.

Of course this IS consistent with MS' philosophy of rolling everything into an MS-dependent ecosystem which forces you to stay current with the latest version of all their products, so I can see why there was a certain level of encouragement.

But really, good programmers don't start coding once they have a specification. First they ask themselves what will go wrong, and make sure the specification itself isn't full of holes or singularities. MS has historically sucked at this, all the way back to the bloatification of DOS, and it's now obvious why they haven't shipped a product I've considered usable since Office 97, Windows XP, and VS6.
posted by localroger at 7:43 AM on December 15, 2013 [3 favorites]


There's a reason Linux snubbed it and went their own way (btrfs).
The other reason Linux "snubbed" zfs is that its license (CDDL) makes it unsuitable for inclusion in the Linux kernel mainline (GPL). This does not preclude delivering it separately (zfs-on-linux and zfs-fuse), since the kernel module linking exception is well-established these days.

For me, zfs was such an attractive technology that I selected Debian GNU/kFreeBSD for my most recent home server (instead of GNU/Linux). My biggest nightmare is accidentally losing data, whether due to hardware failure or the good old fat-fingered "accidentally delete all files in home directory" gambit. Snapshots provide an excellent defense against the latter, and raidz offers an excellent defense against the former. And for offsite backups, encrypted removable disks and zfs send/recv beat the hell out of DAT 160/320 tapes for cost and speed, and again thanks to snapshots you can retrieve a deleted or damaged file from a wide range of past dates from a single drive, instead of having to retain one piece of backup media for each date you might want to restore.

So bringing this back around to WinFS, I barely even grasp how any of its proposed features would help me. And of course I'd never actually have benefited from it: unlike those glorious few years of OpenSolaris, there would never have been a first-party implementation of WinFS that was free enough to incorporate in Linux or FreeBSD, and you can bet they'd have pursued patent claims on WinFS implementations, as they have on (V)FAT implementations, where FAT is a whole lot less interesting and innovative than WinFS would have been. So, big raspberry / sigh of relief that WinFS never came to market.
posted by jepler at 8:39 AM on December 15, 2013 [1 favorite]


I started with the article, got to this line:

> I’d spent 5 years in death march mode.

...and then, well, I wasn't able to read any more.

I really don't see why I should take these people seriously after that astonishing statement.

It's not just that the quality of your code dramatically falls after the first week in death march mode, to the point that you're not gaining any improvement in your work output - it's the human aspect of "How could you treat your workers that way? How could you allow yourself to be treated that way?"

Overall, the whole thing gets summed up in my head as "politics this, politics that, thousands of man-years wasted." Really sad.
posted by lupus_yonderboy at 9:09 AM on December 15, 2013 [6 favorites]


I think that for the first time I sort of understand WinFS a bit more. It's interesting to contrast with OS X, which has implemented some features, piecemeal, and in completely different ways. The two main requirements of WinFS seem to be 1) database access to file metadata, and 2) file-like access to database information. OS X has solved 1) extremely elegantly but has skipped 2) because it really only comes up in some sorts of enterprisey database-driven app situations, which is not something OS X has ever worried about.

Starting with iTunes, iPhoto, etc, and all the way to Mail.app, OS X maintains extensive, file-type-specific metadata about every single file. So each application has its own data store in the form of a structured database, but there's also a global datastore that allows for searching everything. This is all built on a traditional filesystem.

OS X's changes have been far simpler and cleaner, following the Unix tradition. Everything is done in userspace by small, well-defined, and extensible programs. That is, except for a single OS-level modification: a notification system for when files change. And by making the change there, all sorts of other application possibilities also open up, because that file-changed notification makes incremental backups fast and efficient. (I don't know the intimate details of inotify on Linux, but does anybody know if it enables something similar? I've only used that for single directories in the past.) And by having a general service for this, new applications can be developed on it which the original designers would never have thought of.

Once you have OS-provided global notification when something changes, it becomes easy to design an efficient and modular system for having a global database of standard and unstructured metadata about files. It's easy to automatically index each file after every change, without having to rewrite any existing application. And OS X does this, with file-type-specific metadata-generation programs. If you come up with a new file type, you can add your own. The built-in Mail.app indexes not only the senders, recipients, dates, and subjects of emails, but it also recursively indexes any attachments. So if you're not sure where that file came from, a 300ms search will give you a list of all the directories and emails that contain a particular file name. The monolithic, unchangeable apps like Photoshop and Word are not a problem because you don't need to change those programs at all; you provide other, new code, which could be from completely unrelated third parties, that provides the desired functionality. Everything is decoupled, and does only its job. It's a lot like the 'file' command line utility, which is used to determine what type of file a BLOB is without having to use extensions: anybody can extend it with detectors for their particular file type, and it's a general system built on top of, rather than into, the filesystem itself, and ends up being much more reliable because of it.

This may be a much less cohesive approach than the "let's rewrite all the functions providing the APIs" approach of the WinFS teams, but by building an incremental system on an existing foundation, it's been far more successful, with (almost certainly?) far less developer time, and with far fewer embarrassing high-profile failures.

It doesn't provide a way to peek at BLOBs within SQL databases. But if OS X wanted to solve something like that, a FUSE filesystem would probably be an easy extension. And the FUSE filesystem could be adapted to any database backend, not just SQL Server.
posted by Llama-Lime at 9:49 AM on December 15, 2013 [8 favorites]
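
The decoupled pipeline Llama-Lime describes can be caricatured in a few lines. The sketch below is purely illustrative and is nothing like Spotlight's actual implementation: a change notification (from FSEvents, inotify, or whatever the OS provides) triggers pluggable per-filetype extractors that feed one global full-text index, here a SQLite FTS4 table. All names are invented.

import os
import sqlite3

db = sqlite3.connect("index.db")  # assumes an SQLite build with FTS4 compiled in
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS files USING fts4(path, body)")

EXTRACTORS = {}  # per-filetype metadata extractors, extensible by anyone

def extractor(ext):
    def register(fn):
        EXTRACTORS[ext] = fn
        return fn
    return register

@extractor(".txt")
def extract_text(path):
    with open(path, errors="replace") as f:
        return f.read()

def on_file_changed(path):
    """Hook this to the OS change notification; apps never need changing."""
    fn = EXTRACTORS.get(os.path.splitext(path)[1])
    if fn is None:
        return  # unknown type: the file stays a dumb bag of bits
    db.execute("DELETE FROM files WHERE path = ?", (path,))  # full scan; fine for a toy
    db.execute("INSERT INTO files (path, body) VALUES (?, ?)", (path, fn(path)))
    db.commit()

def search(term):
    return [p for (p,) in
            db.execute("SELECT path FROM files WHERE body MATCH ?", (term,))]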


The total lack of concern over what might possibly go wrong in the process of gluing everything together with these data-expensive, previously unrelated technologies.

I never understand how executives like Gates and Allchin don't see efforts like this in proper, stepwise terms. For any interesting technology someone comes up with, build it and then sell it or order it to be used internally. You can't view MS as anything but a collection of fiefdoms, and that was true during the antitrust trial, so the idea that you could say "build your software on that other department's to-be-released platform" just seems obviously braindead. If the strategic vision dictates something like WinFS, then have a group build WinFS, and by executive fiat if necessary, start moving groups onto it once it's available. Have a next-three-OSes iteration plan.

One of the quietly brilliant things Apple has done is move to a faster, smaller OS release schedule that de-emphasizes giant changes and concomitant marketing efforts, and makes it much easier to schedule the evolution (and deprecation) of core features. Superficially MS is addicted to big-launch, big-money OS fees, but they've been talking forever about moving to a subscription basis for Windows and Office, and that only makes sense in the context of a steady drip of improvements. In 2002 when I was running an IT department and our vendor of MS licences was warning us about the impending move to annual licensing fees, I could easily push back with "they haven't released a new version of Office for three years... WTF would I be paying for then?"
posted by fatbird at 9:59 AM on December 15, 2013 [1 favorite]


There's a reason Linux snubbed it and went their own way (btrfs).
The other reason Linux "snubbed" zfs is that its license (CDDL) makes it unsuitable for inclusion in the Linux kernel mainline (GPL). This does not preclude delivering it separately (zfs-on-linux and zfs-fuse), since the kernel module linking exception is well-established these days.
The thing that really pisses me off about this standoff is that btrfs doesn't even attempt to match ZFS. It does a few things, like raid0 and raid1, but I want raid-z2 (raid6 equivalent) and really we should probably be using raid-z3 (is there even a standard raid equivalent?) with these new 4TB drives. And btrfs has been in development for years upon years while ZFS has been rock-solid in production forever. But in general btrfs is a decade behind ZFS, with little hope of catching up. The top Google hit has a feature comparison (somewhat out of date; send/receive equivalent is now in btrfs I think, and btrfs claims dedupe now), but I think it goes to show just how amazing ZFS was for its time and what a shame that all this effort is being duplicated over silly license incompatibilities between two supposedly "free" licenses.

IMHO, there are no good open source filesystems out there except for ZFS. I end up using XFS, and prior to that ext4 because it was default, but ZFS raised the bar. And I've been getting terrible performance with zfs-on-linux for reasons that I haven't been able to diagnose within the few hours that I could spend on it.

This is only tangentially related to WinFS, but I love a good filesystem rant.
posted by Llama-Lime at 10:07 AM on December 15, 2013 [2 favorites]


I don't know the intimate details of inotify on Linux, but does anybody know if it enables something similar?

It does.

However, judging by its history with Linux, file system event notification is a hard problem. Inotify is neither the first nor most recent such facility Linux has offered.
posted by scatter gather at 10:10 AM on December 15, 2013
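
For a feel of the raw interface, here is a bare-bones inotify watcher in Python via ctypes (the stdlib has no inotify wrapper; in real code you would reach for pyinotify or watchdog instead). A sketch with no error handling:

import ctypes
import os
import struct

libc = ctypes.CDLL("libc.so.6", use_errno=True)
IN_CREATE, IN_MODIFY, IN_DELETE = 0x100, 0x002, 0x200

fd = libc.inotify_init()
libc.inotify_add_watch(fd, b"/tmp", IN_CREATE | IN_MODIFY | IN_DELETE)

while True:
    buf = os.read(fd, 4096)  # blocks until something happens under /tmp
    offset = 0
    while offset < len(buf):
        # struct inotify_event: int wd; uint32 mask, cookie, len; char name[]
        _wd, mask, _cookie, name_len = struct.unpack_from("iIII", buf, offset)
        name = buf[offset + 16 : offset + 16 + name_len].rstrip(b"\0")
        print(hex(mask), name.decode(errors="replace"))
        offset += 16 + name_len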


Yeah, as an outsider with no technical expertise to offer, I have conclusions about btrfs similar to Llama-Lime's.

On the other hand, on kFreeBSD some zfs operations are not speedy (zfs recv can take tens of seconds to transfer an empty snapshot, for instance; I think it's doing something synchronously which is no doubt required for integrity), and in a colleague's experimentation, zfs deduplication was simply not practical when dealing with 30-50TB pools due to memory needs (btrfs claims its deduplication method does not require the same magnitude of RAM due to being offline, but I don't know how it works in practice).
posted by jepler at 10:28 AM on December 15, 2013


I was on the Windows team during this debacle. I was my sub-team's representative to the weekly Shell+WinFS team meeting. It was depressing. I was pretty junior and I had no idea how to solve the problems they were trying to solve, but I was pretty sure the stuff they were talking about wasn't going to work. For example, metadata object inheritance was a join, so something like seven joins to get at the metadata for one Word document. I have lots of other tidbits like that, but the big takeaway from this time is that almost all the leaf-node developers were saying it ("it" being WinFS, CLR integration, basically the whole LH package) was never going to work, and VPs at the top saying oh yeah, it'll work, we're going to do it all, and a bunch of layers in the middle—how shall I phrase it?—"communicating poorly."
posted by jeffamaphone at 10:29 AM on December 15, 2013 [8 favorites]
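
The "inheritance was a join" point is worth unpacking. If each level of a type hierarchy lives in its own table, reading one item's full metadata means walking the whole chain. Below is a guess at the shape of the problem, with an invented three-level schema (per the comment above, the real store reportedly needed something like seven joins for one Word document):

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE item     (id INTEGER PRIMARY KEY, name TEXT, created TEXT);
    CREATE TABLE document (id INTEGER PRIMARY KEY REFERENCES item, author TEXT);
    CREATE TABLE word_doc (id INTEGER PRIMARY KEY REFERENCES document, template TEXT);
    INSERT INTO item     VALUES (1, 'budget.doc', '2003-06-01');
    INSERT INTO document VALUES (1, 'someone');
    INSERT INTO word_doc VALUES (1, 'Normal.dot');
""")

# Two joins for a three-level hierarchy; every extra level of inheritance
# adds one more, on what is conceptually just "stat one file."
row = db.execute("""
    SELECT i.name, i.created, d.author, w.template
      FROM item i
      JOIN document d ON d.id = i.id
      JOIN word_doc w ON w.id = d.id
     WHERE i.id = 1
""").fetchone()
print(row)  # ('budget.doc', '2003-06-01', 'someone', 'Normal.dot')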


I think the most interesting part of the series for me was the idea that the file system should be able to encounter an unknown file type and have the logic to determine enough things about it to know, in general terms, what it was and what likely needed to be done about it. The idea that it might get rid of the anachronism which is the Windows registry was an amazing one.

I was also at MS in this time period--but on completely different projects (and as a contractor)--and the description of the work process brought back lots of memories, nearly all exasperating. The idea that you can come along in mid-dev process and re-org everyone into new teams, with your new top-level guy sans understanding and with different agendas is not a way to nurture a project. That said, it should be a requirement on any major dev project to have someone smart but completely unfamiliar with the effort show up frequently to say "um, what's this?"

I know there is a huge anti-MS bias here, but they have done cool stuff...and lots of really cool stuff has been shit-canned along the way. They also have a knack for completely and utterly missing some important things, and doubling down on some things which are beyond belief. My Windows 7 machine has been an absolute workhorse for several years now, and my hardware costs under Windows are a fraction of what they were when I was Apple only. So for that I'm thankful.
posted by maxwelton at 11:59 AM on December 15, 2013 [2 favorites]


XFS seemed cool enough, but it kept fucking me like a cheap whore by zeroing out files that I needed to boot and log in successfully with. I know they had their reasons. And eventually that behavior could be turned off. But recovering from having /etc/passwd zapped is not a cool activity for either home or business purposes.
posted by wotsac at 12:15 PM on December 15, 2013


you really prefer an ancient version of VS over the newer ones?

It really doesn't matter how nice .NET is; it's not the same product as 4-6, and hardly any of my then-10-years-deep code library could be translated without redoing it from the ground up. One of my most sophisticated projects couldn't be ported at all, because it depended heavily on VB6 being able to return an array of user-defined types as the result of a function, something the first versions of .NET simply did not support at all.

MS's real reason for shivving the authors of trillions of lines of code, similar to their current gambit to wangle their way into the mobile space through Windows 8, was to wangle their way into the cross-platform JIT compiled space which was leaning toward Java. The problem is that .NET is not in any way a successor to previous versions of VS. And I saw immediately that they would eventually get around to doing to anyone foolish enough to invest in .NET what they had just done to me. Don't believe me? .NET can't be used to dev for Metro. I am proficient in VS6 and it does what I need, and I will never put that level of effort into mastering another Microsoft product no matter how attractive it seems.

Meanwhile, I have invested in building my own functions to go straight to the API for things like serial, TCP/IP, Windows dialogs, and so on. I use no plugins at all, I don't use the registry, and I keep all my data in flat files under the application folder. As a result my applications can be dropped onto any Windows 2000 or newer box and they will just run without installation. VS6 applications integrate better into windows without the massive and version-dependent framework, and in most cases are faster and smaller.

Furthermore, since the Wine guys have put quite an effort into VS6 compatibility and I don't use any potentially troublesome add-ins, both my development system and the code I've been writing for the last 15 years are now magically cross-platform.

I don't need my applications to scale to the enterprise or to avail themselves of any of the fancy widgets that .NET automates for you; I just need them to be reliable and easily maintained, including transfer to new machines. That's much easier with VS6 than with any version of .NET.

I'm sure that for many styles of development and deployment .NET offers advantages, but I don't see enough to justify going back to 1994 and rewriting everything I've ever done under Windows.
posted by localroger at 1:09 PM on December 15, 2013 [1 favorite]


> MS's real reason ... was to wangle their way into the cross-platform JIT compiled space which was leaning toward Java....And I saw immediately that they would eventually get around to doing to anyone foolish enough to invest in .NET what they had just done to me. Don't believe me? .NET can't be used to dev for Metro.

Truth. As a Java hack in 2003, I was able to jump onto a C#.NET project and get productive very quickly because C#... is just about Java, with a different library. And a big, in your face IDE, of course.

You're making me feel better about a recent decision to not stay current with MS dev languages. I've picked up Python, I'm putting more time into Java, and someday thinking of scaling that big C++ mountain...

(but VB... really? Gaaah.)
posted by Artful Codger at 2:14 PM on December 15, 2013 [1 favorite]


Looking into this more, I'm again puzzled about what WinFS was supposed to be.
Windows Search collectively refers to the indexed search on Windows Vista and later versions of Windows (also referred to as Instant Search) as well as Windows Desktop Search, a standalone add-on for Windows 2000, Windows XP and Windows Server 2003 made available as freeware. All incarnations of Windows Search share a common architecture and indexing technology and use a compatible application programming interface (API).
What shipped in Windows Vista arguably met at least half of the features of WinFS. What was missing in Vista, feature-wise, that should have been there? It seems that complex metadata ontologies plus inheritance were planned, which is something that has never worked in the history of the world and would not have worked for WinFS. The transparent access to BLOBs in SQL Server wasn't there, and SQL Server itself wasn't there. But, IMHO, these are not important features that really do much for users. Full-text instantaneous search is the killer feature that was there. They should have declared victory on WinFS, but with reduced features, and slapped that label on Windows Search. WinFS is not as much of a failure as it is panned for.

Out of curiosity, I wondered how much overhead there was in OS X's metadata and fulltext indexes. For 530,000 files in 184GB, it takes almost 5GB:
🐪:~ llama-lime$ sudo du -sh /.Spotlight-V100/
4.9G	/.Spotlight-V100/
🐪:~ llama-lime$ mdfind -count .
543117
🐪:~ llama-lime$ df -h /
Filesystem   Size   Used  Avail Capacity  iused    ifree %iused  Mounted on
/dev/disk1  233Gi  184Gi   48Gi    80% 48419165 12568354   79%   /
That's about 9KB per file, but only 2.7% of used disk. It's far far larger than I thought it would have been, but I think that's a tiny price to pay for a full text index, and I don't know if I could navigate my files at all without such a thing.
posted by Llama-Lime at 2:59 PM on December 15, 2013 [1 favorite]


I remember WinFS and the rumours about what was coming at the time. Looking back I see it like this: This was all pre-Google and RDBMSs were thought to be the pinnacle of data technology. Microsoft were excited to have a fairly good RDBMS which could scale from small to big (SQL 2000) and decided to run a file system on top of it. Great idea from the perspective of about 1999->2001 - it means you get all that groovy RDBMS stuff like queries, transactions etc all applied to file storage. Stupid idea from our perspective in 2013, when RDBMSs are seen as a good solution for a specific set of problems, but that all the exciting work in data is now happening far from RDBMSs, with MapReduce and massive parallelisation and huge data-centers that were unimaginable from a 1998 RDBMS perspective.
posted by memebake at 3:57 PM on December 15, 2013 [3 favorites]


I can't imagine what anyone would prefer about an ancient version of Visual Studio. The latest couple of versions seem very nice to me.

If you hang around on HackerNews long enough, you'll notice that although they're generally Ruby or JS or whatever programmers, and are generally dismissive of Microsoft, there's a general consensus that no other IDE comes close to Visual Studio and that C# is a pretty reasonable all-rounder sort of language.
posted by memebake at 4:07 PM on December 15, 2013


I don't know if I could navigate my files at all without such a thing.

I find this a bit mystifying because I have virtually never searched my filesystem except for grepping in a codebase for a particular thing. I've got lots of pictures and documents and records and emails and I virtually never search for content in it. I either know where it is, can quickly find it, or it doesn't really come up. That's all temperament, though.

There's a general consensus that no other IDE comes close to Visual Studio and that C# is a pretty reasonable all-rounder sort of language

On slashdot as well. I've never heard anyone make a serious case after 2000 that MS dev tools were anything but excellent, and in many cases the gold standard. For my part, I've never found any GUI for interacting with a serious RDBMS to be as good as SQL Server Manager.
posted by fatbird at 4:19 PM on December 15, 2013


I suppose your approach with your VB6 code is fine as long as it never needs to work on the web.

And this is indeed the case; I'm mostly doing industrial automation (and really mostly without even PC level computers) and when something needs to be on the web, static HTML (which I can write out perfectly fine) does the job 95% of the time. The other 5% there's usually another portal that I can feed the data to which presents the web interface. My stuff tends to be more about making sure processes run and results are recorded correctly.

If I really had to start doing serious web-based HMI I would find a non-Microsoft solution. At least half my customers are using PLC interfaces to SCADA systems. So far it hasn't been an issue for me.
posted by localroger at 5:09 PM on December 15, 2013


Oh, and as an aside -- the QuickBasic IDE was indeed incomplete enough to be more annoying than no IDE at all, and when I switched from a more primitive compiler to QB (it was forward compatible but had much better memory management) I mostly used the command line compiler and batch files to compile and link.

This is the thing about my experience; I'm used to coding in environments where there is no debugger at all, and sometimes not even an equivalent to printf, so you end up writing debug info to the screen buffer or nonvolatile RAM where you can pick it up after the crash and reset. I really cannot imagine what features VS.NET has that VS6 doesn't that I would find helpful instead of annoying. Frankly even VS6 annoys me when I'm editing and I need to leave an incomplete line to grab something from elsewhere to paste into it, and as soon as the cursor leaves the incomplete line OH MY GOD DUDE YOU'VE GOT A NASTY RED ERROR IN YOUR LINE OF CODE. Seriously, I knew that; stop bothering me and do what I tell you, stupid computer.
posted by localroger at 6:37 PM on December 15, 2013


So, how much more ambitious was this than the extended file attributes in the BeOS FS? Cause that's what I'm sad about having died.
posted by i_am_joe's_spleen at 7:16 PM on December 15, 2013 [3 favorites]


I really cannot imagine what features VS.NET has that VS6 doesn't that I would find helpful instead of annoying. Frankly even VS6 annoys me when I'm editing and I need to leave an incomplete line to grab something from elsewhere to paste into it, and as soon as the cursor leaves the incomplete line OH MY GOD DUDE YOU'VE GOT A NASTY RED ERROR IN YOUR LINE OF CODE. Seriously, I knew that; stop bothering me and do what I tell you, stupid computer.

I agree, this behavior on VB6's part is infuriating. If you're looking for something .NET versions of VS have that VB6 does not, it's the blessed absence of modally yelling at you for syntax errors. Syntax errors are listed below your editing window, and while in VB.NET projects the list of errors is silently updated in real-time, you can get around to fixing them when you want to.

The specific content of warnings and errors is much improved in later versions of VS. The type system of VB.NET and C# is both more powerful and more convenient than that of VB6, so you can catch more errors at compile time instead of run time. As a developer who spends most of his time in .NET, whenever I go back to VB6 I am always surprised and annoyed at how many of my mistakes are only discoverable at run-time.

Seriously give .NET a try. It's far more pleasant than VB6, and the only thing that I imagine would annoy you more about it is that you don't control the memory layout of structs by default (but if you need to, you still can with a little extra syntax).
posted by a snickering nuthatch at 8:31 PM on December 15, 2013 [1 favorite]


So, how much more ambitious was this than the extended file attributes in the BeOS FS? Cause that's what I'm sad about having died.

Extended file attributes still live in Haiku, BeOS's successor.
posted by a snickering nuthatch at 8:32 PM on December 15, 2013
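
And a flavor of the same idea survives on plain Linux as extended attributes, which Python can set directly (os.setxattr and friends are Linux-only in the os module; macOS and the BSDs have analogous calls). A small sketch of ad-hoc file tagging, with invented attribute names:

import os

path = "vacation.jpg"
open(path, "wb").close()  # stand-in file for the demo

# user.* attributes ride along with the file on supporting filesystems
os.setxattr(path, "user.tags", b"beach,2013,family")
os.setxattr(path, "user.rating", b"5")

print(os.listxattr(path))              # e.g. ['user.tags', 'user.rating']
print(os.getxattr(path, "user.tags"))  # b'beach,2013,family'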


I don't know if I could navigate my files at all without such a thing.

I find this a bit mystifying because I have virtually never searched my filesystem except for grepping in a codebase for a particular thing. I've got lots of pictures and documents and records and emails and I virtually never search for content in it. I either know where it is, can quickly find it, or it doesn't really come up. That's all temperament, though.
For work there are at least two indexes along which I need to retrieve information: which dataset and which tool I used. A hierarchy can only allow speedy location by one axis. There are also the cases where I want to recall, "Z, that sounds weird but familiar, in what contexts has Z come up before?" or "oh, where did I do that cool bit of one-off analysis? I'd like to find that script again. I think it was where I concluded that X --> Y", and then text search is pretty much the only thing that lets me get to it. I've reverted to indexing the file hierarchy by date, as this third axis of indexing is the one that works best for my memory of how to get back to something.

I have 45,000 emails in my inbox from the last three years alone, and it's never quite clear which ones are going to be the important ones in six months or a year, so without searching it'd be a hopeless effort. When I had a lower mail volume I'd sort things out into a hierarchy of mailboxes, but the proliferation and hierarchy that I'd need now would be impossible. Right now, I'm often lucky if I can find another email from the same thread by search terms, then go one by one through the rest of the messages in the thread.

It's really only in the last 10 years that I've been generating my personal dataset on my hard drive, and there's absolutely no way that my brain can handle indexing it. If I have 40 more years of document, code, and data generation at the current rate, I'm completely screwed. And if the rate of data generation continues to increase like it has, at some point I fear that even full-text indexing is going to break down.

I have a worse memory than many people I know, but less data than them as well. So I think that it's not an uncommon problem.
posted by Llama-Lime at 8:45 AM on December 16, 2013 [1 favorite]


I like the latest versions of Visual Studio. And I mean Visual Studio the app—which is completely different from whatever platform you might be coding to (win32, vb, .net, winrt). But I will point out that VS6 was the last one written entirely in native code. The newer IDEs are managed... and the first few managed ones were pretty flakey. It seems to work pretty well in the two latest releases, however.
posted by jeffamaphone at 11:24 AM on December 16, 2013


XFS seemed cool enough, but it kept fucking me like a cheap whore by zeroing out files that I needed to boot and log in successfully with.

I've been running XFS since IRIX still existed, and while I've cursed at it more than once, I never had catastrophic issues once I figured out this truism: XFS seriously stresses computers and is not designed for crappy hardware and/or no UPS. Almost every serious failure of XFS I've had was eventually tracked down to 1) unexpected loss of power or 2) marginal or crappy component tossing sand in the vaseline, usually both. Non-ECC memory silently injecting errors, crappy ethernet chips (looking at you, Realtek) buggering interrupts, bugs in disk controllers causing inconsistent cache state, and *boom*. Even on my home fileserver, I splurged on a server class motherboard, ECC ram, Intel ethernet, LSI disk controllers and solid UPS. I would never run XFS on consumer grade hardware if I could help it. It'll be interesting to see if RH has conquered this.

That said...any data I *really* care about has been migrated to ZFS.
posted by kjs3 at 12:18 PM on December 16, 2013 [1 favorite]




This thread has been archived and is closed to new comments