NTP or Not NTP? That is the question.
March 13, 2015 6:30 AM   Subscribe

The Network Time Protocol provides a foundation to modern computing. So why does NTP's support hinge so much on the shaky finances of one 59-year-old developer?
posted by pjern (92 comments total) 17 users marked this as a favorite
 
It is disgusting that large companies turn their nose up at supporting open-source projects while they continue to benefit from them. I would wager that it comes down to the flat ignorance of management towards what their foundation is built from.
posted by pashdown at 6:41 AM on March 13, 2015 [14 favorites]


Asked to describe a proper NTP support organization, Stenn listed a project research scientist, project manager, several full-time developers, two technical writers, a system and network administrator, and two standards "wranglers" to represent NTP to the IETF, IEEE, and ITU. As he toted it up in his head, he came out at a minimum of $3 million a year.
I don't get it. Isn't this a mature product that occasionally has bug fixes? Why exactly isn't one developer getting $7000 a month sufficient?
posted by smackfu at 6:44 AM on March 13, 2015 [2 favorites]


I don't get it. Isn't this a mature product that occasionally has bug fixes? Why exactly isn't one developer getting $7000 a month sufficient?

The article explains why the $7k is insufficient.
posted by snofoam at 6:50 AM on March 13, 2015 [14 favorites]


Because time is money?
posted by jeffburdges at 6:52 AM on March 13, 2015 [4 favorites]


"Open source means no one owns the software"

This is so wrong I can't even.
posted by idiopath at 6:54 AM on March 13, 2015 [8 favorites]


Because time is money?

Time is free. I have never had to pay a Time bill. It just keeps coming, 86400 seconds of it a day.
posted by thelonius at 6:56 AM on March 13, 2015 [3 favorites]


thelonius: looks like a magnitude error on your accounting, that's not time being given to you, that's time being taken away.
posted by idiopath at 6:57 AM on March 13, 2015 [36 favorites]


We need public funding for open source projects, both development and maintenance. Anyone like Harlan Stenn, though, could absolutely just drop the project, find "regular work", and put the word out that he's willing to take the project back up if organizations fund him, roughly like he's doing here.

There was a more pressing case here:
The World’s Email Encryption Software Relies on One Guy, Who is Going Broke

Aren't there serious enough security flaws in NTP though that people build stuff like tlsdate? I'd imagine that's the real reason NTP needs more developers and standards work. I'd hope the C security bugs like buffer overflows, etc., have mostly worked themselves out by now.
posted by jeffburdges at 6:58 AM on March 13, 2015 [13 favorites]


$7k a month before taxes (and no benefits) is not a sufficient salary for running one of the critical parts of the Internet's infrastructure. Especially if that's also your budget for testing and development hardware and software.
posted by mkb at 7:01 AM on March 13, 2015 [10 favorites]


OK I'll try. Open source doesn't mean "no one owns the software", it means the original owner of a piece of software has explicitly granted others the right to edit the code and redistribute the original or an edited version.
posted by idiopath at 7:02 AM on March 13, 2015 [2 favorites]


This seems like your classic open source project fiefdom.
posted by smackfu at 7:02 AM on March 13, 2015 [1 favorite]


The problem in this specific case seems to be more that NTP is sufficiently simple and low-level that most of the big players don't acknowledge it, especially given that it doesn't appear to have the backing of a larger organization or consortium (which surprised me, given that it is an IETF standard).

For instance, the W3C and the Unicode Consortium are both fairly well-funded, while other "important" OSS projects and organizations such as Apache, Mozilla, BIND, and OpenSSL* all receive a reasonable amount of support from the corporate backers who depend on them.

Hell. It would not be remotely unreasonable for NIST to employ a full-time person to maintain ntpd. I'm actually a little surprised that they don't.

On the other hand, as a mature protocol (a well-defined and widely-implemented standard), I'm not sure why NTP itself needs constant maintenance. It'd be nice if a larger standards body took responsibility for the protocol's long-term future, but I can't reasonably envision a circumstance under which NTP will ever need to fundamentally change.

As far as I can tell, the primary problem has been that Stenn hasn't actively recruited new funding or contributors. I'm a little surprised that this is the first that I'm hearing about this issue.

* Okay, maybe not OpenSSL, but that's a mistake that's being rapidly corrected.
posted by schmod at 7:03 AM on March 13, 2015 [2 favorites]


I'm not sure why NTP itself needs constant maintenance. It'd be nice if a larger standards body took responsibility for the protocol's long-term future, but I can't reasonably envision a circumstance under which NTP will ever need to fundamentally change.

I think the article is a bit unclear on what exactly the open source project is here. Like you say, it's not the protocol itself. Is it the NTP daemon? Is it the software running on the time servers? I'm still not sure. And how are the servers he maintains part of the open source project?
posted by smackfu at 7:10 AM on March 13, 2015 [3 favorites]


There are critical security issues with NTP, schmod, so the protocol either needs a major overhaul or outright replacement. See: Don't update NTP – stop using it

In principle, all protocols must interact with cryptography going forward, because our world is a bit nastier than the early internet, so all protocols risk growing insecure over time as mathematics advances in fields like elliptic curve cryptography, post-quantum cryptography, etc.
posted by jeffburdges at 7:11 AM on March 13, 2015 [7 favorites]


$7k a month before taxes (and no benefits) is not a sufficient salary for running one of the critical parts of the Internet's infrastructure.

I don't think that this is a good characterization of NTP's role. If Stenn got hit by a bus (to use a colloquial term from the software industry), the internet would not grind to a halt. NTP isn't centralized -- it's little more than a "language" that servers use to synchronize the time.

Stenn maintains one of the most popular pieces of software that speaks this "language," and even though that appears to be occupying the bulk of his time, that's almost an irrelevant footnote when we talk about NTP's importance. There are other implementations, and new implementations can be created, because the standard is public and well-defined. (This is one of the reasons why standards are so important!)

We need public funding for open source projects

See above. NTP is a standard and a protocol first, and an open-source project (reference implementation) second. It's more appropriate to compare NTP to the W3C, Unicode, or IETF.

Government involvement in those has always been a little... tricky, and it's perhaps not a bad thing that governments have largely stayed out of computing standards, apart from their indirect involvement via research institutions (which are certainly underfunded).
posted by schmod at 7:11 AM on March 13, 2015 [4 favorites]


What it is, he needs to work for, or be funded by international agreement by, the national/international metrology organizations, like NIST or the BIPM or something. This should not be done as a non-profit. SI and ISO aren't run that way, and the internet shouldn't be either.
posted by bonehead at 7:12 AM on March 13, 2015


The $7,000 is not his bottom line salary, it is his top line income. He then has to pay for the servers, hosting and all the other expenses that come with being Father Time.

I am not sure he nets much if anything from that 7k.
posted by 724A at 7:13 AM on March 13, 2015 [3 favorites]


I'll agree that $7,000 is too little, but $3 million? That number is crazypants.
posted by thewalledcity at 7:16 AM on March 13, 2015 [1 favorite]


As he toted it up in his head, he came out at a minimum of $3 million a year.

I don't get it. Isn't this a mature product that occasionally has bug fixes? Why exactly isn't one developer getting $7000 a month sufficient?


A bunch of reasons, I think. Firstly, ask anyone what they need with no constraints and you're going to get a pie-in-the-sky number. Secondly, he's counting as paid for much that he's now either not paying for or getting cheap (e.g. volunteer help).

The true answer is likely somewhere in the middle and very likely well south of half a million a year.
posted by bonehead at 7:16 AM on March 13, 2015 [2 favorites]


The $7000/mo is only since last year. I wonder what he did before that?
posted by smackfu at 7:18 AM on March 13, 2015


No-one cares until it breaks.
posted by DZ-015 at 7:20 AM on March 13, 2015 [8 favorites]


I'll agree that $7,000 is too little, but $3 million? That number is crazypants.

I'm not saying it's not crazy, but there are plenty of mature commercial products requiring only occasional bug fixes that have budgets where $3 million is a rounding error.
posted by enn at 7:20 AM on March 13, 2015 [15 favorites]


Government involvement in those has always been a little... tricky, and it's perhaps not a bad thing that governments have largely stayed out of computing standards, apart from their indirect involvement via research institutions

I'd argue, in fact, that this attitude is one of the causes of the problems for infrastructure bits like NTP. I fully agree that there have been many bad-faith sovereign actors, particularly on the security front, but by and large the big companies, who used to pay for these things, seem to have walked away from it. These are largely solved problems, boring, but vital to maintain.

This is the sort of thing that international treaty/UN-type bodies handle quite well in other contexts (i.e. air safety, marine navigation, metrology, and time). That's how this needs to be handled, IMO.
posted by bonehead at 7:23 AM on March 13, 2015


Read Don't update NTP – stop using it by Hanno Böck if you want to understand the technical issues behind the $3M figure, thewalledcity.

There are massive insecurities in NTP that weaken TLS, so afaik you should immediately dump NTP for TLSdate, like Google already did in Chrome, but TLSdate still suffers from a chicken-vs-egg problem.

It'll take real money for developers, technical writers, and standards wranglers who understand cryptography to build out a protocol that avoids that chicken-vs-egg problem and its accompanying insecurities.
posted by jeffburdges at 7:27 AM on March 13, 2015 [6 favorites]


I'll agree that $7,000 is too little, but $3 million? That number is crazypants.

Why?
posted by Steely-eyed Missile Man at 7:36 AM on March 13, 2015 [2 favorites]


NTP still provides accuracy where tlsdate doesn't. Hanno Böck's essay pretty much brushes this off as "not important". A lot can happen on a system within a second. I don't buy the "old protocol = insecure protocol". I think the bigger problem is "unsupported software has a greater potential for bugs."
posted by pashdown at 7:41 AM on March 13, 2015 [6 favorites]


Google and Apple combined could fund the guy just from rounding errors in their accounts.
posted by Thorzdad at 7:43 AM on March 13, 2015 [2 favorites]


It seems to me that if this whole project is so centered on this one guy, the problem is not just his financial situation, it's that if he ever gets a minute free to run an errand, he might get hit by a bus.
posted by jacquilynne at 7:46 AM on March 13, 2015 [1 favorite]


The problem with Open Source is apparently that you get what you pay for.

It's clearly an NTP-complete problem.
posted by sour cream at 7:47 AM on March 13, 2015 [6 favorites]


I have never had to pay a Time bill.

Just wait until my new app, Timr, is released. You will be able to pay your time bills cleanly and efficiently. It also has an attractive, customizable interface and only records your activities in a very general way. Coming soon!
posted by GenjiandProust at 7:48 AM on March 13, 2015 [1 favorite]


He'd make more money selling NTP exploits on the black market. Since he's in a position to put them in the source code, and it's doubtful anyone actually reviews it, he could cash out quick and retire somewhere beyond the reach of the law. (Like the moon).
posted by blue_beetle at 7:49 AM on March 13, 2015 [2 favorites]


Google and Apple combined could fund the guy just from rounding errors in their accounts.

Yeah, but part of the problem is that he doesn't want to be funded by one big company because he would feel beholden to them. So he prefers the situation where companies give donations to a larger open source organization, and then they pass on the funding to him with no strings attached.
posted by smackfu at 7:52 AM on March 13, 2015 [1 favorite]


Actually, Poul-Henning Kamp is working on a replacement, ntimed, with the foundation's blessing. I saw both Harlan and Poul-Henning talk at FOSDEM 2015, and Harlan really didn't pan-handle too hard, actually; he's perhaps not as much of a fund-raiser as the Network Time Foundation needs. That said, I've started the process of asking the large organisation I work for if we can throw a bit of money their way while we're still using ntpd.
posted by psolo at 8:05 AM on March 13, 2015 [7 favorites]


Time is free. I have never had to pay a Time bill.

You will, eventually. Ask Coldchef about what happens when the bill comes due.
posted by pjern at 8:09 AM on March 13, 2015 [8 favorites]


you should immediately dump NTP for TLSdate,

And watch things die, because TLSdate will get you to within a couple of seconds of UTC, and there are plenty of things that need to be kept within a couple of milliseconds of UTC. Plus, this is a classic example of using a side effect, and side effects tend to change from version to version.

This is why the one big vendor that does explicitly support NTP development is VMware, because a lot of the things ESXi does become much harder, or impossible, to do without clocks kept within a few milliseconds. For those who need microsecond-level sync, which is rare but is needed, there's PTP, but PTP needs NTP. For Apple and Google, NTP is a nice-to-have; for VMware, NTP is a must-have.

As mentioned, Poul-Henning Kamp is working on a new implementation, but we need the old implementation to keep working and have security holes fixed until PHK's replacement is production ready.

And, you know, right now, when you say "secure", I take a long hard look at SSL/TLS, because it seems every other major hole we've had in the last year has been with that stack.
posted by eriko at 8:40 AM on March 13, 2015 [11 favorites]


Wow. Just read that tlsdate proposal.

Super, super bad.

"NTP is old, and I don't understand the architecture, so lets just use something hacky from a protocol I do understand rather than try and build a more manageable client/server, and deal with some of the auth issues."

Anywhere you need accuracy and precision, you need a proper network time protocol which modulates the kernel's wall clock (rather than kicking it). PTP is pretty cool, but has limited deployment, and I don't know what it's like over a WAN.
posted by psolo at 8:40 AM on March 13, 2015 [9 favorites]


eriko: "take a long hard look at SSL/TLS, because it seems every other major hole we've had in the last year has been with that stack"

"every security issue we have with this vault is directly related to the locks or the doors"

Isn't that because the only thing on the network that is ever expected to be secure is TLS? Everything else is widely known to be vulnerable to every sort of snooping and mitm you care to invent.
posted by idiopath at 9:02 AM on March 13, 2015 [2 favorites]


"Isn't that because the only thing on the network that is ever expected to be secure is TLS?"

No, IPsec is expected to be secure. SSH is expected to be secure. I'm sure there are others, but those spring immediately to mind, and are widely used where secure communication is required.
posted by psolo at 9:06 AM on March 13, 2015 [3 favorites]


The number of system-critical programs with zero or one ageing maintainers and no likelihood of ever being improved, rewritten or even understood once they're gone is quite high. It's a curious feature of large old systems: nobody cares about low level stuff that "mostly usually" works, until one day it doesn't.

I gather the situation is the same or worse in the numerous hardware layers beneath.
posted by ead at 9:21 AM on March 13, 2015 [3 favorites]


Yeah, but part of the problem is that he doesn't want to be funded by one big company because he would feel beholden to them.

This is really the problem. He wants more funding, but on his terms. Big companies like Google, Microsoft, HP, IBM, etc., who make use of the technology, have the money, but they're not going to give it out in significant quantities except on their terms. So everyone is apparently at something of an impasse.

It seems that Google has already started working around NTP by rolling their own, more-secure, solution (TLSdate). Which is good for Google products but it doesn't help the computer ecosystem as a whole; it fragments it. A secure upgrade path for NTP, rather than a new project, would be much better. (Also, tlsdate reeks of obnoxious young-programmer "lets reinvent the wheel using my favorite toolset" hubris.)

As much as I admire NTP and the work that its lone developer has done, maybe it's time to pass the reins to a corporate team. Not every open source project is amenable to the standalone foundation + donation model. I personally wouldn't hand it over to Google, what with their shiny-object fetish and tendency to neglect things the second they get bored, but I'd be looking hard at IBM; they have a pretty good track record at open source stuff and they certainly have some skin in the game should it fall apart. And compared to a lot of IBM's portfolio, NTP is positively sexy.
posted by Kadin2048 at 9:24 AM on March 13, 2015 [4 favorites]


Working an OSS project is a thankless task, sometimes. If I was in his particular shoes and treated as badly as he was by Google and others, I'd just walk. When the important stuff breaks, their priorities should get realigned pretty quickly in the correct direction.
posted by a lungful of dragon at 9:25 AM on March 13, 2015 [1 favorite]


This is really the problem. He wants more funding, but on his terms.

Bingo. Even Torvalds works for a foundation. Being a lone wolf can work for a few years, but it's really hard to do that for more than a decade or two and impossible generationally. We're three or so decades on and the cracks in that (lack of) model are obvious.
posted by bonehead at 9:27 AM on March 13, 2015


I thought I should elucidate why time-keeping over a network is not as simple as asking a server for the current time and setting your clock to that. I mean, it'll work for your chromebook, but not your application server if you care about accurate and precise time.

Firstly, you don't just trust one server; you ask several the time. Then you characterize those servers in terms of how useful you think their answer is, which is largely a function of their "jitter", a statistical measure of how stable you think your measure of the round-trip time to them is, as well as other bits and bobs.

Once you've decided on a time to aim for, you don't generally just "jump" to it. If time goes backwards while you're sampling something to produce timeseries data, for example, that could be catastrophic. ntpd tries to keep time monotonic where it can (it has a configurable step threshold.) Instead, it slews the kernel clock (on most systems, don't know the windows situation) by telling the kernel to speed up or slow down the length of a tick: http://man7.org/linux/man-pages/man2/adjtimex.2.html

It also has to be careful about how quickly it slews, and to keep an internal measure of how stable it thinks the local oscillator is (which varies with time/temperature), since a local clock whose 'second' is consistently longer/shorter than a BIPM second by a constant amount is actually a good clock. Most servers have a five-cent oscillator which varies wildly with temperature.

And don't get me started on leap seconds.

Also, in terms of commercial offerings, those who are genuinely worried about ntpd going away, and have money, will buy something like this: http://www.fsmlabs.com/timekeeper/

Finally, it's worth noting that Harlan is currently carrying the can; most of this work was done by David L. Mills at the University of Delaware.
posted by psolo at 9:35 AM on March 13, 2015 [17 favorites]
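[For the curious: the server characterization psolo describes starts from the standard NTP on-wire arithmetic, which estimates the local clock's offset and the round-trip delay from four timestamps. A minimal Python sketch, using made-up sample timestamps and a crude lowest-delay pick rather than ntpd's real jitter-weighted selection:]

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Classic NTP on-wire calculation.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0  # estimated local clock error
    delay = (t3 - t0) - (t2 - t1)           # round-trip network delay
    return offset, delay

# Ask several servers, then prefer the one with the lowest delay --
# a stand-in for the weighting a real daemon does over many samples.
samples = {
    "server-a": ntp_offset_delay(0.000, 0.105, 0.107, 0.210),
    "server-b": ntp_offset_delay(0.000, 0.250, 0.252, 0.600),
}
best = min(samples, key=lambda name: samples[name][1])  # "server-a"
```

[Note how the formula cancels the one-way trip times, assuming the path is roughly symmetric; asymmetric routes are one of the things that limit achievable accuracy.]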


Infrastructure is so boring and messy, let's spend most of our brainpower making social media sharing apps instead!
posted by fifteen schnitzengruben is my limit at 10:05 AM on March 13, 2015 [16 favorites]


smackfu: "I don't get it. Isn't this a mature product that occasionally has bug fixes?"

Well, I'm in no way qualified to evaluate these, but for the record here are the recent changelogs.
posted by mhum at 10:08 AM on March 13, 2015


It seems that Google has already started working around NTP by rolling their own, more-secure, solution (TLSdate). Which is good for Google products but it doesn't help the computer ecosystem as a whole; it fragments it.

Actually, no. I agree that tlsdate is definitely not the solution, but Chrome only seems to be using it to perform a sanity-check on the system clock, to mitigate any vulnerabilities that might affect the system time (which could come from an NTP vulnerability, or any number of other things). From a security perspective, Chrome's behavior is a clever way to reduce a few potential attack vectors.

Chrome also uses a variety of tricks with tlsdate to attempt to establish some baseline of security for systems where it cannot establish a reliable time source (for instance, an embedded system with no RTC). This is probably exploitable, but Google doesn't always control the operating systems that host their applications, so it's still an improvement over the status quo.

It's unlikely that Chrome, or any other user-facing application, would ever implement NTP, and there are a variety of vulnerabilities that could expose time-based attacks. While Chrome's behavior does not necessarily completely close those holes, it makes such an attack significantly more difficult to pull off.

Reading through the papers, yeah. NTP needs to be fixed with something more secure. However, I do not believe that Google (or anybody) is seriously pitching tlsdate as a successor -- they're simply saying that applications should not blindly trust a desktop PC's system time.
posted by schmod at 10:18 AM on March 13, 2015 [2 favorites]
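[The sanity-check pattern schmod describes -- trusting the system clock only if it agrees with an independent reference within some tolerance -- can be sketched in a few lines. This is a hypothetical helper for illustration, not Chrome's actual code, and the three-minute tolerance is an arbitrary assumption:]

```python
def clock_looks_sane(system_time, reference_time, tolerance_s=180):
    """Accept the local clock only if it agrees with an independently
    obtained reference (e.g. a timestamp fetched over TLS) to within
    tolerance_s seconds."""
    return abs(system_time - reference_time) <= tolerance_s

# A clock 10 seconds off passes; one 2 hours off gets flagged,
# triggering whatever mitigation the application chooses.
clock_looks_sane(1_426_262_400, 1_426_262_410)  # True
clock_looks_sane(1_426_262_400, 1_426_269_600)  # False
```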


> SSH is expected to be secure.

SSH falls flat on its face for the initial key-exchange, especially in the face of an APT such as the NSA. Its only saving grace is that the target audience should understand some level of network security and at least some of the bits involved, though I am sure that a large portion do not.
posted by fragmede at 10:22 AM on March 13, 2015 [3 favorites]


"SSH falls flat on its face for the initial key-exchange"

Yes, that's true.
posted by psolo at 10:27 AM on March 13, 2015


Infrastructure is so boring and messy, let's spend most of our brainpower making social media sharing apps instead!

I've been in infrastructure for 15 years, and while it's always been the case that we are supposed to just make stuff work and only get noticed when something breaks (which is fine with me), this attitude has gotten much worse in the last few years. If it isn't new and shiny it's hard to get interest in putting any time, money or people into it.

That is very likely what's happening with things like NTP and the like that are just supposed to be there and work but are relied upon so heavily. Nobody wants to spend the money on an extra cluster node or a redundant controller for a SAN, but when it breaks they wonder why - it's always worked silently before!
posted by Clinging to the Wreckage at 10:28 AM on March 13, 2015 [6 favorites]


IMHO so long as important pieces of common internet infrastructure are based on permissively licensed code this problem will not go away. Sadly it appears we are forever doomed to just having these periodic and sometimes pathetic scrambles for funding for important projects because they rely on good will instead of responsibility. Hopefully the Linux Foundation will step up again.
posted by Poldo at 10:41 AM on March 13, 2015 [2 favorites]


If it isn't new and shiny it's hard to get interest in putting any time, money or people into it.

OTOH, part of the complaint of the NTP maintainer is that Google's vulnerability team found a bug in NTP and didn't give him enough time to fix it before disclosing it.
posted by smackfu at 10:46 AM on March 13, 2015


A post about falsehoods programmers believe about time made the rounds a while back, which should convince you that it's harder than it seems. Far from being "a mature product that occasionally has bug fixes", it's actually a hard problem to get right.

It also turns out that storing time as the number of seconds since 00:00 Jan 1st, 1970 is actually pretty horrible. The Earth's rotation and orbit don't neatly divide into the divisions we have set up for them, so we have to compensate. In particular, we're inserting a leap second on June 30th of this year, which requires wrangling, in part because some programs will see the time go backwards and crash.

(There was also this followup with *more* falsehoods about time.)
posted by fragmede at 10:50 AM on March 13, 2015 [5 favorites]
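[fragmede's leap-second point can be seen directly with the standard library: POSIX timestamps count exactly 86,400 seconds per day and have no slot for 23:59:60, so this June's leap second is simply unrepresentable:]

```python
import calendar
import time

# Unix timestamp for 2015-06-30 23:59:59 UTC, the second *before*
# the leap second 23:59:60 is inserted.
before = calendar.timegm((2015, 6, 30, 23, 59, 59, 0, 0, 0))

# Adding one second lands directly on July 1st: POSIX time cannot
# represent 23:59:60, which is why naive software sees the clock
# repeat or jump when the leap second is stepped or smeared.
after = time.gmtime(before + 1)
print(after.tm_mon, after.tm_mday, after.tm_hour)  # 7 1 0
```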


"Hopefully the Linux Foundation will step up again."

That's who's giving Poul-Henning Kamp $3000 a month to write ntimed (via the Network Time Foundation, I believe, but it could have been a referral I guess)

PHK's blog entry on the topic
posted by psolo at 11:04 AM on March 13, 2015 [2 favorites]


So was this code just plopped out into the public domain? Was it never "copylefted"? Or would that not make any difference?

This is really a political and ideological issue. A big reason the open source and sharing economy "movements" will fail to transform the economy in any meaningful, positive way is that the artisanal tech-heads who so idealistically pump value into the commons often see tech as a way to side-step the messy, discomfiting, non-algorithmic nature of ideology and political action. So... we get situations like this, where wealth is put into the commons and then simply "monetized" and concentrated into a few pockets, following the standard "externalize and extract" procedures of our existing system.

I thought the whole point of copyleft and similar things was to prevent people from simply profiting from such endeavours without giving back. Guess I was once again sadly mistaken.
posted by mondo dentro at 11:09 AM on March 13, 2015


He'd make more money selling NTP exploits on the black market.

Eventually, everyone who works in infosec realizes they could make more money working for the bad guys. Luckily most still decide not to.
posted by tommasz at 11:22 AM on March 13, 2015 [1 favorite]


So was this code just plopped out into the public domain?

It is not really in the "public domain", but rather it is copyrighted under a permissive license, functionally similar to the BSD three-clause license.

There was a time, back in the late 80s and maybe into the 90s, when I think there was merit to both the BSD and GPL licensing camps' arguments in terms of which model--copyleft or non-copyleft/permissive--were superior in terms of developing software and promulgating Good Ideas. So it is hard to necessarily fault software projects that date from that era or previous for using those licenses.

And BSD licenses do appear to be pretty good if your goal is to basically get everyone to adopt a particular protocol with minimal fuss and fragmentation.

I think time has shown pretty conclusively, however, that BSD-like licenses are basically unsustainable and lead to constant funding problems when it comes to maintaining the code itself, because they not only allow but seem to tacitly encourage free-rider behavior by for-profit entities that use the software. GPLed software doesn't play quite as nicely with corporate IT departments, but it doesn't enter into a suicide pact with them either.

If the goal is to push the protocol, and the software is merely a reference implementation that you want to get into everyone's hands as quickly and un-fussily as possible so that the protocol becomes popular, then yes by all means the BSD-type licenses, including the NTP License, are the way to go. But if the 'reference implementation' becomes basically the only implementation, or if there's backend infrastructure that needs to be maintained on an ongoing basis, it doesn't seem to create the same sort of ecosystem that GPL licensing does.
posted by Kadin2048 at 11:35 AM on March 13, 2015 [11 favorites]


So he prefers the situation where companies give donations to a larger open source organization, and then they pass on the funding to him with no strings attached.

Meh. That's just open-sourced money-laundering.
posted by Thorzdad at 11:36 AM on March 13, 2015


IPsec isn't such a great solution really, psolo. At least people make an effort to fix TLS though. Appears ntimed might be the place to start building a secure time protocol, thanks.
posted by jeffburdges at 11:58 AM on March 13, 2015


if the 'reference implementation' becomes basically the only implementation, or if there's backend infrastructure that needs to be maintained on an ongoing basis, it doesn't seem to create the same sort of ecosystem that GPL licensing does.

That rings very true to me. There's always the vague split between reference implementations that are taken for granted, and ones that a younger generation of developers declare obsolete and decide to reinvent.

It's also increasingly clear that large internet companies (particularly Google) are paying their security engineers large amounts of money to audit the entire UNIX and TCP/IP code infrastructure behind the scenes. Look at some of the prominent CVE discovery credits over the past year: Heartbleed was Google/Codenomicon; POODLE was Google; Shellshock was Stéphane Chazelas of Akamai, with Google providing followup.

In one sense, it's good that these core implementations aren't officially maintained by large corporate entities, but it's also problematic that the official maintainers are individuals who receive security dispatches from Mountain View and then have to coordinate fixes and tests and distribution releases. It's not exactly independence.
posted by holgate at 11:58 AM on March 13, 2015 [2 favorites]


Mod note: Comment removed; mega-pasting is not so great, so please link and just excerpt if needed.
posted by cortex (staff) at 12:03 PM on March 13, 2015


Thanks idiopath and Kadin2048 for the explanations.

So I wonder just how naive I'm being when I imagine a variation on the agreement idiopath posted that just says something like:
"Should this software be used in a commercial product, (a specified percentage, possibly with a maximum cap) of the profits shall be returned to (some organized developer entity)."
If it's true, as others have pointed out above, that supporting development teams would amount to round-off errors in Big Tech company budgets, I don't think they'd bat an eye--it would still be way cheaper than keeping an in-house development team.

So, please tell me why I'm full of shit and that will never work! Is it just that it could be easily ignored? What happens if people ignore the provisions already in place?
posted by mondo dentro at 12:09 PM on March 13, 2015


Sorry about the massive paste, finally found where the document is kept online (I just pasted from local):

ntp copyright info

Regarding the agreement, FOSS isn't compatible with EULAs as typically seen. It's not an agreement you are assumed to have accepted before using the software, it is the conditions under which you are allowed to copy or modify the software. That is, it doesn't restrict any common law rights you would get as the purchaser of software, it exclusively expands upon those rights. It's actually the opposite of a EULA in that way.
posted by idiopath at 12:26 PM on March 13, 2015 [1 favorite]


I was answering the question as asked -- whether anything other than TLS was expected to be secure. The fact that IPsec isn't is, I agree, deeply problematic.
posted by psolo at 12:27 PM on March 13, 2015


psolo: I guess I could rephrase what I originally said as "it shouldn't be surprising that the places where we see security issues are the few rare parts of the network that we expect to be secure"

We've seen significant holes fixed in TLS and SSH (as commonly implemented) recently, and IPsec may not be particularly trustworthy either. I'd actually venture that a record of prominent, regular fixes is more of a sign you can trust the thing than a vague distrust with no known specific exploits or fixes.
posted by idiopath at 12:31 PM on March 13, 2015 [3 favorites]


If it's true, as others have pointed out above, that supporting development teams would amount to round-off errors in Big Tech company budgets, I don't think they'd bat an eye--it would still be way cheaper than keeping an in-house development team.

Big Tech is going to retain in-house security engineers no matter what. There's no scenario where Google is going to implicitly trust a core source tree, but neither do they want to have official maintenance responsibility.

This reminds me a little of a discussion here a couple of years ago about Tim O'Reilly and the way in which Linux and OSS was made palatable for large-scale commercial enterprise in the late 90s. The shift towards the modern data-driven web made the infrastructural code (and the politics of "free software" surrounding it) less relevant than what was being built on top of it, but didn't take away the need for it to be maintained.
posted by holgate at 12:31 PM on March 13, 2015 [1 favorite]


I love NTP and have for 20 years now. I even did a neat research project on it 15 years ago. It's still an incredibly useful protocol and no, tlsdate is not a substitute. NTP easily keeps a clock accurate to ~10ms of true time with very little overhead. That actually matters on all computers, including the one on your desk and the one in your pocket.

ntpd, though, I think maybe it's time to rewrite that entirely. It has a lot of problems, mostly just from its age. It's also a Swiss Army knife thing. Really I don't need code running as root that can talk to a 1990s-era shortwave radio over a serial interface. I also don't need the byzantine administrative capabilities of ntpd. Arguably most people don't even need to be an NTP server at all, although opinions differ on that.
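To give a sense of how little a client strictly needs, here's a rough sketch of a bare-bones SNTP query in Python. This is illustrative only: the server name and timeout are arbitrary choices, and it skips everything a serious client does (sampling multiple servers, filtering, authentication, slewing the clock):

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def build_request():
    # First byte: LI=0, VN=3, Mode=3 (client) -> 0x1B; the rest of the
    # 48-byte packet can be zero for a bare SNTP query.
    return b'\x1b' + 47 * b'\x00'

def parse_transmit_time(packet):
    # The server's Transmit Timestamp is a 32.32 fixed-point value at
    # bytes 40-47 of the reply, in network byte order.
    secs, frac = struct.unpack('!II', packet[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def sntp_time(server='pool.ntp.org', timeout=2.0):
    # One UDP packet out, one back -- nothing more.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_request(), (server, 123))
        packet, _ = s.recvfrom(512)
    return parse_transmit_time(packet)
```

A real client, even a minimal one like Ntimed-client, queries several servers repeatedly and slews the system clock gradually rather than stepping it; that discipline loop is where most of the remaining code goes.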

OpenNTPD is one alternative to stock ntpd that's worth looking at again. A year or two ago I read a lot of criticism that it didn't do NTP right, criticism that seemed worth paying attention to. But the OpenBSD project is a good source for secure applications; I'd look there first.

The article is weird in claiming that NTP has been mostly secure. It's not. NTP is still responsible for one of the most dangerous DDOS traffic sources on the Internet. It also has the recent serious root level exploit discussed here. It's a bunch of old crufty code that is more complex than needed, time to clean it up.

I have no opinion on who should fund the work or whether Stenn is the right person to do it. I admire the hell out of David Mills's work on NTP over so many years, but he was not good at building a community of people. NTP was very much his personal baby and I guess now it's Stenn's baby. That may be unhealthy.
posted by Nelson at 12:39 PM on March 13, 2015 [5 favorites]


Nelson, ntimed is exactly what you describe. It has separate server, slave (stratum 2+ server) and client components, and is a completely new implementation. Check it out!
posted by psolo at 12:42 PM on March 13, 2015 [3 favorites]


There's one other person I'd say is essential to NTP today: Ask Bjørn Hansen, who administers the global NTP pool. Chances are about 75% of the computers in your house use NTP Pool servers to set their clocks. Not your desktop computers; Apple and Microsoft both run their own NTP servers for their customers now and they work pretty well. But pretty much everything else uses the NTP pool, particularly small devices running embedded Linux systems.

The pool stats are reasonably healthy; there's about 2800 servers running on IPv4. Not a lot to serve 100+ million computers the time, but in practice it's working OK. Ask runs the pool really well, cleanly and with little drama. It's mostly pretty simple but there's some very clever geographic DNS stuff it's doing to give a diverse set of nearby addresses to requestors. Good stuff.

Last I checked the NTP pool ran on a shoestring budget, a volunteer project. There's a donations page but the only company I've heard of donating is Meinberg, which makes NTP servers. I wonder how many commercial users of the pool are donating the recommended $12/1000 clients?
posted by Nelson at 12:46 PM on March 13, 2015 [4 favorites]




Thanks for the pointer to ntimed. I'd read the original problem statement but hadn't quite clocked that this is PHK, the Varnish author. Sometimes "ninja rockstar" is a real thing, you know? Here's his full blog about the project.

PHK gave a talk at FOSDEM about Ntimed. Here's the slides and here's a blog summary. psolo's right, it's exactly what I was looking for, specifically Ntimed-client. It just keeps a computer's time set, nothing else, and that's all 99.99% of users will ever need. It can be kept simple. At first blush I'd much rather see this kind of project get funded than continuing to maintain the old ntpd forever.

The GitHub project hasn't been updated in two months. But maybe he only uses GitHub for publishing releases, not active development? Hoping for a Q1-2015 release for the client, that's coming awfully close :-P There's a three-week-old blog post indicating he's still working on it. The GitHub version has 2,300 lines of C code; compare 200,000 for ntpd. PHK originally said he could do it in 1,000 lines, although later he's said 10,000, which does seem a bit more realistic. Either way it looks very promising. Not just smaller, but new and written with modern understandings of Internet security.
posted by Nelson at 1:03 PM on March 13, 2015 [3 favorites]


So in my first job one of my earliest tasks was to write some betting feed software for greyhound races. The odds for the races change frequently; they bring the dogs out and one looks a bit skittish, then its odds can fall (or rise, whatever). So I wrote the software and it would fire out feeds quite frequently, of course they were timestamped because, like I say, the odds can yo-yo.

Anyway, some years later I get a support request saying that the feeds from these scripts I'd long since forgotten about were sending out odds with bad timestamps. Eh? Which scripts? Oh, *those* scripts. Right, erm. Where do I start? Luckily one of the first questions I asked was "has the server been rebooted recently?" and that led to "is ntpd running?". It wasn't, and of course the server time had drifted and consequently the timestamps were wrongly marked for the odds. Maybe someone made some money off of this "bug". It's nice when you can fix a bug without changing any of your own code.

Reading the article and having a quick click around the ntp.org site I see it's fallen behind. Bugzilla? Bitkeeper? Look, if you're working with open source software then you need to keep up. This is especially the case if you're working with open source code that is everywhere and usually "just works", because people will forget about it, and when the bar to contribution is higher than it should be, people won't bother stepping over it. Get it into git and onto GitHub and the bar is suddenly lowered, but not so low that people trip over it. Once you've done that the pull requests will start coming in, and you can focus on the other problems.

This is more important as time goes by because, like a server clock that slowly drifts, your code *and* your tools will slowly accrue technical debt; eventually they will become old enough that the pool of developers willing to touch them, and thus the code, will dry up. When that happens your code is only a short trip away from >/dev/null
posted by lawrencium at 1:19 PM on March 13, 2015 [1 favorite]


What annoys me about the situation is the reliance of huge, wealthy firms on NTP to undergird electronic financial transactions, communications infrastructure, and distributed computing architectures, without being willing to support it — both on principle, and because of the security implications.

The NTF should announce a discrete NTP2.0 or sNTP or whatever, make it a bolt-on extension to 'classic' NTP, and release it under a dual license that requires commercial use by a firm doing XYZ or making over $N/year to pay for the privilege. They can offer discounted licenses for those who get on board during the development stage, crowdfunding style. And the NTF should be VERY LOUD about the security risks of staying on the old platform, to the point of raising the eyebrows of the managers at these companies dealing with cyber-risk. And their insurers.

TL;DR: if the huge financial, telecom (VOIP relies heavily on NTP) and Internet firms that rely on NTP won't support its maintenance and security improvements while they're free, stop making them free to those exploitative commercial entities. The current code is under a BSD-like license, but future code doesn't have to be. Right? I'm a little murky on changing licenses in a circumstance like this.
posted by snuffleupagus at 2:12 PM on March 13, 2015 [2 favorites]




Just wanted to note that the ensemble of stratum 1 NTP servers at NIST (computers synchronized to a timescale of atomic references) currently sees about 11 billion requests per day. The budget gap extends there as well, as almost all bandwidth is volunteered.
posted by fatllama at 6:07 PM on March 13, 2015 [1 favorite]


[TLSdate] is a classic example of using a side effect, and side effects tend to change from version to version

Pretty sure they already plan to remove the time from the TLS handshake in TLS 1.3.
posted by ryanrs at 6:14 PM on March 13, 2015


By the way, if anyone made it to page three and was confused to read:
"On a daily basis, NTP also consults atomic clocks, which tick off precise seconds based on radioactive Cesium-133 decomposition."
Then, just to reassure you: no, that is not at all how atomic clocks work. Though if it were, I'm sure it would make a great April Fools' joke, hiding half the cesium to slow the nation's clocks down by a factor of 2.
posted by fatllama at 6:17 PM on March 13, 2015 [3 favorites]


Wow, the more I look at TLSdate (admittedly only for 15 minutes though), the more laughable it looks. It doesn't appear to even attempt what NTP does. I mean the whole point of NTP is to sample multiple clocks and arrive at an accurate estimate of the actual time, taking into account stuff like network delay, drift, etc.
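For the curious, the core on-wire arithmetic NTP does with its four timestamps is small enough to sketch. This is the standard RFC 5905 calculation, not ntpd's actual code:

```python
def offset_and_delay(t1, t2, t3, t4):
    """The standard NTP on-wire calculation (RFC 5905, section 8).

    t1: client transmit time    t2: server receive time
    t3: server transmit time    t4: client receive time

    The offset estimate assumes the outbound and return network delays
    are roughly symmetric -- exactly the kind of error source that
    tlsdate doesn't even try to account for.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)            # round-trip time on the wire
    return offset, delay
```

The real protocol then filters many such samples from many servers, discards outliers ("falsetickers"), and disciplines the clock frequency over time; that filtering is where the accuracy comes from.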

When I set the clock on my car's radio, I do it with literally as much precision as, and better traceability than, tlsdate.
posted by ryanrs at 6:28 PM on March 13, 2015 [1 favorite]


It would not be remotely unreasonable for NIST to employ a full-time person to maintain ntpd.

In fact... among the current crop of NIST timeservers, the few that return a "ref-ID" (four bytes in the NTP protocol interpreted either as ASCII or an IPv4 address) of "NIST" run a custom NTP packet server process written (from scratch, I believe) by Judah Levine. Those returning the ref-ID "ACTS" run a version of ntpd that's had a lot of "functionality" ripped out of it in order to cope with many thousands of requests per second per server. The system-clock synchronization process and kernel interface is highly customized as well, though I won't drone on (private informational queries always accepted).
posted by fatllama at 6:32 PM on March 13, 2015 [4 favorites]


"Apple and Microsoft both run their own NTP servers" except that time.windows.com was inaccessible from outside the U.S. back when I did IT support, so I recommended to all my customers a) switching to the NTP pool and b) donating to the project. Guess which happened.
posted by andrewdoull at 7:02 PM on March 13, 2015 [1 favorite]


"Should this software be used in a commercial product, (a specified percentage, possibly with a maximum cap) of the profits shall be returned to (some organized developer entity)."
Um, er... What percentage? 1%? Of course not that high... .01%? Does that seem low? Because I assure you that Windows/OSX/Android probably contains over 10,000 fragments of code (doing really useful things) from rendering images, to getting the time, to listing directories, to converting files, etc, etc... And who does that auditing? How much does Apple owe when they 'give' their OS away for 'free', how much does Google owe when it has a revenue model based on its entire ecosystem, and 'gives' Android away for 'free'?

I mean, when there is seemingly SIMPLE math about what percentage a music company owes the musicians who created the original IP that the purchaser is paying for explicitly, those companies will often spend more money on accountants to artfully steal from the artist than to pay them their royalties.

There may be novel solutions to how to fund important open source infrastructure projects, but I'm pretty sure complex license agreements that try to funnel money back to the developers isn't going to work. Hell, using any non-standard license is enough to scare many large companies away from using your technology, even if you want them to and aren't going to charge them any money.
posted by el io at 10:30 PM on March 13, 2015


psolo:
I thought I should elucidate why time-keeping over a network is not as simple as asking a server for the current time and setting your clock to that.
Great explanation of the mechanics. As far as the semantics readers will be well-served by A Long, Painful History of Time.
posted by vsync at 4:34 AM on March 14, 2015 [1 favorite]


In my experience, complex licensing agreements, even for commercial software, often don't make it past procurement at large companies. A charge for use of OSS just means it won't be used.
posted by smackfu at 6:04 AM on March 14, 2015


For Apple and Google, NTP is a nice-to-have; for VMware, NTP is a must-have.

Google depends heavily on accurate time synchronization across their platform. For the 2012 leap second they modified their internal NTP servers to gradually adjust the time over the course of a day (see Time, technology and leaping seconds from the Official Google Blog).
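The idea is easy to sketch: instead of letting clocks see a discontinuous extra second, spread it over a long window. The ramp below is linear purely for illustration; Google described their actual smear as a cosine-modulated adjustment, and the window length here is a made-up parameter:

```python
def smear_fraction(t, leap_at, window=86400.0):
    """Fraction of the extra leap second already absorbed at Unix
    time t, spread linearly over the `window` seconds before the
    leap occurs at time leap_at.

    Illustrative only: Google's 2012 smear was cosine-shaped, not a
    straight line, and the 24-hour window is an assumption.
    """
    if t <= leap_at - window:
        return 0.0   # smear hasn't started yet
    if t >= leap_at:
        return 1.0   # the whole second has been absorbed
    return (t - (leap_at - window)) / window
```

Each NTP response from a smearing server then incorporates the partial second, so no client ever sees time jump or repeat.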

Anyone who needs ACID transactions on distributed systems relies on NTP (or similar). This isn't an easy problem and it's one of the reasons behind the rise of NoSQL databases, many of which give up some ACIDity in exchange for greater tolerance of synchronization issues (not the only or most important reason, and not a problem all NoSQL databases solve, but it's a factor for a significant chunk of them). There's plenty of problem domains where those compromises aren't acceptable.

NTP isn't going to die, but nobody's going to invest in it if they don't have to, especially when the internal politics of the project seem even more annoying to deal with than usual in the open source world. But if there were serious problems that the current funding/support model isn't able to solve, I suspect it would take all of an hour before Google's engineers were working on a solution. If, indeed, they weren't the ones who found and reported the problem in the first place.
posted by xchmp at 2:28 PM on March 14, 2015


Oh, Google's use of NTP is way more interesting than just "make sure the clocks are synced". Their Spanner distributed datastore has a really fascinating way of achieving distributed consensus that relies on timestamping data. But time is never 100% accurate, so Spanner uses a thing called TrueTime, where timestamps are intervals saying "we know it's between time [earliest, latest]." (Some details.) It's a very clever idea.

The paper makes it sound like TrueTime is implemented its own way, not using NTP. The architecture is similar though; each client talks to multiple time servers to figure out what time it probably really is. Servers get the time from GPS or atomic clocks. I imagine they built their own system because it was too hard to bend NTP to do what TrueTime needs.
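The core trick is easy to sketch: treat "now" as an interval and only order events whose intervals can't overlap. This is a toy illustration of the idea, not Google's implementation; the `TTInterval` type and the busy-wait are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class TTInterval:
    """A TrueTime-style answer to 'what time is it?':
    the true time lies somewhere in [earliest, latest]."""
    earliest: float
    latest: float

def definitely_before(a, b):
    # Ordering is only certain when the two uncertainty intervals
    # cannot overlap; otherwise the honest answer is 'unknown'.
    return a.latest < b.earliest

def commit_wait(ts, now_fn):
    """Spanner-style commit wait: don't report a commit at timestamp
    ts until every observer's clock must agree ts is in the past.
    (A real implementation sleeps rather than spinning.)"""
    while now_fn().earliest <= ts:
        pass
```

The smaller the uncertainty interval, the shorter the commit wait, which is one reason Google puts GPS receivers and atomic clocks directly in the datacenters that run Spanner.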
posted by Nelson at 2:41 PM on March 14, 2015


Google has its own atomic clocks in the facilities that use Spanner: if you treat your $600m datacentres the way small companies use a 1U or AWS instance, the cost of an atomic clock is the equivalent of a CMOS battery.
posted by holgate at 3:28 PM on March 14, 2015 [1 favorite]


An atomic clock is only a few thousand bucks these days. You find them in lots of places, including every cell tower (at least the CDMA ones, not sure about GSM).
posted by ryanrs at 6:00 PM on March 14, 2015 [1 favorite]


Can you not get a good time reference off GPS satellites?
posted by Mitheral at 11:26 PM on March 15, 2015


Yes, often the reference clocks are using GPS as their time source. GPS receivers that are made for clock purposes are not cheap hardware though.
posted by smackfu at 6:14 AM on March 16, 2015


GPS receivers that are made for clock purposes are not cheap hardware though.

I'm curious, what do they cost? Can't you just use a basic consumer GPS as long as you can get the data off of it at a reasonable rate without much jitter? Here's a hobbyist with a $70 GPS getting time accurate to 1µs. And here's a much more thorough treatment of rolling your own GPS clock, which suggests that it's a bit hard to find a consumer GPS with the necessary signals. Note the key signal off the GPS isn't the absolute time so much as it is a very accurate pulse-per-second tick, a pendulum.

The main reason I see for having an atomic clock is some source of time independent of GPS, in case of a satellite network failure. (GLONASS failed for 11 hours last year...) Also some installations have a hard time putting an antenna where it has a view of the sky.
posted by Nelson at 7:38 AM on March 16, 2015


I guess the prices aren't bad if you start with an OEM GPS receiver. A Garmin one will run you $70 or so on Amazon.

I had looked up something like the Trimble Acutime GG, which is more like $900.
posted by smackfu at 7:49 AM on March 16, 2015


Speaking of time from an atomic clock independent of GPS, you can use eLoran if you're in the coverage area: eLoran + GPS timing receiver
posted by psolo at 9:31 AM on March 16, 2015


Can't you just use a basic consumer GPS as long as you can get the data off of it at a reasonable rate without much jitter?

You can... sorta. It's not quite that straightforward. Most handheld GPS units will put out the full date and time over a serial port (whether actual RS-232 serial, or serial which is then shoved over USB with an integrated bridge chip) in NMEA format.

What you get is a time signal that is absolute, in the sense that you can use it to set the clock of a system which lacks its own RTC ... very neat for remote sensor platforms, or off-the-grid networks where you are determining time from nothing. But it's not highly accurate. (I think you are talking maybe 0.01s for NMEA timestamps..?) The NMEA "sentence" containing the time takes a non-trivial amount of time, at 4800 bps, to transmit, and I don't think there is a guarantee of whether the time referenced in the message is necessarily the exact moment when the message started, ended, or somewhere in the middle. Anyway, it's good enough for a lot of things but not others. (Setting in-camera timestamps on your photos? Probably OK. Synchronizing your distributed transactional database against sophisticated attacks? Probably not.)
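For what it's worth, pulling a (coarse) timestamp out of an NMEA stream is simple enough to sketch. The field positions below follow the standard $GPRMC layout and the checksum rule is the standard XOR-between-$-and-*, but treat this as illustrative, not production parsing code:

```python
import datetime

def nmea_checksum_ok(sentence):
    # The trailing *hh checksum is the XOR of every byte
    # between the leading '$' and the '*'.
    body, _, cksum = sentence.strip().lstrip('$').partition('*')
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return f'{calc:02X}' == cksum.upper()

def gprmc_time(sentence):
    """UTC date and time from a $GPRMC sentence.
    Field 1 is hhmmss(.sss), field 9 is ddmmyy."""
    fields = sentence.split(',')
    hhmmss, ddmmyy = fields[1], fields[9]
    return datetime.datetime.strptime(ddmmyy + hhmmss.split('.')[0],
                                      '%d%m%y%H%M%S')
```

Even parsed perfectly, this only gives you the time to within the serial latency described above; the precise edge has to come from a PPS line.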

What you get when you pay for a real "time source" GPS unit is that in addition to the NMEA time stamps, you also get a "PPS" output, which is basically a 1Hz clock pulse. It's devoid of any context other than the ticks, but it's very accurate; some are good to a few parts-per-million and it's really up to the computer to not screw things up on the receiving side. (And of course the more you pay, the more assurances you get of exactly how accurate it is.)

The consumer handheld USB-interfaced units made for hiking and other navigation purposes rarely, if ever (that I have seen) put out PPS signals. Not quite sure how they would, over USB. However you can get PPS out of the GPS receivers made for OEM usage, which are now only $100ish including antennas. So you don't have to spend $900 if you are willing to do some hardware hacking.

It's only recently that the OEM units that produce PPS outputs have gotten that cheap. A decade or so ago, it was much cheaper—if you wanted PPS output—to build a little shortwave radio receiver and listen to WWV, which puts out both a PPS signal and an absolute timestamp periodically, so you can use it to both set a clock absolutely and keep it locked over time. Pretty neat.
posted by Kadin2048 at 11:00 AM on March 16, 2015 [5 favorites]




This thread has been archived and is closed to new comments