# Calculus of Averages

January 8, 2010 6:52 AM Subscribe

Calculus of Averages -

*Newton and Archimedes did not possess this knowledge. No mathematics professor today can provide this knowledge and depth of understanding.* Author John Gabriel maintains a blog, Friend of Wisdom, and contributes articles such as Are real numbers uncountable? to Google's Knol project. *Before you write or comment: I am correcting you and all your dimwit professors, not the other way round. You are of inferior intelligence to me, which is why I have disabled comments on my knol. Think I am a crank? Well, I couldn't care less what you think.* -John Gabriel

Wow, this guy is awesome.

posted by justkevin at 7:02 AM on January 8, 2010

*It was clear after intensive study, the derivative was born from Newton's finite differences. But how? I searched everywhere for a formula that could explain the direct connection between the derivative and its analog finite difference. No such formula existed until my two formulae illustrating the connection were discovered: one version describes this in terms of finite differences and the other in terms of integrals that are easier to use.*

Ummm.... definition of derivative?

This isn't Timecube, but it seems pretty cranky nonetheless.

posted by Humanzee at 7:04 AM on January 8, 2010 [1 favorite]

*By Cantor's definition, a set is countable if a bijection exists from a subset to itself.*

Wait, shouldn't this be amended to:

*a set is countable if a bijection exists from a subset **of the set of natural numbers** to itself.*

posted by kid ichorous at 7:04 AM on January 8, 2010 [4 favorites]
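As a toy illustration of that corrected definition (my own sketch, not from Cantor or the knol): the integers are countable because an explicit bijection with the natural numbers can be written down, interleaving 0, 1, -1, 2, -2, ... The function names here are hypothetical.

```python
# Toy bijection between the naturals {0, 1, 2, ...} and ALL integers,
# interleaving 0, 1, -1, 2, -2, ...; the existence of such a map is
# exactly what "the integers are countable" means.
def nat_to_int(n: int) -> int:
    """0, 1, 2, 3, 4, ... -> 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

def int_to_nat(z: int) -> int:
    """Inverse map, witnessing that nat_to_int really is a bijection."""
    return 2 * z - 1 if z > 0 else -2 * z

# Round-tripping shows the pairing misses nothing and repeats nothing.
assert all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
assert all(nat_to_int(int_to_nat(z)) == z for z in range(-500, 500))
```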

Oops, I'm wrong. This is Timecube.

posted by Humanzee at 7:04 AM on January 8, 2010 [2 favorites]

Outsider math doesn't seem much like outsider art.

posted by sciurus at 7:04 AM on January 8, 2010 [4 favorites]

I think that ignoring "scientific breakthroughs" by people who refer to themselves in the third person is a good rule of thumb.

posted by TheyCallItPeace at 7:06 AM on January 8, 2010 [4 favorites]

This guy makes 'I don't get it' into an art form. His powers of misunderstanding are just awesome.

PS--he really doesn't get it.

posted by hexatron at 7:07 AM on January 8, 2010 [5 favorites]

Knol has other cranks like this guy, who, like John Gabriel, thinks that if he doesn't understand something, it must be wrong.

posted by zsazsa at 7:12 AM on January 8, 2010

It's not outsider math or a scientific breakthrough if it already exists.

posted by DU at 7:12 AM on January 8, 2010 [1 favorite]

This reads like a chapter from *"The Cloud Atlas"*.

posted by Mei's lost sandal at 7:14 AM on January 8, 2010

So I guess we can add knol to the growing list of Google's failures.

posted by Pastabagel at 7:14 AM on January 8, 2010 [2 favorites]

*Cantor may easily have been the first mathematical crank.*

But not the last, evidently.

posted by unSane at 7:15 AM on January 8, 2010 [2 favorites]

*The irony is when I had finished; I still had no idea what calculus is really all about.*

And this, my fellows, is why a good professor is invaluable. The foundation of calculus should be blindingly clear and based on physical principles. It wasn't developed as some sort of abstract theoretical idea, but as a *tool* to solve real-world physics conundrums. The fact that this guy learned about Riemann and other sums without intuitively grasping their value (in a pre-computer age) really betrays the foundation of his "problems" with calculus.

posted by muddgirl at 7:19 AM on January 8, 2010 [1 favorite]

People who believe that they have found a flaw in Cantor's proof are a dime a dozen; angry ones with superiority complexes only slightly less so.

posted by Flunkie at 7:24 AM on January 8, 2010 [3 favorites]

And he's, apparently, a software developer? That's kind of... unsettling.

posted by episteborg at 7:28 AM on January 8, 2010 [4 favorites]

*By Cantor's definition, a set is countable if a bijection exists from a subset to itself.*

Wait, shouldn't this be amended to:

a set is countable if a bijection exists from a subset of the set of natural numbers to itself.

It looks like he hilariously misunderstands the lemma that a set is infinite if there's a bijection from itself to a proper subset.

posted by kmz at 7:30 AM on January 8, 2010 [5 favorites]
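The lemma kmz mentions is easy to make concrete (a toy illustration of my own, not from the knol): the naturals biject with the proper subset of even naturals, which is precisely the "part matches the whole" behavior that characterizes infinite sets.

```python
# A set is infinite iff it bijects with a PROPER subset of itself.
# The naturals map one-to-one onto just the even naturals:
def to_even(n: int) -> int:
    return 2 * n

def from_even(m: int) -> int:
    return m // 2

ns = range(1000)
assert all(from_even(to_even(n)) == n for n in ns)   # invertible on its image
assert all(to_even(n) % 2 == 0 for n in ns)          # image omits every odd number
```

Note this says nothing about countability on its own; Gabriel's error is to read this infinite-set criterion as if it were Cantor's definition of countable.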

*And he's, apparently, a software developer? That's kind of... unsettling.*

I can assure you that there is no shortage of software developers who are, frankly, incompetent.

posted by Flunkie at 7:32 AM on January 8, 2010 [4 favorites]

*I can assure you that there is no shortage of software developers who are, frankly, incompetent.*

Not to mention over-confident and abrasive.

posted by DU at 7:34 AM on January 8, 2010 [2 favorites]

I am not at all surprised that this guy is a software developer. Fits perfectly.

posted by mpbx at 7:36 AM on January 8, 2010 [4 favorites]

So, I am by no means a mathematician (not even close), but something struck me as odd. He wrote "But 1/3 and pi cannot be written out entirely." Evidently he means that 1/3 can't be written out entirely as a *decimal*. Right? But in this case 0.333... would be just a rough approximation of 1/3. In other words you can't translate 1/3 into decimal notation, but you certainly can write it out in its entirety; like so: 1/3.

Am I missing something?

posted by oddman at 7:41 AM on January 8, 2010 [1 favorite]

Yeah but like, wouldn't his code fail to compile? That seems like a pretty good mechanism for weeding out thinking this incoherent and illogical.

I admit to finding the mere existence of crackpots like this a little terrifying. He clearly believes he's right, and there seems to be no shortage of people who have tried to show him the errors in his thinking, yet he does not repent from his heresy. Earnest discourse ought to have the property of resolving conflicts like this.

posted by episteborg at 7:42 AM on January 8, 2010

Software developer is the #2 job, after all, just behind actuaries (who probably learn calculus).

posted by autopilot at 7:42 AM on January 8, 2010

You aren't missing something, oddman. He's just saying that the decimal representation of a third is infinite. The conclusion he draws from this is entirely spurious, though.

posted by episteborg at 7:46 AM on January 8, 2010 [1 favorite]

*Yeah but like, wouldn't his code fail to compile? That seems like a pretty good mechanism for weeding out thinking this incoherent and illogical.*

He probably is a very good coder, and a logical thinker. The problem is, he thinks that because he's good at logic and good at coding, he's good at everything. Therefore, he will apply his logical mind to fields he really knows nothing about.

He codes well because he understands the basic principles well, and can logically proceed to a desired conclusion from there.

He has absolutely no clue what the basic principles of math are, and what he does know is fragmented or incorrect. Nevertheless, he believes he does know, and then he applies logic to his initial mistakes and ends up at an end point that is gobbledygook. But in his mind, because he is so smart, he cannot be wrong about the basic principles and since he has proceeded logically in his thought processes, his conclusions absolutely must be correct.

posted by mpbx at 7:48 AM on January 8, 2010 [4 favorites]

To quote Pauli: "This isn't right. This isn't even wrong."

posted by vacapinta at 7:51 AM on January 8, 2010 [6 favorites]

*slowly bangs head against wall*

I shudder to think what this guy would make of the cardinal hierarchy, if he calls transfinites "a myth".

Nutcases and maths, it requires a whole special Murphy's Law by now, I despair.

posted by Iosephus at 7:55 AM on January 8, 2010

*Yeah but like, wouldn't his code fail to compile? That seems like a pretty good mechanism for weeding out thinking this incoherent and illogical.*

If his code doesn't compile it is only because the compiler doesn't understand his genius.

posted by justkevin at 7:57 AM on January 8, 2010 [7 favorites]

He's not stupid, he's just poorly educated and has an authority problem. He's full of himself, and he hasn't been properly socialized into mathematical proof, so he doesn't notice where his arguments go wrong. Fortunately, he's writing in a subject area where he can't do much damage, I think.

Incidentally, there remain intellectually respectable viewpoints that don't "believe" in uncountable infinities. But this, I think, is not them.

Really, though, I think we should stop doing Crankfilter.

posted by grobstein at 8:11 AM on January 8, 2010 [3 favorites]

Man, I love a good crank. I was just disappointed he didn't disprove the Continuum Hypothesis in 25 words or less.

posted by el_lupino at 8:16 AM on January 8, 2010

indicting every human on Earth as Dumb, Educated Stupid and Evil -

posted by ersatz at 8:21 AM on January 8, 2010

*... He has absolutely no clue what the basic principles of math are, and what he does know is fragmented or incorrect. ...*

This explanation mostly satisfies me. I guess, in a way, I'm making a similar mistake to Gabriel himself in thinking that if he's bad at math, he must be bad at everything.

Still, as kmz pointed out, he seems to confuse a condition for infinite sets for the property of countability. It shocks me that someone could learn the syntax and grammar of a programming language but not be able to write down the definition of countability correctly.

posted by episteborg at 8:22 AM on January 8, 2010

Oh good. Time to break out the Crackpot Index again. Although I think we need some new entries on the index. Say: 5 points every time he calls Cantor's theory ridiculous. 5 points every time he calls Cantor an idiot. 10 points every time he refers to Cantor as having "followers" or that said followers are moronic. 20 points for straight-facedly putting a quote *by himself* at the top of the page.

Mark Chu-Carroll takes his argument down here and here.

posted by Electric Dragon at 8:33 AM on January 8, 2010 [4 favorites]

I now want to see the episode of the A-Team where BA pities "the fool Cantor" while Murdock freestyles on Spectral Theory in front of the Fields Medal award committee.

posted by fallingbadgers at 8:35 AM on January 8, 2010

John Gabriel thinks very highly of John Gabriel.

posted by HumanComplex at 8:40 AM on January 8, 2010

He could learn something from Raymond Smullyan who explained countable infinities very handily in a lecture I attended 20 years ago and presumably in this book. He posed a series of puzzles in which you were condemned to hell. Satan picks a number from a given set and you are allowed one guess a day. If you are correct, you are released. Will you get out of hell if the initial set is (a) natural numbers, (b) integers, (c) rationals, (d) reals, etc. Then he went on to what happens if the guess changes. For an example of this, read his explanation of Gödel's incompleteness theorem.

It was the polar opposite of what John Gabriel wrote in that Smullyan was clear, concise, genial, and unegotistical. It also came as no surprise that he was a student of Alonzo Church.

One of my classmates, after seeing Smullyan, came up with this Smullyanesque puzzle:

You are on an island of Knights and Knaves. Knights always tell the truth. Knaves never tell the truth. You encounter someone at a crossroads who is either a Knight or a Knave, but you don't know which. By asking one question, answerable by 'yes' or 'no', can you determine the shortest road to town? (Answer: "Hey, did you hear that they're giving out free beer in town?")

Smullyan's biography is here.

posted by plinth at 8:50 AM on January 8, 2010 [5 favorites]

*"But 1/3 and pi cannot be written out entirely."*

That this came out of a software developer's mouth is just shocking. Surely people with a background in CS know about other bases, yes? And surely it is clear to people who know about other bases that 1/3 can be represented in base three as a very finite-length 0.1, whereas the same trick cannot be pulled for pi. How odd that a software developer would be so attached to decimal.

posted by Westringia F. at 8:51 AM on January 8, 2010
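Westringia F.'s point can be checked mechanically. A minimal sketch (the helper `expand` is my own, hypothetical naming) that performs schoolbook long division to emit the fractional digits of p/q in any base:

```python
# Expand the fraction p/q (0 < p < q) in a given base by the schoolbook
# long-division step: multiply the remainder by the base, emit the integer
# part as the next digit, keep the new remainder.
def expand(p: int, q: int, base: int, ndigits: int) -> str:
    digits = []
    for _ in range(ndigits):
        p *= base
        digits.append(str(p // q))
        p %= q
        if p == 0:
            break  # remainder hit zero: the expansion terminates
    return "0." + "".join(digits)

print(expand(1, 3, 10, 8))  # -> 0.33333333  (repeats forever in base ten)
print(expand(1, 3, 3, 8))   # -> 0.1         (terminates immediately in base three)
```

Whether 1/3 "can be written out entirely" is thus a fact about the base, not about the number, which is Westringia F.'s objection in one line.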

Not defending this guy at all. Just wanted to note that there are huge swaths of programming where you rarely do any math more complicated than adding or subtracting one. It isn't the most cutting edge world for sure but many people make decent livings validating and shuttling data between forms and databases.

posted by Babblesort at 8:57 AM on January 8, 2010 [2 favorites]

Grobstein: any non-wikipedia suggested reads on Intuitionism? Was having my mind blown over breakfast this week by a math philosopher friend trying to explain this to me :-)

posted by honest knave at 9:00 AM on January 8, 2010

Agreed with babblesort that there's a whole lot of professional programming that can be done without much math skill. I did not mean to imply that this man was an incompetent programmer; I only meant that the fact that he's a programmer shouldn't be taken to imply that he's competent.

I wouldn't go so far as mpbx, who said that he's "probably a very good coder", both because of the vaguely related evidence we have about him in particular and because of my opinion that *most* coders are not "very good coders", but at the same time I don't think that the fact that he severely miscomprehends mathematics implies that he's definitely *not* a very good coder.

posted by Flunkie at 9:07 AM on January 8, 2010

C,WAA!

posted by Mental Wimp at 9:31 AM on January 8, 2010

Oh, and (from the home page):

*Writing is very difficult for me. One might say I am not a natural writer. Although there are thousands of ideas I would like to share, writing these down in a simple, comprehensible and interesting way is no easy task.*

FAIL!

posted by Mental Wimp at 9:34 AM on January 8, 2010

Oh, oh, and one more: I like how he calls Cantor "George".

posted by Mental Wimp at 9:39 AM on January 8, 2010

honest knave: http://plato.stanford.edu/entries/intuitionism/

I don't know if that's any better because I haven't read that particular article, but the Stanford Encyclopedia of Philosophy is a good source for logic information.

Intuitionism is an interesting phenomenon to me (context, I'm working on my PhD studying formal logic, so I run into this sort of thing every now and then). There's a lot of philosophical junk to be argued here about whether intuitionist logic is more "true" than classical logic. But for the large majority of mathematics, it doesn't really matter. Often, when someone resorts to truly classical means, they're proving a negative result, for which there isn't really a distinction between the two methods. There's a strong desire to write constructive proofs of existential claims for most mathematicians (even absent of intuitionist leanings), just because they're often more useful.

Most mathematicians and logicians I know don't really have a stand on the issue. They usually work classically, but don't give much thought to the distinction. The one real intuitionist I know (by "know" I mean that I've met more than once) has very strong opinions on the matter. The classical viewpoint is such a default that I've never run into anyone who subscribes very strongly to this as a philosophy. Personally, I lean slightly to the intuitionist side, but only because it seems more practical. I can't even imagine a physically applicable result that can be proven classically, but has been shown to be unprovable constructively. This is not to say that strictly classical results are useless. They point out places where certain techniques lead to useless existential results (e.g. the Borsuk-Ulam Theorem). All classical methods are directly usable within intuitionism when applied to negative claims. And most relevantly to me, when you're studying formal logical systems as mathematical objects themselves, it doesn't even matter whether or not they're "true" in any sense. In my line of work, I'm finding connections between formal logical systems and computational complexity classes, and here, the intuitionist/classical distinction (and all the stuff in between) is just one of many tools I have for finding/building new logical systems whose primary purpose is that proofs in those systems have a strong connection to certain computational algorithms.

What was the question again?

posted by ErWenn at 9:42 AM on January 8, 2010 [3 favorites]

*Still, as kmz pointed out, he seems to confuse a condition for infinite sets for the property of countability.*

The only way I can make Gabriel's writing comprehensible is to exchange some notion of "computability" for countability. It really seems like he's arguing that for a set to be countable, he needs to have a computer program that can output every element in that set in a finite amount of time.

*So Cantor came up with the ridiculous idea that rationals could be counted. Well, just stop and think: the "set" of rationals is infinite. So how can it be listed? Was the idiot referring to an imaginary list or an actual complete list? His idiot followers don't seem to know either.*

It seems like he expects diagonalization to be a *program* that eventually terminates rather than what it is: a sketch of a bijection from the natural numbers to S.

posted by kid ichorous at 9:42 AM on January 8, 2010
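For contrast with Gabriel's complaint that an infinite set "can't be listed": the standard diagonal walk of the p/q grid really is a program, and it reaches any given positive rational after finitely many steps. A sketch under my own naming (not Cantor's notation):

```python
from fractions import Fraction
from math import gcd

def rationals():
    """Walk the diagonals p + q = n of the p/q grid, skipping unreduced
    duplicates; every positive rational appears exactly once, at a finite
    position in the stream."""
    n = 2
    while True:
        for p in range(1, n):
            q = n - p
            if gcd(p, q) == 1:   # skip duplicates like 2/4
                yield Fraction(p, q)
        n += 1

gen = rationals()
first_six = [next(gen) for _ in range(6)]
assert first_six == [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1),
                     Fraction(1, 3), Fraction(3, 1), Fraction(1, 4)]
```

The generator never terminates, and doesn't need to: "countable" only requires that each element get a definite finite index, not that the whole listing finish.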

There's a certain "elegance" to his "proof". By starting at the decimal point and working outwards, he puts off actually dealing with any of the numbers he's trying to talk about indefinitely. He will generate an infinite list of rational numbers that (just like Cantor's list of rational numbers) will contain elements arbitrarily close to irrationals like pi, √2, *e*, but never actually contain any of them. The way it turns the property which makes these numbers difficult to understand to his advantage is clever, and I can't tell if he knows he's doing it (his trollish tone suggests: yes).

posted by aubilenon at 9:47 AM on January 8, 2010

This reminds me of when I was in college involved in an undergrad research program in Chemistry. My advisor was the Department Chair and one day, over lunch, we were talking about cranks and so forth. He laughed about how many letters the department received on a regular basis from laypeople in the community certain that they had uncovered a great new discovery.

"Not that it can't happen, theoretically, that someone might stumble upon a new theory while working as a clerk somewhere" he said. "It's certainly been known to happen. But from the letters we get, it seems pretty unlikely."

He showed me some of them. Many of them were from folks who had studied chemistry and had a basic understanding, but then just went off the rails. A very few postulated interesting ideas that had already been tested and proven right or wrong, which was cool, but most were just way, way out there.

The one that stuck in my mind was a typewritten letter, single-spaced with a fading ribbon, with no paragraph breaks. The author's thesis was that the reason some chemicals are poisonous were because the molecules were more "pointy" than others.

I have a sense of multifold pity for someone like that (and Timecube guy and this Gabriel dude). Not only is their math/science flawed on some level but there is something about their presentation, taken as a whole, that hints strongly at some degree of mental pathology.

Nevertheless, my sympathy didn't prevent me from using, for years, the response "because it's pointy?" as a conjecture in Socratic dialogue when discussing chemical mechanism in grad school.

posted by darkstar at 9:52 AM on January 8, 2010 [3 favorites]

Fair enough, babblesort. I'd have guessed some exposure to binary, but I can see how it may not be something one needs to think about frequently.

posted by Westringia F. at 10:03 AM on January 8, 2010

A little more evidence of teh crazee and maybe a hint of teh hate. In the link where he describes his alleged disproof of Cantor, he mentions: *The following link contains a few statements by George Cantor and John Von Neuman, both Jewish mathematicians:*

posted by Mental Wimp at 10:05 AM on January 8, 2010

Well, there is a connection there between computability and countability. You can look at the class of real numbers whose decimal (binary, whatever) expansion is computable (meaning that there's an algorithm that can generate the digits of the expansion). Every number you've ever heard of probably falls into this category. And the set of such numbers is countable. But not every countable set of real numbers fits this description.

You can talk about real numbers that don't fall into this category, but you can't really do much with them because you can't ever pin down much useful information about them because of their uncomputability. It's not an insensible point of view to claim that these numbers don't exist in any meaningful way, but it's not really insensible to claim the opposite either.

What our crackpot of the day seems to be doing here is misunderstanding the idea of a "list". By "list", Cantor means that there is (in some abstract, not necessarily constructible way) a bijection between the set and the natural numbers. That's what we mean by "countable". But we don't need to be so abstract and classical to do this. It's really, really easy to explicitly, via some program show how to enumerate the set of rational numbers. And if you don't like the idea of computable enumeration like this, you can even write a terminating algorithm that, if you ask it for the nth number in your set, will output that number. He seems to think that "countable" means finite, as in you can count all the numbers in the set and get an answer. Which is a sensible misinterpretation if all you have to go on is the word "countable" itself. But anyone who is willing to give another human being a moment of benefit of the doubt will quickly realize that's not what is meant by "countable".

You can get into an argument about whether "countable" should require some sort of *computable* method of counting, but in this case, that's not actually important. Since the diagonalization argument is an argument by contradiction, it doesn't care. If you believe that a countable set requires an algorithm for how to count it, then you assume that you have one for the real numbers (in order to disprove this), and then you use that algorithm to build a new algorithm which describes a new real number which can't get counted by your original algorithm. If you just treat things classically and define countability in terms of the existence of some abstract, not necessarily computable function from the natural numbers to your set, then the argument works the same way. You can use the existence of that function to prove the existence of a number that is missed by the function.

I guess I'm just saying that this guy seems to be having trouble at an even more basic level. (Oh, and that this is an example of one of those cases where a reasonable, intuitionist-style objection to nonconstructiveness doesn't actually change anything.)

posted by ErWenn at 10:10 AM on January 8, 2010 [2 favorites]
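ErWenn's description of the contradiction can be sketched directly. Assuming a hypothetical enumeration `f`, where `f(n)` returns the digit list of the n-th real in [0, 1), the diagonal construction produces digits that no `f(n)` matches (toy code, my own naming throughout):

```python
# Given ANY claimed enumeration f (f(n) = digit list of the n-th real in
# [0, 1)), build a number whose n-th digit differs from f(n)'s n-th digit,
# so it cannot appear anywhere in the enumeration.
def diagonal_digit(d: int) -> int:
    # any value different from d; sticking to 5/6 dodges 0.999... = 1 issues
    return 5 if d != 5 else 6

def missing_real(f, ndigits: int) -> str:
    """First ndigits of a number that f provably never outputs."""
    return "0." + "".join(str(diagonal_digit(f(n)[n])) for n in range(ndigits))

# Toy "enumeration" (hypothetical): the n-th real is 0.nnnn...
def toy(n: int) -> list:
    return [n % 10] * 64

print(missing_real(toy, 10))  # -> 0.5555565555
```

Note the program never needs to finish listing anything: for each n it only inspects one digit of one enumerated number, which is why demanding that the "list" terminate, as Gabriel does, misses the point.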

*Fair enough, babblesort. I'd have guessed some exposure to binary, but I can see how it may not be something one needs to think about frequently.*

posted by Westringia F

Exposure for sure. I'm a CS grad. Had calculus, stats, formal logic, etc. Had a final in assembly that was a block of hex that you had to interpret into code, execute (self-modifying code too, evil bastard prof), and write out the results in a new block of hex.

The fact is, though, once I started writing code in the business world virtually none of that had any direct application. Learning it all definitely has its uses. The reality is, though, that in tons of business software it just never comes up. Turns out the most important skills are negotiation, encapsulation, and adaptability.

posted by Babblesort at 10:22 AM on January 8, 2010

So there's such a thing as a calculus denier? I am completely boggled.

posted by chairface at 10:39 AM on January 8, 2010

As an undergraduate, I once got an unsolicited e-mail from someone who had reinvented calculus because he didn't like the idea of vectors. He organized his work into three "kingdoms" (so-called because of their massive importance to humanity), and replaced mathematical concepts with similar concepts that had fanciful names like "tornadoes". It was more than 100 pages long.

posted by Humanzee at 11:21 AM on January 8, 2010

There's a long discussion about this guy's "knols" over at the xkcd math forum, with several basic questions that he has left unanswered or refused to answer.

posted by mordacil at 11:24 AM on January 8, 2010 [1 favorite]

A little more evidence of teh crazee and maybe a hint of teh hate. In the link where he describes his alleged disproof of Cantor, he mentions:

*The following link contains a few statements by George Cantor and John Von Neuman, both Jewish mathematicians: Cantor was not only a fool, he was a religious fool. It is generally believed his parents were Jews who converted to Christianity (From the frying pan into the fire? It is hard to tell which religion is more ridiculous at times - Judaism or its daughter Christianity).*

posted by Flunkie at 11:27 AM on January 8, 2010

*I admit to finding the mere existence of crackpots like this a little terrifying.*

Same. And the reason I find stuff like this terrifying is that he doesn't know he's a crackpot. Which means that I could become a crackpot and not realize it. Which, even worse, means that I might be a crackpot *right now* but have no way of snapping out of it or even realizing it, which would at least tell me that I *should* snap out of it.

*He seems to think that "countable" means finite, as in you can count all the numbers in the set and get an answer.*

This is precisely his problem, which he point blank states right here:

*So Cantor came up with the ridiculous idea that rationals could be counted. Well, just stop and think: the "set" of rationals is infinite. So how can it be listed?*

But it is part and parcel of the basic math curriculum for anyone studying CS or even any kind of math beyond calculus to understand this basic point. Although even in my case, I'm pretty sure that my high school calculus teacher taught us to understand how something could be both infinite and countable.

posted by deanc at 11:35 AM on January 8, 2010 [2 favorites]

*It's not outsider math or a scientific breakthrough if it already exists.* --DU

Kronecker didn't think that Real numbers were countable, he just thought that irrational numbers were "fake". If you think about that, it kind of makes sense. I mean, for example, √2 might not be representable by a rational number, but what does √2 or π really mean? They are stand-ins for numbers that we never *really* know.

The problem with that thinking is that you have formulas like (√2)^{2} or e^{πi} that get us back to rational numbers.

**But** you should also be able to represent those numbers as polynomials over the rational field if you want to, using extension fields.

Technically real numbers are already an extension of the rational numbers, and imaginary numbers are an extension of real numbers. But you can also extend a field by taking "all the polynomials over that field mod p(x)" (technically, taking all the cosets of the ideal generated by a polynomial over that field).

To try to explain: There is an infinite number of integers, right, but if you take all the integers mod 8 you now only have 8 integers. You can do the same thing with polynomials.

So let's say that rather than 8 we use a prime number like 7. Now we have a new field. (Every nonzero number has an inverse: 4*2 = 8 and 8%7 = 1, so 4 and 2 are inverses in the field of integers mod seven.)
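A quick brute-force check of that claim (my own snippet, nothing deep): every nonzero residue mod the prime 7 has a multiplicative inverse, which is exactly what makes the integers mod 7 a field.

```python
p = 7
# For each nonzero a mod 7, find the b with (a * b) % 7 == 1.
inverses = {a: next(b for b in range(1, p) if (a * b) % p == 1)
            for a in range(1, p)}
print(inverses)  # -> {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
```

(Try p = 8 instead and the search fails at a = 2, since an even number times anything stays even mod 8 — which is exactly why 8 doesn't give a field.)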

Anyway, as you know, you can divide polynomials. So you can do the modulo with polynomials as well. So if you take all the polynomials mod x^{4} you should get all the polynomials of degree at most 3 (er, I think). Those new sets of polynomials form new algebraic rings.

And here's the interesting part. You can take any polynomial that doesn't have any zeros (like x^{2} + 1, whose zero would satisfy x^{2} = -1) and create a new ring that contains a zero of it using polynomials.
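As a sketch of that construction (my notation, not anything from the thread): work with polynomials over the rationals mod x^{2} + 1. Residues are pairs (a, b) standing for a + b·x, and reducing x^{2} to -1 makes "x" behave exactly like i, built from nothing but rational polynomials.

```python
from fractions import Fraction as F

def mul(u, v):
    """Multiply a + b*x and c + d*x in Q[x] mod x^2 + 1:
    (a + b*x)(c + d*x) = ac + (ad + bc)x + bd*x^2, with x^2 = -1."""
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

x = (F(0), F(1))   # the residue of the polynomial "x"
print(mul(x, x))   # -> (Fraction(-1, 1), Fraction(0, 1)): x^2 = -1
```

This is the usual multiplication rule for complex numbers, recovered purely from the "mod x^{2} + 1" reduction.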

So you don't *need* irrational numbers; you can do math without them if you really want.

But I think it was Gauss who proved that the set of complex numbers is *algebraically closed*, meaning that any algebraic extension of it is isomorphic to it. What this means is that any polynomial has a zero in the complex numbers. This is actually called The Fundamental Theorem of Algebra. (Well, Wikipedia says Gauss had a proof with a topological gap, which was later filled by someone else.)

And of course C contains the real numbers.

**But** you could theoretically do all your math *without* imaginary or irrational numbers (I think). But you would be stuck using polynomials rather than single numbers. And that would be a lot of work. (Polynomials can be infinitely long, just like irrationals.)

(and rational numbers are an extension of integers, so Kronecker could have stuck with things created from integers, if he'd wanted.)

Note: I may have screwed all of that up :P

I don't really think this crackpot compares to Kronecker.

posted by delmoi at 12:08 PM on January 8, 2010 [4 favorites]

Okay, I thought metafilter just crashed on me when I tried posting that. I did a little editing before trying to post the second time so the first version should go (I reloaded to check and make sure it hadn't been posted before trying again even!)

posted by delmoi at 12:12 PM on January 8, 2010

*but what does √2 or π really mean? They are stand-ins for numbers that we never really know.*

I guess you can hedge on what 'really know' means, but I feel like I *really know* what a unit square is, and it has a diagonal.

posted by Wolfdog at 12:21 PM on January 8, 2010 [1 favorite]

*I guess you can hedge on what 'really know' means, but I feel like I really know what a unit square is, and it has a diagonal.*

Right, but true squares don't exist in nature, just in people's imaginations. And the same thing is true of circles. And beyond that, you can create an *algebraic extension* of the rational numbers which contains a square root of two but is still (I think) countable. So Kronecker would have no trouble with unit squares and their diagonals, or figuring the circumference of a circle.

I don't really know all that much about Kronecker, just that bit from my Abstract Algebra text that brought up that quote in the context of talking about algebraic extensions and whatnot. Also the Kronecker delta function (which is like an infinite identity matrix) is pretty useful.

Now, if I'm remembering this right, you could also easily have a vector space where the first component is a rational number that gets multiplied by a constant √2 and the second gets multiplied by 1. By including those directly into your set, you *basically* are getting the benefits of having a set of numbers with √2 that are still countable. This is a little easier than the cosets-of-polynomials method, and in fact it should be isomorphic to a ring you come up with by doing that.
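That pairs construction is easy to write down concretely (again, my own sketch): represent a + b√2 as the rational pair (a, b). Addition is componentwise, and multiplication uses r^{2} = 2, giving a countable ring that genuinely contains a square root of 2.

```python
from fractions import Fraction as F

def add(u, v):
    """Componentwise: (a + b*r) + (c + d*r) = (a+c) + (b+d)*r."""
    return (u[0] + v[0], u[1] + v[1])

def mul(u, v):
    """(a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, using r^2 = 2."""
    a, b = u
    c, d = v
    return (a * c + 2 * b * d, a * d + b * c)

r2 = (F(0), F(1))   # stands for sqrt(2)
print(mul(r2, r2))  # -> (Fraction(2, 1), Fraction(0, 1)): "sqrt(2)" squared is 2
```

Since the pairs are just pairs of rationals, the whole ring is countable, exactly as the comment says.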

So you've got an algebraic ring that has the square root of two and is still countable. The problem is that it's not *algebraically closed*. You could have a polynomial that has a root that's not in your nice little vector space. So you need to extend again, and potentially forever.

The nice thing about **C** is that it *is algebraically closed*, so you never have to extend it. But, the point is, if you don't want to use irrational numbers, you don't have to. It just creates a lot of work.

(That's just my understanding, which may be off!)

posted by delmoi at 12:55 PM on January 8, 2010

Okay, but where's his method for dividing by zero? All the really *good* crackpots start there. Points for attacking targets most people won't have ever heard of, though -- Cantor's had it too easy for too long, and going after Einstein is *so* overdone.

posted by Limiter at 2:05 PM on January 8, 2010

Delmoi: *you could theoretically do all your math without imaginary or irrational numbers (I think). But you would be stuck using polynomials rather than single numbers. And that would be a lot of work. Polynomials can be infinitely long, just like irrationals*

If you didn't need transcendental numbers, you wouldn't need any infinite polynomials either.

posted by Obscure Reference at 2:18 PM on January 8, 2010

Also, as a crank myself, I need to point out that John Gabriel does, in fact, understand number bases other than 10. In his anti-Cantor rant, he says:

*Cantor failed to realize there is nothing special about 1/3. If one were to use a base that is a multiple of 3, then it would be possible to write the representation of 1/3 in a finite number of steps and associate a natural number with it*

So, Westringia F., please be fair to the cranks by attacking them for what they are actually asserting.

posted by Obscure Reference at 2:32 PM on January 8, 2010

Well, to be fair you can represent pi exactly if you choose the right base for your number system:

Base pi:

pi= 10

1= 1

2= 2

2.5= 2.11211202102012...

3= 3

4= 10.22012202112111...

100= 302.02012213001...

posted by hexatron at 3:21 PM on January 8, 2010

Cantor's diagonalization argument, of course, is not wrong -- in fact, Cantor's diagonalization argument is one of the most beautiful and elegant proofs I know, and it's not overstating the case to say that his argument is one of the founding elements of modern (pure) mathematics.

It's always been perplexing to me that this has been so deeply controversial, and frankly I consider people like this asshole to be the global-warming deniers of mathematics (okay, maybe not the best analogy, since their influence is much less pernicious); the sad thing about this is that a *lot* of laypeople still think that Cantor's diagonalization argument is wrong (see the comments on the wikipedia article, as well as this related article), so this isn't just a case of isolated crankery, but an anti-mathematical movement that for mystifying reasons involves a large number of people. For that reason, I think it's worth shining a light on this sort of craziness.

As for this guy's argument, after wiping away the flying spittle ("The diagonal "argument" is not worthy of being called an argument because it is devoid of any logic. Cantor's ideas are the most ridiculous of any mathematics professor who ever lived." -- huh??), we see that he instantly makes a basic mistake: he says that "by Cantor's definition, a set is countable if a bijection exists from a subset to itself." This is false -- a set is *infinite* if and only if a bijection exists between a proper subset of the set and itself; it is *countable* if and only if a bijection exists with the integers. So in the *very first line* of his argument, he immediately exposes himself as a mathematical idiot, although this doesn't stop him from hurling invective at "Cantor's moron followers."

posted by Frobenius Twist at 4:01 PM on January 8, 2010 [3 favorites]

*Well, to be fair you can represent pi exactly if you choose the right base for your number system:*

err, what? could you explain what you mean by this?

posted by episteborg at 1:45 AM on January 9, 2010

pi represents a relationship, not a group, so you can't make a traditional base system out of it. If he has a new idea for how to make bases, he has to explain it further.

posted by mdn at 11:14 AM on January 9, 2010

I think it's important (for general discussion of why some people don't really like the entire uncountable set of real numbers, not for discussion of why this guy is a crackpot) to distinguish between the need for an extension of the set of rational numbers to a larger set and the need for an extension of the rational numbers to an uncountable set.

delmoi said of irrational numbers like the square root of two that "they are stand-ins for numbers that we never really know." But this idea of "really knowing" is not terribly well defined. The rational numbers are all we can really know if you choose to believe that "really knowing" means "finitely expressing using (say) addition, multiplication, subtraction, and division on some natural numbers". (This is just one arbitrary way, of course. You could cut it down by just allowing numbers gotten by subtraction and division starting with just 1. Or you could get the rational numbers by allowing for finite representations in some whole number base.) The finite expressibility part is a pretty reasonable expectation for "really know", but the rest of those constraints are totally arbitrary and based essentially on what you feel comfortable with. Those of us raised in the ages of calculators or slide rules are very comfortable with decimal expansions. If we have a little bit of math aptitude, we're also often comfortable with repeating decimal expansions as a way of "really knowing" a number. But why does a very simple, but non-repeating pattern (e.g. .101001000100001...) seem beyond really knowing?
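That "simple but non-repeating" pattern really is finitely describable — here is the entire description as a few lines of Python (my sketch): a 1 followed by n zeros, for n = 1, 2, 3, ...

```python
from itertools import count, islice

def digits():
    """Digits of .101001000100001...: a 1, then n zeros, for n = 1, 2, 3, ..."""
    for n in count(1):
        yield 1
        for _ in range(n):
            yield 0

print(''.join(map(str, islice(digits(), 15))))  # -> 101001000100001
```

The expansion never repeats (the gaps between 1s keep growing), yet any digit you ask for is computable in finite time.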

Even if we don't approach things from a decimal expansion point-of-view, why should we limit ourselves to adding, subtracting, multiplying, and dividing? Why not allow ourselves exponentiation, roots, and logarithms as well? Why not open it further to allow ourselves to describe numbers by the polynomials that they are roots of? (This gives you the algebraic numbers, which are algebraically closed, by the way. You don't need to go all the way to the full complex numbers to get *algebraic* closure.) Or, if you're more physically minded, proportions that can be achieved by simple compass-and-straightedge constructions? Or go further and allow constructions using simple paper-folding techniques? Why not allow anything describable using limits, derivatives, integrals, and/or summation notation? Numbers described by these methods have finite, comprehensible descriptions. We can compare them to each other (e.g. is one greater than or equal to another?), do arithmetic with them, carry out any of the above operations on them, convert them from one method of description to another, approximate them as accurately as I like (using decimal expansions, *n*-ary expansions, continued fractions, etc.), and even use them as measurements of real physical properties. What else is there to "really knowing"?

These numbers contain just about anything you'd ever encounter in any branch of mathematics or science except for those branches specifically devoted to poking at the limits of representation. And yet this set of numbers is still countable. Using this counting method, you could describe a new number that is not in your set of numbers using Cantor's diagonalization argument. As long as you can describe a countable set of numbers, you can make a finite description of a number that is not in your countable set. I can see two ways of reconciling this.

A) You can just change the set of numbers you work with to be big enough for whatever your current purposes are. If it turns out your set wasn't big enough, you may have to reprove all your universal claims to increase it. This isn't actually that big of a deal because you can pick a really, really enormous countable set of numbers where you generally have to be intentionally obtuse (a la Cantor's diagonalization) to get outside of it.

B) You can resign yourself to an uncountable set of numbers that includes numbers that you can never "really know" (e.g. a number defined by way of the halting problem or by way of a Borsuk-Ulam type "paradox"). This is the classical mathematical approach. Anything you can prove about these numbers applies to any number you could ever possibly want to use. But you can also prove the existence of numbers you can't even remotely begin to really know in any useful way.

posted by ErWenn at 4:48 PM on January 9, 2010


The idea of 'really knowing' is not only poorly defined, it's also completely irrelevant to mathematics.

Math (and by extension, most of theoretical physics) proceeds by hypothesis, theorem and proof. If we restricted ourselves to what we 'really knew', progress in both math and physics would have ended some time in the early 20th century. Since then, we have learned in many cases not to attempt to conceptualize what the math is describing, because it is so strange and removed from our experiences as humans. Feynman is particularly good on this. You just do the math and accept the consequences, however bizarre.

Scientists who refuse to accept the consequences of the math usually turn out to be wrong, and that famously included Einstein.

posted by unSane at 7:55 PM on January 9, 2010


unSane, I'd say that your reading of "really knowing" is pretty far off from what was being referred to earlier. I agree that it's an ill-defined concept, but I don't think anyone who's used that phrase here meant it in the sense that you seem to be implying (i.e. we only really know things that fit into our current conceptual models).

As a mathematician and logician, I can say with confidence that if we all stopped attempting to conceptualize what our math described, progress would grind to a near stand-still. When the mathematics doesn't seem to match your conceptualization, the first thing you do is you double check your mathematics. The second thing you do is triple check your mathematics. Then instead of abandoning your conceptualization, you start to entertain a new conceptualization, one that hopefully serves the problem better. Then you tell other people about it, and if they don't point out a flaw in your reasoning, you start working with that new conceptualization to give you a hint on what aspects of your mathematics are worth investigating.

Now in *science*, it should never be the *math* that forces you to accept truth, but always the, er, science. The math is just a model until it's verified. Scientists refuse to accept the consequences of the math all the time (i.e. any time it fails to predict the actual data), and usually they're right. Of course you don't hear about this all the time because it's just part of the natural progression of a theory. Not every downed hypothesis makes big news.

posted by ErWenn at 9:13 PM on January 9, 2010

I was obviously referring to scientists who refuse to accept the consequences of the math because it doesn't fit with their mental model of how the universe works, as opposed to contradicting experimental results. I kind of assumed people would figure that out. Sadly no. So to be explicit, I trust the math over their mental models.

I'm at a loss as to what 'really know' means upthread, unless it means 'can't write down exactly in a decimal representation', which is weak sauce. I gave it the benefit of the doubt and assumed it meant 'can't intuit'. Maybe you can enlighten me as to the technical definition.

posted by unSane at 9:36 PM on January 9, 2010


Base pi guy here.

Here's the short lesson on 'base' representations:

A number's representation in base x is the sum of (powers of x times integers) with integers less than x. (I omit here discussion of bases less than or equal to 1.)

This is usually written as just a list of integers, with a dot after the zeroth power of x.

x does not need to be an integer!

Then 4 = 10.22012202112111... means 4 = 1*pi^{1} + 0*pi^{0} + 2*pi^{-1} + 2*pi^{-2} + 0*pi^{-3} + 1*pi^{-4} + 2*pi^{-5} + 2*pi^{-6} + ....

Of course, most integers base pi are non-repeating infinite decimals, which is not handy for many applications. But it does make the circumference of a circle of radius *r* exactly 20*r*.

unSane seems to confuse how math is presented with how it is created. It is created by knowing a subject well enough that you can guess at (or intuit) new and interesting results.

Proofs come afterward. (Or disproofs; for new stuff, you work on the proof for a while, and if you don't get it, you begin to lose faith, and work on a disproof (or counterexample). When that gets too dispiriting, it's back to the proof, & etc.)

posted by hexatron at 9:34 AM on January 10, 2010
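hexatron's recipe can be run mechanically with a greedy digit-extraction loop (my sketch; floating point limits it to a dozen or so trustworthy digits):

```python
import math

def to_base(x, base, ndigits):
    """Greedy expansion of x >= 0 in a (possibly irrational) base > 1:
    peel off the largest multiple of each power of the base in turn.
    Returns the integer-part digits and ndigits fractional digits."""
    k = 0
    while base ** (k + 1) <= x:
        k += 1  # highest power of the base needed
    int_part, frac_part = [], []
    for i in range(k, -ndigits - 1, -1):
        d = int(x / base ** i)
        (int_part if i >= 0 else frac_part).append(d)
        x -= d * base ** i
    return int_part, frac_part

print(to_base(4.0, math.pi, 8))
# -> ([1, 0], [2, 2, 0, 1, 2, 2, 0, 2]), i.e. 4 = 10.22012202... in base pi
```

This reproduces the digits quoted above, and `to_base(math.pi, math.pi, 4)` gives `([1, 0], [0, 0, 0, 0])` — pi really is "10" in base pi.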

Perhaps I wasn't clear. I didn't think you were referring to the case of math contradicting experimental result. My claim was that the situation you were referring to where a scientist inappropriately refuses to accept the consequences of the math is not a common occurrence for the following reason. If the consequence of the math is a physical prediction, and that prediction feels wrong to a scientist, they typically run an experiment to test the prediction. If this verifies the counterintuitive mathematics, then the scientist typically accepts that their intuition was wrong and tries to alter their mental model as best they can. If it verifies the scientist's intuition, then they try to alter the mathematical model to better fit those results (and indirectly, the original intuition). I have a hard time imagining a situation where the math contradicts a scientist's mental model and they don't move to actual data to try and resolve the conflict. For science, I trust the data above either the mathematical or intuitive mental models. The data tells us whether the mathematical model works, and when it doesn't, the intuitive mental models are what give us the insight to find better mathematical models.

I did not mean to imply that there was a technical definition of "really know". And there are contexts in which "can't intuit" would be a very reasonable interpretation of the phrase. I do think that "can't write down exactly in a decimal representation" isn't actually far off from what some people would mean. In those exact terms, it is, as you say, weak sauce. It's maybe not as weak as it seems at first blush, however. For many people, rational numbers are easy to get a hold of, partly because of their easy to grasp decimal expansions, but also because they are easy to represent in other formats as well (e.g. as simple ratios of integers). When most people work with numbers (e.g. measuring, performing arithmetic, comparing, writing in a standard form, etc.), they almost always resort to either a decimal (or binary or hexadecimal or whatever) expansion or a fractional representation. They don't feel like they really know a number unless they can work with it in this way. That's why something like the square root of two doesn't feel like a number that they really know. To me, I'm perfectly happy with √2 as a concept in and of itself. I can do arithmetic with it, compare it to other numbers (although I admit that switching to a decimal approximation is a very natural way to do that), use it as a measurement, even convert numbers expressed with it into a standard form (though that standard form varies, depending on context (e.g. a+b√2 if I'm working in **Z(√2)**)).

What I've been trying to do is posit a reasonable informal definition of "really know" to encapsulate all this. I can really know numbers that I can actually work with. I can work with many irrational numbers (all the algebraic numbers, for example). I can work with many transcendental numbers (e.g., π or .1010010001...). So I feel like I really know all these numbers. How this fits into everything else that was being discussed, is that if you accept the existence of the full, uncountable set of real numbers, you are forced to accept the existence of numbers that you cannot really know (in this sense).

Philosophically, this bugs some people. One of my original points was that it's fairly reasonable to act like these unknowable real numbers don't exist. Someone who does that can still accept the vast majority of results proven by someone who does believe that they exist (because the results that they wouldn't believe tend to be about the existence of things that are unknowable (in my sense) and thus those results aren't very useful from a physical standpoint). I also think that it's also quite reasonable to take the view that these unknowable numbers do exist. Someone who does this can still accept all of the results of someone who doesn't.

posted by ErWenn at 9:34 AM on January 10, 2010
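[Editor's note: ErWenn's point about "working with" √2 exactly can be made concrete. Elements of the ring of numbers of the form a + b√2 (a, b integers) can be stored as integer pairs with exact addition and multiplication, so no decimal approximation is ever needed. The following Python sketch is illustrative only; the class name and methods are not from the thread.]

```python
class Zsqrt2:
    """Exact arithmetic with numbers of the form a + b*sqrt(2), a and b integers."""

    def __init__(self, a, b=0):
        self.a, self.b = a, b

    def __add__(self, other):
        return Zsqrt2(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
        return Zsqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"


sqrt2 = Zsqrt2(0, 1)
print(sqrt2 * sqrt2)  # squares to exactly 2, with no approximation
```

This is the "standard form a+b√2" ErWenn mentions, realized as a data structure: the arithmetic is exact precisely because the representation never leaves integer pairs.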

Hexatron, I think your "base pi" doesn't work for the following reason: the distance between 0 and 1 (or 1 and 2, 2 and 3, etc) is NOT the same as the distance between 3 (the last integer you represent) and 10 (pi). This wonkifies the math tremendously (and your set 1,2,3,10,11,12... is not closed under addition). Making "1" represent pi/4 doesn't help, since if you count forward (adding pi/4 each time and allowing yourself only the digits 0,1,2,3, similar to base 4) you'll find that "100" works out to 4pi (4^2 pi/4), not pi^2 as it should in "base pi."

Unless I'm missing something -- is there an algebraist in the house? -- there's a pretty fundamental problem with using transcendental numbers as the base for a polynomial representation of numbers.

posted by Westringia F. at 8:28 AM on January 11, 2010
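[Editor's note: Westringia F.'s arithmetic can be checked numerically. Counting in steps of π/4 with digits 0-3 is just base 4 with a scale factor, so "100" comes out to 4²·(π/4) = 4π, whereas a genuine base-π positional system would give π². The helper name `digits_value` below is made up for illustration.]

```python
import math

def digits_value(digits, base, unit=1):
    """Value of a digit sequence read positionally in `base`,
    where each counting step is worth `unit`."""
    total = 0
    for d in digits:
        total = total * base + d
    return total * unit

# "100" counted in quarters of pi with digits 0..3: base 4 scaled by pi/4
quarter_pi = digits_value([1, 0, 0], base=4, unit=math.pi / 4)  # equals 4*pi

# "100" in a true base-pi positional system
base_pi = digits_value([1, 0, 0], base=math.pi)  # equals pi**2

print(quarter_pi, base_pi)  # the two values differ, as the comment argues
```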


This thread has been archived and is closed to new comments

posted by DU at 7:01 AM on January 8, 2010 [8 favorites]