Understanding e to the pi i, reprise
March 14, 2017 2:34 AM

Euler's formula with introductory group theory [slyt] - "How some perspectives from group theory shed light on a way the formula e^(pi i) = -1 can make intuitive sense."
posted by kliuless (18 comments total) 34 users marked this as a favorite
 
I remember feigning amazement when my high school calc teacher first explained e^pi*i = -1. I was already familiar with it, but felt obligated to ham it up a bit because everyone else in the class just had a bored, glazed look in their eyes. Poor teacher looked like he was going to go home and eat a bullet if nobody showed an interest in that formula.
posted by ryanrs at 4:53 AM on March 14, 2017 [10 favorites]


This is your obligatory reminder that e^(tau i) = 1.

(Happy half-tau day to those of you who use conventional USA date notation.)
posted by NMcCoy at 5:04 AM on March 14, 2017 [12 favorites]


This is your obligatory reminder that e^(tau i) = 1.

Flagged as taudious derail!
posted by Huffy Puffy at 5:17 AM on March 14, 2017 [6 favorites]


One thing I like about e is, no one makes a big stupid shibbolethy thing out of February 71st
posted by thelonius at 6:20 AM on March 14, 2017 [20 favorites]


HERETICS! Everyone knows it's e^pi*j = -1

(If you're an EE, anyway.)
posted by ZenMasterThis at 6:56 AM on March 14, 2017 [4 favorites]


One thing I like about e is, no one makes a big stupid shibbolethy thing out of February 71st

Oh yeah? Just wait until the year 8281.
posted by Obscure Reference at 6:56 AM on March 14, 2017 [5 favorites]


IIRC our proof route to e^(πi) = -1 was via Taylor series expansion & the connection with sin + cos. Which I do remember being quite surprised by at the time, probably because the rotational nature of exponentiation by i hadn't really been made clear to me at that point.

Still not sure whether this or Maxwell's equations are my favourite "will fit on a T-shirt" bit of mathematics though.
posted by pharm at 7:05 AM on March 14, 2017
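
For anyone who wants the route pharm describes spelled out: substitute ix into the exponential series e^x = 1 + x + x^2/2! + ..., and the even powers of i collect into the cosine series while the odd powers collect into i times the sine series, so e^(ix) = cos x + i sin x; at x = π that's -1. Here is a minimal numerical check in Python (the helper name exp_series is mine, made up for illustration):

  import math

  def exp_series(z, n_terms=30):
      # Partial sum of 1 + z + z^2/2! + ... (converges for all complex z)
      return sum(z**n / math.factorial(n) for n in range(n_terms))

  z = 1j * math.pi
  print(exp_series(z))  # ~(-1+0j): even terms sum to cos(pi), odd terms to i*sin(pi)
  print(complex(math.cos(math.pi), math.sin(math.pi)))  # same thing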


I like the fact that i^i happens to be a real number (e^(-π/2))

Interesting corollary: This is your obligatory reminder that e^(tau i) = 1. ==> ln (both sides) ==> tau*i is zero ==> tau is zero.
posted by kurumi at 8:23 AM on March 14, 2017
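
kurumi's i^i fact is easy to check numerically. A quick Python sketch, using Python's 1j notation for the imaginary unit and taking the principal value (i^i is multivalued too):

  import math

  # Principal value: i^i = e^(i * ln i) = e^(i * (i*pi/2)) = e^(-pi/2)
  print((1j ** 1j).real)         # 0.20787957635076193
  print(math.exp(-math.pi / 2))  # matches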


It was probably just a joke, but for those confused by kurumi's comment... that's not how logs of complex numbers work.
posted by mr_roboto at 9:32 AM on March 14, 2017 [1 favorite]


IIRC our proof route to e^(πi) = -1 was via Taylor series expansion & the connection with sin + cos.

I recently began re-learning basic signals & systems (from a variety of sources) for fun, having studied it 30+ years ago and enjoyed it. I compile summaries for myself based on what I read.

Here's an excerpt regarding the origins and combinations of series expansions that gets you to Euler's Formula ...
posted by ZenMasterThis at 10:24 AM on March 14, 2017 [2 favorites]


The Taylor series has always blown my mind. I just don't see what the relation between an infinite series of terms and, say, the sine function is. How did they figure this out?
posted by thelonius at 10:36 AM on March 14, 2017 [1 favorite]


Cringing as I remember the bit in The Imitation Game when Keira Knightley, playing a mathematician, pronounces "Euler" as "you-ler" not "oil-er".
posted by w0mbat at 10:56 AM on March 14, 2017 [1 favorite]


How did they figure this out?

Integration. The trick is that the integral of the first derivative of f(x) is f(x) (up to a constant). The integral of f''(x) is f'(x), and so on. If you nest all of the integrals up to that of the (n+1)th derivative, you can show that f(x) expands out to the Taylor series with a remainder term that gets arbitrarily small as x -> x0 and n -> infinity.
posted by mr_roboto at 12:04 PM on March 14, 2017 [1 favorite]
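
You can watch that remainder shrink numerically; this Python sketch shows the convergence rather than the nested-integral argument itself, and assumes the expansion point x0 = 0 (the function name is mine):

  import math

  def sin_partial(x, n_terms):
      # Partial Maclaurin sum: x - x^3/3! + x^5/5! - ...
      return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
                 for k in range(n_terms))

  x = 2.0
  for n in (1, 2, 4, 8):
      print(n, abs(sin_partial(x, n) - math.sin(x)))
  # The gap to math.sin falls off factorially as terms are added.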


One approach to complex numbers I've always liked is to think of them as 2x2 matrices. If you identify the identity matrix
  ( 1   0 )
  ( 0   1 )
with the real number 1 and the matrix
  ( 0   1 )
  (-1   0 )
with the imaginary unit i, then the algebra generated from these by the usual matrix addition and multiplication is the same as (isomorphic to, in mathematical terminology) the algebra of complex numbers.

The second row of the matrix is determined by the first, so we can think of the first row as specifying a point in a plane. If you define the usual dot product on the first row then it's plain to see that if you multiply anything by i, the result is equal in magnitude to what you started with, but perpendicular--laying bare the fact that multiplication by i is the same thing as rotation by π/2. Then, as long as you believe that calculus works as you'd expect, you can see that the function f(t) = e^(it) has to trace out a unit circle: when t = 0 you have f(0) = 1, and the derivative f'(t) = ie^(it) is always equal to 1 in magnitude but perpendicular to the vector from the origin to the value f(t). Without prior knowledge of π it's not immediately obvious how long it takes to trace out the circle, but you can always follow Ahlfors' classic text on complex analysis and define π to be half the period of the function f(t) = e^(it)*. The direction, at least, can be picked out by evaluating f'(0).

The Taylor series has always blown my mind. I just don't see what the relation between an infinite series of terms and, say, the sine function is. How did they figure this out?

The basic idea isn't so strange: approximate a function by a polynomial of degree n by matching the value and first n derivatives at a point. Sure, why not? What's more surprising to me is that it works so well--that there's a class of functions that are determined globally (even just on an interval) by their value and derivatives at a single point, and that this class includes so many useful functions. (It's not as surprising in the context of unique solutions to initial value problems, but I think Taylor, Maclaurin, etc. predated the standard existence and uniqueness theorems.)

*I have always found this to be a delightfully audacious definition
posted by egregious theorem at 1:54 PM on March 14, 2017 [4 favorites]
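
A quick numerical companion to the matrix picture above, sketched with NumPy and SciPy (my choice of tools, not anything in the comment): J squares to minus the identity, and the matrix exponential e^(πJ) lands on minus the identity, the matrix version of Euler's identity.

  import numpy as np
  from scipy.linalg import expm  # matrix exponential

  I2 = np.eye(2)
  J = np.array([[0.0, 1.0],
                [-1.0, 0.0]])    # the matrix standing in for i

  print(np.allclose(J @ J, -I2))        # True: J^2 = -1
  print(np.round(expm(np.pi * J), 12))  # ~ -I2: the matrix form of e^(pi i) = -1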


Note that if you're looking at the Taylor series for F(x) around x = 0 (the Maclaurin series, that is), the coefficient of the nth derivative of F at x = 0 is simply the nth term in the series for e^x, which means that you can look at your function F(x) in that neighborhood of 0 as the dot product of two vectors, one where the nth entry in the vector is the (fixed) numerical value of the nth derivative of F(x) at 0, and another where the nth entry in the vector is the nth term in the series for e^x (which is a variable expressed in terms of integer powers of x, and except for the first, varies as x varies).

Which I think kind of makes e^x the mother of all analytic functions of x, since each such function F(x) can be represented as a weighted sum of terms of the series for e^x, where the weight of the nth term is merely the numerical value of the nth derivative of F(x) at a point where you chose to set x equal to 0.
posted by jamjam at 4:35 PM on March 14, 2017 [1 favorite]
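
jamjam's dot-product picture can be made concrete. A Python sketch (the function name and the choice of sine are mine): pair the derivatives of F at 0 with the terms x^n/n! from the series for e^x and sum.

  import math

  def maclaurin_dot(derivs_at_0, x):
      # Dot product of (F(0), F'(0), F''(0), ...) with (1, x, x^2/2!, ...),
      # the latter being the terms of the series for e^x
      return sum(d * x**n / math.factorial(n)
                 for n, d in enumerate(derivs_at_0))

  # Derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
  print(maclaurin_dot([0, 1, 0, -1] * 4, 1.0))  # ~0.84147098... = sin(1)
  print(math.sin(1.0))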


w0mbat: They defended that pronunciation, saying that it was the convention in England at that time.
posted by Obscure Reference at 5:14 PM on March 14, 2017 [1 favorite]


Everyone knows it's e^pi*j = -1

(If you're an EE, anyway.)


It's my personal belief that the EE's j is actually -i but nobody's noticed.
posted by flabdablet at 9:52 PM on March 14, 2017 [2 favorites]


It was probably just a joke, but for those confused by kurumi's comment... that's not how logs of complex numbers work.

Just in case anyone needs this explaining: ln on complex numbers is multivalued, so the same logarithm also proves that τi = 2πi. This ought to be unsurprising :)
posted by pharm at 4:13 AM on March 15, 2017
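
In floating point you can watch the principal branch do exactly what pharm describes. A small sketch with Python's cmath, which always returns the principal value of the logarithm:

  import cmath, math

  z = cmath.exp(2j * math.pi)  # e^(tau i), i.e. 1 up to rounding
  print(z)                     # ~(1+0j)
  print(cmath.log(z))          # ~0j: the principal value, not tau*i
  # log 1 is really the set {2*pi*i*k for integer k}; tau*i is the k = 1 value.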

