Cyclical reasoning
January 11, 2019 6:06 AM

 
Still haven't gotten past that killer gif opener
posted by stinkfoot at 6:40 AM on January 11 [6 favorites]


Very cool. I saw this video recently which uses a quite different approach to explain the Fourier transform.
posted by exogenous at 6:43 AM on January 11 [2 favorites]


> Still haven't gotten past that killer gif opener

Meh. How good can a gi....... *woah*.
posted by Leon at 6:48 AM on January 11 [10 favorites]


And the last screen shows the power of cursive.
posted by Nanukthedog at 6:57 AM on January 11


I'm curious what percentage of users drew a dick in the "draw your own fourier transform thing" box. I definitely didn't, but I'm sure many users did.
posted by stinkfoot at 6:59 AM on January 11 [5 favorites]


My first job after Uni I had to implement an FFT algorithm. I wrote it in C, but it wasn’t fast enough, so I rewrote it in 80387 assembly. I wish I still had the code. Ran like the blazes. Interesting stuff - the visual is cool.
posted by parki at 7:05 AM on January 11 [2 favorites]


I'm curious what percentage of users drew a dick in the "draw your own fourier transform thing" box.

I tried to draw Dickbutt. Not sorry.
posted by thelonius at 7:16 AM on January 11 [2 favorites]


Poor Laplace gets no love
posted by GuyZero at 7:34 AM on January 11 [6 favorites]


@GuyZero: If he wanted to be mentioned first he should have changed his name to Lawin.
posted by ptfe at 8:31 AM on January 11 [4 favorites]


Wow, this is wonderful. Bookmarked for future teaching needs. Thanks for posting it!
posted by Dashy at 8:36 AM on January 11 [1 favorite]


This wins the internet.
You can shut the web down now, this is what it was for.
posted by signal at 9:05 AM on January 11 [2 favorites]


This is fantastic!
posted by Tell Me No Lies at 9:18 AM on January 11


@GuyZero: Poor Laplace indeed.

I'm just enough of a mathematician to have the typical simplified intro to Fourier series annoy the heck out of me. Fourier invented the technique to solve a problem about heat transfer in a metal plate. Nothing to do with waves of any kind.
posted by SemiSalt at 9:22 AM on January 11 [1 favorite]


That was pretty cool (and I didn’t realize that JPEGs use it) but still doesn’t answer the question I always have: how does one actually derive the Fourier transformation of a given signal? For any random squiggly line what would I do to determine its component sine waves?
posted by TedW at 9:52 AM on January 11 [1 favorite]


@GuyZero: If he wanted to be mentioned first he should have changed his name to Lawin.

I had to read this like 10 times, but I got it.
posted by GuyZero at 10:06 AM on January 11 [1 favorite]


I thought this was a true delight, but I still don't understand the drawing bit. Does each consecutive circle represent a smaller wave? I get that.... but how do you get it to draw only on this bit of the line and not that bit? It just seems.... monumentally coincidental.

I wonder who the first person was who figured out the circles drawing bit?
posted by rebent at 10:42 AM on January 11


how does one actually derive the Fourier transformation of a given signal?

I'm up against or beyond the limits of my knowledge but I think this is often done with a Fast Fourier Transform.
posted by exogenous at 10:49 AM on January 11 [2 favorites]


I had to read this like 10 times, but I got it.

(Laplace would have been first with a better Fourier.)
(Fourier is more of a square.)
(Laplace was good, but he wasn't Legendrey.)

Unfortunately for all involved I have a whole...uh...stable of these jokes.
posted by ptfe at 11:26 AM on January 11 [2 favorites]


Cyclical reasoning

See "Reasoning, cyclical".
posted by Greg_Ace at 11:33 AM on January 11 [2 favorites]


TedW: aside from the FFT code side of things have you checked out the video exogenous linked? (It's also linked by the site in the Further Resources section at the end of the page.)

Those scrolling-controlled animations were a total surprise and amazement.

ptfe made me snortle for a bit. Yay math jokes.
posted by zengargoyle at 12:11 PM on January 11


I'm up against or beyond the limits of my knowledge but I think this is often done with a Fast Fourier Transform.

In the digital domain you're getting a DFT (discrete Fourier Transform) and you do that with a version of the FFT algorithm. A lot of DSP actually uses related things like the Discrete Cosine Transform but I only barely know this stuff anyway.
posted by atoxyl at 1:07 PM on January 11
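A minimal sketch of what the DFT-via-FFT pipeline looks like in practice, in Python (assuming NumPy; the 3 Hz / 7 Hz test signal and sample rate are made up for illustration):

```python
import numpy as np

# Sample one second of a signal that mixes a 3 Hz and a 7 Hz sine wave.
fs = 64                       # samples per second (illustrative)
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

# The DFT (computed with the FFT algorithm) gives one complex
# coefficient per frequency bin; its magnitude says how much of
# that frequency is present in the signal.
spectrum = np.fft.fft(signal)
magnitudes = np.abs(spectrum[:fs // 2])   # keep non-negative frequencies

# The two strongest bins are exactly the two input frequencies.
peaks = np.argsort(magnitudes)[-2:]
print(sorted(int(k) for k in peaks))      # -> [3, 7]
```

The Discrete Cosine Transform atoxyl mentions is available as scipy.fft.dct for anyone who wants to compare the two on the same signal.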


I, of course, drew a Donkey for the figure to be transformed.

There was a running joke in Fluidization Engineering where they use multiple variables to model the experimental data. One of the Profs I worked with wrote a paper saying that he used 5 variables to model, whereas the previous model only used four. The original lead author (the legendary Octave Levenspiel) wrote to the editor: "Of course, if you give me 5 variables I can put a Donkey on a curve." Then another Prof actually used Fourier Transforms to model a Donkey on a curve. I think he came up with an answer of like 5 or 7 or something.

I tried that here, but unfortunately the graph does not give you a number. :(
posted by indianbadger1 at 1:21 PM on January 11 [2 favorites]


I thought this was a true delight, but I still don't understand the drawing bit. Does each consecutive circle represent a smaller wave?

That's basically it, yes. They're representing the sum of all of the waveforms by attaching the center point of the next circle to a tangent of the previous larger circle and then spinning them all at the same time, with the period of rotation of each smaller circle being 1/2 that of the larger. Only the last, smallest circle draws a line. Also, the initial configuration of the circles is important; they have to "start" at the exact positions they do, or you get a different result.
posted by Aleyn at 3:29 PM on January 11 [1 favorite]
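The circle construction can be checked numerically: each circle is a complex coefficient spinning at its own integer frequency, and the pen sits at the running sum of all the rotating vectors. A toy sketch in Python (assuming NumPy; the coefficients here are arbitrary, not from any particular drawing):

```python
import numpy as np

# Each "circle" is a complex coefficient c_k rotating at integer
# frequency k; the pen tip is the sum of all the rotating vectors.
coeffs = {1: 1.0 + 0j, 2: 0.5j, -1: 0.25 + 0.25j}   # arbitrary circles

def pen_position(t):
    """Pen position at time t, t running over one period [0, 2*pi)."""
    return sum(c * np.exp(1j * k * t) for k, c in coeffs.items())

# Tracing t over a full turn draws a closed curve in the plane.
ts = np.linspace(0, 2 * np.pi, 200, endpoint=False)
curve = np.array([pen_position(t) for t in ts])

# After one full turn every circle is back where it started,
# so the curve closes up on itself.
assert np.isclose(pen_position(0), pen_position(2 * np.pi))
```

The "initial configuration" Aleyn mentions corresponds to the phases of the complex coefficients: rotating any c_k changes where that circle starts, and hence the shape traced out.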


Or you can just start anywhere and keep on cycling. Like the Mayan Calendar it will eventually repeat and repeat and repeat. You just need to add more circles and wait.
posted by zengargoyle at 6:08 PM on January 11


I was a little bummed that this didn’t go into the rotating speaker analogy as a way to explain how the constituent frequencies of a compound signal are actually isolated, but the JPEG visualization is awesome and like nothing I’ve ever seen before, so thanks!
posted by invitapriore at 6:15 PM on January 11


I was a little bummed that this didn’t go into the rotating speaker analogy as a way to explain how the constituent frequencies of a compound signal are actually isolated, but the JPEG visualization is awesome and like nothing I’ve ever seen before, so thanks!

When I first saw static versions of those basis function visuals (on Wikipedia) I literally had to go and share them with people I knew because while I was familiar with the DFT/DCT in the audio context and from CS school I never quite got what "frequency" actually meant in an image until I saw it. So the animated version really tickled me.
posted by atoxyl at 6:42 PM on January 11 [1 favorite]


stinkfoot: I tried to, but I can never get Nixon’s nose right.
posted by drfu at 9:24 PM on January 11 [1 favorite]


how does one actually derive the Fourier transformation of a given signal?

You write a circle algebraically with e^(iwt). A sum of circles is

a·e^(iwt) + b·e^(ixt) + c·e^(iyt) + d·e^(izt) + …

You set the w, x, y, z yourself, always in a specific fixed pattern, so those aren’t variables.

Set up a system of equations where the sum of circles, evaluated at each sample time t1, t2, …, equals one of your points:

a·e^(iwt1) + b·e^(ixt1) + c·e^(iyt1) + d·e^(izt1) = p1
a·e^(iwt2) + b·e^(ixt2) + c·e^(iyt2) + d·e^(izt2) = p2
a·e^(iwt3) + b·e^(ixt3) + c·e^(iyt3) + d·e^(izt3) = p3
a·e^(iwt4) + b·e^(ixt4) + c·e^(iyt4) + d·e^(izt4) = p4

Then you have a system of N equations in N unknowns, so you just solve for a, b, c and d.

a, b, c and d determine how big each circle is, while the w, x, y and z control the speed they spin. You set w, x, y and z so that those four circles go around once, twice, 3x and 4x over the course of your data points. That pattern of circle speeds is the Fourier matrix.

If you have 5 points, then you’ll need to add another circle so you have 5 equations in 5 variables and can solve it. If it’s 48,000 Hz sound, where the data points are 48,000 air pressure measurements, you’ll need 48,000 equations in 48,000 variables.

Because the Fourier matrix is fixed and very regular, the Fast Fourier Transform can take advantage of the regularity to skip a lot of the arithmetic. Thus a computer can solve for the 48,000 unknowns a lot faster than you might think.
posted by kadonoishi at 11:07 PM on January 12 [3 favorites]
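That system of equations can be written down and solved directly; a small Python sketch (assuming NumPy), checking the brute-force solve against the FFT:

```python
import numpy as np

# Four sample points of some signal (arbitrary values).
p = np.array([1.0, 2.0, 0.0, -1.0])
N = len(p)

# Fourier matrix: entry (j, k) is circle k evaluated at time step j,
# i.e. e^(2*pi*i*j*k/N). Each row is one equation "sum of circles = p_j".
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(2j * np.pi * j * k / N)

# Solve the N equations for the N circle sizes a_0 .. a_{N-1}.
a = np.linalg.solve(F, p)

# The FFT computes the same coefficients (up to the 1/N scaling
# in NumPy's convention), just much faster.
assert np.allclose(a, np.fft.fft(p) / N)
```

The general solve does O(N^3) arithmetic; the FFT exploits the regularity of the Fourier matrix to get the same answer in O(N log N), which is why 48,000 unknowns is no big deal for a computer.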


Thanks for this; some of the best presentation I've ever seen, and it works on mobile too.
posted by Acey at 12:36 AM on January 14


atoxyl: When I first saw static versions of those basis function visuals (on Wikipedia) I literally had to go and share them with people I knew because while I was familiar with the DFT/DCT in the audio context and from CS school I never quite got what "frequency" actually meant in an image until I saw it.

I actually found the version in the article difficult to explore, because it seems to operate off the page scrolling, so I was interested in seeing this. Is this the visual you were talking about? It's not static, but it did "parse" for me a lot more easily than the one in the article.
posted by bjrubble at 10:22 AM on January 14


how does one actually derive the Fourier transformation of a given signal?

Further to kadonoishi's explanation:

1. Background: bases for a vector space

Think about the xy-plane. Every vector (i.e. arrow from the origin to a point (x,y) in the plane) can be represented as a sum of its x-component times the unit vector in the x direction (the vector pointing to the point (1,0)) plus its y-component times the unit vector in the y direction (the vector pointing to the point (0,1)). For technical reasons, I'll write [x,y] to mean the vector pointing from the origin (0,0) to the point (x,y). In math notation,

[x,y] = x*[1,0] + y*[0,1],

where x and y can be any real numbers. This way of representing the vector [x,y] as a linear combination of the unit vectors [1,0] and [0,1] is unique - that is, if we changed the coefficients x or y, we'd end up with a different vector. Any other pair of vectors u1 and u2 with this same property, that there exist unique coefficients a1 and a2 such that

[x,y] = a1u1 + a2u2

is called a basis for the plane.


In three dimensions, our standard basis consists of the now three unit vectors, [1,0,0], [0,1,0], and [0,0,1]. But similarly, other bases are possible. E.g. think of just rotating that "frame" of the x, y, and z axes in three-dimensional space a little bit. We can go up to even higher dimensions by just adding more and more components. E.g. "points" in 5-dimensional space look like (x,y,z,u,v); vectors we'll write as [x,y,z,u,v], and there are now five unit vectors, [1,0,0,0,0], [0,1,0,0,0], [0,0,1,0,0], [0,0,0,1,0], and [0,0,0,0,1], along the now five coordinate axes. (No, I can't visualize this fully either. That's okay.)


No matter how many dimensions, n, we have, a basis is any set u1, u2, ..., un such that there is a unique set of coefficients a1, a2, ..., an for each vector [x1, x2, ..., xn] so that the vector can be written as a linear combination of the basis vectors,

[x1, x2, ..., xn] = a1u1 + a2u2 + ... + anun.

In order for the coefficient formula below to work out in finite dimensions, the basis vectors have to all be perpendicular to each other (called orthogonal in math terminology) and have length 1, just like the coordinate axes themselves are. So that visual of taking the "frame" of coordinate axes and just rotating it somehow? That's the main idea.


2. Background: Finding the coefficients for a vector relative to a non-standard basis in finite dimensions

Let's think about our example of the standard basis in the plane. How did we find the coefficients x and y?

|      * [x,y]
|      |
|______|___
       x

Well, if we look directly along the y-axis then the projection of our vector [x,y] onto the x-axis is just the point x. Similarly, if we look directly along the x-axis, then the projection of [x,y] onto the y-axis is the point y.

 |
y|_ _ _ * [x,y]
 |___________

Geometrically speaking, this is how we'll find the coefficients a1, a2, ..., an in any dimension n, relative to any basis u1, u2, ..., un.

Algebraically/computationally, we find the coefficients using an inner product - which is just a dot product in finite-dimensional vector spaces. Specifically,

ak = x . uk,

for any k = 1, 2, ..., n, where x = [x1, x2, ..., xn] is our vector, and uk is the basis vector that we want to find the coefficient for.

What is a dot product and why dot product? This is already a super long post, so I'll direct you to a linear algebra textbook for the details. Conceptually, the big idea is: we find the coefficients by projection onto the basis vectors.
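A quick numeric check of that projection recipe in Python (assuming NumPy; the rotated basis is an arbitrary example):

```python
import numpy as np

v = np.array([3.0, 4.0])          # the vector we want to decompose

# A non-standard orthonormal basis: the standard axes rotated 30 degrees.
theta = np.pi / 6
u1 = np.array([np.cos(theta), np.sin(theta)])
u2 = np.array([-np.sin(theta), np.cos(theta)])

# Coefficients by projection: a_k = v . u_k  (dot product).
a1 = v @ u1
a2 = v @ u2

# The linear combination of basis vectors reconstructs v exactly.
assert np.allclose(a1 * u1 + a2 * u2, v)
```

This is the finite-dimensional picture that the Fourier transform generalizes: swap the rotated axes for sine and cosine waves, and the dot product for an integral.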


3. Background: sine and cosine waves versus complex exponentials

A number of the form x + iy, where i is the square root of -1 (the "imaginary" number) and x and y are any real numbers, is called a complex number. Famous mathematician Euler discovered an identity that is, like many things, named after him, that relates the complex exponential functions (with base the natural base e) and the real-valued sine and cosine functions:

e^(i*x) = cos(x) + i*sin(x).

So with Fourier transforms, you can work with sine and cosine waves, or you can work with complex exponential functions. They are equivalent. The algebra and notation are a little easier with complex exponentials, but for folks who haven't seen complex numbers, sines and cosines work just as well.
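Euler's identity is easy to sanity-check numerically with a few lines of Python using only the standard library:

```python
import cmath
import math

# Euler's identity at a few sample angles: e^(i*x) = cos(x) + i*sin(x)
for x in (0.0, 1.0, math.pi / 3, 2.5):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert cmath.isclose(lhs, rhs)

# The famous special case x = pi gives e^(i*pi) = -1.
assert cmath.isclose(cmath.exp(1j * math.pi), -1)
```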


4. The main idea part 1: complex exponentials as a basis for space of signals

The complex exponential functions e^(inx), where i is the square root of -1 (the imaginary component of complex numbers), n is an integer (0, +/-1, +/-2, +/-3, ...), and x is your variable (they are functions, so there needs to be an input variable), form a basis (more on this in a moment) for the space of periodic signals/functions. Equivalently, from Euler's identity you can think of the sine and cosine waves of different periods, sin(nx) and cos(mx) for all possible integers n and m, as forming a basis for the space of periodic signals.

If you change the n or m to be any real number, t, then the complex exponentials e^(itx) - equivalently, the set of sine and cosine waves sin(tx) and cos(tx) for all possible real numbers t - form a basis for the space of finite energy signals (i.e. functions that tail off fast enough so that the integral of |f(x)|^2 is finite).

Understanding why this is true also requires more explanation and background than I have space to provide here, and is not super important conceptually.

Notice that now we no longer have a nice finite-dimensional vector space, but we're talking about infinite-dimensional function spaces. The idea of a basis is still the same: when we say that the complex exponentials e^(inx) form a basis for periodic signals/functions, we mean that there are unique coefficients an so that we can write the periodic signal/function m(x) as

m(x) = a0 e^(i*0*x) + a1 e^(i*1*x) + a-1 e^(i*(-1)*x) + a2 e^(i*2*x) + ....

Or, for sine and cosine format,

m(x) = a0 + a1 cos(1*x) + b1 sin(1*x) + a-1 cos(-1*x) + b-1 sin(-1*x) + a2 cos(2*x) + b2 sin(2*x) + a-2 cos(-2*x) + b-2 sin(-2*x) + ....

When f(x) is a finite energy signal but not periodic, we need to "add" the products of a coefficient a(t) times a basis function e^(itx) for all real numbers t. This gives us an integral instead of a sum:

f(x) = int( a(t) e^(itx) dt )

(the sine and cosine version gets more complicated looking, so I'll omit it here).


5. The main idea part 2: finding the Fourier coefficients

To find the coefficients an, we still want to "project" our signal m(x) onto the basis function e^(inx). We can still think of this somewhat geometrically - this means we want to see which multiple an of the basis function best fits into the function m(x). In other words, we want the difference between m(x) and an e^(inx) to be as small as possible, appropriately measured (there are several different ways to think about measuring the distance between two functions, so we have to choose the right one for the given context). This will give us a generalization of the dot product (aka a more general inner product), which again uses an integral since now we have infinitely many dimensions.

an = int( m(x) e^(-inx) dx ), up to a constant normalizing factor

(or, for finite energy but not necessarily periodic signals,

a(t) = int( f(x) e^(-itx) dx ) ).
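That projection integral can be checked numerically; a small Python sketch (assuming NumPy, and using the common convention with a 1/(2*pi) normalization so known coefficients come out exactly; the test signal is made up):

```python
import numpy as np

# A periodic test signal whose coefficients we know in advance:
# m(x) = 2*e^(i*3*x) + 0.5*e^(-i*x)
def m(x):
    return 2 * np.exp(3j * x) + 0.5 * np.exp(-1j * x)

# Projection onto e^(i*n*x):
#   a_n = (1/2pi) * int_0^{2pi} m(x) e^(-i*n*x) dx,
# approximated here by a Riemann sum over an even grid.
def coeff(n, samples=4096):
    x = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    return np.mean(m(x) * np.exp(-1j * n * x))

assert np.isclose(coeff(3), 2.0)    # picks out the e^(i*3*x) term
assert np.isclose(coeff(-1), 0.5)   # picks out the e^(-i*x) term
assert np.isclose(coeff(2), 0.0)    # no energy at n = 2
```

The e^(-inx) in the integrand is the complex conjugate of the basis function: projecting with the conjugate is what makes each basis function pick out only its own coefficient.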


6. Simplifications used in practice on computers

Computers are finite. The signals that we want to take a Fourier transform of generally consist of a finite list of discrete values sampled at regular time intervals, with an initial and final cutoff on the time interval that we measure the signal over. So in practice our signal is just a discrete list of values, [x1, x2, ..., xn], and the Fourier transform problem reduces to a change-of-basis problem in a finite-dimensional vector space. We need to know the right finite-dimensional basis vectors that more or less correspond to the complex exponential functions / sine and cosine waves. But once we have those, the process of finding the coefficients is just as described in background section 2 above, or in kadonoishi's briefer explanation.
posted by eviemath at 6:38 AM on January 15 [6 favorites]


This is very nicely done. There's a difference between understanding a thing, and being able to explain the thing clearly to a newcomer.

(I already knew the basics from tinkering around with synths and electronic music for 25+ years – but the 3D stuff and the JPEG stuff was new to me, and is pretty awesome. Would love to see similar non-technical explainers on mathematical topics. The visualizations make it much more accessible.)
posted by escape from the potato planet at 9:47 AM on January 15 [1 favorite]


eviemath, that was amazing, I have never thought about the Fourier transform in terms of function spaces, which may just be because I just learned enough to use it as a tool, and maybe the formalization you describe is well-known by people who come at it from a mathematical perspective, but still, thanks!

As an aside, I'm definitely curious about the nature of finite energy signals, and especially what their relationship is with continuity/real numbers.
posted by invitapriore at 4:12 PM on January 15


Also I’ve always wondered whether there’s some geometric intuition underlying Euler’s formula, or if it’s just conceived of as a useful consequence of the Taylor series definitions in question and there’s no real value (no pun intended) in trying to justify that consequence in an intuitive fashion.
posted by invitapriore at 5:52 PM on January 15


Also I’ve always wondered whether there’s some geometric intuition underlying Euler’s formula,

The book you're looking for is Visual Complex Analysis, though a search on complex geometry might turn up something useful?

curious about the nature of finite energy signals, and especially what their relationship is with continuity/real numbers.

Any signal is just a function. It's that simple. Mostly signals are continuous phenomena (temperature over time, sound over time, voltage over time, etc.) that we only measure at discrete time intervals. Still the same thing.

"Finite energy" just means that if you integrate (find the area under) |f(x)|^2 over the entire real line, you get something finite, not infinite.
posted by eviemath at 6:56 AM on January 16 [3 favorites]


The relevance of "finite energy" (in math terms, functions in the function space L^2) is that you can then define an inner product - akin to the dot product of vectors - and more or less treat your functions like a vector space, even though they are infinite dimensional rather than finite dimensional. Which means you can talk about a basis for the space - and thus your Fourier transforms are always actually defined. If you try to find Fourier coefficients for something that isn't finite energy, they may be undefined (i.e. not finite numbers).
posted by eviemath at 7:05 AM on January 16 [1 favorite]


(More about dot products (3Blue1Brown).

Possibly also related: we're used to adding up finite numbers of things in a sum:

thing_1 + thing_2 + thing_3 + ... + thing_n.

Mefites who have taken two semesters or three trimesters of university calculus may have at least vague recollections of adding up infinitely many things in an infinite sum or series:

thing_1 + thing_2 + thing_3 + thing_4 + thing_5 + ....

Mefites who haven't taken calculus, might still have seen something like Zeno's paradox (from Zeno's bridge in the old computer game Beyond Zork, also appeared in an Asimov short story, probably other places) where you consider, say, having to cross a bridge. To cross the bridge, you first have to go half way across. Then you have to go half of the remaining distance. Then half of the new remaining distance. And so on. No matter how small the remaining distance is, you can always take half of it. So to cross the bridge, you have to add up the infinitely many (decreasing) lengths:

1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... + 1/2^n + ....

Sometimes, like in the bridge case, it makes sense to talk about this infinite series having a value, or sum. In this case, we know empirically that we do eventually cross the whole bridge, so that the infinite series above adds up to 1. (The "paradox" part of Zeno's paradox is that if you took the same amount of time to go these diminishing distances, like if it took you one minute to go the first half, then another minute to go the next 1/4, then another minute to go the next 1/8, and so on, then it would take you literally forever to cross the bridge. In real life, we walk at more or less a constant pace, so the amount of time required to cross the bridge also adds up to 1, because it takes a correspondingly shorter amount of time to go a shorter distance.) In other cases, it doesn't make sense. There's no single value that we can assign to the infinite series

1 - 1 + 1 - 1 + 1 - 1 + ...

since you could group it one way to get 0 = (1-1) + (1-1) + (1-1) + ..., or group it another way to get 1 = 1 + (-1+1) + (-1+1) + ..., and there's no good reason to choose one option over the other, so we say that there is no "sum" to this infinite series. Or, 1 + 2 + 3 + 4 + 5 + ... grows larger and larger, off to infinity, so also doesn't have a "sum" in the traditional sense of a finite number that the other numbers add up to.

The point is that when we have infinite things, we have to be a little more careful about definitions. Thus going from regular finite-dimensional vector spaces to infinite-dimensional function spaces, we have this extra, seemingly nit-picky requirement that we only look at "finite energy signals", just to guarantee that stuff is defined.

Infinite series are what you get when you add up a countably infinite number of things (countably infinite means that there are infinitely many, but you can, literally, count them off as "thing 1", "thing 2", "thing 3", etc.).

You can think of the index (1, 2, 3, 4, 5, ...) as being the whole numbers on an x-axis, and you can think of the values thing_1, thing_2, thing_3, ... as representing heights on the y-axis, and then you can plot the points (1,thing_1), (2,thing_2), (3,thing_3), ... in the xy-plane, and think of "thing" as a function that takes whole numbers 1, 2, 3, ... as inputs and outputs the values thing_1, thing_2, thing_3, .... So in that sense, our discrete signals are in the same framework as continuous-time signals or what most people are used to thinking of as mathematical functions.

If you make a rectangle of height thing_1 and width 1, over the interval from 0 to 1 on the x-axis, then thing_1 = 1*thing_1 is the area of that rectangle. You can make another rectangle of height thing_2 over the interval from 1 to 2 on the x-axis, so that thing_2 is the area of that second, adjacent rectangle. And so on. You can then interpret the infinite series

thing_1 + thing_2 + thing_3 + thing_4 + ...

as the total area of all of these (countably infinite) rectangles. Then the infinite series has a "sum" if the total area under all (infinitely many) of the rectangles is finite. Which brings us back to finite energy signals (though there the rectangle heights and thus areas should be the squared values).

You can have more than a countably infinite number of things, however. For example, there's no way to count off the real numbers - there's just too many of them. So if you try to add up infinitely many things, but you have more than countably infinitely many things that you are trying to add up, then you get an integral! It has the same area interpretation as our rectangles above. (We can think of the rectangles as being there, just with infinitesimally small width, though the distinction between zero width and infinitesimally small width is not an intuitive one for most people, so that's probably rather unhelpful, unless you by chance also have a vague recollection of taking limits of Riemann sums in a calculus class to get integrals.) So that's why the generalization of a dot product from vectors becomes that integral that I was talking about as the inner product for the space of finite energy signals.)

posted by eviemath at 7:55 AM on January 16 [3 favorites]
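The bridge-crossing series can be watched converging in a few lines of Python (a toy illustration):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... + 1/2^n approach 1,
# the full length of the bridge.
total = 0.0
for n in range(1, 51):
    total += 1 / 2**n
print(total)   # just shy of 1 (the gap left is 1/2^50)

assert abs(total - 1) < 1e-12
```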


Glad I remembered to check back to this post: kadonoishi and eviemath rocked it. Now I really wish Metafilter had MathJax | Beautiful math in all browsers support for these times when it could be pretty and useful.
posted by zengargoyle at 3:18 PM on January 16




This thread has been archived and is closed to new comments