The Abstract Representation of Things
June 20, 2021 2:23 AM   Subscribe

Combinators and the Story of Computation - "The idea of representing things in a formal, symbolic way has a long history... But was there perhaps some more general and fundamental infrastructure: some kind of abstract system that could ultimately model or represent anything? Today we understand that's what computation is. And it's becoming clear that the modern conception of computation is one of the single most powerful ideas in all of intellectual history—whose implications are only just beginning to unfold. But how did we finally get to it? Combinators had an important role to play, woven into a complex tapestry of ideas stretching across more than a century." (also btw The Nature of Computation previously)
In the end, combinators and lambda calculus are completely equivalent, and it’s quite easy to convert between them—but there’s a curious tradeoff. In lambda calculus one names variables, which is good for human readability, but can lead to problems at a formal level. In combinators, things are formally much cleaner, but the expressions one gets can be completely incomprehensible to humans.

The point is that in a lambda expression like λx λy x[y] one’s naming the variables (here x and y), but really these names are just placeholders: what they are doesn’t matter; they’re just showing where different arguments go. And in a simple case like this, everything is fine. But what happens if one substitutes for y another lambda expression, say λx f[x]? What is that x? Is it the same x as the one outside, or something different? In practice, there are all sorts of renaming schemes that can be used, but they tend to be quite hacky, and things can quickly get tangled up. And if one wants to make formal proofs about lambda calculus, this can potentially be a big problem, and indeed at the beginning it wasn’t clear it wouldn’t derail the whole idea of lambda calculus.

And that’s part of why the correspondence between lambda calculus and combinators was important. With combinators there are no variables, and so no variable names to get tangled up. So if one can show that something can be converted to combinators—even if one never looks at the potentially very long and ugly combinator expression that’s generated—one knows one’s safe from issues about variable names.
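The capture problem is easy to demonstrate concretely. Below is a deliberately naive substitution function on a toy lambda-term representation (a sketch of my own, not code from the article); it goes wrong in exactly the way described:

```python
# Toy lambda terms as tuples:
#   ("var", name)        a variable
#   ("lam", name, body)  λname. body
#   ("app", f, x)        application f[x]

def naive_subst(term, name, value):
    """Textually substitute `value` for free occurrences of `name`.
    Deliberately naive: it never renames bound variables."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        # Stop if the binder shadows `name`, but do nothing to protect
        # `value`'s own free variables from being captured below.
        if term[1] == name:
            return term
        return ("lam", term[1], naive_subst(term[2], name, value))
    return ("app",
            naive_subst(term[1], name, value),
            naive_subst(term[2], name, value))

# Substitute the free variable x for y inside λx. y:
body = ("lam", "x", ("var", "y"))
captured = naive_subst(body, "y", ("var", "x"))
# captured == ("lam", "x", ("var", "x")) -- the identity function,
# even though the x we inserted was supposed to refer to something
# outside the binder. Capture-avoiding substitution must rename.
```

A correct substitution would rename the binder (say, to λx1. x) before substituting; it is exactly this renaming bookkeeping that combinators let you skip.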

There are still plenty of other complicated issues, though. Prominent among them are questions about when combinator expressions can be considered equal. Let’s say you have a combinator expression, like s[s[s[s][k]]][k]. Well, you can repeatedly apply the rules for combinators to transform and reduce it. And it’ll often end up at a fixed point, where no rules apply anymore. But a basic question is whether it matters in which order the rules are applied. And in 1936 Church and Rosser proved it doesn’t.

Actually, what they specifically proved was the analogous result for lambda calculus. They drew a picture to indicate different possible orders in which lambdas could be reduced out, and showed it didn’t matter which path one takes.
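The two combinator rules themselves (k[x][y] → x and s[x][y][z] → x[z][y[z]]) are mechanical enough to sketch in a few lines of Python. The representation and helper names here are my own, not Wolfram's:

```python
# S and K as leaf tuples; an application f[x] is the pair (f, x).
S = ("s",)
K = ("k",)

def app(*terms):
    """Left-associative application: app(a, b, c) == ((a, b), c)."""
    result = terms[0]
    for t in terms[1:]:
        result = (result, t)
    return result

def step(e):
    """Apply one rule (outermost, leftmost first); None at a fixed point."""
    if isinstance(e, tuple) and len(e) == 2:
        f, z = e
        if isinstance(f, tuple) and len(f) == 2:
            g, y = f
            if g == K:
                # k[y][z] -> y
                return y
            if isinstance(g, tuple) and len(g) == 2 and g[0] == S:
                # s[x][y][z] -> x[z][y[z]]
                x = g[1]
                return ((x, z), (y, z))
        reduced = step(f)
        if reduced is not None:
            return (reduced, z)
        reduced = step(z)
        if reduced is not None:
            return (f, reduced)
    return None

def reduce_fully(e, limit=1000):
    """Rewrite until no rule applies (or give up after `limit` steps)."""
    for _ in range(limit):
        nxt = step(e)
        if nxt is None:
            break
        e = nxt
    return e

# s[k][k] behaves as the identity: s[k][k][x] -> k[x][k[x]] -> x
X = ("x",)  # an inert leaf
result = reduce_fully(app(S, K, K, X))
# result == X
```

S and K together form a complete basis for computation, which is part of why the separate question of what S can do on its own (discussed further down the thread) is interesting.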

This all might seem like a detail. But it turns out that generalizations of their result apply to all sorts of systems. In doing computations (or automatically proving theorems) it’s all about “it doesn’t matter what path you take; you’ll always get the same result”. And that’s important. But recently there’s been another important application that’s shown up. It turns out that a generalization of the “Church–Rosser property” is what we call causal invariance in our Physics Project.

And it’s causal invariance that leads in our models to relativistic invariance, general covariance, objective reality in quantum mechanics, and other central features of physics.
-Combinators: A Centennial View
-Announcing the S Combinator Challenge
-How Inevitable Is the Concept of Numbers?[1,2]

How Claude Shannon Invented the Future - "Today's information age is only possible thanks to the groundbreaking work of a lone genius."

The Mystery at the Heart of Physics That Only Math Can Solve - "The accelerating effort to understand the mathematics of quantum field theory will have profound consequences for both math and physics."

Mathematicians Prove 2D Version of Quantum Gravity Really Works - "In three towering papers, a team of mathematicians has worked out the details of Liouville quantum field theory, a two-dimensional model of quantum gravity."

What Makes Quantum Computing So Hard to Explain? - "To understand what quantum computers can do — and what they can't — avoid falling for overly simple explanations."[3,4]

'Helgoland' Offers A New Way To Understand The World, And Our Place In It - "What quantum mechanics is teaching us, Rovelli says, is that reality is a vast net of interactions where there are no things, only relationships."[5,6]

An Interview with Tenstorrent: CEO Ljubisa Bajic and CTO Jim Keller - "What we have is a really nice hardware abstraction layer between the hardware and the software that makes all that work, and that’s what I really like. [In the world of AI], if you make a really big matrix multiplier, the problem is that the bigger it gets, the less power efficient it gets because you have to move data farther. So you don't want the engine to be too big, you want it to be small enough to be efficient, but you don’t want it to be too small, because then the chunk of data it's working on isn't that interesting... You know, there's a funny thing - your brain is, I just read this recently, your cerebral cortex is 1.3 square feet. And it's essentially, it's a flat thin layer but only six neurons high, but it's built out of these columns of about 100,000 neurons, and it's very uniform. So it's a little 3D-ish. It's very big in two dimensions, but only six deep in the other dimension. That's been stable for, you know, hundreds of millions of years."[7]

An AnandTech Interview with Jim Keller: 'The Laziest Person at Tesla' - "A friend of mine, who was at Digital Research Lab, I explained how it worked to him one day - he listened to me and he just shook his head. He said 'Jim, that's not the way you do this'. He explained how virtual channels worked, and how you could have separate abstract channels of information. You get that right before you start encoding commands. The result of that educational seminar/ass-kicking was HyperTransport. It has a lot of the S2K protocol, but it was built in a much more abstract way. So I would say that my move from Digital to AMD was where we had the ideas of how to build high-performance computing, but the methodologies were integrated, so from transistor up to architecture it couldn't be the same person... Since then, I've been more convinced that the abstraction layers were right. You don't overstep human capability - that's the biggest problem. If you want to build something bigger and more complicated, you better solve the abstraction layers, because people aren't getting smarter."
But as a human, you know, group beliefs are really interesting, because when you're building things, if you don't have a group belief that makes sense then you're not going to get anything done. Group beliefs are super powerful, and they move currencies, politics, companies, technologies, philosophies, self-fulfillment. You name it...

So already the complexity of a high-end AMD, Intel, or Apple chip is almost unfathomable to any one person. But if you actually go down into the details today, you can mostly read the RTL or look at the cell libraries and say, ‘I know what they do’, right? But if you go look inside a neural network that's been trained and ask, why is this weight 0.015843? Nobody knows.

IC: Isn’t that more data than design, though?

JK: Well, somebody told me this. Scientists, traditionally, do a bunch of observations and they go, ‘hey, when I drop a rock, it accelerates like this’. They then calculate how fast it accelerated and then they curve fit, and they realize ‘holy crap, there's this equation’. Physicists for years have come up with all these equations, and then when they got to relativity, they had to bend space and quantum mechanics, and they had to introduce probability. But still there are mostly understandable equations.

There's a phenomenon now where a machine learning thing can learn, and predict. Physics is some equation: you put in inputs, the equation outputs, or the function outputs, right? But if there's a black box there - inputs into an AI network, outputs out of an AI black box - and you looked in the box, you can't tell what it means. There's no equation. So now you could say that the design of the neurons is obvious, you know - the little processors, little four-teraflop computers - but the design of the weights is not obvious. That's where the thing is. Now, let’s use an AI computer to go build an AI calculator. What if you go look inside the AI calculator? You can't tell why it's getting a value, and you don't understand the weights. You don't understand the math or the circuits underneath them. That's possible. So now you have two levels of things you don't understand. But what result do you desire? That might still be defined by the human experience.

Computer designers used to design things with transistors, and now we design things with high-level languages. So those AI things will be building blocks in the future. But it's pretty weird that there's going to be parts of science where the function is not intelligible. There used to be physics by explanation, such as if I was Aristotle, 1500 years ago - he was wrong about a whole bunch of stuff. Then there was physics by equation, like Newton, Copernicus, and people like that. Stephen Wolfram says there’s now going to be physics by program. There are very few programs that you can write in one equation. Programs are complicated, and he says, why isn’t physics like that? Well, protein folding in the computing world is now done by AI, which has no intelligible equations or statements, so why isn’t physics going to do the same thing?

IC: It's going to be those abstraction layers, down to the transistor. Eventually, each of those layers will be replaced by AI, by some unintelligible black box.

JK: The thing that assembles the transistors will make things that we don’t even understand as devices. It’s like people have been staring at the brain for how many years, they still can't tell you exactly why the brain does anything.

IC: It’s 20 Watts of fat and salt.

JK: Yeah and they see chemicals go back and forth, and electrical signals move around, and, you know, they're finding more stuff, but, it's fairly sophisticated.

IC: I wanted to ask you about going beyond silicon. We've been working on silicon now for 50+ years, and the silicon paradigm has been continually optimized. Do you ever think about what’s going to happen beyond silicon, if we ever reach a theoretical limit within our lifetime? Or will anything get there, because it won’t have 50 years of catch-up optimization?

JK: Oh yeah. Computers started, you know, with abacuses, right? Then mechanical relays. Then vacuum tubes, transistors, and integrated circuits. Now the way we build transistors, it's like a 12th generation transistor. They're amazing, and there's more to do. The optical guys have actually been making some progress, because they can direct light through polysilicon, and do some really interesting switching things. But that's sort of been 10 years away for 20 years. But they actually seem to be making progress.[8]

It’s like the economics of biology. It’s 100 million times cheaper to make a complicated molecule than it is to make a transistor. The economics are amazing. Once you have something that can replicate proteins - I know a company that makes proteins for a living, and we did the math, and it was literally 100 million times less capital per molecule than we spent on transistors. When you print transistors it’s interesting because they're organized and connected in very sophisticated ways and in arrays. But our bodies are self-organizing - they get the proteins exactly where they need to be. So there's something amazing about that. There's so much room, as Feynman said, at the bottom, of how chemicals are made and organized, and how they’re convinced to go a certain way.

I was talking to some guys who were looking at doing a quantum computing startup, and they were using lasers to quiet down atoms, and hold them in 3D grids. It was super cool. So I think we've barely scratched the surface on what's possible. Physics is so complicated and apparently arbitrary that who the hell knows what we're going to build out of it. So yeah, I think about it. It could be that we need an AI kind of computation in order to organize the atoms in ways that takes us to that next level. But the possibilities are so unbelievable, it's literally crazy. Yeah I think about that.
-Math Has a Fatal Flaw
-The Future Of Reasoning
-An Antidote to Dissatisfaction
-The Secret of Synchronization
-The Final Border We Will Never Cross
-The Longest-Running Evolution Experiment
-Steering Civilization Away from Self-Destruction

The Thoughts of a Spiderweb - "Spiders appear to offload cognitive tasks to their webs, making them one of a number of species with a mind that isn't fully confined within the head."

Greater than the sum of our parts: The evolution of collective intelligence - "The period preceding the emergence of behaviourally modern humans was characterized by dramatic climatic and environmental variability—it is these pressures, occurring over hundreds of thousands of years that shaped human evolution."

The Category Theory Behind UMAP - "Some people working on applied category theory have been seeking a 'killer app': that is, an application of category theory to practical tasks that would be so compelling it would force the world to admit categories are useful. Meanwhile, the UMAP algorithm, based to some extent on category theory, has become very important in genomics."[9,10]

What Google's AI-designed chip tells us about the nature of intelligence - "We humans can draw out abstractions from a problem we solve and then apply those abstractions to a new problem. While we take these skills for granted, they're what makes us very good at transfer learning. This is why the researchers could reframe the chip floorplanning problem as a board game and could tackle it in the same way that other scientists had solved the game of Go. Deep reinforcement learning models can be especially good at searching very large spaces, a feat that is physically impossible with the computing power of the brain. But the scientists faced a problem that was orders of magnitude more complex than Go."

A simple model of the brain provides new directions for AI research - "In his presentation, Papadimitriou, a recipient of the Gödel Prize and Knuth Prize, discussed how our growing understanding of information-processing mechanisms in the brain might help create algorithms that are more robust in understanding and engaging in conversations. Papadimitriou presented a simple and efficient model that explains how different areas of the brain inter-communicate to solve cognitive problems."

Paul Di Filippo Reviews Unity by Elly Bangs - "What we eventually learn—and I will strive not to spoil things—is that Danae is just one segment of a larger entity, a quasi-immortal being capable of 'unity', whose purpose in life is greater than simple personal fulfillment. Danae's quest to reunite with this 'Whole' becomes akin to a salmon swimming upstream to spawn... 'We created unity because we recognized that separate minds are fundamentally unable to understand one another. What they call communication is just a complex of flawed assumptions and mutual misunderstandings.'"
posted by kliuless (27 comments total) 77 users marked this as a favorite
 
Well, that's it for my weekend... (seriously though: thanks!)
posted by DreamerFi at 2:48 AM on June 20 [2 favorites]


The math party was fun until Gödel came along and trashed the place. Great post - I love this stuff.
posted by whatevernot at 4:15 AM on June 20 [5 favorites]


MetaFilter: it’ll often end up at a fixed point, where no rules apply anymore.
posted by chavenet at 4:19 AM on June 20 [6 favorites]


Really interesting! I’ve just started the reading, and will be at it awhile, as I’m no mathematician.

Forgive me, as a person coming at this from the Humanities, but what thought has gone into the topic of meta-mathematics with regard to, let’s say, politics? I guess my stupid, base question is something like this: Is this type of abstraction a good or bad thing for people and their lives? I don’t know—that’s why I ask.
posted by Don.Kinsayder at 7:06 AM on June 20


An alarm bell goes off, is all, when I read of getting rid of all variables. I’m a variable, ya know. Okay, I’m not understanding the topic and warping it out of character. But, hmm, I still wonder about the, uh, drive to abstraction and its power in the world of people.
posted by Don.Kinsayder at 7:12 AM on June 20 [1 favorite]


So when you think of something, say a box with a ball inside, the box may have a label or not. But to think of the box, do you first assign a synapse a label __box_identifier__ to allow your inner computational system to work? No, there is no coding in the brain, but it still can find the box to get the ball. Which is not to say the idea of abstracting cannot be abused, omg, all us poor cogs in the illuminati's machine.
posted by sammyo at 7:27 AM on June 20


Here is a trivial case of what "getting rid of variables" means. Say I have a doubling machine (it turns 5 into 10 etc.) and a tripling machine. If I connect them together I get a sextupling machine. I could demonstrate this by saying "if you send in x (a variable), I will output x*2*3 = x*6, which is 6 times x, QED", or I could just say "multiplication is associative, therefore my machine multiplies by 6" (don't worry about the jargon), in which case we didn't have to refer to any variable x at all.

As you can imagine, it can be very fruitful to talk about (much more complicated) computation in this higher-level abstract kind of way, since you don't have these explicit variables cluttering things up.
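A concrete sketch of that (my own illustration: the "machines" are plain functions, and the wiring that combines them never names an input):

```python
from functools import reduce

def compose(*fs):
    """compose(f, g)(x) == f(g(x)); works for any number of functions."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

def double(n): return n * 2
def triple(n): return n * 3

# The definition of the combined machine mentions no variable at all.
sextuple = compose(double, triple)
# sextuple(5) == 30
```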

I don't think there's much of a connection to politics.
posted by dfan at 7:29 AM on June 20 [2 favorites]


Don, it's about embodying the interactions and relationships, not objectifying people into 'things in their boxes' as you would with variables.
posted by k3ninho at 7:30 AM on June 20 [2 favorites]


p.s. I feel a bit trolled by the title of the "Math [sic.] Has a Fatal Flaw" YouTube link.
I know that a systematic effort to characterise a field will leave holes due to Gödel's Incompleteness Theorem -- but "all models are flawed and some are useful" comes up against the video expecting that people forget it's all modelling. The maths is a shadow on the cave wall of the platonic thing it models. It may be that the most accurate simulation of our cosmos is the full cosmos itself; once again, the shadows on the cave wall can't capture the fullness of the platonic ideal.
posted by k3ninho at 7:42 AM on June 20 [2 favorites]


The math party was fun until Gödel came along and trashed the place. Great post - I love this stuff.

Everything post-Euclid is just made to confuse future students.
posted by Your Childhood Pet Rock at 8:08 AM on June 20


Don.Kinsayder, see the previously: How does pure mathematics apply to our daily lives? If not actual politics, at least some more social sorts of things.
posted by zengargoyle at 8:30 AM on June 20


It may be that the most accurate simulation of our cosmos is the full cosmos itself

It is axiomatic that no simulation of a thing can be more accurate than the thing that it simulates, since any departure within the simulation from the behaviour of that which is simulated is by definition an inaccuracy.
posted by flabdablet at 8:36 AM on June 20 [2 favorites]


In regards to the Rovelli Helgoland reference up above, in 1933, Korzybski postulated an ontology of point events and relationships. A point event was essentially a set of relationships, infinite in number. The cup of coffee sitting in front of me is in relationship with the table, which is in relationship with the floor, which is in relationship with the building, ad infinitum. There are different types of relationships. But as the point event is infinite in scope, it is ultimately unknowable. It cannot be expressed. The infinite cup is abstracted via my senses into a finite set of sense impressions, representations. That set is further abstracted by leaving out more elements of that set into a higher order abstraction. Now we get “cup.” This train of abstractions can continue, in each case with fewer and fewer elements of the set. And as we abstract, we move farther and farther away from the reality we started with. Korzybski then points out that the infinite point event is actually a high order abstraction itself based on his analysis of the ontological and epistemological elements. He left us with the idea that all we can know are just the relationships. The thing itself is unknowable in and of itself.

Loads of good stuff in this post….
posted by njohnson23 at 9:03 AM on June 20 [2 favorites]


"But what happens if one substitutes for y another lambda expression, say λx f[x]? What is that x? Is it the same x as the one outside, or something different?"

Reminds me of Bertrand Russell's "Who shaves the Barber?" problem with set theory. What happens when you allow self-referential loops.

(And Hofstadter's (of GEB) "Strange loops")
posted by aleph at 9:15 AM on June 20


>Reminds me of Bertrand Russell's "Who shaves the Barber?" problem with set theory.
Oh no. I'm closing this account and leaving the island because I've made a deduction about my eye colour.
posted by k3ninho at 9:19 AM on June 20 [3 favorites]


But why was I even studying things like this back in 1979? I guess in retrospect I can say I was engaged in an activity that goes back to Frege or even Leibniz: I was trying to find a fundamental framework for representing mathematics and beyond...
Stephen Wolfram has spent decades making the most abstract stuff accessible. A worthy goal and one that will surely benefit people for generations to come.
Definitely a better kind of person than, for example, one that throws entire demographics under the trolley to obtain power and profit.

And yet, those decades spent finding a fundamental framework might be decades not necessarily focused on doing no harm to the people immediately around him.

I am in a quantum-like superposition of being excited by this new kind of science, and dreading revelations of news of bad behavior.
posted by otherchaz at 9:29 AM on June 20 [1 favorite]


The problem with named variables is not that they let you have self-referential expressions (they don't grant you any special powers in this regard), but that they introduce a lot of fiddly bookkeeping. By getting rid of them in the SK combinator system, the result is a system that's more straightforward (you might even say mechanical) to evaluate than the Lambda calculus, at the cost of being less intuitive to humans.
posted by Pyry at 9:34 AM on June 20 [1 favorite]


Bertrand Russell's "Who shaves the Barber?" problem

As Stabbing Westward put it: "I can not shave you, I can't even shave myself, so just shave yourself."
posted by Foosnark at 10:36 AM on June 20 [4 favorites]


i am not so smart
posted by j_curiouser at 11:14 AM on June 20 [2 favorites]


Forgive me, as a person coming at this from the Humanities, but what thought has gone into the topic of meta-mathematics with regard to, let’s say, politics?

The philosopher Alain Badiou (link to a somewhat critical article about Badiou & maths) grounds his philosophy on set theory; his politics, which emerges from this, is essentially a form of radically egalitarian communism.
posted by Saxon Kane at 3:47 PM on June 20 [1 favorite]


Most of this is over my head but the interview with Keller is amazingly useful for me right now. I have a large project, >10x anything I've done before, and very messy, very complicated. Some things in the way Keller speaks help frame complicated things to shift them to complex.
posted by unearthed at 5:31 PM on June 20


p.s. I feel a bit trolled by the title of the "Math [sic.] Has a Fatal Flaw" YouTube link.

Veritasium likes his clickbait titles, that's for sure. I also noticed (because YouTube likes to recommend me his videos, even though I can't really stand him) that he must be A/B testing his titles incessantly, because I kept seeing other titles, like "There's a Hole at the Bottom of Math".

The maths is a shadow on the cave wall of the platonic thing it models.

Not everyone agrees with that! See Wikipedia, or (if you're feeling adventurous) the Stanford Encyclopedia.
posted by nosewings at 9:12 PM on June 20 [1 favorite]


Oh, and also

The problem with named variables is not that they let you have self-referential expressions

To expand, you can't make a (directly) self-referential expression in a combinator calculus, but you can make an expression with no normal form, which is just as good (bad). In the SKI calculus, (SII)(SII) = (I(SII))(I(SII)) = (SII)(SII) = (I(SII))(I(SII)) = ... etc. It's just the Ω-combinator (λx.xx)(λx.xx) expressed in a different language.
posted by nosewings at 9:27 PM on June 20 [1 favorite]


Wolfram’s quest to prove the universality of the S combinator is confusing.

He links to a paper proving that the halting problem for S alone is decidable, which, since we know it’s undecidable for general computation, should be a pretty big clue that S is not universal. And the very first comment on his S-combinator challenge prize page seems like a clear proof that S cannot encode the function that takes any natural number to 0.

His idea for getting around this seems to be that general computation can be somehow “embedded” in a non-terminating computation built from S, with some extra machinery that tells you how to identify when the computation has terminated and where the output is encoded within this tree of expressions growing without bound. But something tells me this machinery would need the equivalent of both S and K, never mind the fact that the universality of S + vague encoding procedure is not equivalent to the universality of S alone.

What the value of this universality would be is no clearer than what we’re learning by staring at all these graphs of the growth of combinator expressions.
posted by mubba at 7:54 AM on June 21 [3 favorites]


Forgive me, as a person coming at this from the Humanities, but what thought has gone into the topic of meta-mathematics with regard to, let’s say, politics?

Well I don't know if this answers your question, but it is adjacent, at least. Some of the motivation for the project to eliminate metaphysics in favor of formalized logical methods (Carnap, The Vienna Circle in general) was the increasingly anti-Semitic and proto-Fascist trends in Lebensphilosophie. You also had people like Hermann Cohen attacked as not fit to teach Kant, because they were Jews, and a "German" should represent the great philosopher. If Carnap, in his famous clashes with Heidegger, thought that Heidegger was a fellow-traveller to this, well, it looks like he was right.
posted by thelonius at 8:45 AM on June 21 [1 favorite]


Forgive me, as a person coming at this from the Humanities, but what thought has gone into the topic of meta-mathematics with regard to, let’s say, politics?


I'm still only halfway through the essay, but the takeaway I would give to a layman so far is first the usual caveat (when Stephen Wolfram claims profound implications from a piece of math, you can conclude that the body of math he's looking at is pretty damned cool and relevant to concerns of the day, and the implications he's drawing are silly), and then the reason why everyone else is looking at the same works nowadays.

In 1937, Alan Turing showed that the only way to prove how a computer calculation will unfold is to RUN it. Which is an important logical truth, but one that's been made entirely too much of. The obvious conclusion from it is that software developers can only establish the quality of their product with run-time testing, because compile-time analysis can only go so far. That's been the excuse for decades of shoddy software. Applying the works of Haskell Curry et al to software, in particular category theory, which this essay discusses at length, enables software writers to prove at compile time that a growing repertoire of categorized bugs is absent from their code. The computer languages Haskell, Erlang, OCaml, Rust, and a growing list of others, use this work to show their code free of memory management errors (the bane of the 90's), concurrency errors (bane of the aughts), and security model errors (bane of the teens).

And since Wolfram might be pointing to more things I should know about this field, I will now return to his essay.
posted by ocschwar at 11:26 AM on June 21 [1 favorite]


I read that all the way through and the best part was the last few paragraphs.
posted by hypnogogue at 7:45 PM on June 21




This thread has been archived and is closed to new comments