Why functional programming? Why Haskell?
July 15, 2018 6:30 PM

Haskell is most likely quite different from any language you've ever used before. In Haskell, we de-emphasise code that modifies data. Instead, we focus on functions that take immutable values as input and produce new values as output. Given the same inputs, these functions always return the same results. This is a core idea behind functional programming.

Along with not modifying data, our Haskell functions usually don't talk to the external world; we call these functions pure. We make a strong distinction between pure code and the parts of our programs that read or write files, communicate over network connections, or make robot arms move. This makes it easier to organize, reason about, and test our programs.

We abandon some ideas that might seem fundamental, such as having a for loop built into the language. We have other, more flexible, ways to perform repetitive tasks.

Even the way in which we evaluate expressions is different in Haskell. We defer every computation until its result is actually needed: Haskell is a lazy language. Laziness is not merely a matter of moving work around: it profoundly affects how we write programs.

Full text of Real World Haskell

How a purely functional programming language can change your life
posted by hexaflexagon (110 comments total) 58 users marked this as a favorite
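
A minimal sketch of the ideas in that introduction, for readers who have never seen Haskell (the names below are invented for illustration and are not from the book): a pure function, the visible separation of effectful code into IO, and a lazy computation over an infinite list.

-- A pure function: same input, same output, no side effects.
double :: Int -> Int
double x = x * 2

-- Code that talks to the outside world lives in IO, and its type says so.
main :: IO ()
main = do
  line <- getLine
  print (double (read line))

-- Laziness: [1..] is infinite, but only the first ten elements are ever
-- computed, because only ten are demanded.
firstTenDoubled :: [Int]
firstTenDoubled = take 10 (map double [1..])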
 
I am currently having flashbacks to terribly delivered first year CompSci courses.
posted by threeze at 6:56 PM on July 15, 2018 [12 favorites]


Functional programming sounded a lot more interesting before I encountered a clump of functional programmers who’d taken over part of the code base and made it damn near incomprehensible.

There’s a lot of “it’s simpler!”, which consists of “it’s now on one incredibly dense line which is harder to work with.”
posted by Artw at 7:20 PM on July 15, 2018 [27 favorites]


I am currently having flashbacks to terribly delivered first year CompSci courses.

My first programming class was in Gofer, a simplified Haskell; I thought it was pretty well done. We wrote a package to represent and do arithmetic on big integers, over the course of the semester, after some initial assignments like the Towers Of Hanoi. When you fucked up your recursion, the Gofer environment would say "The stack has collided with the heap". I had no idea what that meant, and just appreciated it as some Bad Computer Thing koan.
posted by thelonius at 7:21 PM on July 15, 2018 [11 favorites]


Learning Haskell has made me a much better PHP programmer, not because I used the Haskell paradigm in PHP (that would be mostly unworkable and awkward), but because it taught me to think in terms of inputs and outputs and how one kind of data gets turned into another kind. It was like how in chemistry you learn about making the units match up and can pretty much derive the formulas for yourself and you feel like a super genius.
posted by Space Coyote at 7:22 PM on July 15, 2018 [17 favorites]


I do like the bit about avoiding side effects and having functions that are very deterministic in terms of inputs and outputs - nice for testing that. And I guess immutability avoids a lot of potential hassle.
posted by Artw at 7:25 PM on July 15, 2018 [3 favorites]


(All the above is about my experience with FP in JS. Maybe it all makes a lot more sense in Haskell.)
posted by Artw at 7:26 PM on July 15, 2018


I realize now there was a thread a decade ago also about Haskell, with a very similar first comment:

The first CS course I took in undergrad was all Haskell, and despite my general love for all things coding-related, using that language was such a miserable experience that I decided to switch from computer science to a liberal arts major.

So thanks, Haskell.
posted by spiderwire at 2:43 PM on October 18, 2008

posted by hexaflexagon at 7:30 PM on July 15, 2018 [10 favorites]


Monads for everyone!
posted by greenhornet at 7:46 PM on July 15, 2018 [3 favorites]


My first introduction to Haskell was from working with a guy who insisted that its combination of functional paradigms and strong typing made it almost impossible to write buggy code. He found my lack of faith disturbing, and I just barely managed to not say out loud that I knew that this could not be true because I had spent the last several weeks developing against an API he had written in Haskell.
posted by firechicago at 7:48 PM on July 15, 2018 [23 favorites]


If the pureness of Haskell isn't quite your thing, try ML (or its more popular variant OCaML). It's not quite as pure, but it's a bit more readable and easier to get started with (cue angry language bickering). Functional programming has been working its way into mainstream languages (Java, C#, more usage of that style of programming in Javascript - which, ugh - see comment on weakly typed functional languages below) for better or worse. The general techniques are fantastic, and they *can* work quite well on streams of data, but it's very, very easy to empower someone enthusiastic and not entirely aware of what they're doing to construct an absolute nightmare (like the novice user of C++ STL collections - good luck debugging those deeply nested list or map iterators).

I still have the occasional horrible flashback to my thesis research involving Lisp and code someone else wrote that would sometimes return a nested list structure, and sometimes return that nested list structure as a string. Ugh. Weakly typed functional programming languages are a nightmare in comparison to ML and Haskell (and AST).
posted by combinatorial explosion at 7:54 PM on July 15, 2018 [1 favorite]


Artw: Haskell's real strength comes from its type system, which lets you do cool things like fail at compilation time if there are unaccounted-for cases in a pattern-matching or case block, and option types, which are the sensible way to represent nullable values instead of javascript's 52452 different flavors of same and its ill-conceived notion of "falsey".

The only reason javascript's gotten anywhere is because it's functional "enough" to have functions as first-class citizens, which allows for callback and promise patterns, dependency injection, and all the other stuff that's the only reason it's even a basically usable language post 1998 or so. But attempting to force Haskell and ML-like patterns into it without the type safety (or really, really, REALLY good test and doc coverage) is a recipe for disaster. The proposal to add a pattern-matching syntax is the single dumbest idea the js community has had in a good, oh, two weeks.
posted by 7segment at 8:07 PM on July 15, 2018 [2 favorites]
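
For anyone who hasn't met the two features 7segment mentions, here is a minimal Haskell sketch with hypothetical types (with GHC's -Wall or -Wincomplete-patterns, the missing case is reported at compile time, and Maybe is the option type):

data Color = Red | Green | Blue

-- Delete the Blue equation and GHC (with -Wall) warns at compile time about
-- the non-exhaustive pattern match, instead of failing at runtime.
describe :: Color -> String
describe Red   = "warm"
describe Green = "leafy"
describe Blue  = "cool"

-- The option type: one explicit "no value" instead of many flavors of falsey.
safeDivide :: Double -> Double -> Maybe Double
safeDivide _ 0 = Nothing
safeDivide x y = Just (x / y)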


encountered a clump of functional programmers who’d taken over part of the code base and made it damn near incomprehensible.

FWIW that's pretty much what it's like to have to edit/add to/debug code someone else has written in React, which tries to be as FP as it can.

Imagine instead of a nice electric lamp someone told you that you should use individual light photons because that's more "pure." For the most part we need software to do stuff with data in the real world.
posted by eustacescrubb at 8:14 PM on July 15, 2018 [7 favorites]


I've tried to buy into functional programming. I really have. I read things like this semi-regularly, including a good chunk of Real World Haskell, a good chunk of The Little Schemer, and numerous blog posts and articles, always hoping for enlightenment. I've taken a course using Scheme and guided a student through an independent study learning Haskell (during which I did come to understand monads at some point). The moment of understanding why so many people seem to think it is the superior paradigm has never come.

I see the value of pure functions in helping us reason about, test, and parallelize code. I see the value in throwing functions around like candy data and all of the fun and elegance that can allow. I see the value of a solid type system (and love that Mypy and TypeScript are here, hopefully with more to come). And I keep these things in mind and use various functional-ish patterns when coding in imperative languages, especially those that have functional-ish things baked in like Python and Javascript. But I've never been anywhere near the point where I've thought, "Hey, writing this in Haskell [or whatever] would be better!" So I feel like I'm still missing something.

Am I hoping for too much? Is the general appreciation I have for some of the main ideas most of it? It's possible that the rest — the enthusiasm I see in others that I'm missing — is some blend of elitism and Stockholm syndrome. But if there is some resource out there that might help someone like me bridge that gap, I'd love to hear about it. Or if my problem is just that I haven't had the patience to push all the way through Real World Haskell and/or The Little Schemer or to just buckle down and write something real and complicated in a functional language, then I'll put those back on the to-do list and see what happens.
posted by whatnotever at 8:16 PM on July 15, 2018 [1 favorite]


A monad is a monoid in the category of endofunctors.

So what's the problem?
posted by ocschwar at 8:18 PM on July 15, 2018 [12 favorites]


Metafilter: a monoid in the category of endofunctors.
posted by Death and Gravity at 8:22 PM on July 15, 2018 [11 favorites]


Instead of trying to write safe, functional code in javascript, why not compile haskell to javascript?
posted by jnnnnn at 8:25 PM on July 15, 2018 [2 favorites]


Ctrl-F “Clojure”

:(
posted by Going To Maine at 8:26 PM on July 15, 2018 [12 favorites]


I got as far as the first bit of code:
Lazy evaluation has some spooky effects. Let's say we want to find the k least-valued elements of an unsorted list. In a traditional language, the obvious approach would be to sort the list and take the first k elements, but this is expensive. For efficiency, we would instead write a special function that takes these values in one pass, and it would have to perform some moderately complex book-keeping. In Haskell, the sort-then-take approach actually performs well: laziness ensures that the list will only be sorted enough to find the k minimal elements.

Better yet, our Haskell code that operates so efficiently is tiny, and uses standard library functions.

minima k xs = take k (sort xs)
And thought "how could that possibly work the way they say (i.e. 'laziness ensures that the list will only be sorted enough to find the k minimal elements')?" Whether that takes less time than sorting the entire list would seem to be highly dependent on the input and the particular sorting algorithm used under the hood.

The comments on those paragraphs and the code sample were unhelpful, so I looked up how lazy evaluation works in Haskell and found this fairly approachable article. I got this far and threw up my hands:
We will now look at a prototypical example that shows how to use seq, and which every Haskell programmer should know about: The strict left fold. ... the foldl function is defined in the Haskell Prelude as ... [code omitted] ... As you can see, the accumulating parameter grows and grows without bounds -- a space leak. ... The following modification of the foldl function [called foldl'] will do the trick ... It can be found in the Data.List module. As a rule of thumb, the function foldl is prone to space leaks. You should use foldl' or foldr instead.
So Haskell has a function / construction that every Haskell programmer should know about, but the built-in version has a glaring but trivially fixed flaw, so you should instead use the patched version whose name differs by a single, visually small character? And they're semantically and syntactically identical, so you could easily end up using the flawed version by mistake and not realize it until your program unexpectedly runs out of memory? Yeesh.
posted by jedicus at 8:35 PM on July 15, 2018 [29 favorites]
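
For the curious, the definitions the quoted article elides are short. They are sketched here under different names so the sketch stands alone next to the real Prelude foldl and Data.List foldl', which look essentially like this; the only difference is the seq that forces the accumulator at each step:

lazyFoldl :: (b -> a -> b) -> b -> [a] -> b
lazyFoldl f z []     = z
lazyFoldl f z (x:xs) = lazyFoldl f (f z x) xs            -- the accumulator piles up as unevaluated thunks

strictFoldl' :: (b -> a -> b) -> b -> [a] -> b
strictFoldl' f z []     = z
strictFoldl' f z (x:xs) = let z' = f z x
                          in z' `seq` strictFoldl' f z' xs   -- seq evaluates each intermediate result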


For the most part we need software to do stuff with data in the real world.

By "do stuff" you mean change some persistent state? That is mostly true given how computers actually work but I buy that sometimes it's helpful to keep the amount of code that actually does that under control.

(The concept of transforming data is very FP.)
posted by atoxyl at 8:35 PM on July 15, 2018 [3 favorites]


The only reason javascript's gotten anywhere is because

… it runs on literally everyone's computer. That's the reason. Not any feature or design decision.
posted by scruss at 8:37 PM on July 15, 2018 [28 favorites]


I haven't done anything in Haskell, but I really enjoyed Scala's combination of functional and procedural. Some things solve nicely with functional concepts, others less so. I'm developing in golang now and I definitely miss scala's data structures sometimes.
posted by flaterik at 8:39 PM on July 15, 2018 [2 favorites]


Or if my problem is just that I haven't had the patience to push all the way through Real World Haskell and/or The Little Schemer or to just buckle down and write something real and complicated in a functional language, then I'll put those back on the to-do list and see what happens.

I mean ... yes, this is the problem. Getting partway through two books (about languages that have nothing in common other than both being functional) is not going to bring you to the level of productivity that you are used to. You are going to get right into the most frustrating part of learning a new way to do something you are already good at.

I was about 10 years into a career and picked up Haskell to stretch myself. I learned a lot of useful ways to think about programming but was never productive with it. Then I picked up Clojure and it is the reason I'm still in the industry. So much of the daily soul crushing bullshit around programming was gone. And I was way more productive. And it was fun. I will never go back. I will change careers first.

There are a lot of Boot Camp type con artists telling you that you can learn it in 2 weeks. It will take you a couple big books, a bunch of tutorials and stackoverflow questions and at least one big project to get your feet back under you. You have some bad OO habits to kick and a whole new way of thinking about problems to learn. You will learn much faster this time, but you are a beginner again.
posted by Infracanophile at 8:40 PM on July 15, 2018 [9 favorites]


> And thought "how could that possibly work the way they say (i.e. 'laziness ensures that the list will only be sorted enough to find the k minimal elements')?"

Could a Haskell programmer come to the rescue here? Because that sounds a lot like "it was storing the list as a binary tree internally all along" or some other under-the-hood magic to me.
posted by Leon at 8:42 PM on July 15, 2018


Whether that takes less time than sorting the entire list would seem to be highly dependent on the input and the particular sorting algorithm used under the hood.

Well it's not going to take longer to do the partial sort - but it does feel like with some algorithms it won't take appreciably less long. It certainly doesn't seem like the clearest example of a case where lazy evaluation is helpful - perhaps it's just meant to illustrate the extent to which the principle is followed?
posted by atoxyl at 8:50 PM on July 15, 2018 [4 favorites]


Maybe this metaphorically correct explanation will help?

Your sorting function has some memory set aside to use as an accumulator during the sorting process. Some allocated array to put the sorted results in and hand back to you.

If you are taking the first 10 values out of that once you get it back from the sorting function and ignoring the rest ... don't you wish the sorting function had just returned the array when it had the first 10 values in it? Why did it finish sorting the last 1000 if you were just going to throw them away?

Laziness delays computation in a way that lets this happen. Obviously exactly how much computation it takes to get the first 10 values depends on the sorting algorithm and the input, but that is how much it will do because it will never be asked to do more.

a note: laziness isn't magic, you will very frequently shoot yourself in the foot with laziness if you aren't thinking about it, the interactions can be very complex.
posted by Infracanophile at 8:51 PM on July 15, 2018 [4 favorites]


I love functional programming in general, and I love Haskell in particular. It's a wonderful language to program in, and it's a wonderful language in the sense that it teaches you things that make you a better programmer in other languages. I think that the ideas that underpin Haskell will play a big part in the continuing evolution of programming.

All that said, there are several problems with using functional programming languages in practice:
  1. Functional programming does not match the way people think. When most people think about a problem like "find the length of a list of items", they think, "go through the list one by one and count each item". They don't think, "the length of this list is zero if it is empty, otherwise it is one plus the length of the list created by removing the first item from this list". Many people can teach themselves to program in imperative languages. I have never met a self-taught functional programmer. I'm sure they exist, but I'm confident saying that they're rare. In general, to learn functional programming you need to sit in a classroom and have someone teach it to you.
  2. There are not many good functional programmers out there. If you're trying to do a project in a functional language, it's hard to find enough qualified people. And, in my experience, the qualified people you do find are often difficult to work with, as Artw describes above. I believe that this is because most good functional programmers are purists (pun intended) who believe they've found a kind of holy grail of knowledge, and further believe that they must impose this knowledge on others, for their own good, at any cost. I have to admit that I committed this sin in my younger days. I once rewrote a python module in a functional style. This resulted in a minor mutiny in which the rest of the team went to my boss and told him that while the code was much shorter, no one other than me could understand it.
  3. As programmers, a huge part of what we do is getting data into and out of our programs. A computer program can be thought of as having three parts: 1) take in data as input, 2) do stuff with that data, 3) send out new data as output. Haskell (even more than other functional programming languages) concerns itself with #2. #1 and #3 are kind of an afterthought -- an exercise left to the reader. The problem with that is that even though data input and output aren't sexy, interesting or much fun, parsing input and formatting output represent a big fraction of the work programmers do, and screwing them up is no less costly than screwing up #2, the transformation of the data. To my mind, while Haskell excels at transforming data once you've got it, it doesn't make getting that data in or out very easy or comprehensible.
So, my advice to programmers? Learn a functional programming language. It will make you a better programmer. But keep in mind that you will have to work with lots of people who don't understand functional programming, and that no one language is good at everything.
posted by tom_r at 9:01 PM on July 15, 2018 [37 favorites]
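
For what it's worth, the recursive definition tom_r paraphrases in point 1 really is that short in Haskell (primed name so the sketch doesn't clash with the built-in length):

length' :: [a] -> Int
length' []     = 0               -- the length of an empty list is zero
length' (_:xs) = 1 + length' xs  -- otherwise, one plus the length of the rest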


but surely it's only a space optimization (only bothering to store the first N items), because there's no way to sort a list without consuming the entire thing
posted by idiopath at 9:01 PM on July 15, 2018 [3 favorites]


> If you are taking the first 10 values out of that once you get it back from the sorting function and ignoring the rest ... don't you wish the sorting function had just returned the array when it had the first 10 values in it? Why did it finish sorting the last 1000 if you were just going to throw them away?

What sorting algorithm are you using that knows that it has the lowest ten values in the list before it's finished sorting the list?
posted by Leon at 9:02 PM on July 15, 2018 [6 favorites]


a note: laziness isn't magic, you will very frequently shoot yourself in the foot with laziness if you aren't thinking about it, the interactions can be very complex.

I don't know if we need to make programming any more complex, or to have it require any more attention overhead than it already does.

The level of function manipulation in Python and Ruby is probably enough for me right now, thanks (and this is from someone who used to know a little StandardML).
posted by Merus at 9:02 PM on July 15, 2018 [2 favorites]


He says his output not functional or elegant
What does code monkey think
posted by RobotVoodooPower at 9:07 PM on July 15, 2018 [11 favorites]


What sorting algorithm are you using that knows that it has the lowest ten values in the list before it's finished sorting the list?

Quicksort will work. You only need to sort until you know the first N values in the list, you can ignore all the larger values and the partition bit will handle that for you.
posted by Death and Gravity at 9:08 PM on July 15, 2018 [7 favorites]
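
To make that concrete, here is a sketch of the quicksort Death and Gravity describes. This is not the actual Data.List.sort (which is a lazy merge sort), but the principle carries over: under lazy evaluation, take k only forces as much of the recursion as it needs.

qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort smaller ++ [p] ++ qsort larger
  where
    smaller = [x | x <- xs, x <  p]
    larger  = [x | x <- xs, x >= p]

-- take 10 (qsort xs) recurses into the 'smaller' partitions until it has
-- produced ten elements; the 'larger' partitions past that point are never sorted.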


As someone who programs professionally in a language that includes many lazy collection functions in the core, there is a magic to laziness, in the sense that a novice thinks one thing (the intuitive one) is happening, and doesn't notice until some spectacular result occurs (often an error) that something much more complex, requiring a lot more work, was actually taking place.

Space leaks caused by holding the head or spine of something lazy are commonplace.

Laziness being so close to the core means that eventually there evolved complex and subtle workarounds to minimize the overhead, where the best case is you get the behavior of a for loop and an array. Code that works in a sample or unit test fails because it had a necessary side effect and was placed in a context where nothing forced the lazy result to be realized.

But, there are benefits. Many bog standard collection operations can be written without a looping construct or array indexing. But I would compare it to magic, in the "magic is evil and must not be trusted even if you choose to exploit it" sense.
posted by idiopath at 9:11 PM on July 15, 2018 [4 favorites]


I'm not a fan of Haskell's defaulting to laziness myself, I find it to be a bad trade-off. But having optional laziness for when it is useful, like in Clojure, is great.

These are much less complex systems in general, because your state modification is contained in a smaller area; the complexity is just in new places. So sure, now you have to deal with laziness, but we didn't mention the 10 things you don't have to worry about anymore.

quicksort, heapsort, radix (wouldn't save much) ... most modern sorting algorithms outside of the merge sort classes should be able to save significant time. Here are some sort visualizations that might help (volume/noise warning!).
posted by Infracanophile at 9:13 PM on July 15, 2018 [4 favorites]


Getting partway through two books (about languages that have nothing in common other than both being functional) is not going to bring you to the level of productivity that you are used to.

Thanks, that helps clarify what I'm looking for. I'm not asking about how to get to a good level of productivity with any particular language. I'm wondering how one can get to a point where one believes that functional programming is that much better and understands why. I have assumed that such enlightenment can be found in some other way than putting in all of the effort to reach a high level of productivity in a novel paradigm, but perhaps not.
posted by whatnotever at 9:19 PM on July 15, 2018 [1 favorite]


Infracanophile: perhaps I'm biased because I often help people who are learning Clojure, but even with optional laziness, the amount of detail someone must learn before they can use it safely and effectively is substantial. A good example is the function `pmap` in the core library. The best advice you'll find about pmap is that it's not the function that you want - and this is probably true no matter what you are actually trying to do.

But if we look at the source of the function, we get into a domain of esoteric behaviors where even people who consider themselves experts on the language misinterpret what's going on.

Here's the source.

pmap is using laziness in quite sophisticated and tricky ways. As an exercise, try taking the definition of `n` on the first line of the internal let bindings, and then seeing how `n` relates to the number of parallel futures that will be executed at one time.

In the actual code that gets things started (remember map is lazy, so that call in the let binding is just setting something up for later consumption), we call `step` on rets, and on `(drop n rets)`. The CPU usage (or more precisely, the maximum number of items after the last consumed item that will be in flight) is controlled by n, because at each execution of the `step` function we block on one item of the result, followed by launching the future of the +nth item (thanks to drop), if it exists.

This is already quite complex, but add to this the fact that many Clojure collection functions are "chunked", that is, they are optimized for lazy consumption by realizing N items at a time, regardless of how many were asked for. So you pmap over some items, you ask for the first one, and the calculation of the first chunk-size items is launched by realizing items of the map, and as it proceeds at least n items ahead are realized (often chunk size, which is usually 32, well more than n on a normal CPU).

Perhaps it's unfair to pick on one weird function in clojure.core, but I find the number of ways that that function is hard to read thanks to laziness to be instructive.
posted by idiopath at 9:29 PM on July 15, 2018 [3 favorites]


Honestly, though, I don't think I've used a for loop in years. Each loops (where you step through a list of items and perform an operation on them) are much more generally useful, and many programming languages these days have some way of easily creating a range of numbers that can be iterated over with an each loop, which gets you the functionality of a for loop in a way that's far more intuitive and much less likely to lead to off-by-one errors.
posted by Merus at 9:30 PM on July 15, 2018
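
In Haskell the same idea looks something like this (invented example): a range plus a higher-order function instead of an index-managed loop, so there is no counter to get wrong.

-- Squares of 1 through 10, with no loop counter and no off-by-one to worry about.
squares :: [Int]
squares = map (\n -> n * n) [1..10]

-- The effectful "do something to each item" version.
printSquares :: IO ()
printSquares = mapM_ print squares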


I’ve started to learn Haskell a couple of times. Every thing makes sense until one of my solutions to the exercises has an infinite loop, or simply gives the wrong answer. I’ve not managed to find a good functional debugging tutorial anywhere on teh interwebz.

Haskell seems just like APL, a great way to write beautiful write-only code.
posted by monotreme at 9:36 PM on July 15, 2018 [1 favorite]


I don't do much programming anymore, but experience has taught me that purity in programming languages is good for making programmers better and for thinking differently about problems, but bad for actually writing stuff that people will want to use. I have no reason to think Haskell will be any different. Also, it's hard to see how Haskell will change the world if LISP didn't, and god knows I've known enough really smart, brilliant programmers pushing it.
posted by Joakim Ziegler at 9:40 PM on July 15, 2018 [2 favorites]


When I say "even people who consider themselves experts on the language misinterpret what is going on" I refer to an extensive argument between experienced, professional Clojure users where multiple counterfactual assertions about what the function was doing were made. In the end it only resolved when someone proved they were right empirically by inserting a println into the function being lazily evaluated.
posted by idiopath at 9:40 PM on July 15, 2018 [5 favorites]


I'm not a fan of Haskell's defaulting to laziness myself, I find it to be a bad trade-off. But having optional laziness for when it is useful, like in Clojure, is great.

See, I am the opposite. You can't recover laziness from strict data or functions, so if you want the benefits of laziness, you need it everywhere. But you can force lazy data or functions any time if strictness is what you need.
posted by eruonna at 10:00 PM on July 15, 2018 [1 favorite]


Also, it's hard to see how Haskell will change the world if LISP didn't

Haskell and LISP both changed the world. Many other languages have taken ideas from them.
posted by dilaudid at 10:00 PM on July 15, 2018 [16 favorites]


I've only used Haskell to run XMonad, the window manager, and a lot of that was cut/paste from samples. Seems to me it's a great language to write a library in, with its functions called from $language. Like Fortran was, if you liked your own math library...
posted by mikelieman at 10:11 PM on July 15, 2018


A lot of negative comments about React, and JS generally.

I can assure you from experience that you can get a really good, readable and maintainable front end codebase using React. I've done it. People should really write more about "How to avoid footguns in React" than "How to get started in React".

I can also say that functional programming gives us some really good tools for doing certain things well. Laziness I can take or leave. The ability to replace for loops with pipelines of composed functions that are self-explanatory enhances any language. And I find that strong typing is more likely to get in the way of making these things easier to understand than it is to make that possible.

Consider this: listOfThings.map(thing -> applyAChangeTo(thing));
Just putting type annotations into this makes it really messy. Trying to statically guarantee that "thing" will always be the thing you think it is could take you longer to fight with the compiler about than writing the whole feature that you're working on will, in some cases. (My answer is to be really good about writing tests.)

So, what's my point?
1) JavaScript isn't so bad,
2) Functional programming helps you write better code, as long as you don't take it too far. Use the good parts, leave behind the things that are incomprehensible.
3) Flexible languages help you use the right technique for the right task at hand. JavaScript is pretty good for this, as is Python, and maybe Ruby. Lisps essentially let you write your own DSL for anything, so learning basic FP before you dive into Clojure will allow you to build better tools within it.
posted by Citrus at 10:23 PM on July 15, 2018 [1 favorite]


I met my wife in an intro CS class that was taught in Scheme so I am pro functional programming.
posted by dismas at 10:23 PM on July 15, 2018 [19 favorites]


Heh. Despite comments above I really like React, it’s a really nice way to do things. Really right now I wouldn’t start a project in anything else.

Though the codebase the problem FP team I mentioned ruined really is all about “how can we make React as complicated and unlike regular JS as possible”, and then some.
posted by Artw at 10:27 PM on July 15, 2018 [1 favorite]


Trying to statically guarantee that "thing" will always be the thing you think it is could take you longer to fight with the compiler about it than writing the whole feature that you're working on will, in some cases. (My answer is to be really good about writing tests. )

This is only really true if you aren't already typing everything in the program. If you are, then the types are already there, and you have to be doing something really unusual or advanced for it to be difficult to get the compiler to accept them.
posted by eruonna at 11:05 PM on July 15, 2018 [4 favorites]
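
eruonna's point, sketched in Haskell with made-up names: once the data types exist, the compiler infers the rest, so the map-with-a-lambda that Citrus worries about needs no extra annotation.

data User = User { userName :: String, userAge :: Int }

-- No annotation on the lambda; its type is inferred from userAge and [User].
ages :: [User] -> [Int]
ages users = map (\u -> userAge u) users   -- idiomatically just: map userAge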


idiopath: yeah, pmap is a damn hard function to follow for sure. I don't know that you are picking on it, although it is a weird outlier, as it is really instructive on how adding laziness to an already not simple situation can make it very hard to model in your head.

I can kinda follow it, when reading slowly, and I've been doing Clojure full time for years (pmap is pretty nice considering how much it is doing honestly). When I'm teaching people I mention laziness so they understand what is happening but it doesn't really come up beyond that, for the exact reasons you described. Taking advantage of laziness for performance reasons is an advanced topic (outside of Haskell).

Luckily, as you said, pmap is almost never what you want to use. It is for when map takes a few seconds in the REPL and you get annoyed and use pmap. Or maybe in a small script or something. It works, someone did all the complicated work to make it work. But in an application or production setting where you need real control you would build something much better out of the other features. I don't teach beginners about pmap either most of the time.
posted by Infracanophile at 11:47 PM on July 15, 2018


Ctrl-F “Clojure”

:(


Ctrl-F "Rust"

:(
posted by nnethercote at 12:48 AM on July 16, 2018 [4 favorites]


We had a programmer for 5 years at my current gig whose degree was in category theory, and who therefore obviously enjoyed functional programming.

His code was obviously correct, a pleasure to use as a consumer, and a complete pain in the ass to extend. It was full of extensibility points, obviously; all you had to do was add another value to an enumeration or another case to a switch block. Except that the business change that required the extension was never in a direction he'd predicted, and tended to involve deep refactoring that redistributed functionality over the type hierarchy.

Maybe he was just bad at it? I don't know, I haven't ever worked with anyone else who even tried to do FP well.
posted by Fraxas at 1:14 AM on July 16, 2018 [4 favorites]


As an iOS developer who just spent a year wandering in the wilderness of React Native and its tortured Redux ways of accomplishing simple tasks, I'm waiting for OOP to come back into vogue like in 1994. If SQL and cyberpunk can be cool again, why not?
posted by johngoren at 3:22 AM on July 16, 2018 [6 favorites]


I *think* I understand how it could be possible to be lazy about collecting the 10 lowest items in a list.

It's that checking a value is different from and cheaper than sorting that value.

So, you sort the first ten values. If a successive value is lower than the highest value on that list, sort it in and dump the value it replaces. If it is higher than the highest value on your list of 10, ignore it and keep going.

Is that it?

This makes sense if you're reasonably sure you won't need to answer a lot of other questions about the order of items in the big list, which strikes me as a challenging judgement call.
posted by Nancy Lebovitz at 5:20 AM on July 16, 2018


I *think* I understand how it could be possible to be lazy about collecting the 10 lowest items in a list.

Check out this visualization of quicksort. See how it touches the leftmost value for the last time at 0:44, and the rightmost value at 1:03?

That's probably all you need to see, given how you're asking the question. But for those who are less familiar with laziness, imagine that sorting function was streaming out each leftmost value when it knew it had touched it for the last time. It would return the first value -- the smallest -- at 0:44. Then suppose you had another function, "take_10", that accepted the first ten values, and then said, "great, we're done here, shut it down." The sorting function could then stop immediately, instead of doing all the work between 0:45 and 1:03.

And if you did need to answer a lot of other questions about the order of items in the big list, the exact same lazy function would work -- you would just collect the results in a sorted list to refer back to.

This would definitely influence your choice of sorting function -- there are others in the visualization that touch all of the values near the end of doing all the work, instead of the beginning.
posted by john hadron collider at 5:42 AM on July 16, 2018 [4 favorites]


Just putting type annotations into this makes it really messy.

... Which is why we have type inference through various mechanisms.

One of the ugly things about CurrentJobs in-house C++ framework is the insane number of single-argument constructors from fairly primitive types (our own implementation of String, especially - yes of course we have our own string class that is basically std::string but worse because it's that pathological a code base ). I have internalized deeply the value of keyword 'explicit'.

OCaml seems like the next language I want to check out.
posted by PMdixon at 5:54 AM on July 16, 2018 [2 favorites]


So, you sort the first ten values. If a successive value is lower than the highest value on that list, sort it in and dump the value it replaces. If it is higher than the highest value on your list of 10, ignore it and keep going.

Yes, "take the top 10" is an example of a selection problem and your idea is one of the solutions for it.

I think that people may end up talking past each other in FP discussions because of the ambiguity in exactly what programming should be considered functional. I suspect that part of this difference correlates with one's preference for statically- or dynamically-typed languages.

I'm not going to say anything about dynamic FP because I don't live in that world and it scares me.

For me, FP involves being explicit about what monads your code is executing in. Does your code make it easy to tell if data is being mutated? Does your code make it easy to tell if an exception (or alternative result type) could result? Does your code make it easy to tell if IO is being performed? Does your code make a visible distinction between null and non-null? Between scalar values and lists (there are languages that don't!)?

The point of these distinctions is to help people write NON-monadic ("unlifted") code when they want to: code that *doesn't* mutate state, that *doesn't* throw exceptions, that operates on a single value that is guaranteed to be present, etc. Because you can always lift such code into the relevant monad(s) with a wrapper if necessary, but if all you have is a monadic function you can't easily go the other way. There are a lot of popular languages out there that make writing unlifted code more difficult because they hide the monads from you, by making every value nullable and every statement a possible source of exceptions or IO calls.

Gary Bernhardt delivered a talk about the desirability of having a functional core and imperative shell; there is a correspondence here with writing unlifted code and wrapping it in the relevant monads as necessary, and a correspondence with the well-known notion of separating policy and mechanism.
posted by a snickering nuthatch at 5:54 AM on July 16, 2018 [8 favorites]
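
A small sketch of the "write unlifted code, then lift it at the edge" idea, with hypothetical names: the core function knows nothing about failure or IO, and the wrappers add those concerns one layer at a time.

import Text.Read (readMaybe)

-- Unlifted core: pure, total, no IO, no exceptions.
celsiusToFahrenheit :: Double -> Double
celsiusToFahrenheit c = c * 9 / 5 + 32

-- Lifted into Maybe: parsing can fail, and the type says so.
parseAndConvert :: String -> Maybe Double
parseAndConvert s = fmap celsiusToFahrenheit (readMaybe s)

-- Lifted into IO: the imperative shell that talks to the outside world.
main :: IO ()
main = do
  line <- getLine
  case parseAndConvert line of
    Just f  -> print f
    Nothing -> putStrLn "not a number"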


Most of us have learned programming procedurally (algorithms as recipes for doing a sequence of branching or conditional tasks). If you want to start learning Haskell you have to start really thinking mathematically in a sense. It's hard. Go get out your math textbooks and read a few math proofs etc. Also a lot of Haskell syntax and documentation is written for people who are already good at that, are into functional programming, and are or are working on becoming experts at writing efficient and interesting Haskell code. You can start out learning some of the basic concepts in a language you already know (Javascript eg, especially looking at generators and other new functional-ish stuff) or one that looks more similar to something you already know.. Clojure or Scala or even ML..
posted by thefool at 6:06 AM on July 16, 2018 [1 favorite]


thelonius:
"The stack has collided with the heap"
Conceptually, in many models the stack starts at one end of memory (maybe just after the program code) and the heap is allocated from the other end, with the two growing toward each other; when they meet, you've run out of memory. Especially in an educational language running in a simple virtual or emulated computer model of some kind, this could literally be what happens.
posted by thefool at 6:11 AM on July 16, 2018 [2 favorites]


good functional programmers … believe that they must impose this knowledge on others, for their own good, at any cost

It turns out that I went to primary school with one of the early luminaries of Haskell. Even at age 8 he was an unbearable smartarse, the kind of kid who roamed the playground imparting unwanted wisdom on those too slow to get out the way. From what I understand now, he's left the Haskell community and for the last few years has been creating a new language even purer than Haskell.

I don't get the quest for language purity. Underneath everything, you're just shifting and testing bits. It's okay to mythologize the practice of programming, but the same concepts that ran IBM card-based computing 60 years ago pay the bills today.
posted by scruss at 6:16 AM on July 16, 2018 [7 favorites]


"The stack has collided with the heap"

thefool is of course correct but it also may be a subversive corewars virus injected via a fauxTyped variable and there is a silent war between imperative and functional code.
posted by sammyo at 6:20 AM on July 16, 2018 [1 favorite]


I really loved Haskell (or maybe it was Gofer) when I took Intro to CS at UT back in the day. But it's been such a long time now and I've been in procedural/OOP world for so long. Maybe I should pick up some functional programming again just for fun.
posted by kmz at 6:22 AM on July 16, 2018 [2 favorites]


I'm currently a postdoc in computational neuroscience, and I do everything exclusively in Haskell, e.g. machine learning, simulations, and data analysis. For me the practical trade off always came down to: Do I want to spend my time debugging code, or convincing the type checker that what I'm doing is correct?

For me the latter was always preferable, which is why I jumped ship ages ago. YMMV.
posted by Alex404 at 6:24 AM on July 16, 2018 [5 favorites]


Also: laziness allowed me to write a super compact, recursive implementation of backprop where the function depends on both the upstream values and the downstream errors, which of course both depend on each other. That is, it's kind of like the classic trick for defining the Fibonacci sequence:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

but with lazy recursive calls on two distinct variables. If that doesn't seem cool/fun to you, then you're probably better off not wasting your time with functional programming :)
posted by Alex404 at 6:33 AM on July 16, 2018 [5 favorites]


The more I contemplate languages like this, the more I want to retreat to the comfortingly familiar dangers of C++.
posted by Foosnark at 6:54 AM on July 16, 2018 [2 favorites]


Haskell is the language I really want to love, but I think my last bounce out of it came when I had the perfect model for what the program should do to integers, but couldn't get a straight answer on how to run that model on integers from a file or stdin because IO integers are not "pure" integers. It's what drove me right back into the lap of Racket. While 95% of problems can be done without mutability, there are algorithms where immutability makes the task a lot harder than it should be. I love lisps but library coverage kind of sucks in some domains, which is a problem when you don't want to do stuff like write your own parser for someone else's data language.

While you can always get the length of a list/sequence/vector/whatever through recursive nibbling or iteration, that's a detail that's usually abstracted away in favor of higher-order forms that do the iteration or recursion for you. It's nice to have things like car/cdr recursion in my toolkit, but I hardly ever do it explicitly.
posted by GenderNullPointerException at 7:06 AM on July 16, 2018 [3 favorites]


Since nobody else has mentioned it, I just want to say that Learn You a Haskell for Great Good is a really good introduction to the language for non-FP programmers. Plus, it's kind of funny and has poorly-drawn cartoons.

There's value in learning FP (via Haskell or some other language) even if you'll never use it professionally because a) a lot of FP stuff is getting incorporated into bill-paying languages (e.g. C++11's type inference and lambdas) and b) it will make you think about programming in new and useful ways that will help you write better code.
posted by suetanvil at 7:11 AM on July 16, 2018 [3 favorites]


I never got much practical use out of the OCaml I learned in college but I find myself more and more often using sprinkles of functional programming in all the languages I use (mostly JS these days). It just so happens that many problems of code quality and maintainability seem suited for a functional(ish) solution, or rather a solution where ideas popularized by the FP community play an important role.

It happened very organically, in the sense that specific abstractions presented solutions for specific problems I was already looking for better solutions to. It's a less dogmatic, and I would imagine far less frustrating, approach than trying to digest a bunch of heavy tomes and learning a purely functional language right off the bat.

We're engineers/devs/programmers/whatevers, right? We're each trying to solve concrete problems that we already know about and can hopefully articulate well enough to define what a solution to them would need to do for us. Starting from there seems like the right angle to me, not "oh, here's a trendy all-encompassing programming philosophy, let me scratch my head trying to imagine all the implications it might have for me in these situations I haven't actually encountered but are apparently a thing according to this book written by someone who might be trying to solve very different kinds of problems".
posted by hyperbolic at 7:52 AM on July 16, 2018 [2 favorites]


Artw, the little I've seen of FP in JS looked kind of square peg round hole, although JS programmers have done some impressive shaving around the edges.

That lazy sort example always struck me as a very bad example. Good examples include things to do with streaming -- think unix pipes, but available at any point in your program without the seralization needed for a shell interface. Also control flow, for example "when tooCold powerOff". You want that to always run tooCold, but *not* always run powerOff, and "when" is not built into the language but is a function someone wrote, so there must be a way to avoid evaluating some parameters to a function immediately -- which is what laziness is.

I'm pretty deep into the haskell; wrote this comment while using my configuration management system that's configured in haskell to fix a server, while waiting for my haskell programmed fridge to get enough solar power to start its morning cooldown, before starting my day job of using haskell to manage scientific data. Needless to say, I disagree with tom_r about Haskell's ability to manage real world effects.
posted by joeyh at 7:57 AM on July 16, 2018 [2 favorites]
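
joeyh's "when" example, sketched out with invented stand-ins. Because when' is an ordinary function whose second argument is only used on one branch, and because an IO action in Haskell is just a value until it is run, powerOff never happens unless the condition holds.

-- An ordinary, user-written 'when': nothing built into the language.
when' :: Bool -> IO () -> IO ()
when' cond action = if cond then action else return ()

readTemperature :: IO Double
readTemperature = return 21.5          -- stub for a real sensor read

powerOff :: IO ()
powerOff = putStrLn "compressor off"   -- stub for the real thing

main :: IO ()
main = do
  temp <- readTemperature
  when' (temp < 2.0) powerOff          -- powerOff is neither evaluated nor run here
  putStrLn "check complete"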


One thing I'd like to try someday is learning how a language like Haskell can be a key component of a larger system, where a user can write Haskell programs/functions to define behavior of specific parts of the system.

When a programmer can focus on solving a particular piece of the puzzle with Haskell rather than trying to write a large practical system, does that make it easier? In other words use a systems programming language like C++ (Or if we're into obscure interesting languages, Erlang!) to set up an infrastructure to do the IO and other boring stuff, and let a programmer then use Haskell for specific data processing or logic components.

Here's an article about building DSLs on top of Haskell:

* https://queue.acm.org/detail.cfm?ref=rss&id=2617811

I've also found these libraries for Arduino, anyone ever try them?

* https://hackage.haskell.org/package/frp-arduino

* https://github.com/ku-fpg/haskino

Thoughts? Advice?
posted by thefool at 7:58 AM on July 16, 2018


thefool, I've not used the FRP Arduino libraries, beyond reading their docs, but I am doing very much the same thing on Linux for my home automation systems. FRP is a very natural fit for that.

Facebook uses a haskell DSL for spam filtering or something, with large numbers of junior developers writing rules in it. It's designed in a way that makes it automatically parallelize. Only one of the higher-profile examples.
posted by joeyh at 8:03 AM on July 16, 2018 [2 favorites]


One thing that's nice about Haskell is that GHC, the commonly used compiler, works with other standard Unix build tools. In particular you can create shared libraries with it: binaries you can then embed in and call from other code. There are remarkably few languages you can do that with: C, C++, Fortran. And Haskell.

It makes Haskell an interesting candidate for writing critical system components in. I was particularly curious if it would be feasible to write an OpenSSL alternative in Haskell. OpenSSL has had a lot of horrible security bugs because it's written in shitty C code. Clean Haskell should be more secure.
posted by Nelson at 8:03 AM on July 16, 2018 [1 favorite]
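
A minimal sketch of what Nelson means, with invented names; the exact build invocation varies by platform and GHC version, and the calling program also has to start the Haskell runtime (hs_init from HsFFI.h) before calling in.

{-# LANGUAGE ForeignFunctionInterface #-}
module Adder where

import Foreign.C.Types (CInt)

-- Exported with the C calling convention, so C (or anything that can call C)
-- can link against the resulting shared library and call hs_add directly.
foreign export ccall hs_add :: CInt -> CInt -> CInt

hs_add :: CInt -> CInt -> CInt
hs_add x y = x + y

-- Built with something along the lines of:
--   ghc -shared -dynamic -fPIC Adder.hs -o libadder.so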



His code was obviously correct, a pleasure to use as a consumer, and a complete pain in the ass to extend. It was full of extensibility points, obviously; all you had to do was add another value to an enumeration or another case to a switch block. Except that the business change that required the extension was never in a direction he'd predicted, and tended to involve deep refactoring that redistributed functionality over the type hierarchy.


Sounds like he'd have done much better if he had been in more meetings with the stakeholders who use his code and listened to their needs.

Which is something that insufferable FP programmers are very, very bad at.

As for me, I love Rust. It has the lovely property of checking for memory and concurrency bugs at compile time, while still looking and feeling like an ordinary systems language, with all the FP and Category Theory insanity in the compiler.
posted by ocschwar at 8:05 AM on July 16, 2018 [1 favorite]


Also: I got into Haskell a couple of years back. I worked through most of Learn You a Haskell and wrote this non-trivial program in it.

I ended up giving up on the language when I realized that its laziness was more trouble than it's worth.

See, for most languages with Scary Hard Features (e.g. macros in Lisp, templates in C++), you can avoid the problems with them by not using them. But in Haskell, the laziness is pervasive. You need to understand how it works all the time or your program will run out of heap and die as soon as a parameter gets too big. And not just understand it, but have a really good mental model of how the compiler is going to evaluate things.

This article goes into one example, the tl;dr of which is that these two expressions:

foldl (+) 0 [1..1000000]
foldr (+) 0 [1..1000000]

will both do the same thing (find the sum of all numbers from 1 to 1000000 by applying the add function to each successive element and a running total) but one will fill your heap with unevaluated thunks until it reaches the last number and the other will not. At one point, I think I understood why this happens but it's complicated and I've recycled those brain cells.

The cognitive overhead of dealing with this ends up being much higher than that of just managing some local state. And anyway, I have yet to see a use of laziness that couldn't be done just as easily with anonymous functions.

(Also, laziness isn't faster, at least according to the compiler expert I asked about this some years back. He told me that the most effective optimization for lazy languages was figuring out if it was possible to eagerly evaluate an expression.)

Okay, I'm done ranting. I'm going to go back and read the thread now.
posted by suetanvil at 8:07 AM on July 16, 2018 [3 favorites]
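
To make the thunk build-up concrete, this is roughly how lazy evaluation expands the foldl version before any addition happens (the strict foldl' mentioned upthread forces each intermediate sum instead, so its accumulator stays small):

foldl (+) 0 [1,2,3]
  = foldl (+) (0 + 1) [2,3]
  = foldl (+) ((0 + 1) + 2) [3]
  = foldl (+) (((0 + 1) + 2) + 3) []
  = ((0 + 1) + 2) + 3       -- only now do the additions run

With a million elements, that accumulator is a million-deep chain of suspended additions, which is the space leak.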


Nelson, such a library has to garbage collect though, so all the cool kids these days who want to do something like that are using rust instead..

The canonical openssl alternative in haskell is http://hackage.haskell.org/package/tls , but opinions vary about whether having yet another TLS implementation to audit for eg timing oracles is a good thing.. I am a big fan of http://hackage.haskell.org/package/raaz which exposes cryptographic primitives in a way that avoids many of the common pitfalls of using them.
posted by joeyh at 8:09 AM on July 16, 2018 [2 favorites]


Alex404 I want to see this example. Is it online?

Once upon a time, I wrote an explanation of the Y combinator. I think it was the best exposition I ever achieved. And if any of you read and understand it on the first go, you're waaay smarter than I.
posted by tirutiru at 8:15 AM on July 16, 2018


Yeah joeyh I've been wondering if Rust can also serve as a better language for stuff like OpenSSL. It both benefits and suffers from not being some weirdo functional language like Haskell. Really just any language where you can't accidentally overrun memory buffers would be an enormous improvement over the state of the art in 2018.
posted by Nelson at 8:15 AM on July 16, 2018 [1 favorite]


The wall I hit with Haskell came from two different ends of Haskell culture:

1. Here's how to create provably correct systems of functions.
2. Here's Pandoc, an example of a Haskell system that can handle the horrible ugliness of both real-world HTML and Microsoft Word files.

Bridging that gap, even for the simple task of applying a provably correct system of functions to a list of numbers written into a file, ended up involving hours of technical and theoretical jargon before I just gave up and did the thing in 20 minutes of scheme.
posted by GenderNullPointerException at 8:18 AM on July 16, 2018 [2 favorites]


For all you people who are thinking of trying out OCaml, please do so. One of the best ways of learning it is via the Real World OCaml book (the full text is available at that link).

If you've mostly been a very practical programmer, building larger programs that actually get shit done, you'll appreciate the way they teach FP in that book, and the way they use OCaml. The main author Yaron Minsky works at a hedge fund (Jane Street) that uses solely OCaml for computational trading. So they care a lot about building robust and fast systems.

One of the problems with FP languages is that a lot of them have a centre of gravity in academia, where most of the programs people write are smaller and not really meant to be used long-term. So this results in a culture where people write messy code, don't bother to use meaningful variable names, and don't maintain discipline when using the greater expressive power their languages give them. I've seen so much horrendous OCaml code, and I've written some myself in the past. In contrast, the Jane Street code is generally well engineered; it's written to be easy to understand and make sense of.
posted by destrius at 8:37 AM on July 16, 2018 [4 favorites]


Although there's some wisdom in there, I pretty strongly object to tom_r's characterization of functional programming. I taught myself Haskell exactly because I felt like I could naturally express my thoughts in it. In particular, I'm a mathematically inclined person who finds it very natural to look at equations and translate them into Haskell code, and I know I'm not the only one.

As for backprop (to answer tirutiru's question), posting my code wouldn't be helpful because it's mixed up with the functions and combinators of my rather large library for doing my work (it's a library for doing numerical optimization based around concepts from differential geometry). Intuitively though, my implementation is built around the class 'Propagate' of parametric functions f with the method

propagate :: Error f -> Input f -> f -> (Derivative f, Output f)

where 'Error f' is the mismatch between the output of f and its target, 'Input f' and 'Output f' are the inputs and outputs of the function, and 'Derivative f' is the derivative of f computed by propagate. If f is a linear function, then you can instantiate 'Propagate f' easily: 'Derivative f' is just the outer product of 'Error f' and 'Input f'.

If f is a multilayer perceptron, then you can define 'propagate' recursively, that is, we instantiate

'(Propagate f, Propagate g) => Propagate (NeuralNetwork f g)'

where (NeuralNetwork f g) is the composition of f and g, which also applies some static nonlinearity (e.g. sigmoid) to the output of g. The implementation of 'propagate' in this case involves both forward and backward propagating through f and g. The trick is that to compute the backpropped error, you will already need to have computed the forward pass. However, because of laziness, we can just write

propagate dz x fg =
  let (f, g)   = splitNeuralNetwork fg
      (df, z)  = propagate dz y f
      y        = sigmoid y0
      (dg, y0) = propagate dy x g
      dy       = dz < (linearPart f <> sigmoidJacobian y0)
  in (joinNeuralNetworkDerivatives df dg, z)

where < is transpose application, linearPart extracts the linear part of f (i.e. throws away the biases), <> is matrix-matrix multiplication, and splitNeuralNetwork and joinNeuralNetworkDerivatives manipulate the containers of parameters and their derivatives. To turn this into the backpropagation algorithm, you just need to wrap it in the error function of your choice.

P.S.: If this seems like an insane way of implementing backpropagation, I will say that this is the most natural way for me. For my own sake I rederived backpropagation, noticed you can essentially boil it down to a single equation, and then translated that equation into Haskell, which resulted in the aforementioned function.
posted by Alex404 at 9:07 AM on July 16, 2018 [7 favorites]


I enjoyed programming in Haskell-like languages (never did much in Haskell), but the problem I ran into was when I wanted to do 'real world' stuff. Most languages these days are a massive standard library first, an even more massive "not official part of the language, but definitely everyone uses them" set of third-party libraries second, and the actual language third.

A lot of the academic languages reverse that, which gives you a language that is usually great if you are writing all the code yourself, but not always so hot if you don't.

Still, learning the languages can be very useful. Immutability as the default is, IMHO, a great thing, and there is a certain bizarre feeling of enlightenment you get when you write some code in Java and think "This would have been so much easier in Haskell" (which has happened to me; I was doing something that would have benefited hugely from laziness).
posted by It's Never Lurgi at 9:51 AM on July 16, 2018 [2 favorites]


A love of FP and/or Forth are usually a sure sign that the person you just met is probably deeply odd.
posted by Dr. Twist at 9:53 AM on July 16, 2018 [3 favorites]


About "learn you a Haskell":

"Plus, it's kind of funny and has poorly-drawn cartoons."

I wonder if there is psychological research that explains why textbooks with "jokes" are uniquely frustrating. There is something absolutely insulting about your professor making jokes about the subject while you have trouble grasping the material to begin with.
posted by victotronics at 10:23 AM on July 16, 2018 [1 favorite]


Since it seems like the thread is on a kick about Backprop, this is a fun talk about the history of automatic differentiation and its modern functional incarnations. There are also a ton of great references.

Haskell is definitely worth learning. There are a lot of things that people complain about, but in the end it is the product of remarkable thoughtfulness. This talk about how typeclasses work is really quite interesting even if you have no interest in ever writing anything in the language.
posted by ethansr at 11:15 AM on July 16, 2018 [1 favorite]


I'm thinking it's kind of awe-inspiring that I know like a half dozen programming languages, have written programs for Fortune 10 companies, tons of personal front-end web server applications, and moderately heavy math processing (Monte Carlo simulations for financial estimating applications), could do every single one of their examples (podcasts, barcodes, autotesting, etc.) in multiple languages, and I still have no clue what is being discussed here.

If you want to go deep in programming, jump in, it's an abyss!
posted by The_Vegetables at 11:44 AM on July 16, 2018 [10 favorites]


I like Perl6
> my \fib := 0, 1, * + * ... *; say fib[^10]; say fib[100];
(0 1 1 2 3 5 8 13 21 34)
354224848179261915075
The fun bit is when you make a unary postfix operator named '!' and then do:
> say [!] ^10;
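For reference, the Haskell version of that lazy Fibonacci stream is the classic corecursive one-liner:

-- the whole (infinite) sequence, defined in terms of itself
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- take 10 fibs  ==>  [0,1,1,2,3,5,8,13,21,34]
-- fibs !! 100   ==>  354224848179261915075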
My reason for mostly just reading about Haskell comes down to not wanting to deal with the hassle of purity when it comes to world interfaces like IO and randomness. But the rest of FP in general has been of much help over the years.

Whatever you do, don't write another Monad explanation/tutorial please.
posted by zengargoyle at 12:15 PM on July 16, 2018 [3 favorites]


I enjoyed programming in Haskell-like languages (never did much in Haskell), but the problem I ran into was when I wanted to do 'real world' stuff. Most languages these days are a massive standard library first, an even more massive "not official part of the language, but definitely everyone uses them" set of third-party libraries second, and the actual language third.

They're not Haskell, but note that there are some functional languages (of varying purity) that give you access to massive ecosystems of third party code. Clojure and Scala both run on the Java Virtual Machine and make it easy to use existing Java libraries, and F# runs in .NET.

The cognitive overhead of dealing with [the difference between foldl and foldr] ends up being much higher than that of just managing some local state. And anyway, I have yet to see a use of laziness that couldn't be done just as easily with anonymous functions.

I disagree with both of these statements, but especially the first one :) I don't deny that laziness can introduce additional complexity (and I've personally been bitten by that for sure), but the magnitude of the problems introduced by local state is enough that it gave rise to entire fields of research and industry. And sometimes you just need to learn why two functions exist and what they do.

Lisp-style macros can make implementing laziness easier in languages that don't have it built in, but you usually still need either (1) a set of functions/macros that are separate from, but parallel to, the built-ins, e.g. cons and lazy-cons, car and lazy-car, or (2) to write function invocations yourself, e.g. (let ((realized-value (funcall (car my-sequence)))) ...). Either way, the programmer has to keep straight whether they're dealing with lazy values or not.
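For example, in Haskell the ordinary list functions are already lazy, so there is no parallel cons/lazy-cons vocabulary to keep straight; this toy definition only ever touches the first five elements of a notionally infinite list:

-- evaluates just enough of [1..] to produce five results
firstFive :: [Integer]
firstFive = take 5 (map (^ 2) (filter even [1 ..]))   -- [4,16,36,64,100]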

Re: purity, Jpfed's mention of the idea of "functional core and imperative shell" seems right on to me. At some point, you're going to write impure code. That's OK, but I seem to have the most success in trying to minimize the amount of that impure code, and "quarantine" it: Keep as much code as is practical purely functional, for all the usual reasons (easier to debug, easier to reason about, often easier to actually use in an engineering sense), and try to keep it separate so there's a clean boundary between pure and impure.
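A minimal sketch of that split (toy code; the input format and file handling are just for illustration):

-- Pure core: all the logic, trivially testable.
report :: [Int] -> String
report xs = "count = " ++ show (length xs) ++ ", total = " ++ show (sum xs)

-- Imperative shell: a thin layer that performs the IO and delegates to the core.
main :: IO ()
main = do
  input <- getContents                      -- impure: read stdin
  let ns = map read (lines input) :: [Int]  -- read will crash on malformed input; fine for a sketch
  putStrLn (report ns)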
posted by jjwiseman at 1:27 PM on July 16, 2018 [2 favorites]


I enjoy programming in functional languages, with their huge emphasis on generics. They make programming fun and neat. However, I think there's a significant difference in mental overhead between Scala and Haskell. I don't believe the Scala community has gone as crazy with compiler plugins; in Haskell, there's quite a bit to learn about the necessary extensions after you've learned the syntax of the language. GADTs and Template Haskell are used frequently.

Scala can use inheritance rather than Haskell's forall existentials. Scala can use classes as first-class modules rather than Haskell's Generalized Algebraic Data Types. Scala has reflection, while Haskell has to use Template Haskell to compile things differently to preserve metadata about classes. Every feature in Haskell is an additional compiler extension. It would be useful for the Haskell community to declare GHC as the official standard and just incorporate the extensions into the language standard.
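For readers who haven't run into them, here's a minimal (illustrative, not real-world) example of why GADTs need their own LANGUAGE pragma: each constructor picks its own return type index, and pattern matching refines the type accordingly.

{-# LANGUAGE GADTs #-}

-- A tiny expression language whose type index tracks the result type,
-- so ill-typed expressions are rejected at compile time.
data Expr a where
  IntLit  :: Int -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e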

Also disappointed that Haskell's only solid ODBC ORM, Persistent, doesn't support database schema identifiers. HDBC-odbc can use raw queries to get at database objects in other schemas, but I like using ORMs when programming.
posted by DetriusXii at 1:40 PM on July 16, 2018 [1 favorite]


On the other side of FP: since it hasn't come up here yet, I'll mention that I've enjoyed messing around off and on with Elixir, which is basically Erlang reimagined by Ruby people (but better than that probably sounds).

Like Erlang itself, though, it's easy to see it as a collection of cool solutions waiting for someone to come around with a project that needs them badly enough to bother figuring out how it works.
posted by atoxyl at 2:16 PM on July 16, 2018


I like using ORMs when programming.

I have had some experiences in life more unpleasant than using Hibernate, but not many.
posted by thelonius at 2:49 PM on July 16, 2018 [2 favorites]


Yeah joeyh I've been wondering if Rust can also serve as a better language for stuff like OpenSSL.

It is absolutely f***ing perfect for stuff like OpenSSL.
posted by nnethercote at 3:33 PM on July 16, 2018 [1 favorite]


For those of us who have been working primarily in the Microsoft stack, F# is a functional language that runs on the .NET platform. F Sharp For Fun and Profit is a great site that has tutorials and "why the hell would I want to do this" articles.
posted by matildaben at 4:30 PM on July 16, 2018 [1 favorite]


"We abandon some ideas that might seem fundamental, such as having a for loop built into the language. We have other, more flexible, ways to perform repetitive tasks."

"More flexible" seems an overly-big claim on that one, so much so that it makes me skeptical of the rest, and I know a few functional languages.
posted by talldean at 4:55 PM on July 16, 2018


Although there's some wisdom in there, I pretty strongly object to tom_r's characterization of functional programming. I taught myself Haskell exactly because I felt like I could naturally express my thoughts in it.

This is my take on it, too. It's true that many applications do, in practice, devote a much greater portion of their functionality to marshaling and demarshaling their inputs and outputs* than to performing computations over those inputs. But for the applications that don't: if you think in terms of transformations over data more readily than in terms of recipes for accomplishing each transformation, then Haskell not only gives you a superlatively capable means of expressing those transformations but also rigorous compile-time guarantees that your intended semantics are reflected in the program itself. I promise that some of us legitimately benefit from this model.

* On the other hand, if you’re in the position of needing to roll your own parser for some custom input schema, I can think of no more convenient language than Haskell to represent the relevant parsing rules in a concise and readable way and to ensure the validity of inputs according to those rules.
posted by invitapriore at 5:45 PM on July 16, 2018 [3 favorites]


I increasingly think of Haskell, and functional programming in general, as akin to Christianity: there are many well-meaning people trying to do the best they can, but, especially online, it can be hard to find them amid all of the noise from people who are primarily motivated by the joy of telling everyone else how they're Doing It Wrong™️ and certainly doomed unless they adopt the true faith. One of the things I've been most struck by is how rarely the vocal zealots understand the fundamentals (CS, hardware, the problems they're working on, etc.) that are allegedly the basis for their strong beliefs, whereas the actual experts are generally far more reluctant to make such huge sweeping claims[1].

I'll second the comments other people have made about Rust. Learning it was refreshing, with regular reminders that the developers are interested in advancing the state of the art but also have to ship real products. So you get practical things like encouraging immutability while recognizing that, although it's important to be careful about changing shared state, it's not a holy cause; they make it safe for the legitimate cases where your program needs to reflect how the hardware actually works.

1. I'm reminded of Dan Luu's review of static vs. dynamic typing studies, which concluded that most people pass around the ones that favor their existing position while ignoring the general trend of an inverse relationship between the reported effect size and the quality of the methodology.
posted by adamsc at 6:19 PM on July 16, 2018 [5 favorites]


As an example of a use case where Haskell was far and away the easiest implementation language at hand, early last year I wrote a ~90 line utility to pretty-print the short debug-string serialization format that you get from protobufs, because I had a hard time reading them in the single-line format, and plugging them into a JSON pretty-printer and hoping for the best only occasionally produced readable results. If you go through the commits you'll notice that I made a lot of updates to deal with cases I hadn't originally thought of, since I wasn't working from a formal grammar of any kind; nonetheless, I'd say the total time spent making the thing do what I wanted was on the order of an hour. That was as a pretty bad lower-intermediate Haskell programmer who was only intending to write a quick, very context-embedded utility for himself and his teammates, but it was still a much smoother experience, in terms of translating the AST in my mind into code and then debugging the cases where my internal AST and the actual AST diverged, than I could imagine having in any other language; I've written parsers of similarly low complexity in other languages and had a much worse time of it.
posted by invitapriore at 6:52 PM on July 16, 2018 [4 favorites]


Learning [Rust] was refreshing with regular reminders that the developers are interested in advancing the state of the art but also have to ship real products so you get practical things like encouraging immutability but recognizing that while it's important to be careful about changing shared state it's not a holy cause and so they make it safe for the legitimate cases where your program needs to reflect how the hardware actually works.

Are you referring to Haskell? It has all kinds of shared mutable state; off the top of my head, there are IORefs, STRefs, TVars, MVars, mutable arrays, vectors, and hashtables. If immutability is a holy cause, the designers of Haskell are certainly apostates.
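For example, a perfectly ordinary mutable counter (toy code):

import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)   -- a mutable cell, confined to IO
  modifyIORef' counter (+ 1)
  modifyIORef' counter (+ 1)
  readIORef counter >>= print      -- prints 2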
posted by eruonna at 9:59 PM on July 16, 2018 [2 favorites]


I write "C with Classes"-style C++ for little embedded microcontrollers with 8k of memory. A whole project might be 1,000 lines of code.

Life is good.
posted by ryanrs at 10:09 PM on July 16, 2018


Are you referring to Haskell? It has all kinds of shared mutable state
I’m aware of that — wasn’t referring to the language but a noxious strain of advocate culture. I’ve heard those mentioned as things you should never use, historical mistakes, etc. without asking why the language designers felt differently.
posted by adamsc at 4:32 AM on July 17, 2018


I got here way too late! I use Clojure almost exclusively at work, so I live in the dynamic FP world full time. Didn't realise other people thought it was so scary - I actually love it, but I also unironically love JavaScript and assembly, so maybe I just have a thing for ridiculous nightmare scenarios.

I actually learned Clojure specifically to get away from the JS team I was on. I was on a team of self-taught men who thought they were the bee's knees, and refused to listen to my crazy lady ideas like "not everything needs to be a global variable" and "the bulk of your application should not execute IN the config file". The real problem was that I would usually lose, because the team lead was also one of these guys and they were all friends. I once received THREE messages of pity from other devs on other teams who overheard me losing an argument about putting private keys on GitHub (... I was anti).

So I learned Clojure, at least in part because the other guys on the team mentioned trying to learn it and failing; I was driven by spite and a desire to never work on the same project with them again.

I've learned to legitimately enjoy it too, despite not enjoying my experiences with Haskell as my introduction to FP years back. It's also kind of come full circle, since in university I actually worked on a JVM with the specific goal of improving JIT performance for dynamic and functional languages! So that's fun, for some definitions of fun, I guess.
posted by one of these days at 7:16 AM on July 17, 2018 [6 favorites]


So, I'm a web developer. 95% of what I do boils down to taking data from HTML forms and stuffing it into a database, or retrieving data from a database and rendering it to HTML (or JSON or CSV).

Is there a functional language with a mature ecosystem of tools for solving web-specific problems? Tools for handling routine tasks such as routing incoming HTTP requests, validating input, rendering view templates, ORM (or whatever the analogue is in the FP world), etc.?

Because every introduction to functional programming that I've seen focuses on wonky, low-level, compsci-type stuff. This is understandable, since we're talking about general-purpose languages, not web frameworks or something like PHP (which is a general-purpose language with a massive, web-specific standard library).

But if the authors of these articles want to sell me on FP, I need to see how it's gonna help me solve real-world problems, with real deadlines, and without having to write my own framework from scratch.
posted by escape from the potato planet at 8:29 AM on July 17, 2018 [2 favorites]


I do web stuff in Clojure! I'd say that that's probably its main "real-world" use case, but I could be wrong.

Compojure and Ring are widely used for middleware and routing HTTP requests, and there's Hiccup for templating HTML (although I've never had to do that bit, so I've never used it).
posted by one of these days at 9:47 AM on July 17, 2018 [1 favorite]


Thanks, one of these days. I guess I'm kind of a dinosaur – I still build content-centric, honest-to-gosh, CMS-backed websites (not "web applications"), with a monolithic architecture (mainly Laravel and CakePHP, lately). So the notion of building (for example) a routing layer in a separate language is kind of alien to me.

I guess I was asking for a FP equivalent to a web framework such as Laravel – a single "thing" which receives HTTP requests, routes them, does the business logic, renders the view, and serves it to the client. Sounds like that isn't really a thing in the FP world?

Of course, I work at a company that has a grand total of two developers (including myself), in a tech-adjacent but not tech-focused industry. And it sounds like microservice architectures call for a fairly large team that's versed in a bunch of different technologies.
posted by escape from the potato planet at 10:11 AM on July 17, 2018 [1 favorite]


And it sounds like microservice architectures call for a fairly large team that's versed in a bunch of different technologies.

Conway's Law
posted by rhizome at 12:44 PM on July 17, 2018 [2 favorites]


Is Yesod the kind of thing you are looking for? It does routing; rendering of HTML, CSS, and JavaScript; databases; static file serving; auth; and some other things through plugins. Admittedly, a lot of this uses templates that are not quite Haskell but which are translated into Haskell (using the language extension TemplateHaskell).
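The hello-world from the Yesod documentation is roughly the following (the exact set of pragmas varies a bit across versions):

{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE QuasiQuotes       #-}
{-# LANGUAGE TemplateHaskell   #-}
{-# LANGUAGE TypeFamilies      #-}
import Yesod

data HelloWorld = HelloWorld

-- routes are declared in a quasi-quoted DSL and expanded by Template Haskell
mkYesod "HelloWorld" [parseRoutes|
/ HomeR GET
|]

instance Yesod HelloWorld

getHomeR :: Handler Html
getHomeR = defaultLayout [whamlet|Hello World!|]

main :: IO ()
main = warp 3000 HelloWorld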
posted by eruonna at 10:01 PM on July 17, 2018


Compojure, Ring, and Hiccup are all Clojure libraries, not separate languages. Compojure and Hiccup could be considered DSLs embedded in Clojure, but they still have access to the rest of the language. I don't know about the wider FP world, but Clojure web development at least involves picking and choosing from different libraries to build a full stack. I like that personally, but it does take a bit of research to understand whether your particular needs are met by a given set of libraries.
posted by Mister Cheese at 10:14 PM on July 17, 2018


eruonna: yep, that's the kind of thing I have in mind. I'll check it out. Thanks!
posted by escape from the potato planet at 11:20 AM on July 18, 2018


Haskell is the language used for the Cardano cryptocurrency. The organization behind it pitches it as an academic project, intended to test the assumptions behind other blockchains in a peer-reviewed way. They say that writing in Haskell will allow them to 'prove' that a smart contract or application does what it is supposed to do, something that (they claim) can't be done using imperative programming languages.

As one who is mildly proficient as a python/JS tinkerer, these functional languages are very intimidating, and I have the impression that those who use it are on an nth-level higher plane of programming skill than I am right now.
posted by daHIFI at 9:10 AM on July 24, 2018


As a functional programming dabbler, I don't see it as all that much higher. A typical process is:

1. I have an XML file, I need a string of contents.
2. I have a string, I need a data structure.
3. I have a data structure, I need a list of stuff from that structure.
4. I have a list, sort it and remove duplicates.
5. Give that list back to me.

Since functions give me the results of their processing by default, each step is reasonably easy to test, debug, and chain together at the REPL. Python's behavior, where some functions return results and others modify data in place, confuses me.
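For instance, a toy version of steps 2 through 5, with words in a plain text file standing in for the XML step (hypothetical file name):

import Data.List (group, sort)

-- each stage is an ordinary function you can poke at in the REPL
process :: String -> [String]
process = map head . group . sort . words   -- split, sort, de-duplicate

main :: IO ()
main = do
  contents <- readFile "input.txt"
  mapM_ putStrLn (process contents)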

Advocates tend to push recursion and low-level list operations like nibbling and consing, and while that's nice to have, I almost always use an iteration function or macro.
posted by GenderNullPointerException at 12:40 PM on July 24, 2018 [1 favorite]


My dilletante's sense of the FP philosophy makes it seem like "object-oriented Unix pipes."
posted by rhizome at 12:53 PM on July 24, 2018


I'd say "typed Unix pipes" instead. I've been doing a lot of shell scripting work (in ksh93, which expands on POSIX in nice ways without being a complete aggregated dust bunny of a language, or hell a dust mephit of a language) and dealing with the choice between stream-of-text and space-delimited-argument-list-of-text is the major frustration.
posted by GenderNullPointerException at 1:35 PM on July 24, 2018 [2 favorites]


Here's another typical process:

1) Initiate a transaction against some data store.
2) Read some data.
3) Perform computations against that data.
4) Write the results.
5) Close the transaction.

Note here that there are two orthogonal concerns: the datastore's transaction semantics, and the reads/writes/computation over the datastore's contents. In a typical imperative language, it is both easy to blend these two concerns into a single unit (impeding testability) and difficult to separate them typefully (impeding developer velocity and correctness guarantees). In a language like Haskell, I can very easily separate the logic of the transaction (represented as a free monad, something like an abstract syntax tree except that the richness of the type system lets me define acceptable sequences of potentially branching operations in such a way that the compiler will alert me when I mess them up) from the execution. So now I have an internal and statically-typed language for expressing my program logic, and a pluggable set of interpreters that can use that logic to create an actual sequence of datastore operations or just test them for validity. So now I've significantly reduced my test load; a few system tests suffice for a sanity check, and then we can orthogonally test the logic flow against the translation functions that actually operate against some real datastore. Again, once the "computation" step becomes significant in a system with many computation nodes, a functional and rigorously-typed language like Haskell really shines.
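Here's a minimal sketch of the pattern (toy code using the free package and an in-memory Map standing in for the datastore; nothing like a production setup):

{-# LANGUAGE DeriveFunctor #-}

import Control.Monad.Free (Free (..), liftF)
import Data.Map (Map)
import qualified Data.Map as Map

-- A tiny datastore DSL: each constructor is one primitive operation,
-- with the last field holding the continuation.
data StoreF next
  = Get String (Maybe String -> next)
  | Put String String next
  deriving Functor

type Store = Free StoreF

get :: String -> Store (Maybe String)
get k = liftF (Get k id)

put :: String -> String -> Store ()
put k v = liftF (Put k v ())

-- Program logic, written purely in terms of the DSL.
bumpGreeting :: Store String
bumpGreeting = do
  old <- get "greeting"
  let new = maybe "hello" (++ "!") old
  put "greeting" new
  return new

-- One interpreter: run the program purely against an in-memory Map.
runPure :: Map String String -> Store a -> (a, Map String String)
runPure m (Pure a)           = (a, m)
runPure m (Free (Get k f))   = runPure m (f (Map.lookup k m))
runPure m (Free (Put k v n)) = runPure (Map.insert k v m) n

main :: IO ()
main = print (runPure (Map.fromList [("greeting", "hi")]) bumpGreeting)

A second interpreter with the same shape could issue real datastore calls, or just record the sequence of operations for a test, without the Store program changing at all.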

Now, it is totally the case that the semantics of the internal logic and the semantics of the datastore being modified can diverge, and so this isn't a perfect system, but the ability to separate concerns makes diagnosing such an issue a lot more localized.
posted by invitapriore at 6:29 PM on July 24, 2018 [3 favorites]


I came to appreciate declarative, functional programming practices and concepts organically. After more than a decade of writing applications in imperative, procedural and/or object-oriented styles, I found underscore, which served as an (imperfect) introduction to a number of important FP concepts.

After rewriting an existing application (one that I had written, originally) using underscore and FP techniques, I haven't looked back.

From underscore, I moved on to Ramda (like underscore, but truer to FP principles), partial lenses and calmm-js (something like a Functional Reactive Programming toolkit for JavaScript). The Sanctuary JS ecosystem is another interesting option for JavaScript programmers.

I've also dabbled in Haskell, Elixir and Erlang. I'm not a purist, but I am 100% certain that becoming familiar with FP concepts has helped me become a much better developer.

If you code in JavaScript and are interested in dabbling in FP, Ramda is an excellent place to start.

Programs written in an FP style can be difficult for the uninitiated to interpret, at first, but that gets much easier with a little practice, and for me, at least, FP concepts seem to be a good match for my way of devising solutions to puzzles and problems.
posted by syzygy at 5:26 AM on July 26, 2018 [2 favorites]




This thread has been archived and is closed to new comments