How to Sell Excellence (Haskell)

Haskell is a beautiful, elegant, useful language. But I have seen arguments similar to the ones made here for almost 20 years now, and Haskell has been lingering on the edge of obscurity for its entire existence. Three of the five best developers I know have never even heard of it. Why? Because they happened not to study CS at university (two are physicists and one is an electrical engineer). I'll get back to them, but first, let's look at one of the claims made here: "selling a negative is a losing strategy". That's not Haskell's "business" problem. Its problem is not listening. Not listening is a losing strategy; great salespeople listen.

What is the Haskell community not listening to? Well, [this paper], for one. It is a fascinating study of how programmers (and businesses) choose languages. Of the possible factors influencing adoption, safety/correctness is number 10 out of 14, trailing such factors as libraries, familiarity, performance and tooling. So placing correctness as the #1 feature of a language (we'll get back to the cost that entails momentarily) is a misunderstanding of the needs of the industry. The industry does not need and does not want to produce correct software. It needs to produce software that is correct enough and performant enough to meet requirements as cheaply as possible. The placement of correctness on that list suggests that the industry perceives our current correctness levels as good enough. It also suggests something that is readily apparent to anyone who follows industry trends: people are willing to pay more for performance than for correctness.

Which brings us to the costs. At what cost does Haskell achieve its allegedly superior correctness? I say "allegedly" because too few (hardly any, in fact) large-scale programs have been written in Haskell throughout its 20 years of existence; the Haskell compiler is probably the largest among them, and compilers are a unique outlier among software domains. The slides give us the answer: "Haskell crushes imprecision of thought". I normally respond to this claim with the following example: suppose you were asked to teach someone to catch a ball. You come up with two possible strategies: teach them Newtonian dynamics and aerodynamics, or let them practice. The former method "crushes imprecision of thought" and perhaps produces better results. The latter might be more widely applicable and produce results that are good enough. Is there really only one way to get correct (enough) results, so that we can claim as boldly as these slides do that "FP is the answer"?

In fact, the slides make it clear that it's not FP, but pure, referentially transparent FP (which we'll call PFP), that is the answer. Writing a program translates human thought into machine instructions. Haskell, as the slides tell us, "crushes imprecision of thought". Is that the best way to speak to humans? If it works, the result is beautiful, elegant, referentially transparent PFP code, which is then compiled (after precisely thinking through unforgiving compilation errors) into machine instructions. Is PFP the best way to talk to a machine, then? Well, it is so far removed from how machines "think" that the resulting execution model is often unclear. So I'd say that the cost of that particular kind of alleged correctness is a language that is as far from the way humans think as it is from the way machines think, and farther from both than probably most other programming paradigms.
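To make the "execution model is often unclear" point concrete, here is a minimal sketch of my own (not from the slides), using the standard lazy `foldl` versus the strict `foldl'` from Data.List; the names `sumLazy` and `sumStrict` are just illustrative. Both functions are pure and referentially transparent, and they always return the same result, yet their runtime behaviour can differ in ways the source code alone does not make obvious.

```haskell
import Data.List (foldl')

-- Lazy left fold: may build a chain of unevaluated thunks
-- (((0 + 1) + 2) + 3) + ... before any addition is forced,
-- so memory use can grow with the length of the list
-- (GHC's strictness analysis sometimes optimises this away).
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Strict left fold: forces the accumulator at each step,
-- running in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = do
  -- Equal results; potentially very different space behaviour at runtime.
  print (sumStrict [1 .. 1000000])
  print (sumLazy   [1 .. 1000000])
```

Nothing in the types or in equational reasoning distinguishes these two definitions; the difference only shows up in how the machine actually evaluates them, which is precisely the gap I am talking about.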

So, should you or shouldn't you use Haskell? That's completely up to you. I'm sure it can teach you a lot. I myself have not used it beyond going through some tutorials; I haven't found it particularly useful, but others most certainly have. Now let's get back to those three star programmers I know. They are experts in video compression, physics simulation and error-correcting distributed algorithms. The last thing they want to do is invest too much thought in proper abstractions or correctness proofs (at least while they're programming). Their preferred way of working is lots of trial and error. Is that "imprecise thought"? Is the expression of an algorithm as program code the most important part of programming, the one that requires the most precision? I think not. Is valuing upfront "precise" coding at the expense of other ways of reasoning about programs during and after they run the right way to improve software quality and drive down costs? I am still unconvinced.
