Destroy All Ifs – A Perspective from Functional Programming (degoes.net)
362 points by buffyoda on July 16, 2016 | hide | past | favorite | 222 comments


Define true as a lambda taking two lazy values that returns the first, and false as one that returns the second, and you can turn all booleans into lambdas with no increase in code clarity.
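For readers unfamiliar with the trick being parodied, here is a minimal sketch of Church-encoded booleans in Haskell (the names `true` and `false` are illustrative, not from the article):

```haskell
-- A Church "boolean" is a function that selects one of two alternatives.
-- Haskell's laziness supplies the "lazy values" part for free: the
-- branch not selected is never forced.
true :: a -> a -> a
true t _ = t

false :: a -> a -> a
false _ f = f

-- "if-then-else" then becomes plain function application:
-- true "yes" "no" evaluates to "yes".
```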

The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point, and if you want to change the mechanism of comparison (perhaps introduce locale sensitive comparison), you need to change a lot more code.

That's one of the downsides of over-abstraction and over-generalization: instead of a tool, a library gives you a box of kit components and you have to assemble the tool yourself. Sure, it might be more flexible, but sometimes you want just the tool, without needing to understand how it's put together. And a good tool for a single purpose is usually surprisingly better than a multi-tool gizmo. If you have a lot of need for different tools that have similar substructure, then compromises make more sense.

This is just another case of the tradeoff between abstraction and concreteness, and as usual, context, taste and the experience of the maintainers (i.e. go with what other people are most likely to be familiar with) matters more than any absolute dictum.


Someone else addressed the details of your counter argument, but I'd like to respond to it generally.

It seems like every time someone writes an article on how to write better code, there are responses about how it doesn't make sense when taken to some logical extreme, or some special case, as if that invalidates the argument. (FP techniques in particular seem to provoke this.) But code design is like other design disciplines-- good techniques aren't always absolutes.

Do you really think that because the given example doesn't apply to every situation it's a 'straw man'? It is a little tiring to hear all code design advice dismissed this way.


Reading this article I immediately saw a couple of drawbacks. Since then I've thought of a few more. But several of the points made were not lost on me. This article made me think a bit, and I'm still thinking about it. That's worth something right there.

To anyone out there who clicked through to these comments and is thinking it's not worth reading the article, please go ahead and read it. It's short enough. You may or may not use fewer if-statements in the future, but it might give you a better sense of why you choose to do things one way over another.


The article isn't balanced. It's suggesting that the direction of the refactoring is an unalloyed good, as I read it. I disagree.

I've seen junior devs take this kind of stuff literally and over-apply it, like it's a religious ritual that they get a pious buzz from adhering to. I'd prefer people to think first before regurgitating what they most recently learned.


I agree wholeheartedly. I think the boolean blindness concept that's in the background of this article is incredibly important. But if you're going to propose an actual concrete solution, you need to assess whether it will be right all of the time, most of the time, or situationally (edit: and any of those answers is ok--it's fine to have a pattern that sometimes works if it's presented as such). That requires looking at the ways it can go wrong. And this article just didn't do that.


The thing is that the tone of the article seems to suggest taking such an extreme: I mean, an "anti-if" campaign? There's like, only one sentence of concession near the end towards those unconvinced by the argument.


FWIW, I'm pretty unimpressed by the anti-if campaign's website. They've clearly put style over substance. It's a beautiful website, but I spent some time poking through it and I can still only guess at what exactly they're on about. It seems to be something about if-statements being bad, but beyond that it's rather a muddle.

I'm trying to be charitable, though, so let's assume that the core of their idea is something coherent. I'm guessing it's really about something I do think is an important point: How inversion of control is a design pattern that lets you create code that's much easier to manage, because it greatly limits the extent to which certain kinds of decisions need to be federated throughout the codebase.

If that's the case, then the real sin (and the article author's) is mistaking if statements for the problem. Conditional branching is not a problem; I think most of us can agree it's an essential operation. The real problem they should be after is poor encapsulation. Where if statements come into it is that, if you've got badly architected code with poor encapsulation, one of the symptoms you'll see is a proliferation of if-statements cropping up all throughout the code. Every single frobinator will need to stop and check whether the widget it's operating on is a whosit or a whatsit before it can take any sort of action whatsoever. Lord help us if we ever try to introduce wheresits into the system; we'll have to go modify 50 different files so we can replace all those if-statements with switch statements.

It's probably nowhere near as fun to write an article that advocates a high level design methodology as it is to write an article that makes a bold contrarian claim like "If statements bad", though.


You are totally right.

As an example, long if/else chains that check state can mean that you need another object, or another virtual function, or some other niblet of orchestration.

Likewise, I'm not really impressed by the anti-if campaign. At some point, abstractions cause the exact same problem they were designed to solve, and produce code that is difficult to reason about or change.


The author is just examining a common design mistake-- there's no sin there. Many times, it's a mistake to pass in a boolean switch when you could instead pass in the predicate function itself. That's a solid example that supports the author's claim. Maybe you're not convinced, but that doesn't mean the article is completely misguided.


Absolutely agree - if a campaign spouts "Destroy All Ifs" that kind of sets the tone for the discussion...


> There's like, only one sentence of concession near the end towards those unconvinced by the argument.

But that sentence is "I’m just joking about the Anti-IF campaign". Why would extra words help?


The title says "destroy all ifs". The author already did take the idea to the extreme himself.


I can't read the author's mind and can't speak for him but I don't think each word (especially the word "all") is meant to be parsed literally.

Here's another website called "Destroy All Software" using a similar phrase: https://www.destroyallsoftware.com/blog

Gary Bernhardt obviously doesn't advocate removing all software from the face of the Earth. Also notice that the blog includes posts with even more bombastic titles:

  "One Base Class to Rule Them All"
  "Burn Your Controllers"
Those are probably not meant to be interpreted literally. There really is no single universal class that can be used for all cases in every circumstance. Instead of parsing it literally, it may be a riff on LOTR "the one ring to rule them all."

Likewise, don't eliminate your controllers as though it were universal advice. Maybe the title is a riff on Cortés's "burn your ships" or some other cultural meme, like the women's liberation slogan "burn your bra."


It's true. Personally, I took that to be a bit tongue in cheek. I would compare it to "GOTO Considered Harmful"-- where the author wants you to imagine a world without such a technique in order to expand your abilities. (Even though there are probably edge cases where such usage is justifiable.)


The stance is rather different though - "GOTO Considered Harmful" as a phrase is both inviting a discussion and making a limited statement. "Destroy all ifs" is definitive; the argument is over at the end of the phrase and there will be no negotiation or concessions. I know that this is trivial in this case, but I think it would help discourse in the world generally if we could move away from this kind of position taking and offer our opinions more gently.

As a community we should reward more nuanced and open statements.


Absolutely.

Simply try replacing "ifs" in the slogan with anything else.

I think it's clear how this is simply juvenile hyperbolic invective.


"Ifs considered suboptimal" carries the spirit of Dijkstra and the general argument of the anti if folks.


That would work


> It is a little tiring to hear all code design advice dismissed this way.

I notice this form of dismissal in virtually all internet arguments. It's like most people aren't aware of the difference between a strong argument and a sound argument.


I think the problem is that most of these types of articles don't take your advice - the "broken" code that they are improving is absolutely wrong, and there's no room for contextual arguments whatsoever. I mean, the article we're discussing is on the topic of eliminating conditionals wherever possible - that's a hardline stance against something so commonplace in programming it's hard to imagine working without it.


> Define true as a lambda taking two lazy values that returns the first, and false as one that returns the second, and you can turn all booleans into lambdas with no increase in code clarity.

This is trivially true, any datatype can be encoded as a function. The post is not saying that we can pass any type of lambda whatsoever, but that we should pass lambdas that implement the required functionality.

> The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point

If call sites shouldn't choose which lambda (or boolean) to pass, simply define a new function that always passes the same lambda to the original function, and use it everywhere. (This could also be a good case for partial application.)
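A minimal sketch of that wrapper approach, assuming a hypothetical `matchWith` that takes the preprocessing step as an argument (all names here are made up for illustration):

```haskell
import Data.Char (toLower)

-- Hypothetical matcher parameterised on a preprocessing function
-- rather than on a boolean flag.
matchWith :: (String -> String) -> String -> String -> Bool
matchWith normalize pat target = normalize pat == normalize target

-- If call sites shouldn't make the choice themselves, fix it once
-- via partial application and use this everywhere:
matchCaseInsensitive :: String -> String -> Bool
matchCaseInsensitive = matchWith (map toLower)
```

Changing the comparison mechanism later (say, to a locale-aware one) then means editing one definition, not every call site.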


> This is trivially true, any datatype can be encoded as a function.

To elaborate: this is called the church encoding of the data type. Particularly interesting for recursive data types.

The most common example is probably 'foldr' (or 'reduce' in Lisp-parlance) for linked lists.
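A small sketch of that correspondence (the name `churchList` is illustrative): the Church encoding of a list replaces its constructors with arguments, which is exactly the substitution `foldr` performs.

```haskell
-- A list's Church encoding replaces (:) and [] with arguments;
-- 'foldr' performs exactly that substitution.
churchList :: [a] -> (a -> b -> b) -> b -> b
churchList xs cons nil = foldr cons nil xs

-- foldr (+) 0 [1,2,3]  rewrites  1 : (2 : (3 : []))
--                      into      1 + (2 + (3 + 0))
```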


> That's one of the downsides of over-abstraction and over-generalization: instead of a tool, a library gives you a box of kit components and you have to assemble the tool yourself.

...and a framework is likely to give you a box of components to build a tool-making factory factory factory...

http://discuss.joelonsoftware.com/?joel.3.219431.12


A church encoded boolean is precisely isomorphic to every language's standard booleans (modulo strictness, perhaps) and doesn't offer any benefits; you're still forking the program based on the information content of a single bit.

Let's take the following function invocation, which can be expressed with Boolean literals or Church encoded booleans, I don't care:

  match true false
If you want to determine the significance of the boolean values passed to this function, it does not suffice to go to the definition of 'true' or the definition of 'false'.

Now take something like this:

  match caseInsensitive contains
Even though I have used descriptive names here, it's almost beside the point; I could just as easily have used nonsense names:

  match foobar quux
If you want to know what 'foobar' means, you can go to its definition, and see how it preprocesses a string and a pattern. You don't have to guess about the meaning of a bit.

As a result, the semantics of 'match' and its parameters are all communicated more clearly, with less room for error, and much more generality.

There need not be any syntactic overhead: it is merely the replacement of some flag with a lambda which cleanly encapsulates the effect that would otherwise be encoded in the flag. The way you invoke the function is the same, but instead of twiddling bits to get what you want, you pass functions whose meaning does not require (as much) subjective and possibly error-prone interpretation.

Note this also objectively simplifies the functions themselves, because they formerly contained conditional logic, but once you rip that out and give them no choice (invert the control!), they have less room to err, which makes them easier to get right, easier to maintain, and easier to test.

There is also another way to view the issue: with booleans, we first encode our intentions into a data structure (at the caller site), and then we decode the data structure into intentions (at the callee site).

Well, why are we packing and unpacking our intentions into data structures? Why not just pass them through?

Indeed, we do that by pulling out the code and propagating it to the caller site (possibly with names so you don't need significantly different syntax and can benefit from reuse). Then our code more directly reflects our intentions, because we're not serializing them into and out of bits.
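A before/after sketch of the encode/decode point, with hypothetical names (nothing here is the article's actual code):

```haskell
import Data.Char (toLower)

-- Flag version: the caller encodes its intention into a bit,
-- and the callee decodes the bit again with a conditional.
matchFlag :: Bool -> String -> String -> Bool
matchFlag insensitive pat s
  | insensitive = map toLower pat == map toLower s
  | otherwise   = pat == s

-- Lambda version: the intention is passed through directly,
-- and the conditional disappears from the callee.
match :: (String -> String) -> String -> String -> Bool
match normalize pat s = normalize pat == normalize s

caseInsensitive, caseSensitive :: String -> String
caseInsensitive = map toLower
caseSensitive   = id
```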

I think the general principle applies to more than booleans, but it's easiest to see with booleans.


Inversion of control both increases the user's power (anything that implements a certain interface can be used) and adds an extra burden. Especially here,

  match caseInsensitive contains
it takes a bit of thought to match the regex-like concept of "Case insensitive match flag" to "case insensitivity can be achieved by a transformation of the pattern and target so that case doesn't matter". Perhaps the right way to relieve this burden is to provide some simple functions that can be used for the common cases (caseInsensitive, caseSensitive) and a sensible default (caseSensitive).


Since I'm not an FP expert, what about a function like

    ctx.arc(10, 20, 30, 0, 6.28, false);
There's nothing special about booleans. How do you encode all of those above into types in FP so that it's impossible to get them wrong and so they're self-documenting? I hope you're not suggesting there be a horizontalFloat type and a verticalFloat type, or are you?


How about keyword parameters...

    ctx.arc(center=Point(10,20), radius=30, beginAngle=0, endAngle=6.28, Clockwise);
...and don't forget about units of measurement/dimensional analysis.

https://stackoverflow.com/questions/107243/are-units-of-meas...

    ctx.arc(center=Point(10cm,20cm), radius=30mm, beginAngle=0rad, endAngle=6.28rad, Clockwise);


That helps quite a lot, although for several of the parameters it's more programming-by-name than programming-by-semantics.

I'd like to be able to say that the end angle has to be less than the start angle, that the angle's unit has to be radians (AKA unit-less :), what the unit of the radius is and whether negative values are sensical, and so forth; and have all these properties checked by a compiler.

Which I can do in some modern languages, surprisingly. :)


Assuming you're right about the guesses for those parameters, we could go a little further. Let's define

   data Directionality = Clockwise | Anticlockwise

   data AngularInterval = AngularInterval {
     beginAngle :: Double,
     endAngle :: Double,
     directionality :: Directionality
   }
... and then we're down to three parameters, all of different types (so no opportunity for mistakes, assuming static checking) and who knows, you might even have other uses for AngularInterval.


I'm a big fan of the philosophy that "Every literal in a program is a bug." :)

But I know what you're getting at! Personally, I'm a fan of programming with units and dimensions, and safely representing the distinction between absolute quantities and relative quantities.

That doesn't mean I'd want an infinite number of "float" values for all possible units and dimensions, however; just a powerful enough type system I can give myself some help at compile-time for properly threading sensical values through my programs.


> "Every literal in a program is a bug."

But we use plenty of literals in our programs all the time. Eg lambdas are function literals. (And definitions of named functions are just a special case folding binding and a lambda.)


A library can very easily provide, along with the kit components, convenience functions that perform common tasks - like matchCaseInsensitive or whatever. The point I took from the post is that, regardless of how the final public API is presented (and indeed, hopefully it doesn't involve piecing together umpteen bits), the code implementing it can be written by composing simple components rather than unwieldy conditionals.


#destroyallifs

#notallifs


#carefulwiththoseifseugene


#allifsmatter


#unlessallelse


Will you marry me?


>> you want to change the mechanism of comparison (perhaps introduce locale sensitive comparison)

I couldn't agree more, and this is why I think most FP programs are about as intellectually stimulating as `std::min_element`


How does using if statements make it easier to introduce locale-sensitive comparison? A locale should be represented in the arguments or as a transformation, very similar to what the article is doing.


I'm surprised that the article and none of the comments so far mentioned the "Expression Problem": http://c2.com/cgi/wiki?ExpressionProblem

Basically, if you structure the control flow in object-oriented style (or Church encoding...) then it's easy to extend your program with new "classes", but if you want to add new "methods" then you must go back and rewrite all your classes. On the other hand, if you use if-statements (or switch, or pattern matching...) then it's hard to add new "classes" but it's very easy to add new "methods".
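A minimal sketch of the pattern-matching half, with illustrative names:

```haskell
-- Pattern-matching style: the set of "classes" is closed,
-- so new operations are cheap.
data Shape = Circle Double | Square Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s

-- A new "method" needs no changes to existing code:
perimeter :: Shape -> Double
perimeter (Circle r) = 2 * pi * r
perimeter (Square s) = 4 * s

-- But adding a Triangle constructor would force edits to every
-- existing function, which is the other half of the expression problem.
```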

I'm a bit disappointed that this isn't totally common knowledge by now. I think it's because until recently pattern matching and algebraic data types (a more robust alternative to switch statements) were a niche functional programming feature, and because "expression problem" is not a very catchy name.


Another alternative is "table-oriented programming", where you define the "classes" and "methods" as an m-by-n structure of code pointers; to add either "methods" or "classes", you would just add a new row/column to the table along with the appropriate code definitions.

> and because "expression problem" is not a very catchy name.

It's also not particularly descriptive either, but the page mentions that it's a form of "cross-cutting concern", to which the table-oriented approach basically says "do not explicitly separate the concerns."

(More discussion and an article on that approach here: https://news.ycombinator.com/item?id=9406815 )

As a bit of a fun fact, doing table-oriented stuff in C is one of the few actual uses for a triple-indirection. :-)


Is that you, TopMind? I used to be on Wiki years ago too...


As a side note, I was pretty surprised that the name isn't as in "I have a great mind" but rather "I have a table oriented programming mind". At first, I had a knee jerk "wow arrogant" reaction and then felt guilty when I realised!


    > I think its because until recently pattern matching and
    > algebraic data types (a more robust alternative to 
    > switch statements) [...]
Could you elaborate a bit on what this accomplishes, eg. pattern matching vs a "case" statement? As I've programmed in Haskell for the past year or two, I've observed exactly this change in my style of writing - that I've started to get rid of "case" statements inside function definitions, and have moved them into the pattern-matching part instead ("outside" the function definition).

But I have to admit, I'm not entirely sure why I do this. It just feels more robust to me in some way.


I was just comparing the pattern matching from FP languages with the more primitive C-like switch statement. The big advantage comes from the algebraic data types (tagged unions), which let you model data with many "cases" in a type-safe manner. For example, in Haskell we don't have null pointers because we can use the Maybe type instead.

The case-expression vs function-definition difference you mentioned from Haskell is just syntactic sugar. In both situations you are doing exactly the same pattern matching under the hood.
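To make the sugar concrete, here is a small illustrative pair (names are made up); both spellings desugar to the same pattern match:

```haskell
-- Equation-style definition:
describe1 :: Maybe Int -> String
describe1 Nothing  = "none"
describe1 (Just n) = "got " ++ show n

-- The same function written with an explicit case expression:
describe2 :: Maybe Int -> String
describe2 m = case m of
  Nothing -> "none"
  Just n  -> "got " ++ show n
```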


Just FYI, I asked a pretty similar question a few months ago here [1]. The main arguments for pattern matching seemed to be that:

* [at least some] compilers will check for exhaustiveness

* "Pattern matching isn't just conditional matching. It's also binding, and even some common operations. "

[1] https://news.ycombinator.com/item?id=11159321


FP and particularly the Haskell community is very aware of the expression problem and you can find many blog posts and papers on solutions.


That 'ufo' knows the name "expression problem" at all shows that they are well aware of that work in the FP community.


I was familiar with the problem but didn't have a name for it; thanks for providing me with one.

What kind of work has there been on creating programming paradigms that make it easy to both add new types and new methods? Is it a CAP-theorem-type problem where every solution is a trade-off, or is there a way to have your cake and eat it too?


There is plenty of research out there about trying to solve the expression problem and the wikipedia article has links to some of them: https://en.wikipedia.org/wiki/Expression_problem

That said, there is a complexity and readability trade-off (that is hard to quantify) because these more flexible programming patterns that can solve the expression problem are more complicated than plain method dispatching or switch statements.


The comments in this thread and the link to ltu in this thread talk about one of the simplest and most elegant solutions to the expression problem https://m.reddit.com/r/haskell/comments/4gjf7g/is_solving_th...


There are languages (libraries) that solve it. For reference, check Clojure's multimethods and OCaml's polymorphic variants.


The tradeoff is that techniques like multimethods significantly weaken the contract that people normally expect from methods/classes.

For example, if I'm writing a normal Java class, I know where to go to find methods dealing specifically with instances of Foo (namely Foo, its children, and its direct users); with multimethods, it's more likely that there is some multimethod out there in an unrelated class that looks for instances of Foo.


Good tooling (think something along the lines of Hoogle) can help here.


The OCaml solution with polymorphic variants: http://www.math.nagoya-u.ac.jp/~garrigue/papers/fose2000.htm....

A very good summary of that paper is available here: http://lambda-the-ultimate.org/node/1518#comment-17566

Another alternative based on recursive modules: http://www.math.nagoya-u.ac.jp/~garrigue/papers/#privaterows


Here was an interesting proposal for a solution in C++: https://channel9.msdn.com/Events/CPP/C-PP-Con-2014/0007-Acce...


Problem is, a decision has to be made somewhere about which function to pass into that "if-free" block of code. The if-like decision has just moved elsewhere. That is a win if it reduces duplication: if a lambda can be decided upon and then used in several places, that's better than making the same Boolean decision in those several places.

Programs that are full of function indirection aren't necessarily easier to understand than ones which are full of boolean conditions and if.

The call graph is harder to trace. What does this call? Oh, it calls something passed in as an argument. Now you have to know what calls here if you want to know what is called from here.

A few days ago, there was this HN submission: https://news.ycombinator.com/item?id=12092107 "The Power of Ten – Rules for Developing Safety Critical Code"

One of the rules is: no function pointers. Rationale: Function pointers, similarly, can seriously restrict the types of checks that can be performed by static analyzers and should only be used if there is a strong justification for their use, and ideally alternate means are provided to assist tool-based checkers determine flow of control and function call hierarchies. For instance, if function pointers are used, it can become impossible for a tool to prove absence of recursion, so alternate guarantees would have to be provided to make up for this loss in analytical capabilities.


In a language like Haskell you wouldn't want to prove the absence of recursion, but that all recursions in the program fit into a handful of patterns. (Eg 'structural-recursion' or 'tail-recursion'.)

Some type systems are strong enough to put that kind of analysis / constraints directly into the language. (Haskell might already be strong enough with GADTs and other language extensions enabled.)

In any case, the Addendum at the end of the blog post provides a different perspective on the problem you mentioned.


Tee hee, Haskell doesn't have tail recursion (e.g. foldl takes linear space), and structural recursion in Haskell isn't guaranteed to terminate (e.g. if you're given an infinite list).

If I were in charge of developing a safety critical system, and someone came to me with a proposal to write it in Haskell, I'd be very skeptical.


??? Haskell absolutely has tail recursion; foldl just evaluates non-strictly and therefore can leave thunks in memory. This is fine for e.g. reversing a cons-list. Regardless, it is tail recursive (and uses constant stack space). foldl' is also tail recursive and has strict semantics.

Structural recursion can't be guaranteed to terminate in any language that supports codata unless you have some sort of totality checker (e.g. via a monotonically structurally decreasing requirement imposed at the type or value level). I don't think any mainstream language supports this out of the box. Liquid Haskell does offer this, though.

I agree that standard Haskell is inappropriate for safety critical software, but only because it allows dynamic allocation. Any program using dynamic allocation is probably unsuitable for safety critical software. Now, a terminating and fixed-memory subset of Haskell a la Clash would be interesting for safety critical software...


The point of tail recursion is using constant space, not constant stack space (does Haskell even have a stack?) Anyways, the Haskell spec allows foldl' to use linear space just like its lazier counterparts. The fact that it uses constant space is an implementation detail of GHC. Reference: https://github.com/quchen/articles/blob/master/fbut.md#seq-d...

Structural recursion always terminates in SML. Supporting infinite/cyclic values in algebraic data types is a misfeature, and they are trivial to rule out without using a totality checker. Heck, I can implement a guaranteed finite linked list in Java :-)

I think something like MLKit would be a more promising start for implementing a safety critical system. Tail and structural recursion actually work there, and it statically replaces most uses of GC with region inference. Though it's still a very long shot, I'd prefer something more proven.


Tail recursion can't use constant space if it's strictly generating another data structure of the same size. That doesn't even make sense.

Interesting fact about foldl'. Regardless, in practice it is strict and tail recursive. As I mentioned earlier, this does not mean the same thing as constant space unless the reduction function returns a fixed size result.

Yes, you can guarantee that a linked list in Java is finite because Java does not support codata.

Haskell's tail call recursion is also often optimized to be allocation-free, unless, again, it is generating some data structure.


> Yes, you can guarantee that a linked list in Java is finite because Java does not support codata.

What about another thread running that keeps generating pieces to the end of the linked list? (No problem, with mutation.)


To prevent these and similar "what abouts", here's an implementation of a guaranteed finite linked list in Java.

    class LinkedList<T> {
      public final T value;
      public final LinkedList<T> next;
      public LinkedList(T value, LinkedList<T> next) {
        this.value = value;
        this.next = next;
      }
    }
Here's how you construct it:

    LinkedList<String> myList =
      new LinkedList<>("Hello",
        new LinkedList<>("World", null));
Here's how you iterate over it in constant space:

    while (myList != null) {
      System.out.println(myList.value);
      myList = myList.next;
    }


The point of tail recursion is using constant space for the environment and control flow meta info, not constant space absolutely.


> does Haskell even have a stack?

Yes


While this is true, the contents of the STG stack aren't necessarily obviously related to the conceptual "call stack", right?


Well, it's 'case' that pushes onto the stack rather than (syntactic) function calls, but if you're willing to be generous with what you consider a "call" then, yes, they're related.


> Now, a terminating and fixed-memory subset of Haskell a la Clash would be interesting for safety critical software...

Definitely.

I sometimes think Haskell should treat non-terminating code the same way it treats IO: only allow it in portions of the code-base that are tagged somehow.

If there's something that doesn't cause a side-effect, but the compiler doesn't know that, you can tell it with unsafePerformIO. If there's something that terminates, but the compiler doesn't know, (eg calculating the hailstone sequence for a number), there could also be a suitable escape hatch to be used with care.


That coding guideline simply rejects all recursion, even cases that are correct by inspection, or by easy proof.


Interesting. I assume they allow the special cases of tail recursion introduced by 'while', 'for' and similar constructs?


goto is banned; loops must be statically bounded.


Thanks for the information!


All valid points.

If you're going to do this sort of thing with much success, you really need to have a language with a fairly powerful type system. If function pointers are your only option for higher-order programming, I wouldn't even try. First class functions or interface polymorphism help, but I'd also want to have a language that makes it relatively easy to create (and enforce) types so that your extension points don't end up being overly generic.


What distinction are you drawing between "first-class functions" and "function pointers"?


About the same distinction as I'd draw between an integer and a pointer to an integer.


Both function pointers and "first class functions" refer to function indirection.

In its treatment of expressions, C doesn't draw the distinction between function and pointer to function. When you call printf("foo\n"), the printf part is a primary expression which designates a function, and evaluates to a function pointer. That pointer is then dereferenced by the postfix ().

The main difference between a "first class" function and a function pointer is that a first class function carries an environment, which allows the body of the function to make references to lexically scoped names outside of the function. A function pointer carries a reference to the code only.
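A tiny illustration of a function value carrying its environment (names are made up for the example):

```haskell
-- 'adder n' returns a function that closes over 'n'; the captured
-- environment travels with the function value. A bare C function
-- pointer has nowhere to store that 'n'.
adder :: Int -> (Int -> Int)
adder n = \x -> x + n

addFive :: Int -> Int
addFive = adder 5
```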


> The main difference between a "first class" function and a function pointer is that a first class function carries an environment

Isn't that specifically a closure? I think first class functions have a more general definition.


This is, as many commenters have noted, just another overzealous programming doctrine. Just like 'GOTO considered harmful.'

Here's the deal: if is a flow control primitive. Just like goto and while. If (heh) that primitive isn't high-level enough to handle the problem you are facing, it is incumbent upon you as a programmer to use another, higher-level construct. That construct may be pattern matching, it may be polymorphism (or any other form of type-based dynamic dispatch). It may be a function that wraps a complex chain of repeated logic, and is handed lambdas to execute based upon the result. It may, as in the article given here, be a function that is handed lambdas which apply or do not apply the transformation described.

The point is, there are many branch constructs, or features that can be used as branch constructs, in most modern programming languages. Use the one that fits your situation. And if that situation isn't all that complex, that construct may be if.

Fizzbuzz using guards is the cleanest and most modifiable fizzbuzz that I've seen in Haskell.

Although now that I think about it, if you provide a function with a list of numbers...


Not all control-flow primitives are necessary.

Eg Haskell and Scheme get by without 'while' and 'goto'.

Haskell would do just fine without a built-in 'if': you can define 'if' as a function via pattern matching.

Given that perspective, the article would be a call to use more expressive types than Booleans to match on---and in lots of cases not to match at all, but provide what would be the result of the match as an argument to the function.
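As a rough illustration of that first point (not code from the article), here is 'if' as an ordinary function in Python; since Python evaluates arguments eagerly, both branches are wrapped in zero-argument lambdas to recover the laziness Haskell gets for free:

```python
# A hypothetical 'if' built as an ordinary function. Dispatch on the
# Boolean happens via a dict lookup rather than a built-in branch;
# only the chosen branch is ever evaluated.
def if_(condition, then_branch, else_branch):
    branches = {True: then_branch, False: else_branch}
    return branches[bool(condition)]()

result = if_(3 > 2, lambda: "yes", lambda: "no")
```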


Scheme and Haskell have other primitives that take the place of while and goto.

But yes, using more expressive match types or parameters is a good idea. As for providing the result as an argument, that can be a good pattern, but isn't always practical. Note what I said in my original comment about using your own discretion.


They don't have `other primitives': they have function calls. Most languages have function calls these days.


Yes, I read LTUI and LTUD. But in most languages, function calls and loops don't have the same semantics. I'll call that a different loop primitive.


For C, this seems to be implementation defined.

(At least for C as encountered in the wild, I don't know about C the standard.)

Most modern C compilers support tail call optimization.

I don't know about `most languages'. Eg I know Java on the JVM doesn't do tail call optimization. Lots of languages probably do not require TCO of their implementations, though.


This whole campaign is misguided.

"Bad IFs" are a code smell, and they're being scapegoated when the real problems are management demanding that simple hackish prototypes & tests be deployed into production, management that doesn't allow time for refactoring, and poor programmers who think that "bad IFs" are good code.

But the main site also doesn't do any reasonable job of defining what a "Bad IF" even is.

The crux of the matter is that programmers need time to craft the details of a project to avoid or correct technical debt. These sorts of reactions just point out one tiny portion of technical debt and don't solve any fundamental problems at all.

(and yeah, I know I'm ranting against the Anti-IF campaign, not the particular take on the linked site. But this article just seems to parameterize over the exact same values that are branched on anyway.)


I think that aiming at the management of the coders and the business users is putting the emphasis in exactly the right place. Once we get to wondering if eliminating IF stmts will help, we have passed by so many opportunities for 10x value delivery.


The "technical debt" metaphor gets so much better if you take the analogy more literally than most people do. Like for financial debt, the optimal amount is not necessarily zero. Oftentimes taking on or carrying debt allows you to generate more profit than you could by avoiding it or paying it down.

That said, most places I've worked manage it poorly. Few people really understand that, just like financial debt, it's something that needs to be taken on and managed in a mindful and deliberate manner.


Also, if you do take the finance metaphor, going into debt is not good by itself. It's the investments you make with that debt that are good, and potentially outweigh the burden of debt. (And can be cheaper than equity financing.)

Going back to programming: debt-fuelled programming should buy you something, eg speed to market, and is not a good in itself.


If statements are code smells in languages that are not strictly imperative. Is an if statement literally the only option? If not, that's a "Bad If."


The idea that each type has its own control flow primitives is bothersome. It's taken over Rust:

    argv.nth(1)
        .ok_or("Please give at least one argument".to_owned())
        .and_then(|arg| arg.parse::<i32>().map_err(|err| err.to_string()))
        .map(|n| 2 * n)
I'm waiting for

    date.if_weekday(|arg| ...)
Reading this kind of thing is hard. All those subexpressions are nameless, and usually comment-less. This isn't pure functional programming, either; those expressions can have side effects.


I don't agree here at all. The methods you show operate on an Option, and it's incredibly common to perform those kinds of comparisons, so it makes sense that they have convenience methods. This is not at all comparable to something like if_weekday.

This has not "taken over" rust. Result is another type that does this, but this makes sense for the same reasons.


Rust basically offers a Monad-like API there. That's perfectly fine and a well established pattern.

That has nothing to do with primitive control flow nor is that an indication of if_weekday appearing anytime soon.

That being said having primitive control flow implemented as methods also has precedent with languages like Smalltalk or Self. That may be unusual but I don't think that's necessarily bad. I would be interested in reading about why this is bad design though.


In Haskell, realizing that data flow and control flow are of the same spirit and that data structures are control structures is one of the key epiphanies to be had.

This article mentions 'if' and 'Boolean'. Loops and lists are another example. (And for the same reason that most languages make such extensive use of loops, Haskell programs can often have a lot of lists.)
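A small illustration of that correspondence, sketched in Python rather than Haskell: the same computation written once as explicit control flow and once as data flow over a list.

```python
# Loop version: the control structure (while + index) is spelled out.
def squares_loop(n):
    out = []
    i = 0
    while i < n:
        out.append(i * i)
        i += 1
    return out

# List version: the list itself plays the role of the control
# structure; iteration is implicit in the data.
def squares_list(n):
    return [i * i for i in range(n)]
```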


The annoying part there is the repeated "|x| x.". Rust should have syntax to reference a method of an object, instead of having to write a wrapper. So it'd look like .map_err(???.to_string()).


Groovy and Kotlin use the implicit "it" parameter for lambdas that take just one parameter, which is very convenient:

    listOf(1, 2, 3, 4).filter { it % 2 == 0 }


Scala allows underscores, and sequential underscores refer to the next element, so you can do e.g.

    list(1, 2, 3, 4).reduce(_ + _) == 10


Doesn't that make the parameter anonymous, though? Can you println that _ and see the value of the current element?


Use a function that prints then returns its parameter:

  list(1, 2, 3, 4).reduce{print(_) + _} == 10
If you or your language hasn't defined such a function, do it inline:

  list(1, 2, 3, 4).reduce{it:= _; println(it); it + _} == 10
In fact, any name for it will do.


Yeah, that's the tradeoff–gain the ability to work with multiple parameters but lose the ability to reuse a single one.


And Haskell uses operator sections: (==0) would be similar to { it == 0 }. Alas, the section syntax gets a bit cumbersome when you want to compose two or more of them, like in this translation of your example:

    filter ((0==) . (`mod` 2)) [1, 2, 3, 4]


so does LiveScript


It does. This could have been written

    .map_err(ToString::to_string)
as well. Works just fine with methods.


Why can't it be written as:

    .map_err(to_string)
When using the lambda the type is inferred, so why should there be a need for the ToString?


It'd need syntax to distinguish from a local function called to_string. Anything less allows ambiguity and wouldn't be equivalent (like using type::method won't do auto-borrow). So it'd need to be "\to_string", "|to_string" or something. Ample opportunity for bikeshedding.

Functional style is hampered by excessive verbosity. (Non functional style is so verbose a bit of extra noise doesn't hurt _as much_.) Rust could use a lot more inference, custom operators[1], and so on. They seem to sort of agree, with auto-deref, auto-borrow, some type inference, but won't go all the way. I suppose being conservative can be defended -- can't go back without breaking code. Hopefully, in the future, the verbosity will annoy more people and there will be enough support to head in a more Haskell/ML direction. But they seem very opposed to it at the moment.

1: The rationale apparently being "someone might abuse it!" instead of "it makes good libraries even better". Parser combinators, UI toolkit code do great with custom operators. Require a method name (i.e. operator !!= as foo) if it's too great a concern. Can't save yourself from bad writers. Crippling yourself to avoid this seems like a poor tradeoff.


If `to_string` was a function that was imported into the local namespace, you could. But since it's a method, you can't; you need to provide the trait name.


If there's a function and a method named to_string, the user would have to be explicit about which one he uses by adding a namespace, like ToString.

If there's only one to_string function or method, the compiler could just take this one.


To follow up on this slightly, it's not really that this is special syntax. It's that map_err takes a function as an argument, and this is how you refer to this method by name.


But that's actually longer. It does look a bit better though, maybe.


Yeah, it's not about saving characters to me, it's about clarity.


Actually, this suggestion does not even work. Writing trait::method is not equivalent to writing "|x|x.method()". The latter will use method lookup rules, the former requires the programmer to decide which impl. For instance, in the above example, if the type impl'd to_string, that would be the one used, not ToString's implementation. From what I can tell anyways: https://is.gd/E8pdWc

Edit: Also, auto-borrow does not seem to work with this syntax.

This is a common enough pattern that reducing the visual noise will increase clarity.


Yes, this is correct. It's what I was getting at in the other thread; this is choosing a method manually.


Rust does have such a syntax: map_err(ToString::to_string).


It's not the same though; it can resolve to different methods based on non-local code. It also doesn't do autoborrow.


The type whose to_string method you want to reference isn't in scope. You need a function that calls the method on the argument it's called with.

Why add a feature (and worse syntax) for this, if you can just use an anonymous function?

Rust is already not the simplest of languages, adding further syntax and features of questionable benefit won't make the language any simpler or easier to understand.


Because repeated "|x| x" is a common noise pattern.


Most of this should be obviated when the ? operator is ready. But until then, there is no primitive for 'work on the type you wrapped in Option, short-circuiting and returning None at the first sign of failure', so it has to be done in the library.


If this is an accepted idiom, people will be using it for years to come. Sometimes only to be cool.

With "?" and "try!()", Rust is sort of emulating exceptions in a weird way.


I started using a similar idiom in Java 8 recently, inspired by the Optional class and some new functions in the Map interface. It took me an hour or three, but at some point I noticed that in a few places I was writing like this just because it's "cool", even though it was less readable (and actually made it harder to use one code-navigation IDE extension I liked).

A lesson this strengthens in me again is that sometimes a cool-looking idea turns out to be pretty bad in practice, but you only figure it out after you go ahead with it. It isn't bad to try out things (that's how we learn), but you need to be extra honest with yourself about how the thing really feels when you're first using it, and never ignore the sense that the new idea actually doesn't fit well and should be rejected.


  >  Rust is sort of emulating exceptions in a weird way.
Except that at the level Rust operates at, the code generation for exceptions vs. not matters. Not relying on landing pads, etc., is something people care about, and is why the difference between return values and exceptions can be significant, even if at some higher semantic level they're roughly equivalent.


I like it, because you get somehow the best of both worlds, on one side you're getting explicit error handling and on the other the convenience of exceptions, that you can just "raise" them and they propagate upwards in the calling stack.


I'd prefer to use `if let` and/or `try!()` instead of the methods here.


Oh god, it's like ActiveSupport came back with a vengeance.


Rust does not let you globally add things to existing stuff, so it's a very different situation than ActiveSupport, even if it may look superficially so.


date.if_weekday looks like something one would find in Smalltalk class library, but (at least for GNU Smalltalk) only such construct I've found is this: http://www.gnu.org/software/smalltalk/manual-base/html_node/...


This just seems to obscure the logic. Not unlike how polymorphism can make code flow harder to read, though it may feel more clever.

There is a place for it - like when you're trying to express a set of logic that will be guarded by the same condition, but always at the cost of some complexity.

A set of conditionals is probably the most obvious way to express branching.


Try it before you knock it. I might have said something similar before getting into Haskell, but now I'm nodding along happily. Dealing in meaningful data types with small composable functions is very pleasant for me now.


> Try it before you knock it.

That's what I recommend too[0] - with the added caveat that you shouldn't be afraid to "knock it" if it turns out to be honestly bad.

Sometimes the idea turns out bad, sometimes it turns out great - but you'll never know it if you don't try; just be honest with yourself during that trial.

[0] - https://news.ycombinator.com/item?id=12108138


Absolutely, but I'd also add that what works in one language/environment may not work in another. I wouldn't be surprised if writing Haskell-style code in Python didn't work out that well, for example!


I agree. I used to earn my bread in PHP and now I do it in Java, while after hours I mostly code in Common Lisp. I have some experience trying to port idioms from the latter one to the former two. Sometimes it turns into a good idea (God, how much my life in one PHP project was simplified by porting over #'mapcan), sometimes you quickly realize it's stupid and makes zero sense. But I think that's again a case of taste acquired by experimentation :). A big enemy here is simply the sunk cost fallacy - no matter how emotionally invested you are in some idiom, sometimes it really doesn't fit the other language.


I have tried. Two things get in the way quickly, and that's even just expressing thing, not even looking at performance yet:

  * Python standard library functions, especially the ones on dicts, mutate and don't return the new dictionary.
  * Python's syntax for creating functions is awkward: lambdas are cumbersome, and so are the operator package and eg functools.partial; there's no really convenient way to compose functions.


Your first point is actually something I really like about Python's API design: in general, methods operating on collections either mutate the collection /or/ they return it. So it's clear at the point of use whether you're dealing with the same object or a new one.

This is something that bugs me about the fluent builder pattern in Java -- continuing to return `this` until suddenly you don't any more, and you can't re-use 'intermediate' values because they're actually all the same object.


Sure. I'd just like to have a nice set of operations to manipulate dicts that don't mutate and return the result, too.
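For illustration, such helpers are easy to sketch in Python (the names assoc/dissoc are borrowed from Clojure; nothing like this is in the stdlib):

```python
# Non-mutating dict operations: each returns a new dict and leaves
# the argument untouched.
def assoc(d, key, value):
    return {**d, key: value}

def dissoc(d, key):
    return {k: v for k, v in d.items() if k != key}

base = {"a": 1}
updated = assoc(base, "b", 2)   # base is unchanged
```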


> A set of conditionals is probably the most obvious way to express branching.

Besides object polymorphism and sets of conditionals, there's also generalized predicate dispatch, but that's probably overkill for many things.


I recommend Bob Harper's essay on "boolean blindness": https://existentialtype.wordpress.com/2011/03/15/boolean-bli...

An excerpt:

> The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit.
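A tiny Python sketch of the point (illustrative names): the boolean-returning version computes a bit and discards the evidence; the index-returning version keeps it, so the caller never has to rediscover where the match was.

```python
# Boolean blindness in miniature.
def contains(items, target):
    # Reduces everything it learned to a single bit.
    return any(x == target for x in items)

def find(items, target):
    # Returns the evidence itself: the index, or None if absent.
    for i, x in enumerate(items):
        if x == target:
            return i
    return None
```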


That's why you use Lua: it lets you have multiple return values. So you can get a boolean back to let you know if the strings were the same, an int to know where they ceased matching, and a boolean to let you know if they differ only in case. It's then up to the programmer to decide how much enlightenment they want.

The "destroy all IFs" campaign reminds me of "GOTO considered harmful" from the 70s. There are other ways to fix the problem.
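A rough Python sketch of that multi-valued comparison, using a tuple in place of Lua's multiple return values (the exact semantics here are made up for illustration):

```python
# Returns (equal, index of first mismatch or -1, differ only by case).
def compare(a, b):
    if a == b:
        return True, -1, False
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return False, i, a.lower() == b.lower()
    # Same prefix, different lengths.
    return False, min(len(a), len(b)), a.lower() == b.lower()
```

The caller can bind only the first value and ignore the rest, which is the usual style with multiple return values.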


Multiple return values are a great idea IMO, and I learned to really appreciate them in Common Lisp. They're best used as additional bits of information that the programmer may or may not find useful. Like that string comparison example of yours. Or #'gethash[0], that will return the value from hash table you're looking for or a default value (which is optional and by default NIL) if there is no entry with a given key, but it'll also return a second, boolean value that tells you whether your return value was actually found or not - which cleanly solves the problem of storing NILs in hash tables, in the places you care about it.

Multiple return values feel elegant also because the compiler can optimize them away when you're not using them, which is the most common case.

[0] - http://clhs.lisp.se/Body/f_gethas.htm


That still has the problem of boolean blindness. The boolean you get back doesn't tell you what it means, you have to go looking for that information.


What's the difference between multiple return values and returning a tuple?

(Apart from that languages with multiple return values tend to have some special syntax for binding only the first few members of the returned tuple?)


Apart from what you mention, it's often not possible to pass along all of the multiple return values as a single value.


The difference, in terms of type theory, is that a tuple is a product type [1] but a type representing multiple possible return values (to represent different outcomes) would be encoded using a sum type [2].

[1] https://en.wikipedia.org/wiki/Product_type

[2] https://en.wikipedia.org/wiki/Tagged_union


I don't think eg Go uses a sum type:

https://gobyexample.com/multiple-return-values

Typically they use product types to simulate sum types.


The Lua solution is clutter. Why return a boolean just to switch on it and then discard it?


Why create a lambda just to execute it and then discard it?


The problem with the Boolean is rather that people mix up the two values all the time.

The lambda also has more type safety. A Boolean is always a Boolean, but the compiler (and in a dynamic language the runtime) can tell you when you are calling your passed functions with the wrong arguments, because you mixed them up.


Use an enum.


No need to stop there. Not only use an enum, but also make it bear different types of values for different cases, and you arrive at Algebraic Datatypes. Eg, for trees you can have:

    data Tree = Empty
              | Leaf Int
              | Node Tree Tree


Can't that argument be extended to, say, uint8_t, and from there to all sorts of computed values?


Oftentimes when I read about "ideal" ways of programming, I'm curious if it's ever implemented in a production code base built by a team.


Me too. Particularly because every programmer has their own idea of what a "right"/ideal style of programming is. Here, apparently, we must not use conditionals.

The more I write code the more I realize that the entire purpose of the code is to have some effect on reality, and the more reliably it can do this, the better the code. I find I code a lot better without design principles, because trying to remember which patterns are "good" and "bad" just obscures the attention I would have used to look at the code and sense whether something would work in this particular situation.


The more I write code the more I realize that the entire purpose of the code is to have some effect on reality, and the more reliably it can do this, the better the code.

Very nicely put. The only "principles" I keep in mind when I write code are simplicity, correctness, and efficiency, and those tend to all be correlated.


Not functional code though. The aim of functional code is to be side-effect free, and affecting reality really gets in the way of that.

/snark, but articles like this really do fall into that trap...


I know you're making a joke, so I'm not writing this to correct you, but I'd like to point out that Haskell/pure FP is different not because it denies affecting reality, but because it only offers a one-way interface to affecting reality: it allows you to alter values in reality using pure functions, but it denies you the ability to "pull in" values from reality, into your pure functions.

This paradigm is powerful because it accurately reflects how our universe works: it is possible for a thought to affect reality (through a human being acting on it), but it is not possible to "pull in" an object from reality, into your mind. The only way to form a thought about something is to look at that thing, and try to construct - in your mind - a thought that reflects certain properties of the thing you're looking at. You can't "pull" that thing from reality into your mind, thus creating a thought. No such interface exists in this universe, as far as I'm aware.

It is, however, very possible for a human being to choose to act on a thought, thereby causing the thought to have a side effect. The analog to this in Haskell is applying a pure function to a value in IO. The function is pure, but we can use it to alter a value that resides in IO (reality). Similarly, a thought, in and of itself, does not affect reality (it is pure); it requires a human being to act on it - "apply it to reality" - in order for it to have an effect.

In short: Haskell allows your program to alter reality, but it does not allow reality to alter your program.


Didn't they have some gratuitous IO-monad stuff in the article?


> The more I write code the more I realize that the entire purpose of the code is to have some effect on reality, [...]

Perhaps. But there's the effect you get from running the code on a computer. And the effect reading the code has on humans.


I'm not sure it's possible to write a piece of software that only you understand, that is of high quality (works reliably). I feel like code correctness (what a computer does with it) and readability (how well a human understands what is written) are two sides of the same coin.

I think I would argue that if someone has happened to write a program that is impossible to understand for human readers (unreadable), and yet it does exactly what it's supposed to do (is correct), at the very least this program will break when the author starts to refactor it, which always needs to be done at some point.

I agree that conveying an idea to another human being through code is useful, but I think the number of times a human being has written a piece of computer code only to convey an idea to someone else is fairly small. If I want to convey an idea to another person I write in words and concepts, but if I want something faithfully executed every time, I need to write the code. And perhaps someone will look at this code later, but the origin of the code was always to get something done, not conveying an idea to someone else.


You can write a program that's hard to understand for humans, but comes with a computer-checkable proof of its correctness.

But aside from that pedantic possibility, I agree with most of the spirit of your comment.

Code can be useful to communicate with humans. In fact, the most expressive programming languages should be better at conveying precise descriptions of algorithms to other humans than natural language.

(Not a lot of programming languages reliably reach that ideal. Haskell sometimes comes close.)

I regularly write code that's meant to convey ideas, eg when explaining concepts, algorithms and data structures to people.


I'd love to see what ways of programming tend to be most effective with production teams.

My gut thinks the solutions will be a little more boring than our inner magpies will want to admit.


> I'd love to see what ways of programming tend to be most effective with production teams.

The one the whole team understands and can agree upon.


I recently did some refactoring at Google---we have a few bits of Haskell here and there---and did some similar things to what the author of the article proposed.

(Though the biggest impact of the refactoring was to remove two home-grown abstractions and a whole bunch of ad hoc transformations and replace them with the appropriate use of the very powerful, and well-understood Applicative.)


Usually in teams of younger engineers on the first project they have control over themselves.


I read everything I could find on the Anti-IF site and didn't understand what the mission is exactly. They qualify and mention they want to remove the bad and dangerous IFs, but I couldn't find examples that differentiate between bad ones and good ones -- are there good ones according to this campaign?

I like using functional as much as anyone, and removing branching often does make the code clearer and remove the potential for mistakes.

But I admit I have a hard time with suggesting people prefer a lambda to an IF, or to not ever use an IF. A lambda is, both complexity-wise and performance-wise, much heavier than an IF. And isn't it just as bad to abstract conditionals before any abstractions are actually called for?


I read everything I could find on the Anti-IF site and didn't understand what the mission is exactly.

I have a similar problem, in that every time I try to understand the perspective of functional-programming advocates, I find that the authors always seem to illustrate their points with examples like this:

   match :: String -> Bool -> Bool -> String -> Bool
   match pattern ignoreCase globalMatch target = ...
If I'm already literate in Haskell or Clojure or Brainfuck or whatever godawful language that is, then chances are, I'm already familiar with the strengths of the functional approach, and I'm consequently not part of the audience that the author is supposedly trying to reach.

So: are there any good pages or articles that argue for functional programming where the examples can be followed by a traditional C/C++ programmer, or by someone who otherwise hasn't already drunk the functional Kool-Aid?


The problem's not on your end -- a lot of these blogs are just junk, probably the vast majority of ones that fall under "advocacy". As far as I can tell, the author's objection to conditionals is based on a misunderstanding of a different blog post[0]. It's nonsense.

Really understanding where FP is coming from requires an introduction to programming language semantics[1]. Interesting stuff, but not immediately useful to a working C programmer.

[0] https://existentialtype.wordpress.com/2011/03/15/boolean-bli...

[1] http://www.cs.cmu.edu/~rwh/pfpl.html


    bool match(char *pattern, bool ignoreCase, bool globalMatch, char *target) { ...


Well, at some point you are looking at transistors and 'if's.

How many layers above that you want to hide that fact is entirely dependent on you and the requirements of solving the problem.

I have a problem with people assuming 'their' way is the only way, and generally being oblivious to the vast variety of problems the rest of us encounter.


I tried to ask the author the following (it kept getting deleted as spam). Perhaps he will see it here, but it's unlikely given how many comments there are already.

Hi John,

Are you familiar with Jackson Structured Programming?

https://en.wikipedia.org/wiki/Jackson_structured_programming

Notice how the focus is on using control flows that are derived from the structure of the data being consumed and the data being produced. Notice how the JSP-derived solution in the Wikipedia example lacks if-statements.

Pattern matching allows one to map control flow to the structure of data. What are your thoughts on that? I think inversion of control has other benefits, but I don't think it has much to do with the elimination of `if` conditionals; the pattern matching does that.

Also, I noticed one thing:

In the article you mention `doX :: State -> IO ()` as being called for its value and suggest that if you ignore the value the function call has no effect. Isn't it the case that a function of that type usually denotes that one is calling the function for its effect and not for any return value? Its value is just an unevaluated `IO ()`.


The return value of the function is a description of an effect. Calling the function doesn't cause the effect to happen. That's why you could, for example, call the function many times and get a list of IO actions which you then execute in parallel or backwards or whatever. Hence "inversion of control".


I was debating whether or not to put that last sentence in, because I knew it would lead to a technical discussion that was aside from the meaning of the question. My question is more: why choose an `IO ()` as an example of something being called for its value (especially since the article isn't aimed at a Haskell audience)?


Yeah, that's probably not a wise decision on part of the author. The IO monad is nifty but of minor importance in the grand scheme of things, and distracts when making a mostly language independent point.


The author seems to ignore the fact that passing lambdas like this merely changes where the IF or SWITCH statement is made. I can agree that passing functions instead of booleans is better and more general. But pretending that IF/SWITCH are thus avoided, is delusional.

For instance, at some point there will be a decision made whether the string matching must be case sensitive or not. If the program can do both at runtime, the IF will be, perhaps, in the main (or equiv.).


Indeed, that's the whole point of inversion of control: pulling the control out of the caller and into the callee. That's the primary reasoning benefit of functional programming.


I see no benefit to that. It makes more work for the caller. I want that function to do something for me, with the least amount of unnecessary work on my side. Just like a good boss who delegates.


That's why not everyone's a functional programmer. :)


Why don't we just treat this like writing?

Good writing has one clear imperative: communicate meaningfully the intent of the author to the reader. Good code is no different; it is merely expressive writing in a different language, with, perhaps, greater constraint on its intent.

Some people make up rules like "don't use adverbs", or "don't split infinitives", in an effort to write better. But this doesn't necessarily produce good writing; sometimes an adverb is just what you need.

The same is true of code. These are useful things to think about, but "destroy all ifs" is akin to "never use a conjunction".


I get what you're saying, but that's definitely not what good writing means in the context of, say, poetry, or literary fiction. Programming is best compared to technical writing or cookbooks, I think.

I realize this is one of those irritating "actually," replies, but what can I say, I'm sensitive about this topic. =)


If I understood correctly, the article suggests that as a general principle you should replace your union types and case-by-case code with lambdas. I feel almost the opposite.

Article: "In functional programming, the use of lambdas allows us to propagate not merely a serialized version of our intentions, but our actual intentions!"

Counterpoint: The use of structured objects instead of black box lambdas allows us to do more than just evaluate them. For example, Redux gets a lot of power by separating JSON-like action objects from the reducer that carries out the action.

But let's take instead the article's example of case-insensitive string matching. One tricky case is that normalization can change the length of the string: we might want the german "ß" to match "SS". Sure, the lambda approach can handle that. But now suppose that we want a new function that gives the location of the first match. It should support the same case-sensitivity options (because why not?). But now there is no way to get the pre-normalization location, because we encoded our normalization as a black box function. Case-by-case code would have handled this easily.


The first problem is that the "match" function is considered in the first place. It's too general. It should only be used in higher order constructs where its flexibility is actually needed.

Second: The enum-based refactor is actually valuable and fine IMO. If you need string functions, stop there.

Now, shipping control flow as a library is a cool feature of Haskell. But, if those arguments are turned into functions, the match function itself isn't needed! It just applies the first argument to arguments 3 and 4, then passes them to the second argument.

    match :: (a -> b) -> (b -> b -> Bool) -> a -> a -> Bool
    match case sub needle haystack = sub (case needle) (case haystack)

Does that even need to be a function? Perhaps. But if so, it's typed in a and b and functions thereof, and no longer a "string" function at all. And, honestly, why are you writing that function?

Typing it out where you need it is typically less mental impact, because I don't need to worry about the implementation of a fifth symbol named "match."

    sub (case needle) (case haystack)


Isn't this exactly the Smalltalk way? In ST what looks like if-statements actually are messages passed to instances of Boolean, with lambdas (in Smalltalk: BlockClosures) as argument. The boolean then makes the decision whether it will evaluate the lambda or not.
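Yes, and the same encoding can be sketched in Haskell (a toy illustration, not from the article): a Church boolean is a function that chooses between its two arguments, and laziness means only the chosen "block" is evaluated, much like Smalltalk's BlockClosures:

```haskell
-- A Church boolean *is* the choice it makes:
ctrue, cfalse :: a -> a -> a
ctrue  t _ = t   -- like Smalltalk's True:  run the first block
cfalse _ f = f   -- like Smalltalk's False: run the second block

-- "ifTrue:ifFalse:" becomes plain function application, e.g.
-- applying ctrue to a then-branch and an else-branch picks the first.
```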


When I read things like "anti-if" I recall this brilliant illustration that I saw several years ago - http://blog.crisp.se/henrikkniberg/images/ToolWrongVsWrongTo...


The inversion of control flow from the called to the calling function is an interesting way to describe (part of) functional programming style. I hadn't thought of it that way, even though I have used it for quite some time.


General principle: for every possible refactoring, the opposite refactoring is sometimes a good idea.

So, yes, replacing booleans with a callback is sometimes a good idea. But in other situations, replacing a callback with a simple booleans might also be a good idea.

Also, advice like this is often language-specific. In languages whose functions support named parameters, boolean flags are easy to use and easy to read. If you only have positional parameters, it's more error-prone, so you might want to pass arguments using enums or inside a struct instead.
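In Haskell, for instance (which has no named parameters), a small record can stand in for them; a sketch with illustrative names:

```haskell
import Data.Char (toLower)
import Data.List (isInfixOf)

-- A record field name documents the flag at the call site:
newtype MatchOptions = MatchOptions { caseSensitive :: Bool }

matchOpt :: MatchOptions -> String -> String -> Bool
matchOpt opts needle haystack
  | caseSensitive opts = needle `isInfixOf` haystack
  | otherwise          = norm needle `isInfixOf` norm haystack
  where norm = map toLower

-- The call site reads like a named parameter:
--   matchOpt MatchOptions { caseSensitive = False } "abc" "xAbCy"
```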


Someone found a hammer, and now everything looks like thumbs


tl;dr: prefer callback hell to straightforward ifs, and somehow that's progress.


Yeah, and the true fun starts when you try to debug it. Debugging streams in Java is a nightmare compared to debugging the same logic written in a simple foreach loop with a bunch of IFs.


The idea that functional programming is a type of inversion of control reminds me of similar idea I had, when comparing OOP and FP.

In OOP, you encapsulate data into objects and then pass those around. The data themselves are invisible; they only have an interface of methods that you can apply to them. So methods receive data as a package on which they can call methods.

In FP, in contrast, the data are naked. But instead of sending them out to functions and getting them back, the reference frame is sort of changed; now the data stay put, and what is passed around is the type of processing (other functions) you want to do with them.

For example, when doing sort: in OOP, we encapsulate the sortable things into objects that have a compare interface, and let the sort method act on those objects. So by the time the sort method is called, the data are prepared to be compared. In FP, the sort function takes both a comparison function and the data of the proper type as arguments; thus you can also look at it as the generic sort function being passed back into the caller. In other words, in FP, the data types are the interfaces.
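The standard-library sort makes this duality concrete in Haskell: the data stays a plain list, and the caller hands in the comparison (a small sketch with made-up data):

```haskell
import Data.List (sortBy)
import Data.Ord  (comparing, Down (..))

people :: [(String, Int)]           -- (name, age): plain "naked" data
people = [("alice", 31), ("bob", 25), ("carol", 40)]

-- The processing travels to the data, not the other way around:
byAge, byAgeDesc :: [(String, Int)] -> [(String, Int)]
byAge     = sortBy (comparing snd)
byAgeDesc = sortBy (comparing (Down . snd))
```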

So it is somewhat dual, like a different reference frame in physics.

The FP approach reminds me of Unix pipes, which are very composable. It stands on the principle that the data are the interface surface (inputs and outputs from small programs are well defined, or rather easy to understand), and these naked data are operated on by different functions (Unix commands). (Also the duality is kind of similar to MapReduce idea, to pass around functions on data in the distributed system rather than data itself, which probably explains why MapReduce is so amenable to FP rather than OOP.)

It also seems to me that utilizing this "inversion of control" one could convert any OOP pattern into FP pattern - just instead of passing objects, pass the function (method which takes the object as an argument) in the opposite direction.

I am not 100% convinced that FP approach is superior to OOP, but there are two reasons why it could be:

1. The "nakedness" of the data in FP approach makes composition much easier. In OOP, data are deliberately hidden from plain sight, which destroys some opportunities.

2. In OOP, what often happens is that you have methods that do nothing other than pass the data around (encapsulate them differently). In the FP approach, this would become very easy to spot, because the function passed in the other direction would be identity. So in FP, it's trivial to cut through those layers.



The article seems to advocate type synonyms like the following:

    type Case = String -> String
    -- ...
    type Announcer = String -> IO String
I would argue that these are actually much worse than not having type synonyms at all.

(String -> String) functions could do anything to your query parameter and text, the type is too coarse, and the inhabitants too opaque for us to reason about them easily. Naming the type suggests the problem is solved without actually having solved it. It is like finding a hole in the ground, and covering it with leaves, so you don't have to look at it anymore. You are literally making a trap for the next person to come this way.

In an ideal world you would be able to use refinements to say that you want any (f :: String -> String) such that `toUpper . f = toUpper` but without such facilities, I think I may just settle for:

    newtype Case = CaseSensitive Bool
Sometimes, your type really does only have two inhabitants.


    data Case = CaseSensitive | CaseInsensative
This is just as efficient as the newtype, and leads to clearer code when matching on the value.

Also, sometimes types you thought only had two inhabitants get a third one added later, which this facilitates.


Clarity is a bit subjective, I think. The difference between:

    CaseSensitive
    CaseInsensitive
Is harder to spot (for me) than between:

    CaseSensitive True
    CaseSensitive False
This is because the bit that is the same is all on one side, and the bit that is different is all on the other side. Case in point, your data definition has a typo: `CaseInsensative`, which occurs after the `In` shifts it away from the bit it should be the same as in `CaseSensitive`. Every little bit helps.

What's more, while you may be right that at the surface, the two representations are equally performant, what the newtype has that the data declaration does not, is the Prelude's definitions of all the boolean operators. If you wish to perform any more complicated logic with your data declarations treating them as booleans, you must either cast them to booleans (which comes at a runtime cost), or you must replicate the functionality of the Prelude for your custom type (which comes at a development cost).
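To illustrate (a sketch; the unwrapping is a zero-cost coercion at runtime precisely because Case is a newtype):

```haskell
newtype Case = CaseSensitive Bool

-- Unwrap to Bool and the Prelude's operators come for free:
bothSensitive :: Case -> Case -> Bool
bothSensitive (CaseSensitive a) (CaseSensitive b) = a && b
```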

Your branching logic (which, let us suspend disbelief and say is "not so bad", just for now) may require the combination of multiple such booleans, which in your encoding scheme would each get a different type due to their semantics; then we can't even viably define our custom boolean operators, so we are forced to cast everything to booleans.

The point I'm making here is that outwardly, you want the type to reflect the semantics of how its values are used, but inwardly, you want access to its representation in a way that makes it easy to combine (or put another way, depending on who's looking, the semantics of a value changes).

Also, there is nothing stopping you from changing code later to meet changing needs. Using a newtype now doesn't preclude you from ever using a data declaration in the future. Certainly, you will have to change the patterns and constructors used in a couple of places, but that is a matter of minutes: Time you have already spent weighing the future implications of this decision in your mind right now, so this sensation of time saved is a fallacy.


I thought the argument was going to be "Conditionals are bad for running on GPUs."


    It’s no wonder that conditionals (and with them, booleans) are so widely despised!
They are?


Granted, I'm a mostly self-taught programmer, but I would have thought that if something appears in formal logic,[0] it should have an analog in a programming language.

Even standard algorithms like quicksort[1] use conditionals.

And, while I can see how massive switch statements suck, normal conditionals are common in everyday life: "If they don't have a dark roast coffee, get me a medium roast."

All of which is to say, I really don't understand what he's getting at. The last example he gave seemed to make things even more complicated, and it basically renamed "true" and "false" to more descriptive things (forRealOptions, dryRunOptions), which seems to my untrained eye to boil down to the moral equivalent of a C enum.

[0] https://en.wikipedia.org/wiki/Material_conditional

[1] https://en.wikipedia.org/wiki/Quicksort#Algorithm


> normal conditionals are common in everyday life: "If they don't have a dark roast coffee, get me a medium roast."

"They had dark roast so I got you nothing as requested."

IOW, this program is either incomplete or wrong. Cf. "Get me the darkest roast they have." - ifless, concise, robust.
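For what it's worth, the if-less version maps directly onto a maximum over the menu; a toy Haskell sketch with an invented menu:

```haskell
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- "Get me the darkest roast they have": no branch, just a maximum
-- over (name, darkness) pairs. Assumes a non-empty menu.
darkest :: [(String, Int)] -> String
darkest = fst . maximumBy (comparing snd)
```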


So if it's an undrinkable mud you are still happy, code executed perfectly :)


What if I want a vanilla latte instead?


view-source:http://antiifcampaign.com/

Find in page: 'if('

2 hits.

So, yeah.


This is the starter code:

    publish :: Bool -> IO ()
    publish isDryRun =
      if isDryRun
        then do
          _ <- unsafePreparePackage dryRunOptions
          putStrLn "Dry run completed, no errors."
        else do
          pkg <- unsafePreparePackage defaultPublishOptions
          putStrLn (A.encode pkg)

This would be nicer if you could define a function as multiple clauses with pattern matching. In Elixir this would be:

    @spec publish(boolean) :: any
    def publish(true = _isDryRun) do
          _ = unsafePreparePackage dryRunOptions
          IO.puts "Dry run completed, no errors."
    end

    def publish(false = _isDryRun) do
          pkg = unsafePreparePackage defaultPublishOptions
          IO.puts (A.encode pkg)
    end

Pattern matching is pretty powerful, even going as far as to give a dynamic, non-statically typed language like Elixir the ability to 'destroy all ifs' too.


You can do exactly the same kind of pattern matching in Haskell, but that's not at all the point of the article. It's equivalent to writing the conditional, it doesn't remove it.

    publish :: Bool -> IO ()
    publish True =
      unsafePreparePackage dryRunOptions >>
        putStrLn "Dry run completed, no errors."
    publish False = do
      pkg <- unsafePreparePackage defaultPublishOptions
      putStrLn (A.encode pkg)


Pattern matching is just as explicit as an if statement. In languages that implement it for null values, it is just as explicit as typing "if (foo == null)" in an imperative language. You have to think about it, and type just as much code to deal with it, as you would in a language without pattern matching.

The only upside to pattern matching that I can see is that you are forced by the compiler to match all possible inputs and check for nulls in some languages, which can help you avoid null pointer exceptions and such. But you haven't encapsulated anything, or saved yourself any thinking or typing, by using pattern matching. You've basically turned every function into a switch statement. It's vastly overrated.


Another advantage of pattern matching is extensibility.

Suppose you wish to add a new branch case. Under the traditional if/else (or switch) model, you'd need to modify the function containing the if statements. With pattern matching, you simply introduce a new function; it decentralizes the change and acts as a sort of simple, intuitive polymorphism.


The main advantage of pattern matching is that you can't forget it. If you forget to check for null, the customer complains that the program crashed. If you forget to handle the cases in a pattern, the compiler complains to you.
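In GHC, for example, this check is the -Wincomplete-patterns warning (enabled by -Wall); a sketch:

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

data Shape = Circle Double | Square Double
           -- later someone adds: | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s
-- Adding Rect makes GHC warn that 'area' no longer covers every
-- constructor: the compiler complains, not the customer.
```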


Or procedurally, you could just have two functions:

    publishLive
    publishDryRun
Which, of course, is not the point of the article either.


Reducing if statements does shrink the possible state space, however using additional abstraction might increase it even further.


Bad programmers will mess up any syntax restrictions/guidelines/styles we put on them. If you let them make any function where they can put launchNukes(); into doX(), then they will. Though running things as a service may be the future, this launchNukes(); function is over here... safe from you.


Functional programmers love to emphasize how all the aspects of programming that their pet language is uniquely good at dealing with also happen to be the biggest problems in code maintenance. Is there any actual data on what the biggest problem sources are?


I'd love more data on this too, but I do think it's worth pointing out that it's pretty uncontroversial that the more control flow paths you have, the harder your code is to reason about. That's the basic assumption of the notion of cyclomatic complexity, after all.


I think pattern matching is fine, I don't see how it is still "boolean". The additional techniques shown are interesting, but heavy abstractions that should not be prescribed in general.



Paul Blasucci had a good talk on Active Patterns (an F# language feature):

https://github.com/pblasucci/DeepDive_ActivePatterns

This feature allows you to encapsulate conditional matching and dispatching on arbitrary input.

For those who know ML, it is making the concept of pattern matching extensible to any construct.


Since this is about FP, we have to have recursion:

https://www.reddit.com/r/functionalprogramming/comments/4t91...


This is the best bit I think:

> The problem is fundamentally a protocol problem: booleans (and other types) are often used to encode program semantics.

> Therefore, in a sense, a boolean is a serialization protocol for communicating intention from the caller site to the callee site.


Fewer ifs are better, I agree on that. The lambda technique is interesting because lambdas "encapsulate" a specific case. In OOP this is achieved by using polymorphism on the objects instantiated for the right case. Right?


If 'if' could support a single 'expression' and multiple 'case's like 'switch/match', it would make the transition easier.


only a sith speaks in absolutes


sounds like what's really being said is..

It is recommended that programmers use abstractions whenever suitable in order to avoid duplication and associated errors.


I can't think of a use of 'if' in a math function; however, 'if' is implicitly used in piecewise definitions, say 0<x<1, f(x)=x; 1<x<3, f(x)=x^2.

I see a lot of loops though: summation is one, so a double integral is a loop within a loop. I can't think of a code analogue for the derivative.

FTA, I take it that an 'if' in a function body makes for ugly code.


Lots of math functions are defined with 'if' -- the absolute value, the Heaviside step function, etc.


That's because you are thinking of continuous functions. You have actually given an example of a non-continuous one; it's just that you think of it as two functions connected with an input conditional.

The other replier already gave you some common examples, but let me add another one: signum(x), which returns whether a number is negative, positive, or zero.
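Concretely, the piecewise definition turns into guards, which are conditionals by another name, in a Haskell sketch:

```haskell
-- signum, defined piecewise: three mathematical cases become
-- three guards, i.e. three conditionals.
sgn :: Int -> Int
sgn x
  | x < 0     = -1
  | x > 0     =  1
  | otherwise =  0
```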


Sandi Metz talks about ifs a bit here: https://www.youtube.com/watch?v=OMPfEXIlTVE


You can go a long way without Ifs in a pattern-matching language like Prolog or Erlang, too.


Ummm. Many common day-to-day languages don't use lambdas. Also I have no idea what they are. So - yeah, I don't think you can just replace if so easily.


Lambdas are actually supported in most popular languages: C++, Java, C#, Go, JavaScript, even C. Sometimes they're called function literals or anonymous functions, but basically they involve creating a function without a name that can be passed around and executed. In some languages (Haskell, OCaml, etc) the anonymous functions can be extremely generic, whereas they are sometimes a bit less flexible in other languages. If you want a quick intro you can find one here: http://stackoverflow.com/questions/16501/what-is-a-lambda-fu...
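A one-line taste, in Haskell (purely illustrative):

```haskell
-- An anonymous function passed straight to map, never given a name:
doubled :: [Int]
doubled = map (\x -> x * 2) [1, 2, 3]
```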



