I've encountered knee-jerk negative reactions to recursion from most firmware engineers.
Then one day I got one to laugh at this byte-saving routine to do some multiplications:
X32: BSR X16
X16: BSR X8
X8: BSR X4
X4: BSR X2
X2: ADD A
RET
Recursion. In assembly language. We were six bytes shy of filling our code space, but we had gobs of stack, and I got about 60 bytes back from our stupid compiler with this. When my cow-orker saw it, he thought for a moment, smiled, said "You're one sick dude, you know that?" and we checked it in.
The "calling itself inline" pattern is well known in the demoscene/sizecoding community as a way to repeat something a small fixed number of times with neither an explicit loop nor duplicating code, and probably was independently discovered by quite a few individuals.
Here is one of the more interesting applications, a function that converts a 16-bit word in a register to a string of 4 hex digits in only 24 bytes of code, and can also be called to convert bytes and nybbles (x86):
; output value of AX as four hex digits to ES:DI
write_hex_word:
xchg al, ah
call write_hex_byte
xchg al, ah
; output value of AL as two hex digits to ES:DI
write_hex_byte:
push ax
shr al, 4
call write_hex_nybble
pop ax
; output lower nybble of AL as hex digit to ES:DI
write_hex_nybble:
and al, 15
cmp al, 10
sbb al, 69h
das
stosb
ret
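For anyone who hasn't met the CMP/SBB/DAS idiom in write_hex_nybble before, here is a rough Python model of the flag arithmetic (a sketch of why it works, not a cycle-accurate x86 emulation):

```python
def hex_digit(nib):
    al = nib & 15                  # AND AL,15
    cf = al < 10                   # CMP AL,10 : CF set iff AL < 10
    af = al - 9 - cf < 0           # SBB's borrow out of the low nybble
    al = (al - 0x69 - cf) & 0xFF   # SBB AL,69h : always borrows, leaving CF=1
    if af or (al & 15) > 9:        # DAS low-nybble adjustment
        al = (al - 6) & 0xFF
    al = (al - 0x60) & 0xFF        # DAS high adjustment (CF was set by the SBB)
    return chr(al)

# 0..9 land in '0'..'9', 10..15 in 'A'..'F', with no branch in the asm
assert "".join(hex_digit(n) for n in range(16)) == "0123456789ABCDEF"
```

The branchless part is the point: the conditional adjustment hides in the carry flag, which is why the routine fits in so few bytes.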
"X32:" etc. are labels, i.e. just names of functions or jump destinations. Suppose you want to multiply the A register by 4. Your code calls subroutine X4. That immediately calls subroutine X2 (BSR X2), which doubles A (ADD A) and returns. The return location is the instruction after BSR X2, which is the body of subroutine X2 itself. It doubles A again (ADD A), and returns to your code.
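That control flow can be sketched with a tiny simulator of the X2/X4 path (a toy model of BSR/RET, not the original assembly):

```python
def run(entry, a):
    # 0: X4: BSR X2    1: X2: ADD A    2: RET
    prog = [("bsr", 1), ("add", None), ("ret", None)]
    stack, pc = [None], entry          # None marks "return to the caller"
    while True:
        op, arg = prog[pc]
        if op == "bsr":
            stack.append(pc + 1)       # return address = next instruction
            pc = arg
        elif op == "add":
            a += a                     # ADD A doubles the accumulator
            pc += 1
        else:                          # ret
            ra = stack.pop()
            if ra is None:
                return a
            pc = ra                    # may land back inside a routine

assert run(1, 3) == 6    # calling X2 doubles once
assert run(0, 3) == 12   # calling X4: the BSR doubles, the fall-through doubles again
```

The trick is that the RET after the fall-through pops the original caller's address, so X2's body runs twice per call to X4.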
I should add this: Why not just shift left? It's been a while, but I think the ADD A was really some 16-bit ADD. The key to making it small was this "inline recursion" thing, which wouldn't have occurred to most of the crusty firmware folks I've worked with because of fear.
It's not recursion in the sense that it calls itself. However, it does divide the problem into smaller (though explicitly defined) chunks, which is still recursion.
You could state it as: multiplying by a power of two is the sum of two multiplications by the next lower power of two.
Yes, from the answers it seems it is recursion, as long as you stretch the definition of recursion to mean making subroutine calls, or code that can be refactored into a recursive solution.
By these definitions, all but the most trivial code is recursive.
Brilliant! Let's make sure to include this in the standard tech interview questions set. We only hire the top 1% of the top 1%. Some Assembly required. :D
> Brilliant! Let's make sure to include this in the standard tech interview questions set. We only hire the top 1% of the top 1%.
First of all, that's the top 0.01%. Second of all, with that strategy, no you don't because the top 0.01% are going to walk out of interviews that ask dumb brain teasers. The best programmers are ones who can solve problems in the context of a team and codebase, with the help of the internet and iteration, not solve obscure but not-really-that-hard problems in 15 minutes.
Shouldn't this be "how recursion got into Algol 60"? After all, part of the background is that McCarthy wanted it in Algol, coming off his success with Lisp, that had, in its earliest form, only recursion and no iteration. So recursion had already "got into programming" when this story of "How recursion got into programming" kicks off.
One can argue that many features of programming languages have existed in theory since the days of Euclid and Archimedes. This article argues that ALGOL was the first attempt to standardize the features of a programming language that was expected to be in widespread use. It was designed by a committee of big-name experts, and the "comedy" part is that a lot of these people considered recursion unnecessary or even harmful. No one knows what would have happened if ALGOL indeed hadn't supported recursion, became widespread, and was considered the gold standard. How many years before someone would have broken the barrier and said the experts were wrong and recursion is one of the most important parts of a language?
> How many years before someone would have broken the barrier and said experts were wrong and recursion is one of the most important part of language?
Lisp was used quite a lot even after Algol 60 was standardized; presumably that would have occurred at least as much if Algol 60 had been standardized without support for recursion -- inevitably more so, actually, if Algol 60 didn't support recursion.
Maybe including recursion in Algol 60 didn't make recursion mainstream; maybe it just stopped Lisp from being more mainstream and made Algol 60 and its descendants successful.
I was referring to the title, which suggests that it is the story of how recursion got into programming. But recursion being in Lisp is, in the story, part of the background -- it's the story of how recursion got included in Algol 60, not how it got included in programming. That it was already included in programming -- in Lisp -- is part of the background and starting point of the story, and how that happened isn't part of the story.
Ah recursion, my old nemesis. We meet again. Very interesting and informative article, but I'm with the SUBSET people on this one:
> Do not write recursive procedures. Do not use procedures recursively.
I've probably failed 90% of recursion "gotcha" questions I've had during interviews. It's my uninformed opinion that recursion is uninteresting and unintuitive. This is why, whenever I encounter it, I either ignore it or refactor it. I'm exaggerating, of course, but I really do hate it. There are some very interesting theoretical applications, however. The article briefly mentions the Lambda Calculus (which I love), where I've always enjoyed playing with the divergent Omega combinator[1]:
Ω = (λx. x x) (λx. x x)
I may be entirely too stupid to appreciate the beautiful intricacies of recursive programming, but I do think that recursive programming can be susceptible to some unique bugs, such as exceeding the stack/heap, that are very difficult to notice prima facie (more difficult than when the same is done via procedural code, at least).
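For what it's worth, Omega transcribes directly into Python, where evaluating it exhausts the interpreter's recursion limit rather than looping forever (which is itself a nice illustration of the stack-exhaustion bug class mentioned above):

```python
# Ω = (λx. x x) (λx. x x), wrapped in a thunk so merely defining it doesn't diverge
omega = lambda: (lambda x: x(x))(lambda x: x(x))

diverged = False
try:
    omega()              # x(x) applies the term to itself, forever
except RecursionError:
    diverged = True
assert diverged
```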
I find recursion extremely intuitive, if not necessary, for pretty much any data structure with a bunch of things ("nodes") pointing to other things. That includes, most notably, linked lists and trees. I can probably work up recursive pseudocode for tree traversals pretty easily, despite not having studied computer science or doing it at work for several years, yet I find an iterative solution somewhat elusive and unnatural.
I also consider the Fibonacci sequence as one of the textbook examples of a recursive problem. Even the English description of the sequence ("each new number is the sum of the last two numbers") is plainly recursive. To explain the iterative solution in English is rather difficult.
Excuse me, but I read "each new number is the sum of the last two numbers" as a description of iteration. You have the last two numbers, from which you can then make a new one.
Isn't it the recursive version which is hard to explain?
The n'th Fibonacci number is the sum of ...
It depends on your perspective, which can depend on how the topic was presented. One approach (perhaps the mathematical one) is F_0 = F_1 = 1, F_n = F_{n-1} + F_{n-2}, so you generate the series 1, 1, 2, 3, 5, 8, ... That is, at the time you want F_n, you have already generated the series up to that point, and so you just add 2 numbers you already have.
On the other hand, if I ask you to compute F_1000, it might appear very recursive. You know you could make F_1000 from F_999 and F_998. And you get F_999 from... But it wouldn't be impossible to answer this question with the same approach as the first. It's not a priori obvious that "I'll calculate F_0 ... F_999" is an efficient way to calculate F_1000, but of course that turns out to be much more efficient than the naive recursive solution.
But I think Fibonacci is not the best introduction because of this. I think something like GCD calculation would be a better problem to define recursively, because there really isn't an obvious "forwards" approach to oppose the "backwards" Euclidean algorithm.
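Euclid's algorithm, which the comment recommends as a cleaner first example, really is just its own recursive statement:

```python
def gcd(a, b):
    # gcd(a, b) = gcd(b, a mod b), bottoming out when b is 0
    return a if b == 0 else gcd(b, a % b)

assert gcd(48, 36) == 12
assert gcd(17, 5) == 1
```

There's no natural "start from the bottom and build up" phrasing here, which is what makes it a better showcase for recursion than Fibonacci.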
I think of "the last two numbers" as a description of recursion. You are effectively saying that the nth Fibonacci number is the sum of the (n-1)th and (n-2)th Fibonacci number, which is the textbook recursive solution.
The iterative solution is more like "start with 0 and 1, add them together to get a new number, then add that new number to the larger of the previous two numbers, and do that n times to get the nth Fibonacci number." That works just as well (or better, for computers and large n) as the recursive solution, but it's harder to explain, and it barely even hints at the most notable property of the sequence (that each number is the sum of the previous two numbers).
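The two English descriptions map directly onto code (Python here, using the common F(0) = 0, F(1) = 1 indexing):

```python
def fib_recursive(n):
    # "each new number is the sum of the last two numbers"
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # "start with 0 and 1, add them together, and do that n times"
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(10) == fib_iterative(10) == 55
```

The recursive version restates the definition; the iterative version restates the procedure for generating the series.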
The recursive definition is intuitive and easy to come up with given its mathematical definition. To avoid redundant computation, the function can be memoized. Yes, you can write it linearly as well, but oftentimes the linear algorithm is harder to come up with. SICP spends a lot of time on this subject in one of its early chapters, and it's worth a read. Also, some languages support recursion better than others. Scheme is top notch in this regard, as the standards do not specify a stack, meaning that implementations can do things like heap-allocate stack frames that grow and shrink according to demand.
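The memoization mentioned above is a one-decorator change in Python, turning the exponential naive recursion into a linear-time computation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # cache every fib(n) the first time it's computed
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(10) == 55
assert fib(100) == 354224848179261915075   # fast, vs. effectively never uncached
```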
It depends somewhat on what you are doing. If you are on a walk through a (metaphorical) forest, do you think of yourself on a subpath of a subpath of a subpath, or is it just "well I guess I'm here now!"?
If you are just doing a simple search, maybe you don't need to keep that whole stack around. An iterative approach makes a lot of sense. If you are building a map of the tree and you're not even sure what scale to make it, you might want to reserve judgement about the supertree while you explore some sub trees so you might be glad that stack is there, and you might profit from thinking about that situation as recursion.
But the answer is: yes, a lot of things you do with trees make a lot of sense as simple procedures.
Funnily, I personally think linked lists are unintuitive.
"We have a sequential block of memory, and we want to store a sequence in it... hrm... I know, why don't we put the first element in this struct, and... I'll deal with the rest of the items elsewhere."
Linked lists would only be unintuitive if you wanted to apply them to a sequential block of memory like that. Linked lists are used in non-sequential blocks of memory, though you can use them in sequential blocks. It sometimes makes sense to, but really only in an embedded systems context where you're doing some other fun things like writing your own memory management system using preallocated arrays as heaps so that your memory usage doesn't grow unbounded.
I'm aware there are circumstances where linked lists are useful, but IMHO they're rarely introduced alongside a context that justifies them. Unintuitive doesn't mean bad, FWIW.
I wrote that after a very, very long day and long after I should've gone to bed (stupid needing to wear clothes, and not wanting to wear dirty clothes).
The main point I wanted to make was that you described linked lists as unintuitive specifically in the context of having access to sequential memory space. Linked lists are not, typically, used in that circumstance. They make much more sense, and I think become more intuitive, when you consider them in the context of non-sequential memory space.
Okay... the sequence has length measured in the billions, and we want to be able to rapidly remove its elements, or insert new elements in front of existing elements.
Suddenly linked lists (doubly-linked) are looking pretty intuitive :)
I'm not convinced. I obviously can't say what I'd do if I had to think these up, but I think I'd be more likely to try and impart ordering on a contiguous array than building a multiple-allocation one.
Going down this route would probably result in a contiguous linked list. You can argue that this is still a linked list, but I still find it conceptually more obvious. For a billion elements, it's probably a lot faster, too.
You are not alone. NASA/JPL explicitly prohibits it in their coding standards [0].
Rule 4 (recursion)
There shall be no direct or indirect use of recursive function calls.
[MISRA-C:2004 Rule 16.2; Power of Ten Rule 1]
The presence of statically verifiable loop bounds and the absence of recursion
prevent runaway code, and help to secure predictable performance for all
tasks. The absence of recursion also simplifies the task of deriving reliable
bounds on stack use. The two rules combined secure a strictly acyclic function
call graph and control-flow structure, which in turn enhances the capabilities
for static checking tools to catch a broad range of coding defects.
Though, for context, here's the first paragraph after the introduction:
The coding rules defined here primarily target the development of mission
critical flight software written in the C programming language. This means
that the rules are focused on embedded software applications, which
generally operate under stricter resource constraints than, e.g., ground
software.
I don't think it's at all surprising that the coding standards governing critical software running in resource-constrained environments should be so conservative. But I also don't know that the priorities motivating those standards generalize well to what most programmers do for a living.
Recursion is prohibited in embedded contexts because it allows for, potentially, unbounded growth of the call stack. In the embedded world we have to know how much space things need. You'll note they also ban malloc. But few would suggest that every C programmer should abandon malloc in their software.
> Recursion is prohibited in embedded contexts because it allows for, potentially, unbounded growth of the call stack.
Only if you're sloppy about the inputs given to the recursive code.
Recursion isn't magical. It's deterministic, and is a function of some aspect of the input values. (Corollary: If it isn't, what the fuck are you doing?)
So NASA decided to solve the stack growth problem by limiting recursion. (Zero is a limit.) It could have done the other thing and limited inputs to recursive code, which would have required more careful code analysis (static and dynamic) to figure out what the limits should be. It's an understandable decision, but don't forget it's a trade-off, and one that could have gone another way.
Sure, it's deterministic. But in embedded contexts we typically have no malloc either, and often operate in very memory constrained environments. By removing recursion and favoring iteration you know the size of your stack at any given point in program execution. There is no size range that you have to consider (for instance, by limiting recursion to a depth of 20 for a particular algorithm or something). We also have to remember that TCO is not, or historically not, guaranteed in C compilers, the language in question.
So any recursive algorithm using tail-calls would be better implemented as a for/while loop in embedded C contexts, the performance will be similar (should actually be better) and the memory is now constant instead of increasing.
Any recursive algorithms that can't use tail-calls, I'm honestly not thinking of common situations for these in an embedded context. Perhaps traversing certain data structures, but you can probably find a different data structure that will be sufficiently efficient and not require non-tail-calls. Even with sorting, if you have a small, fixed sized array you could implement a sorting network. If it's too large for that to be feasible in code, it's probably still a small enough array that insertion sort is efficient.
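A minimal sketch of that rewrite (Python standing in for the embedded C, with a hypothetical example rather than anything from a real codebase): the tail call's new arguments become updates to the loop variables.

```python
def countdown_sum_recursive(n, acc=0):
    if n == 0:
        return acc
    return countdown_sum_recursive(n - 1, acc + n)   # tail call

def countdown_sum_iterative(n):
    acc = 0
    while n > 0:          # the tail call becomes the next loop iteration
        acc += n
        n -= 1
    return acc

assert countdown_sum_recursive(100) == countdown_sum_iterative(100) == 5050
```

The iterative version uses constant stack space regardless of n, which is exactly the property the embedded rules are after.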
>It could have done the other thing and limited inputs to recursive code, which would have required more careful code analysis (static and dynamic) to figure out what the limits should be. It's an understandable decision, but don't forget it's a trade-off, and one that could have gone another way.
I don't see the trade-off.
On one hand (if you allow recursion) you need "more careful code analysis".
On the other hand, if you don't allow it, you don't lose anything. You can write the algorithm in another way, and it would be just as fast and easier to reason about.
So what's the tradeoff? It's not like recursion brings some benefit that you lose if you disallow it.
> On the other hand, if you don't allow it, you don't lose anything. You can write the algorithm in another way, and it would be just as fast and easier to reason about.
This is where we differ, then: Some algorithms are more natural to write recursively. Sorting is a big example.
I'm fairly sure Dijkstra didn't say that, since he was advocating for recursion to be added. It was one of the splinter Algol groups that added that sentence trying to remove it.
> It's my uninformed opinion that recursion is uninteresting and unintuitive.
Recursion is the use of the built in stack data structure that the compiler manages for you.
If you are working your way through a data set with a well determined end state that you incrementally approach (be it the end of the list, or getting closer and closer to finding a value in a tree), and you have a need to backtrack, then recursion is a good way to solve the problem.
All recursive code can be rewritten to be non-recursive, you just keep track of your own stack data structure. This can be either not that much work, or a major pain, depending on the exact problem.
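A small illustration of that rewrite (hypothetical dict-based node layout): a preorder traversal done both ways, with an explicit list standing in for the call stack.

```python
def preorder_recursive(node, out):
    if node is None:
        return
    out.append(node["v"])
    preorder_recursive(node["l"], out)
    preorder_recursive(node["r"], out)

def preorder_iterative(root):
    out, stack = [], [root]          # our own stack replaces the call stack
    while stack:
        node = stack.pop()
        if node is None:
            continue
        out.append(node["v"])
        stack.append(node["r"])      # right pushed first...
        stack.append(node["l"])      # ...so left is popped and visited first
    return out

leaf = lambda v: {"v": v, "l": None, "r": None}
tree = {"v": 1, "l": {"v": 2, "l": leaf(4), "r": leaf(5)}, "r": leaf(3)}
rec = []
preorder_recursive(tree, rec)
assert rec == preorder_iterative(tree) == [1, 2, 4, 5, 3]
```

Preorder is the "not that much work" end of the spectrum; inorder or postorder needs extra bookkeeping on the explicit stack and drifts toward "major pain".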
> All recursive code can be rewritten to be non-recursive, you just keep track of your own stack data structure. This can be either not that much work, or a major pain, depending on the exact problem.
Bull, a function that is not primitive recursive cannot be rewritten to be non-recursive.
I agree, I don't think I have ever used recursion in production code. You can normally avoid it by using a queue or stack data structure. Or rethinking the problem.
Haskell tutorials usually advise against explicit recursion, instead favouring combinations of map, filter etc. as the best choice, and failing that use foldl', foldr etc. With explicit recursion only being used if you really have a good reason.
I need to convert every object embedded in my top object to a {"M" : {...}} structure, and every array/list to a {"L" : [...]} structure. So something like -
{"a" : {"b" : [1]}} should become {"a" : {"M" : {"b" : {"L" : [{"N" : "1"}]}}}}.
The easiest way to write one function that can handle that in a generic way, no matter how complicated the data structure (barring hitting the stack limit), is recursion.
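The original is in JavaScript; here's a hedged Python sketch of the same shape (the "S" string case is my assumption, since only numbers appear in the example above):

```python
def convert(value):
    if isinstance(value, dict):
        return {"M": {k: convert(v) for k, v in value.items()}}
    if isinstance(value, list):
        return {"L": [convert(v) for v in value]}
    if isinstance(value, (int, float)):
        return {"N": str(value)}
    return {"S": str(value)}          # assumed fallback for strings

def convert_top(obj):
    # per the example, the top-level object's own keys stay unwrapped
    return {k: convert(v) for k, v in obj.items()}

assert convert_top({"a": {"b": [1]}}) == \
    {"a": {"M": {"b": {"L": [{"N": "1"}]}}}}
```

However deeply the input nests, the one convert function handles it, which is the point being made.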
Eh. I don't think writing a custom stack implementation would be any more difficult and you'd get to scale to the memory of your machine rather than your language's runtime.
I'd love to see the code for that if you've got it (I have a recursive version in Javascript). Because the recursive one is really simple to read, intuitively grasp what is going on, confirm is correct, test, modify, etc. And rewriting it to use a stack, while possible, feels harder to me to think up, reason about, and no matter how I try comes out uglier. I haven't spent a lot of time on it, mind you (in part because the recursive one hits everything I care about), but I'd love to see one as elegant written iteratively.
Why would you ever work on the internals of how trees work? That has 0 business value 99.99% of the time. It's all solved problems. And even so recursion might not be the best approach.
Trees don't always call themselves "trees." They can sneak up on you. Nested lists, for instance, are equivalent to trees, and nested lists show up all the time. Perhaps your programming languages has some sort of recursive map function builtin, but a lot of common languages don't, and it really helps to be able to intuitively think about recursive traversal.
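A sketch of such a helper (hypothetical name deep_map, not a standard library function):

```python
def deep_map(f, xs):
    # recurse into sublists, apply f to everything else
    return [deep_map(f, x) if isinstance(x, list) else f(x) for x in xs]

assert deep_map(lambda n: n * 2, [1, [2, [3]], 4]) == [2, [4, [6]], 8]
```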
I once worked on a website that analyzed Python code in order to index it into elasticsearch for structural search queries (things like "find me all classes that inherit from SomeBaseClass"). Python's built-in ast library outputs a tree for you to traverse, and even with their provided tree walker you still deal with the internals of the tree and recursion.
Furthermore, part of our indexing needed to follow imports to find the canonical name of a value, which is a problem very well suited to a recursive solution.
You may not have encountered these situations where recursion and trees have to be dealt with, but they absolutely exist.
Well, I imagine if your background/education wasn't computer science, and you didn't have an intro to algorithms and data structures, (and also not that into math) recursion would feel unnatural and strange.
I say this because I was originally a self-taught programmer, and I hated recursion too. Once I went to school for a CS degree, I learned how awesome it really was, and now I use it all the damn time.
So what is it about recursion that eludes you? It is very strange to me that someone would find one of those concepts easy and the other not -- as you say, they are two forms of the same thing.
I guess it's more reading and writing it in code. In my case, it makes sense to read code like I read a book: it has a beginning, a middle, and an end. (Procedural, see?)
A recursive function makes me do all these convoluted mental gymnastics when I'm just trying to figure out if Cinderella lived happily ever after or not.
If you ever develop any abstraction over the raw DOM when writing a rich web app, you'll be dealing with trees, and if your abstraction supports nesting, you'll be dealing with recursion.
I don't understand what's weird and special about recursion other than the ways it can be used to easily solve some problems.
I would find it very unintuitive if a language didn't let me write a function that calls itself. That feels like an obvious thing (if we ignore all the implementation details).
When I first started programming I used recursion without knowing that it was called that, and I didn't expect it to have a special name at all.
> I don't understand what's weird and special about recursion
Here's one way to see why recursion is special. Look at it from the perspective of implementing the language.
If your language doesn't have recursion, you can treat every local variable in your program as if it were a static global variable. It's local in terms of name scope, but you can give it a fixed blob of memory that you pick at compile time.
This is exactly what early Fortran compilers did. foo()'s local variables go here, bar()'s go here, etc. You can do away with the stack entirely, I believe.
So, without recursion, you can make your entire program only use statically allocated memory. You can be sure it will never, ever ever run out of memory.
Yep, exactly. For instance, the natural calling convention on the PDP-8 was that the return address was stored in the word just before the body of the called function (i.e. in C terms it looked like "funcaddr[0] = return_pc; pc = funcaddr + 1;"). I think other machines of that vintage and older were similar.
This meant you could support function calling without the need for a stack. However, you can't directly support recursive (or co-recursive) functions. Nor could you ever do things like multithreading or shared libraries, although those obviously weren't concerns at that time.
Today we think of the call stack as a fundamental part of the machine architecture, but at the time of Algol 60 it was still a very contentious subject.
Thank you for that interesting view. Do you have a reference for the early Fortran compiler design? I've always wondered why people say Fortran is blazing fast, even compared to C.
From what I've read, Fortran's performance advantage over C is mainly due to a lack of pointer aliasing[1] which permits the compiler to optimize more aggressively.
I am curious to know if modern Fortran compilers still use static allocation for local variables when compiling in Fortran 77 mode. I'd guess the cache advantages of keeping parameters close together in memory compensate for the extra work of manipulating the stack, but who knows...
I don't offhand, but that's what the article is referring to when it says:
> And this is what they wanted to remove because they wanted static allocation of procedure activation records.
From what I can tell with a bit of digging, machines/compilers back then didn't even use return stacks for subroutines. Instead, they used self-modifying code (!).
To return from a subroutine call, you'd do something like this:
1. When compiling a subroutine, add a JUMP followed by an address to the end.
2. When you call a subroutine, modify the code to change that address to point back to the caller (the instruction after the call).
3. Jump to the subroutine.
4. When it's done running, it hits the last JUMP and jumps back to the caller.
With this, you don't need any concept of a call stack at all. But it obviously prohibits recursion since the later call would trash the earlier one's return address.
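The four steps can be modeled with a toy interpreter (instruction names entirely made up, just to show the mechanism):

```python
def run(prog, ret_jump_at):
    pc, trace = 0, []
    while pc is not None:
        op, arg = prog[pc]
        if op == "call":
            prog[ret_jump_at] = ("jmp", pc + 1)   # patch the subroutine's return JMP
            pc = arg
        elif op == "jmp":
            pc = arg
        elif op == "emit":
            trace.append(arg)
            pc += 1
        else:                                     # halt
            pc = None
    return trace

prog = [
    ("call", 4),       # 0: call the subroutine
    ("call", 4),       # 1: call it again; the JMP is re-patched
    ("emit", "done"),  # 2
    ("halt", None),    # 3
    ("emit", "sub"),   # 4: subroutine body
    ("jmp", None),     # 5: return JMP, rewritten by every CALL
]
assert run(prog, 5) == ["sub", "sub", "done"]
```

A recursive call from inside the subroutine would overwrite slot 5 before the outer return had been taken, which is exactly why this scheme rules out recursion.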
Nothing wild about that. Implementations using frame pointers effectively have exactly that, and in some architectures, like SPARC, this is mandatory. Furthermore, some functional languages (SML and some Scheme implementations) do this explicitly. For one thing, it makes call-with-current-continuation much simpler to implement.
One big practical complaint is that recursion makes it difficult to reason about the run-time characteristics of code. There's some truth to this - I'm personally no stranger to refactoring recursive functions that have proven susceptible to stack overflows into iterative implementations.
I think even the most die-hard functional programming fans among us secretly know that this is a problem. It's why virtually every functional language implements tail call optimization, and why one of the earliest things that's taught to new students of functional programming is how to ensure that functions use tail recursion whenever possible. Sometimes we even go through some pretty impressive contortions because of this. Consider the case of continuation passing style.
The other is that a lot of people find recursion hard to understand. Others seem to naturally think in terms of recursion and think iteration is a headache. I'm not sure the two camps ever see eye to eye. It might be a brain wiring thing. Though perhaps practice rewires your brain - I wouldn't be surprised if making someone spend a decade writing mostly imperative code causes them to become more comfortable with iteration, and making someone spend a decade writing mostly declarative code causes them to become more comfortable with recursion.
> I wouldn't be surprised if making someone spend a decade writing mostly imperative code causes them to become more comfortable with iteration, and making someone spend a decade writing mostly declarative code causes them to become more comfortable with recursion.
I have spent a decade doing a mix of both. It really really really depends on the problem at hand what's easier to understand
Have to traverse a tree? Recursion.
Have to make a big thing out of small things? Recursion.
Fwiw I avoid writing both loops and recursion as much as possible. Collection data structures usually come with their own fold/map/etc functions, so this works well in practice.
This is a well-known story. I do take issue with the arrogance of the author: "Of course we cannot blame the Bauer faction for not having the experience of later workers in the field."
Oh _please_! The reality was that the common machines of the time didn't have today's fancy addressing modes. All memory addresses were global addresses. Thus, the _cost_ of making all memory access relative to a stack pointer was far from trivial. Of course, there was a work-around if you could forgo re-entrance (~ "thread-safety"): use static activation records, saving them to the stack before a recursive call and restoring them after. Still not exactly free.
Oops, static activation records would probably not work with the Algol 60 parameter semantics, so the original point stands: supporting recursion is really expensive on that breed of hardware.
A funny side story is that Hoare couldn't publish his algorithm for QuickSort until he learned ALGOL was ready to do recursion. QuickSort is one of those algorithms that is very difficult to express without a concept of recursion. Another is the various tree traversals. What are the other algorithms that would be impractical without recursion?
PS: Using a stack to simulate recursion doesn't count :).
> What are the other algorithms that would be impractical without recursion?
Interestingly, writing a compiler. Since expressions nest, it's very hard to write a compiler for them without a recursive function for compiling an expression.
Yes, Java as well as .NET implementations used recursive QuickSort (I think Java recently switched to Timsort?). The implementation, however, used only one recursive call instead of two as in textbooks, because the second call is tail recursive and can be made iterative. This is not done just to avoid recursion: with QuickSort you run the risk of using up O(n) stack space in the worst case. Making the second call iterative lets you make sure recursion always occurs on the smaller partition, guaranteeing O(lg n) stack space.
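That trick sketches out like this (Python, with a Lomuto partition for brevity; real library sorts are more elaborate):

```python
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:          # recurse into the smaller partition...
            quicksort(a, lo, p - 1)
            lo = p + 1               # ...and loop on the larger one
        else:
            quicksort(a, p + 1, hi)
            hi = p - 1

def partition(a, lo, hi):
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

xs = [5, 3, 8, 1, 9, 2, 7]
quicksort(xs)
assert xs == [1, 2, 3, 5, 7, 8, 9]
```

Since the recursive call always takes the side with at most half the elements, the recursion depth is bounded by log2(n).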
Certainly std::sort and friends are recursive in all C++ implementations I've looked at.
In C++ it's hard to make it efficient and non-recursive. You can have a fixed-size buffer (which limits depth), or call malloc for a buffer (a single call to malloc can be more expensive than just doing the sort on the stack, and in general the C++ standard library tries hard to avoid mallocing).
Committee member F.L. Bauer registered his protest by characterizing the addition of recursion to the language as an "Amsterdam plot".
Is "Amsterdam plot" an idiom, or simply a colorful expression that occurred to Bauer? I haven't seen this expression before, but TFA repeats it several times in stylized fashion so I thought maybe I was missing the intended meaning?
The article was interesting and entertaining too. Good (and bad) committee outcomes happen as described, when one or a few individuals seize the initiative, setting out their ideas in a more or less finished form. Other committee members may feel blindsided but in the crunch there's insufficient time to produce alternatives. It's a lesson worth our attention.
The story evoked a personal element, going back to when I was first learning programming in the 80's. Early on one of the serious hurdles I encountered was recursion. (Another was pointers and indirection.) After struggling to wrap my brain around the thing that kept calling itself, I ran across a book, "Thinking Recursively" by Eric S. Roberts, 1986, which was enormously helpful.
I consider the book a classic. And it must be since it's still in print! I'd highly recommend it for anyone who finds recursion unintuitive. Way back then it wasn't intuitive to me either, but with the book as a guide in time it became as natural as breathing air.
But Lisp already had recursion, right? It was just Algol and the international standard that was held up.
Similar to the situation with things like CoffeeScript vs. ES6 etc.
What I think we need is a standard so flexible and meta that systems can evolve without waiting for society or standards to catch up. Which we have in Turing-complete languages.
But we should standardize somehow on a very meta level.
>What I think we need is a standard so flexible and meta that systems can evolve without waiting for society or standards to catch up.
This is why we must demand macro systems in the languages we use. Building new languages is part of the programming process. Everyone should be able to do it, not just standards committees.
Winging off a comment above, I find recursion generally intuitive, occasionally useful, and really really boring.
Offhand, years ago I used a dialect of C that had no recursion, so I can see both sides of this argument. On the one hand, there is a pain point when you don't have access to recursion. On the other, boy can a compiler that eschews recursion produce small and fast code. (Because without recursion the call graph is acyclic.)
That would be hard to turn off. My experience is with an old 8051 compiler. Its notable feature was that it never pushed anything on the stack other than return addresses. It used the call tree to fold parameters and local variables into a small amount of common memory. I think modern compilers deal with this situation by inlining. But that can cause the code size to swell vs. calling a subroutine that accesses globals.
Just saying, you add a feature to a language, you will pay something for it.