The feature list here has significant overlap with Nim. Maybe we need a website that categorizes languages with feature tags, so we can visualize the overlap!
Superficially similar, but from a look at the README, it has no polymorphism or generics, which hugely differentiates it from Nim, which leans very, very heavily on templates/generics throughout the entire language/standard library.
Granted, that also means Tomo probably has better incremental compilation, and would likely be more amenable to creating shared libraries. You can do that with Nim, too, but the external interface (generally) has to be "C" semantics (similar to most other "high level" languages).
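To make the "C semantics" point concrete, here's a minimal sketch (file and proc names are invented) of the kind of interface a Nim shared library typically ends up exposing:

    # mathlib.nim -- build with: nim c --app:lib mathlib.nim
    proc addInts*(a, b: cint): cint {.exportc: "addints", cdecl, dynlib.} =
      # Only C-compatible types cross this boundary; Nim's generics,
      # seqs and strings have to be flattened or wrapped by hand.
      result = a + b

Anything richer than plain C types (a generic Table[K, V], say) doesn't survive that ABI without manual wrapping.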
> would likely be more amenable to creating shared libraries.
Why's that? There's a gc/no-gc barrier to cross, and also being able to use other features in an implementation doesn't make creating a C interface harder.
I was thinking more along the lines of compiling Tomo code, then being able to link against that pre-compiled binary from other Tomo code. Basically being able to substitute a source file module for a binary module.
I don't know if Tomo supports anything like that, but not having generics would make it easier/simpler to implement (e.g. no need to mangle symbol names). Note I said "easier/simpler": Nim can also "precompile Nim libraries for Nim code", but the resulting libraries are brittle (API-wise) and only really useful for specific use cases, like creating binary Nim plugins to import into another Nim program, where you'll want to split out the standard runtime library [0] and share it across multiple images. It's not useful for e.g. precompiling the standard library to save time.
I know Nim has been working on incremental compilation, too, but I don't know what the state of that is. I think it might have been punted to the rewritten compiler/runtime "Nim 3".
I've been statically linking Nim binaries with musl. It's fantastic. Relatively easy to set up (just a few compiler flags and the musl toolchain), and I get an optimized binary that is indistinguishable from any other static C Linux binary. It runs on any machine we throw it at. For a newer-generation systems language, that is a massive selling point.
Yeah. I've been doing this for almost 10 years now. It's not APE/cosmopolitan (which also "kinda works" with Nim but has many lowest common denominator platform support issues, e.g. posix_fallocate). However, it does let you have very cross-Linux portable binaries. Maybe beyond Linux.
Some might appreciate a concrete instance of this advice inline here. For `foo.nim`, you can just add a `foo.nim.cfg` next to the source file.
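Something along these lines works as a sketch (assuming the musl toolchain is installed and `musl-gcc` is on the PATH):

    # foo.nim.cfg -- picked up automatically whenever foo.nim is compiled
    --gcc.exe:"musl-gcc"
    --gcc.linkerexe:"musl-gcc"
    --passL:"-static"

After that, a plain `nim c -d:release foo.nim` should produce a fully static Linux binary.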
Nim has a similar, strong preference for value semantics. However, its dynamic heap types (strings, seqs, tables) are all implemented as wrappers that hide the internal references and behave with value semantics by default, unless you explicitly reach for an escape hatch. It makes it incredibly easy to manipulate almost any data in a functional, expression-oriented manner, while preserving the speed and efficiency of being backed by a doubling array-list.
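As an illustrative sketch of what that default looks like in practice (just standard-library types, not anything from the article):

    import std/tables

    var a = @[1, 2, 3]       # seq data lives on the heap...
    var b = a                # ...but assignment copies the contents
    b.add 4
    echo a                   # @[1, 2, 3]  -- a is untouched
    echo b                   # @[1, 2, 3, 4]

    var scores = {"ada": 1}.toTable
    var snapshot = scores    # same story for tables: an independent copy
    scores["ada"] = 99
    echo snapshot["ada"]     # 1

The escape hatch, when you actually want shared mutable state, is to opt into `ref` types explicitly.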
There are also many Americans who have always desired de-dollarization, since the dollar has been the foundation of our imperial system and abuses of state power. Say what you will about how dated a gold standard is, but it forces immediate fiscal responsibility upon governments; fiat currencies defer responsibility, turning it into a Sword of Damocles that catastrophically falls upon future generations. You trade geopolitical dominance now for guaranteed future withering. Of course, fiscal responsibility and the rejection of imperial ambition were core principles of the Anti-Federalists, Democratic-Republicans, and Whigs. It's baked into our history and tradition to not want to be the unilateral arbiters of a global trade and alliance system.
> Say what you will about how dated a gold standard is, but it forces immediate fiscal responsibility upon governments
We were on metal standards for millennia. Governments routinely spent beyond their means, including for imperial aims. This is like four centuries of Roman history.
What @cheeseomlit said, plus: during the Age of Exploration/Colonization, competing European powers each minted metal currencies and couldn't reasonably debase their metals without immediately being out-competed (which is why the trusted purity of Spanish gold and silver coinage slowly dominated). The primary mode of failure for those empires was bankruptcy via war. The impossibly high cost of fielding armies around the globe was laid bare during that era. Paper money lets us (funnily enough) paper those costs over and live with the illusion that it's all a free lunch. But what we're actually doing today is debasement, and eating, voraciously, all of the deflationary efficiency gains won by 20th century technological progress.
> During the Age of Exploration/Colonization, competing European powers each minted metal currencies and couldn't reasonably debase their metals without immediately being out-competed
You're hitting the nail on the head. Metalness didn't matter. It was competition in money supplies (and strength of private property).
The fact that a banker in Italy could finance (or not) a war by Great Britain is what restrained governments. Same as in the 1990s, the bond market was king.
The historical record simply does not support the hypothesis that metal-based economies are more peaceful or inflate less than modern fiat-based ones. I'm open to revising that opinion if someone has re-run the data. But everything I've seen comes from blogs that start with the conclusion, itself reached from first-principles assumptions that rarely contemplate how armies were actually financed in antiquity. (Hint: they take your shit. It's no coincidence that marketing shares the martial term "campaign". If you were a general in olden times, you got wealthy through your commission because your army took the enemy's shit. If you needed help getting there, you paid a 'friendly' visit to your nobles.)
Yes, but there were practical limits to currency debasement when the currency was a physical commodity with intrinsic value. People notice when their coins start getting filled with lead, and there were serious political repercussions for it. You can't just conjure a trillion gold/silver coins out of nothing like you can with fiat, and the ability to do so is 100% guaranteed to be abused.
> there were practical limits to currency debasement when the currency was a physical commodity
"In the second century, a modius of wheat (approximately nine liters), during normal times, had sold for ½ Denarius…. the same modius of wheat sold in 335 AD for over 6000 denarii, and in 338 AD for over 10,000" [1].
Note that inflation can also occur with zero debasement if the economy around the fixed money supply collapses. This happened in Rome when Pompey and later Augustus were trashing trade routes. It may have even led to the collapse of India's ancient democracies.
> there were serious political repercussions for it
There weren't. The state historically borrowed from the hilt of the sword. Economic collapse constrained kings and emperors. Not politics.
I was surprised that for most of my smaller use cases, Zepto.js was a drop-in replacement that worked well. I do need to try the jQuery slim builds, I've never explored that.
My take is to use cultivated insects (Black Soldier Flies), duckweed, and algae as protein feedstock for chickens and fish. Along with more humane husbandry of those animals, it should be an acceptable path to protein for people.
The real question at the core of any production system: what's the minimum performance cost we can pay for abstractions that substantially boost development efficiency and maintainability? Just like in other engineering fields, the product tuned to yield the absolute maximum possible value in one attribute makes crippling sacrifices along other axes.
There are two distinct constructs that are referred to by the name ‘variable’ in computer science:
1) A ‘variable’ is an identifier which is bound to a fixed value by a definition;
2) a ‘variable’ is a memory location, or a higher-level approximation abstracting over memory locations, which is set to a value, and may later be changed, by an assignment.
Both of the above are acceptable uses of the word. I am of the mindset that the entangled coexistence of these two meanings, in both languages and in discourse, is a large and fundamental problem.
I take the position that, inspired by mathematics, a variable should mean #1, thereby making variables immutably bound to a fixed value. Meaning #2 should have some other name and require explicit use thereof.
Coming from a PLT and maths background, a mutable variable is somewhat oxymoronic. So, I agree, let's not copy JavaScript, but let's also not be dismissive of the usage of terminology that has long-standing meanings (even when the varied meanings of a single term are quite opposite).
“Immutable” and “variable” generally refer to two different aspects of a variable’s lifetime, and they’re compatible with each other.
In a function f(x), x is a variable because each time f is invoked, a different value can be provided for x. But that variable can be immutable within the body of the function. That’s what’s usually being referred to by “immutable variable”.
This terminology is used across many different languages, and has nothing to do with Javascript specifically. For example, it’s common to describe pure functional languages by saying something like “all variables are immutable” (https://wiki.haskell.org/A_brief_introduction_to_Haskell).
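A tiny sketch of the distinction, in Nim syntax for concreteness (the proc is invented):

    proc f(x: int): int =
      # x is a "variable": each call may bind it to a different value,
      # yet within one invocation it is immutable, like a `let` binding.
      let doubled = x * 2
      # doubled = 0         # error: 'doubled' cannot be assigned to
      result = doubled + 1

    echo f(1)    # 3
    echo f(10)   # 21

Both x and doubled vary across calls without ever being mutated in place.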
Probably "variable" originally came about as an ellipsis for something like "(possibly) *variable* value stored in some dedicated memory location". Probably "holder", "keeper" or "warden" would make more accurate terms in ordinary parlance. Or, to be very on point while dropping the ordinariness, there is mneme [1] or mnemon [2].
Good luck propagating an idea, as sound as it might be, to a general audience once something is established in the jargon.
> Probably "variable" originally came about as an ellipsis for something like "(possibly) *variable* value stored in some dedicated memory location".
No, the term came directly from mathematics, where it had been firmly established by 1700 by people like Fermat, Newton, and Leibniz.
The confusion was introduced when programming languages decided to allow a variable's value to vary not just when a function was called, but during the evaluation of a function. This then creates the need to distinguish between a variable whose value doesn't change during any single evaluation of a function, and one that does change.
As I mentioned, the terms apply to two different aspects of the variable lifecycle, and that's implicitly understood. Saying it's an "oxymoron" is a version of the etymological fallacy that's ignoring the defined meanings of terms.
I think you are confused by terminology here and not by behavior. "Immutable variable" is normal terminology in all languages and could be said to be distinct from constants.
In Rust, if you define with "let x = 1;" it's an immutable variable, and the same goes for Kotlin's "val x = 1".
Lore and custom made "immutable variable" a kind of frequent idiomatic parlance, but it’s still an oxymoron in the generally accepted, isolated meanings of the two words.
Neither "let" nor "val[ue]" implies constancy or vacillation in themselves without further context.
Words only have the meaning we give them, and "variable" already has this meaning from mathematics in the sense of x+1=2, x is a variable.
Euler used this terminology; it's not a newfangled corruption or anything. I'm not sure it makes much sense to argue that new languages should use a different terminology than this based on a colloquial/nontechnical interpretation of the word.
I get your point on how word meanings evolve.
Also, it's fine for anyone to name things however they come to mind, as long as the other side gets what is meant, at least, I guess.
On the other hand, it doesn't hurt anyone much to call an oxymoron an oxymoron, or to exchange in a vacuous manner about terminology or its evolution.
On the specific example you give, I’m not an expert, but it seems dubious to me. In x+1=2, terms like x are called unknowns. Prove me wrong, but I would rather bet that Euler used unknown (quantitas incognita) unless he was specifically discussing variable quantities (quantitas variabilis) to describe, well, quantities that change. Probably he used also French and German equivalents, but if Euler spoke any English that’s not reflected in his publications.
"Damit wird insbesondere zu der interessanten Aufgabe, eine quadratische Gleichung beliebig vieler Variabeln mit algebraischen Zahlencoeffizienten in solchen ganzen oder gebrochenen Zahlen zu lösen, die in dem durch die Coefficienten bestimmten algebraischen Rationalitätsbereiche gelegen sind." - Hilbert, 1900
The use of "variable" to denote an "unknown" is a very old practice that predates computers and programming languages.
I've used JSON as an additional options input to a native-compiled CLI program's various commands because 1) the schema of each option is radically different, 2) they need to be passed most of the way down the call stack easily for each stage of our calculation and report generation.
It works fantastically well, and don't let anyone tell you that you MUST bloat the CLI interface of your program with every possible dial or lever it contains. We should all be cognizant of the fact that, in this very young and rapidly evolving profession, textbook and real-world best practice often do not overlap, and are converging and diverging all the time.
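A rough sketch of the shape this takes, in Nim for concreteness (the option names, command flag, and file are all invented):

    import std/[json, parseopt]

    proc runReport(opts: JsonNode) =
      # the whole options blob travels down the call stack as one value
      let title = opts{"report", "title"}.getStr("Untitled")
      let pages = opts{"report", "maxPages"}.getInt(10)
      echo "generating '", title, "' (up to ", pages, " pages)"

    when isMainModule:
      var optsFile = "options.json"
      for kind, key, val in getopt():
        if kind == cmdLongOption and key == "options":
          optsFile = val
      runReport(parseFile(optsFile))

The flat CLI surface stays tiny (one `--options` flag), while the structured knobs live in a schema that's easy to version and validate.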