The nice thing about shell scripts is that they're ubiquitous and have very few external dependencies. Having a shell script depend on Node.js seems a bit counter-intuitive?
Xeon just adds the ability to use npm as your package manager. Why use tools like bpkg or anything else when you can use a great ecosystem trusted by thousands of people?
A Xeon bundle should be made at the dev stage; you shouldn't bundle it on the actual server etc. where you use this script.
Reliance on transport security instead of providing cryptographic verification of code is my biggest beef, very closely followed by what is essentially a nonexistent reputation system (or, in lieu of a code reputation system, a curated selection of packages).
Better management of your dependencies, plus bundling to a single file for easier distribution.
In the future, transforming scripts with plugins will mean reusing fish shell functions in bash scripts, etc.
Also each shell has different syntax for sourcing files.
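For example (a minimal sketch; the file path is made up for illustration), the portable POSIX form is `.`, while `source` is a bash/zsh spelling that plain sh may not accept:

```shell
# Create a tiny "library" to pull in (path is made up for illustration).
echo 'greet() { echo "hello from lib"; }' > /tmp/lib.sh

# Portable POSIX form -- works in sh, bash, ksh, zsh:
. /tmp/lib.sh
greet

# bash and zsh also accept the `source` spelling, but plain sh may not:
#   source /tmp/lib.sh
```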
If you're needing to manage dependencies for a bash script, are you sure you're using the right language for the job? I'm not convinced that mixing and matching functions from different shells is a good thing.
I like to modularise my shell scripts for maintainability reasons. It would be nice to be able to just "pull in the pieces" as needed from my collection of components instead of, as I currently tend to do, re-including them wherever I'm using them.
Is it really a good idea to name your open source module after a trademarked name?
Xeon is the brand name for Intel's line of server and workstation processors, and they don't strike me as the kind of firm that would take co-opting of a brand name lightly.
IIRC you can register identical trademarks as long as they represent sufficiently different things, especially if it's a commonly used word; no one is going to confuse a CPU, an e-cig, and a scooter.
I've not played with node.js since the very early days, and never really used NPM. However, I do see the need for modular, composable shell scripts.
Personally, I've been using Nix in a similar way, since it also has nice features like caching, laziness, splicing into indented strings, dependency management, etc. For example, if you have a Nix expression stored in "my-script.nix" you can use the following (e.g. in "my-script.sh") to invoke it:
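(The invocation itself seems to have been lost from this comment; given the flags explained in the next paragraph, a hypothetical reconstruction of "my-script.sh" using `nix-instantiate` would look something like this:)

```
#!/usr/bin/env bash
# Hypothetical reconstruction -- the original invocation was lost:
nix-instantiate --eval --read-write-mode --show-trace \
  -E 'import ./my-script.nix'
```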
The `--eval` tells Nix to evaluate an expression, rather than build a package. `-E` is the expression to evaluate (in this case, importing our "script" file). `--read-write-mode` allows the script to add things to the Nix store. `--show-trace` is to aid debugging.
That sounds pretty cool, how does it work? Does it match the indentation of the parent that issued the splice for the entire child? How about nested splices?
Well, this is basically two features I condensed into one phrase. I didn't mean context-sensitive splicing (e.g. splicing together Python code, whilst adhering to the off-side rule).
Firstly, indented strings mean that long strings (such as scripts) can be embedded inside other expressions quite naturally. For example:
    runCommand "foo"
      { buildInputs = [ python imagemagick ]; }
      ''
        I am an indented string
        I will be executed as a bash script, with the following dependencies available:
         - python
         - imagemagick
        Since these lines, and those above beginning with "I", have the least indentation,
        they will appear flush to the left. The "list" above will hence be indented by 1
        space.
      ''
Secondly, splicing allows Nix expressions to be embedded inside strings. A splice begins with "${" and ends with "}". The expression should either evaluate to a string, which is inserted as-is, or a "derivation" (e.g. a package), which gets "instantiated" (i.e. installed) and its installation directory is inserted into the resulting string. Splices can be nested too.
For example, instead of giving "python" as a dependency in the buildInputs, we could splice the full path into a string, e.g.
    ''
      "${python}/bin/python" my_script.py
    ''
Although this is probably a bad idea, since there may be transitive dependencies, etc. missing when the script gets executed.
If we want to build up a result incrementally, with each step getting cached, we can use "runCommand", and write the results to "$out". For example:
    with import <nixpkgs> {};
    with builtins;
    let
      # Takes a script and runs it with jq available (Nix functions are curried)
      runJq = runCommand "jq-cmd" { buildInputs = [ jq ]; };

      step1 = runJq ''
        echo "I am step 1" 1>&2
        echo '[{"name": "foo"}, {"name": "bar"}]' | jq 'map(.name)' > "$out"
      '';

      step2 = runJq ''
        echo "I am step 2" 1>&2
        I won't be executed, because Nix is lazy and nothing calls me
      '';

      step3 = runJq ''
        echo "I am step 3" 1>&2
        jq 'length' < "${step1}" > "$out"
      '';
    in readFile step3
When run, this gives the following:
$ ./go.sh
building path(s) ‘/nix/store/5ks08zbvmgzbhg9kr0k4g75nf2ymsqsr-jq-cmd’
I am step 1
building path(s) ‘/nix/store/v1svcqq6cmi4xc9650qz9w2x177w4pfr-jq-cmd’
I am step 3
"2\n"
$ ./go.sh
"2\n"
The results are cached, and will be re-used as long as the commands aren't edited, and their dependencies don't change (e.g. if a newer version of jq is available, they'll be re-run with that version).
In this case, each "step" represents the data, which is common in lazy languages. Alternatively, we can use "writeScript" to write more 'traditional' process-oriented scripts:
    with import <nixpkgs> {};
    with builtins;
    let
      # Takes a script and runs it with jq available (Nix functions are curried)
      runJq = runCommand "jq-cmd" { buildInputs = [ jq ]; };

      step1 = writeScript "step-1" ''
        echo "I am step 1" 1>&2
        echo '[{"name": "foo"}, {"name": "bar"}]' | jq 'map(.name)'
      '';

      step2 = writeScript "step-2" ''
        echo "I am step 2" 1>&2
        I won't be executed, because Nix is lazy and nothing calls me
      '';

      step3 = writeScript "step-3" ''
        echo "I am step 3" 1>&2
        "${step1}" | jq 'length'
      '';
    in readFile (runJq ''
      "${step3}" > "$out"
    '')
Of course, we need something to invoke these scripts, which is why I used "runJq" in the final expression. When run, we get:
$ ./go.sh
building path(s) ‘/nix/store/fnw68cmkib5fkmhls4fkdhx0vb2cyka8-step-1’
building path(s) ‘/nix/store/1kiwa6m11d0apxfjbwpqq3vl6jbv3sdx-step-3’
building path(s) ‘/nix/store/9hv1jcrglyx8x6xa64pnds6vzcp35zl5-jq-cmd’
I am step 3
I am step 1
"2\n"
$ ./go.sh
"2\n"
This time the scripts are cached, but we execute them both together in a normal pipe. The overall result of the "runJq" call is still cached, though. This is how you'd run non-bash scripts too: use "writeScript" to save your code to disk, and "runCommand" to invoke it with a bash one-liner. For example, if we want "step4" to use Haskell we might do the following:
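(The "step4" code itself seems to have been lost from this comment; a hypothetical reconstruction — the names "hsScript"/"step4" and the exact Haskell are my guesses, based on the surrounding discussion and the "hs-cmd" path in the output below — might look like:)

```
# Hypothetical sketch -- the original code was lost from the comment.
hsScript = writeScript "step-4.hs" ''
  main = do n <- fmap read getContents
            print (replicate n "hello world")
'';

step4 = writeScript "step-4" ''
  runghc "${hsScript}"
'';

# ...invoked with ghc in scope, e.g.:
# readFile (runCommand "hs-cmd" { buildInputs = [ ghc jq ]; } ''
#   "${step3}" | "${step4}" > "$out"
# '')
```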
This reads the length given by jq, and writes out a list of that many "hello world"s:
$ ./go.sh
building path(s) ‘/nix/store/2d7wrd78dk1ilj84adnyq8ddgzy6m2rr-hs-cmd’
building path(s) ‘/nix/store/haqcwssfbzbj5s4ampv322qbpll1gw1h-jq-cmd’
I am step 3
I am step 1
"[\"hello world\",\"hello world\"]"
Unfortunately, this can end up separating the code from its dependencies, i.e. we needed to give "ghc" as a dependency to whichever script invokes "step4" (via "runJq"), rather than being able to add it in "hsScript". If we used the original data-oriented approach, this wouldn't be an issue.
It's also pretty easy to transfer data between the Nix language and the processes we're invoking, using "readFile" and "builtins.fromJSON", or "builtins.toJSON" inside a splice; although Nix doesn't support floats, so you might need to turn them into strings first. This is useful for doing tricky transformations on small amounts of data, which may be error-prone in bash, but where invoking a full-blown language like Haskell or Python would be overkill. It can also be useful for things like assertions.
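As a rough sketch (reusing "step1" and "runJq" from the data-oriented example above; the `count` transformation is made up):

```
# Sketch: pull step1's JSON output into Nix, transform it, splice it back.
let names = builtins.fromJSON (builtins.readFile step1);   # [ "foo" "bar" ]
in runJq ''
  echo '${builtins.toJSON { count = builtins.length names; }}' > "$out"
''
```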
Exactly: he knows what he's looking for. Also, because of the HN post and the release, Google's search ranking will rate it higher now than it will in the future. Say you search for https://www.google.com.au/search?q=xeon+code or something similar - you're not going to find it. OK, maybe you should be better with your search terms, but still: can we stop naming tech things the exact same name as other common tech search terms?
Woah, this has been on my short list of projects to build.
Advantages over `source`
- require is resolved relative to the script file instead of the execution `pwd`
- a build process can create a single distributable file by cat-ing the required files.
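A toy sketch of that cat-based bundling step (all paths and names are made up):

```shell
# Toy illustration of cat-based bundling; paths are made up.
mkdir -p /tmp/mods
printf 'log() { printf "[log] %%s\n" "$1"; }\n' > /tmp/mods/log.sh
printf 'log "bundled"\n' > /tmp/main.sh

# "Build": concatenate the required modules ahead of the entry point.
cat /tmp/mods/log.sh /tmp/main.sh > /tmp/bundle.sh
sh /tmp/bundle.sh
```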
Down the road I'd like to see a `babel` for bash. I think import / export, and functions with arguments would make my time with bash more enjoyable and productive.
As with other tools, `xeon` does a little too much.
Prefixing is pretty trivial, though, and this does a number on portability without providing the kinds of benefits that a more full-featured solution (either of the rake/thor variety or the Chef/Puppet variety, depending on your aims) is likely to give you.
It's a neat project, but I don't see a ton of use for it.
Is there really any benefit to this over explicit child process calls? I realize the syntax is shorter, but now you're hiding the fact that you are shelling out.
Overloading require for this purpose is a guaranteed way to break static analysers and module bundlers.
If I understand you right, the child process is made by a utility that checks your local version and notifies you of updates. It should only be called once an hour.
The main difference from all of these tools is that they try to create a bridge between different shell types, instead of providing the same API integrated with other tools.
Outside of the packaging benefits (which could be accomplished in many other ways, and which come at the cost of prefixing stuff), I don't see much use.