> I think we got off on the wrong foot here, and I find that I am pursuing this to absurdity. I'm gonna try and set this right.
Good on you. I was hopeful that if we rode it out, eventually we'd somehow uncross the Rubicon. Thanks for doing so.
> The main plus for javascript, is that it's accessible to people who already know javascript, and there's quite a lot of them.
For this particular context, I couldn't agree more. The one caveat I'd put on that is that a not insignificant quantity of those people who already "know" JavaScript don't really "know" it; they could probably be fooled for a disturbing length of time if they were swapped over to Java (and that's not to say the languages are really that similar). Still, people with at least passing familiarity with JavaScript are numerous.
> The other plus, potentially, is that existing software may be able to run on it.
Yeah, that one I'm not buying much. I can't think of another top-10 development language that wouldn't have a richer software ecosystem for this problem domain. Even for more general server-side development, the Node library ecosystem is growing fast, but it is still very thin and represents a tiny fraction of the JavaScript software world.
> it's not clear to me really what the use cases for this device are, other than that it's a way to run javascript on hardware, and some people might want that.
Yeah, and even from that context, if you went with a JVM runtime, you'd have JavaScript (admittedly not quite as nice as with V8, but still enough for most people who'd want to play around with an embedded JavaScript server) along with a plethora of other languages to choose from, which I'd have to think would do a much better job of getting a broader selection of web developers started in the "Internet of Things" paradigm.
> My goal was, initially, to just try to turn the conversation away from language wars toward actually discussing the device and its merits. I have clearly failed.
Hehe. Well, responding to questions about the language choice can do that. ;-)
That said, I will ask that question: there are lots of other efforts to package up Cortex-M microservers for sensornets and ad-hoc devicenets and bundle them with user-friendly APIs. I haven't been able to discern what's particularly exciting about this approach. What do you think is uniquely interesting about this solution?
> over time you revealed that you believe that being able to deal natively with binary protocols is a core feature of a language suited for hardware.
"Core feature" means different things to different people. I look at it as a building block that a lot of core functionality for working with devices tends to be built on top of, and a very important tool to have in your back pocket when dealing with a legacy device with lord-only-knows what fun little protocol bugs^H^H^H^Hquirks. It's not that you have to have it, but not having it tends to discourage the rich development of that larger ecosystem, and in JavaScript's case I've had first-hand experience with that.
> it is a core feature of node.js, which is the api this hardware device purports to emulate. node.js is a kind of defacto standard at this point, which is good enough.
Agreed. My main problem is that "good enough" is not exactly the kind of quality that makes me say, "oh yeah, we obviously should choose this one". If someone hands me a Node.js server and says, "talk to this air pressure sensor", I'm not going to say "it can't be done", because obviously it can be done, and without having to move mountains. But if I'd been handed nothing other than the problem of talking to the air pressure sensor, even if I was looking to bring on a ton of web developers with little or no familiarity with hardware, Node.js wouldn't exactly have sprung to the top of my mind. ;-)
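To be fair to Node, talking binary to a sensor is workable once you reach for `Buffer`. A minimal sketch, with an entirely made-up packet layout (the format, field names, and checksum scheme here are illustrative assumptions, not any real sensor's protocol):

```javascript
// Parse a hypothetical air-pressure sensor packet using Node's Buffer API.
// Assumed layout: 1-byte id, 2-byte big-endian raw reading,
// 1-byte checksum (XOR of the first three bytes).
function parsePacket(buf) {
  if (buf.length !== 4) throw new Error('bad packet length');
  const checksum = buf[0] ^ buf[1] ^ buf[2];
  if (checksum !== buf[3]) throw new Error('bad checksum');
  return { id: buf.readUInt8(0), raw: buf.readUInt16BE(1) };
}

// id 0x01, reading 0x03e8 (1000), checksum 0x01 ^ 0x03 ^ 0xe8 = 0xea
const packet = Buffer.from([0x01, 0x03, 0xe8, 0xea]);
console.log(parsePacket(packet)); // { id: 1, raw: 1000 }
```

It works, but note that everything byte-oriented funnels through one special-purpose class rather than falling out of the language's core types, which is the ecosystem-shaping difference discussed below.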
> To my knowledge there are no standards documents for lua, python or ruby and people have no problem using those languages.
There are no ECMA-like standards bodies for those languages, but there are documents defining each language (though Ruby in particular seems to have a bit of a "however the runtime works" mentality it is still shrugging off). That actually has been an impediment from time to time, though obviously not a huge one.
Lua's specification is quite detailed about bindings to the native platform and even has a specific type, "userdata", for unmanaged blobs of memory. Lua's deceptively named "string" type holds an 8-bit-clean arbitrary sequence of bytes, so not only are its string functions and I/O libraries naturally capable of byte-oriented processing, there's little friction for even ancillary libraries supporting and exploiting it. Ruby's string implementation is similar.
Python has, as far back as I can recall, had support for binary I/O and processing, though I'd qualify that by saying it has been somewhat hokey; things like the struct module and Cython have helped mitigate that. Until 2.7.x and 3.x came around, I'd give Python demerits in this area, though not as bad as JavaScript's.
> So what else is there that gives you the willies?
As I mentioned, there's the probabilistic parser. When you are learning, it is nice to have forgiveness, but it is much better to have each and every mistake pointed out to you. A probabilistic parser makes things less transparent. That can be worth it if you're going with a "do what you can" mentality, which makes perfect sense for web documents, but with hardware, imprecise communications are just as wrong as incorrect ones. This doesn't mean a language need be hard; it just means you want to be painfully clear about what is and isn't correct from day one.
Then there's JavaScript's bizarre and limited approach to arithmetic. Java's lack of unsigned fixed-point arithmetic has drawn some deserved criticism, but those concerns seem puny compared to JavaScript's quantum numbers, which exhibit double/int32 duality. Not a good can of worms to open, particularly when talking to hardware that might (per Murphy: read "will") have odd numerical representations of its own. Even if you assume numbers are passed back and forth as decimal arithmetic strings, that's a Pandora's box of fun just waiting to be opened, one that isn't going to make the experience any easier even for programmers fairly familiar with JavaScript.
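The double/int32 duality is easy to demonstrate: every JavaScript number is an IEEE-754 double, but the bitwise operators silently truncate their operands to signed 32-bit integers, which is exactly the kind of surprise that bites when reading 32-bit hardware registers:

```javascript
// A plausible raw value from a 32-bit hardware register.
const reading = 0x80000000;

console.log(reading);                     // 2147483648 -- fine as a double
console.log(reading | 0);                 // -2147483648 -- bitwise ops wrap to int32
console.log((reading >>> 0) === reading); // true -- >>> is the unsigned escape hatch

// And the usual floating-point surprise layered on top:
console.log(0.1 + 0.2 === 0.3);           // false
```

So any code that masks or shifts register values has to thread the `>>> 0` needle to stay unsigned, a trap most other top-10 languages simply don't set.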
Then there's the whole async I/O, event-driven callback model. While I personally love that model, and at first glance it seems like it'd be a perfect match for sensors and devicenets, it does come with some downsides. It's a less natural paradigm for a lot of more novice programmers, and the way it decouples logic and injects layers of indirection into the code can leave developers confused when tackling new paradigms. Coroutines present an initially more accessible approach to that model, and the thread model is often much more approachable for developers at first (though the popularity of Java NIO is an obvious testament to the advantages of getting comfortable with I/O state machines sooner rather than later).
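The indirection in question looks like this in practice. A hypothetical async sensor API (both `readSensor` and the sensor names are made up for illustration), where "read A, then B, then combine" inverts into nesting:

```javascript
// Stand-in for an async hardware call; setImmediate simulates the I/O delay.
function readSensor(name, cb) {
  setImmediate(() => cb(null, { name, value: 42 }));
}

// Sequential logic becomes nested callbacks, each with its own error branch.
readSensor('pressure', (err, p) => {
  if (err) throw err;
  readSensor('temperature', (err, t) => {
    if (err) throw err;
    console.log(p.value + t.value); // the "combine" step, two levels deep
  });
});
```

A thread- or coroutine-based version of the same logic would read top-to-bottom, which is why it tends to be gentler on novices even when the event-driven version scales better.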
So those are some more straightforward concerns. Again, not of the "it can't do X" variety, but more the "we're trying to get a fish to climb a tree" variety: is this really the best way to get to the coconuts?
First, thank you for taking the time for such a detailed answer. I do appreciate it, and it is a very interesting insight.
"I haven't been able to discern what's particularly exciting about this approach. What do you think is uniquely interesting about this solution?"
It remains to be seen if it's uniquely interesting. If it is uniquely interesting, it will be because of node.js's supposed ability to do well with distributed server type things.
It is my general opinion that node.js gets overused for quite a lot of things that it has no business being used for. But one thing I would use it for is the high-concurrency, small-payload-size stuff like instant messaging and MMORPGs.
Imagining a robotics scenario, you could enable a simulated version of a swarm of robots that runs in a webpage.
With distributed sensors, you could integrate it with a web server set up using the same language: the sensors look just like some extra nodes in the distributed network. That could be nice.
The nice thing about javascript, of course, is that although it is kind of dumb out of the box, it is expressive enough that you can build some useful abstractions and manage the callback hell that people tend to run into with naively coded node.js.
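One sketch of what such an abstraction can look like: a minimal promisify-style wrapper (all names here are illustrative, not any particular library's API) that folds the nesting away so sequential reads read top-to-bottom again:

```javascript
// Turn a callback-last, error-first function into one returning a Promise.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) =>
      fn(...args, (err, result) => (err ? reject(err) : resolve(result))));
}

// Stand-in callback-style API, as a node.js hardware binding might expose.
function readSensor(name, cb) {
  setImmediate(() => cb(null, { name, value: 42 }));
}

const read = promisify(readSensor);

async function main() {
  const p = await read('pressure');    // no nesting; errors flow to one catch
  const t = await read('temperature');
  return p.value + t.value;
}

main().then(console.log); // 84
```

The point isn't this particular wrapper; it's that the language is malleable enough that a ten-line helper changes the shape of every call site after it.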
On the other hand, it may well be making a fish climb a tree. Low-level hardware stuff was never javascript's strength. What could be interesting, though, is the higher-level event-driven stuff, which is harder to achieve with things like c/java/arduino.
> Imagining a robotics scenario, you could enable a simulated version of a swarm of robots that runs in a webpage.
For almost anything that runs in a web page, JavaScript is a better option. ;-)
> The nice thing about javascript, of course, is that although it is kind of dumb out of the box, it is expressive enough that you can build some useful abstractions and manage the callback hell that people tend to run into with naively coded node.js.
Yeah, no question that JavaScript is very expressive. In terms of intrinsic language properties, it's hard to be both really expressive and a good language for learning. Some really beautiful languages manage to thread the needle, but I don't think JavaScript is one of them. Still, it is a tempting choice for introducing people simply because of its ubiquity...
> What could be interesting though is the higher level event driven stuff which is harder to achieve with things like c/java/arduino
FYI, Java can actually do that kind of thing even better than JavaScript: http://vertx.io/