> Javascript is basically Scheme/Self with a C-based syntax[1][2],
You forgot to mention "and without a native way to read and write bytes", which is one of just a few highly relevant examples of why it might not qualify as "a good programming language" for the job. (Others might include having a probabilistic parser...)
"Typed arrays are available in WebKit as well. Chrome 7 includes support for ArrayBuffer, Float32Array, Int16Array, and Uint8Array. Chrome 9 and Firefox 15 add support for DataView objects. Internet Explorer 10 supports all types except Uint8ClampedArray and ArrayBuffer.prototype.slice."
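As a concrete illustration of the APIs quoted above, here's a minimal sketch (mine, not from the thread) of byte-level reads and writes with ArrayBuffer, DataView, and Uint8Array:

```javascript
// Write a 16-bit big-endian value plus two single bytes into a buffer,
// then view the same memory byte by byte.
const buf = new ArrayBuffer(4);
const view = new DataView(buf);

view.setUint16(0, 0x1234, false); // false = big-endian
view.setUint8(2, 0xAB);
view.setUint8(3, 0xCD);

// A Uint8Array over the same buffer exposes the raw bytes:
const bytes = new Uint8Array(buf);
// bytes is now [0x12, 0x34, 0xAB, 0xCD]
```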
They are in ES6 which will be standard any year now. If that's not good enough for you then I guess there's nothing stopping you from just writing anything you need in 6502 assembler.
How long do you think the list is of languages that didn't wait until revision 6 before adding support for working with bytes? Or are you seriously trying to suggest that it has to be either JavaScript or 6502 assembler?!
Do you think it is maybe possible that a language that had standard ways of working with bytes say within the first decade of popularizing it just might be better suited for talking to hardware?
Okay, reading another one of your posts I think I've gained a tiny insight into what you're trying to get across.
You have the assumption that "Talking to hardware" necessarily involves dealing directly with binary protocols. And javascript isn't good at that. (though it CAN do it, if pressed)
And you're right on both counts.
I think what you are missing though is that on a product like this, why wouldn't you do the binary protocol stuff in a C module and expose a nice easy api in the scripting language as is the usual practice? In the presentation it very much looks like that is exactly what they do.
>why wouldn't you do the binary protocol stuff in a C module and expose a nice easy api in the scripting language as is the usual practice
You guys have been going at it for a while on this thread. I thought what cbsmith was saying the entire time couldn't have been clearer. I also think your statement here completely concedes his/her point.
You went from calling him/her an "old man" (your words) for not accepting your contention that Javascript can do it all, to advocating the use of C to handle the stuff it doesn't do well.
Okay, so he/she was preaching to your choir. But s/he needs to work a little harder to persuade or teach someone who isn't already convinced.
I never said javascript could do it all. I said that it happens to not be true that javascript doesn't have a native way of dealing with binary. It turns out it does.
That's not everything. That's one thing.
>But s/he needs to work a little harder to persuade or teach someone who isn't already convinced
It was pretty clear. You just didn't seem to be in learning mode. Your "old man" comment was dismissive on its face and you made assumptions rather than ask thoughtful questions. You said yourself that you understood once you went back and re-read. There wasn't anything new there.
I honestly have no idea what you are talking about. He had nothing but dismissive comments about the language. There was nothing to learn or ask questions about. It only finally slipped out of him, almost by accident, what he was actually trying to get at, and even that isn't much of a relevant point.
And just when I thought you were finally getting it.
From cbsmith's very first post on the thread:
>We're talking about a language that still doesn't have a standard way of reading and writing bytes... for talking to hardware. Think about that for a moment.
You directly took exception to this and even quoted from that part of the post in doing so.
You don't have to "buy" my depiction. Just go back and read (for at least the third time, apparently).
What's your point?
You quoted a fact that he stated, which happens to be wrong, and I pointed out that it was wrong. That's evidence for my view of things, not yours. It's not a teaching moment; he had nothing useful to say. He still doesn't, so far as I can see. Where we are now is: let's just reiterate the same wrong point over and over again, and pretend like it means something.
Maybe if you're so concerned about me getting it, you can point out what his actual point is, which you seem to think is so obvious? Spell it out for me.
My apologies. You are correct that I thought it was obvious.
I'd started typing a reply that outlined what just happened, but it would essentially be a strange Cliff's notes recap of the thread. I'm not sure that I want to continue or that it would be sufficient in any case.
So, apparently I'm not as concerned about your getting it as I initially thought I was. Perhaps someone else can jump in here and "spell it out" for you but, as it is, I'm content to leave things where they are.
Well, you don't have to recap the whole thread. Just write something useful or insightful.
Keep in mind that he criticised javascript for not having a standard way of reading and writing binary bytes (even though it does) and then suggested lua (which does not, at least not any more than javascript does). My vague understanding of the point he was trying to make is that since javascript wasn't originally designed to deal with hardware, it's not a good fit. I don't find this to be a compelling argument though. It's more like a thesis, with no supporting evidence or premises, and the only point being the binary bytes thing, which is pretty debunked. There's no other point. No detailed analysis about exactly what it is about javascript that he thinks is unsuitable. Just vague handwaving.
Do you have anything to add which might flesh this out? You do know the difference between an argument and a vague complaint right?
> Keep in mind that he criticised javascript for not
No, I didn't criticize JavaScript. I criticized the _choice of JavaScript_ for the task. There's a world of difference there.
> the point he was trying to make is that since javascript wasn't originally designed to deal with hardware, it's not a good fit.
Actually, as I stated, there were a number of factors that pointed to it not being a good fit, and that was just an example of one of them. I'm kind of shocked that a statement like that somehow led to so much controversy.
> I don't find this to be a compelling argument though.
Clearly.
> It's more like a thesis, with no supporting evidence or premises, and the only point being the binary bytes thing, which is pretty debunked.
Yeah, it really, really wasn't a thesis. It was a single point, and not presented as a conclusive point, but as an easily identifiable and understood indicator of a cause for concern.
And I really take issue with the "pretty debunked" characterization. I can't believe you'd call a feature "standardized" when, over 15 years after the language was initially conceived and popularized, after a dozen revisions, nearly a half dozen major revisions, and lord knows how many countless reimplementations, it still isn't consistently implemented on the language's primary platforms, arrived as a byproduct not of revisions to the core language but of a separate effort to add a specific feature (WebGL), and was, by your own description, going to be standardized "any year now".
> There's no other point.
Actually, I made other points. I didn't go into them in much depth because you seemed unable to accept even the most basic points and didn't offer any counterpoints (despite me inviting you to).
> There's no other point. No detailed analysis about exactly what it is about javascript that he thinks is unsuitable. Just vague handwaving.
Perhaps I should feel bad about this, given the detailed analysis of precisely what it is about javascript that you think is so suitable. Just a lot of flaming.
> Do you have anything to add which might flesh this out? You do know the difference between an argument and a vague complaint right?
You might want to consider what possible motive I could have at this point for attempting to further explain my point.
I think we got off on the wrong foot here, and I find that I am pursuing this to absurdity. I'm gonna try and set this right.
In a sense, you are right, there is nothing about javascript that makes it particularly suitable for programming hardware. Nothing at all, for many reasons that you kind of hinted at but, as you say, never really went into in any depth.
The main plus for javascript, is that it's accessible to people who already know javascript, and there's quite a lot of them. The other plus, potentially, is that existing software may be able to run on it. However, as I pointed out in a much much older comment, truly, it's not clear to me really what the use cases for this device are, other than that it's a way to run javascript on hardware, and some people might want that. (as I point out in another more recent comment, if it were my own project I would have chosen differently)
My goal was, initially, to just try to turn the conversation away from language wars toward actually discussing the device and its merits. I have clearly failed.
What I've been trying to get you to do was, rather than just say "I don't think javascript is suitable", to actually go into some detail about that. over time you revealed that you believe that being able to deal natively with binary protocols is a core feature of a language suited for hardware.
While I concede that this is a late-added feature to javascript, it is a core feature of node.js, which is the API this hardware device purports to emulate. node.js is a kind of de facto standard at this point, which is good enough. To my knowledge there are no standards documents for lua, python or ruby and people have no problem using those languages.
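For what it's worth, here is a minimal sketch of that core Node.js feature, the Buffer API, parsing a sensor packet; the 4-byte packet layout is entirely hypothetical, just for illustration:

```javascript
// Assumed (hypothetical) packet layout: [id:uint8][flags:uint8][reading:int16 BE]
function parsePacket(buf) {
  return {
    id: buf.readUInt8(0),
    flags: buf.readUInt8(1),
    reading: buf.readInt16BE(2), // signed, big-endian
  };
}

// 0xFF38 as a signed big-endian 16-bit integer is -200
const packet = Buffer.from([0x01, 0x80, 0xFF, 0x38]);
```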
So what else is there that gives you the willies? You seem to believe you know something, trying to impart some knowledge you have, but I can't get at what it is, because you may have assumed you said all kinds of things you haven't actually said. (such as, the assumption that I needed to intuit, about binary protocols)
> I think we got off on the wrong foot here, and I find that I am pursuing this to absurdity. I'm gonna try and set this right.
Good on you. I was hopeful that if we rode it out, eventually we'd somehow uncross the Rubicon. Thanks for doing so.
> The main plus for javascript, is that it's accessible to people who already know javascript, and there's quite a lot of them.
For this particular context, I couldn't agree more. The one caveat I'd put on that is that a not insignificant quantity of those people who already "know" JavaScript don't really "know" it; they could probably be fooled for a disturbing length of time if they were swapped over to Java (and that's not to say the languages are really that similar). Still, people with at least passing familiarity with JavaScript are numerous.
> The other plus, potentially, is that existing software may be able to run on it.
Yeah, that one I'm not buying much. I can't think of any other top-10 development language that wouldn't have a richer software ecosystem for this problem domain. Even for more general server-side development, the Node software library is growing fast, but it is very, very thin and represents a tiny fraction of the JavaScript software world.
> it's not clear to me really what the use cases for this device are, other than that it's a way to run javascript on hardware, and some people might want that.
Yeah, and even from that context, if you went with a JVM runtime, you'd have JavaScript (admittedly not quite as nice as with V8, but still enough for most people who'd want to play around with an embedded JavaScript server) along with a plethora of other languages to choose from, which I'd have to think would do a much better job of getting a broader selection of web developers started in the "Internet of Things" paradigm.
> My goal was, initially, to just try to turn the conversation away from language wars toward actually discussing the device. and its merits. I have clearly failed.
Hehe. Well, responding to questions about the language choice can do that. ;-)
That said, I will ask that question: there are lots of other efforts to package up Cortex-M microservers for sensornets and ad-hoc devicenets and bundle them with user-friendly API's. I haven't been able to discern what's particularly exciting about this approach. What do you think is uniquely interesting about this solution?
> over time you revealed that you believe that being able to deal natively with binary protocols is a core feature of a language suited for hardware.
"core feature" means different things to different people. I look at it as a building block that a lot of core functionality for working with devices tends to build on top of, and a very important tool to have in your back pocket when dealing with a legacy device with lord only knows what fun little protocol bugs^H^H^H^Hquirks. It's not that you have to have it, but not having it tends to discourage the rich development of that larger ecosystem, and in JavaScript's case I've had first hand experience with it.
> it is a core feature of node.js, which is the api this hardware device purports to emulate. node.js is a kind of defacto standard at this point, which is good enough.
Agreed. My main problem is that "good enough" is not exactly the kind of quality that makes me say, "oh yeah, we obviously should choose this one". If someone hands me a Node.js server and says, "talk to this air pressure sensor", I'm not going to say, "it can't be done", because obviously it can be done, and without having to move mountains. But if I'd been handed nothing but the problem of talking to the air pressure sensor, even if I was looking to bring on a ton of web developers with little or no familiarity working with hardware, Node.js wouldn't exactly have sprung to the top of my mind. ;-)
> To my knowledge there are no standards documents for lua, python or ruby and people have no problem using those languages.
There are no ECMA-like standards bodies for those languages, but there are documents defining the language (though Ruby in particular seems to have a bit of the "however the runtime works" mentality it is still shrugging off). That actually has been an impediment from time to time, though obviously not a huge one.
Lua's language spec is quite detailed about bindings to the native platform and even has a specific type, "userdata", for unmanaged blobs of memory, and Lua's deceptively named "string" type holds an 8-bit-clean arbitrary sequence of bytes, so not only are its "string functions" and IO libraries naturally capable of byte-oriented processing, there's little friction for even ancillary libraries supporting and exploiting it. Ruby's string implementation is similar.
Python has, as far back as I can recall, had support for binary IO and processing, though I'd qualify that by saying it has been somewhat hokey, though things like structs and Cython have helped mitigate it. Until 2.7.x and 3.x.x came around, I'd give Python demerits in this area, though not as bad as JavaScript.
> So what else is there that gives you the willies?
As I mentioned, there's the probabilistic parser. When you are learning it is nice to have forgiveness, but it is much better to have each and every mistake pointed out to you. A probabilistic parser makes things less transparent. That can be worth it if you're going with a "do what you can" mentality that makes perfect sense for web documents, but with hardware imprecise communications are just as wrong as incorrect ones. This doesn't mean a language need be hard, it just means that you want to be painfully clear about what is and isn't correct from day one.
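One commonly cited instance of that forgiveness (my example, not necessarily the one the parent had in mind) is automatic semicolon insertion, which turns a likely mistake into silently different behavior rather than a parse error:

```javascript
// ASI inserts a semicolon after the bare `return`,
// so the object literal on the next line is never returned.
function broken() {
  return
    { value: 42 };
}

// Keeping the brace on the same line behaves as intended.
function fixed() {
  return { value: 42 };
}

// broken() is undefined; fixed().value is 42
```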
Then there's JavaScript's bizarre and limited approach to arithmetic. Java's lack of unsigned fixed point arithmetic has drawn some deserved criticism, but those concerns seem puny when compared to JavaScript's quantum numbers that exhibit double/int32 duality. Not a good can of worms to open up, particularly when talking to hardware that might (per Murphy: read will) have odd numerical representations of its own. Even if you assume numbers are passing back and forth as decimal arithmetic strings, that's a Pandora's Box of fun just waiting to be opened that isn't going to make the experience any easier even for programmers fairly familiar with JavaScript.
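A quick sketch of that double/int32 duality, for the record:

```javascript
// All numbers are IEEE-754 doubles, but bitwise operators
// silently truncate through signed 32-bit integers.
const big = 0xFFFFFFFF;   // 4294967295 as a double
const asInt32 = big | 0;  // coerced to int32: -1

// Meanwhile, exact integer arithmetic runs out at 2^53:
const limit = Math.pow(2, 53);
// limit + 1 === limit, because the increment is silently lost
```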
Then there's the whole async I/O, event-driven callback model. While I personally love that model, and at first glance it seems like it'd be a perfect match for sensors and devicenets, it does come with some downsides. It's a less natural paradigm for a lot of more novice programmers, and the way it decouples logic and injects layers of indirection into the code can lead to confused developers when tackling new paradigms. Coroutines seem to present an initially more accessible approach to that model, and the thread model is initially often much more approachable for developers (though the popularity of Java NIO is an obvious testament to advantages of getting comfortable with I/O state machines sooner rather than later).
So there are some more straightforward concerns. Again, not of the "it can't do X" variety, but more the "we're trying to get a fish to climb a tree" variety: is this really the best way to get to the coconuts?
First, thank you for taking the time for such a detailed answer. I do appreciate it and it is a very interesting insight.
"I haven't been able to discern what's particularly exciting about this approach. What do you think is uniquely interesting about this solution?"
It remains to be seen if it's uniquely interesting. If it is uniquely interesting, it will be because of node.js's supposed ability to do well with distributed server type things.
It is my general opinion that node.js gets overused for quite a lot of things that it has no business being used for. But one thing I would use it for is the high-concurrency, small-payload-size stuff like instant messaging and MMORPGs.
Imagining a robotics scenario, you could enable a simulated version of a swarm of robots that runs in a webpage.
With distributed sensors, you could integrate it with a web server set up using the same language; the sensors look just like some extra nodes in the distributed network. That could be nice.
The nice thing about javascript of course is that, although it is kind of dumb out of the box, it is expressive enough that you can build some useful abstractions and manage the callback hell that people tend to run into with node.js coded naively.
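A sketch of the kind of abstraction meant here: wrapping a callback-style call (readSensor is entirely hypothetical) in a Promise, so that steps chain instead of nesting:

```javascript
// Hypothetical callback-style API, standing in for a device driver.
function readSensor(id, cb) {
  setImmediate(() => cb(null, id * 10)); // fake reading
}

// The abstraction: a Promise wrapper over the callback convention.
function readSensorP(id) {
  return new Promise((resolve, reject) =>
    readSensor(id, (err, val) => (err ? reject(err) : resolve(val))));
}

// Now calls chain flatly: readSensorP(1).then(v => ...) instead of
// pyramids of nested callbacks.
```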
On the other hand it may well be making a fish climb a tree. Low level hardware stuff was never javascript's strength. What could be interesting though is the higher level event driven stuff which is harder to achieve with things like c/java/arduino
> Imagining a robotics scenario, you could enable a simulated version of a swarm of robots that runs in a webpage.
For almost anything that runs in a web page, JavaScript is a better option. ;-)
> The nice thing about javascript of course is that, although it is kind of dumb out of the box, it is expressive enough that you can build some useful abstractions and manage the callback hell that people tend to run into with node.js coded naively.
Yeah, no question that JavaScript is very expressive. In terms of intrinsic language properties, it's hard to be both really expressive and a good language for learning. Some really beautiful languages manage to thread the needle, but I don't think JavaScript is one of them. Still, it is a tempting choice to introduce people to simply because of its ubiquity...
> What could be interesting though is the higher level event driven stuff which is harder to achieve with things like c/java/arduino
FYI, Java can actually do that kind of thing even better than JavaScript: http://vertx.io/
> He had nothing but dismissive comments about the language.
The comments were not intended to be dismissive about the language; I'm kind of surprised that statements with qualifiers like "might" would be interpreted as being "dismissive". Perhaps you didn't notice them?
If you look at my comments, while I was pretty strident about the facts, I took great pains to extensively qualify my opinion. To illustrate, I'll quote from my initial comments to you with highlights to draw your attention to the qualifiers:
> ...it might not qualify as "a good programming language" for the job...
> ... Do you think it is maybe possible that a language that had standard... just might be better suited for talking to hardware?...
> ... Maybe JavaScript makes working with hardware more accessible than other choices, but I doubt it...
Now, no question I was expressing a difference of opinion, but I think if I had said any more times and ways that my critique wasn't of the language in general, but of its suitability for the task in particular, I'd have felt like a broken record. Not only wasn't I dismissive of the language in general, but I wasn't even dismissive of the application of the language for the problem in particular. I expressed that I didn't think it was a good idea, but I explained my basis for the opinion and qualified it extensively, to the point of stating that I could be wrong even if I didn't think so.
I honestly can't explain your characterization of my comments other than speculating that perhaps you came into the discussion already wearing your language advocacy hat, and that framed & distorted your perception of the discussion.
This is kind of obvious, but given the context I think it might be necessary to say it anyway.
Perfectly good programming languages tend to have strengths, weaknesses, and areas of focus (it's an inevitable consequence of both language design and the Darwinian forces that dominate the marketplace of ideas). I don't think any of the flaws I pointed towards are necessarily bad points about the language; in certain domains they are actually great strengths (probabilistic parsers are a great idea for web content).
> it only finally slipped out of him almost by accident what he was actually trying to get at, and even that isn't much of a relevant point.
I think a better way of describing it is that it "finally slipped in to you". ;-) When something is said multiple times in different ways, that's not "slipping out".
Obviously, you didn't understand what I was communicating, and no doubt this is partly a reflection on my own failings to communicate effectively. I apologize for not doing better.
However, you might want to consider the possibility that particularly given the limitations of the medium, the nature of communication failures, and that my point was deemed clear as day by at least one other 3rd party... you may have missed something.
Maybe we can learn something from this experience.
I have edited comments in some cases immediately after posting them, as I have a habit of noticing grammatical errors, typos, errors in punctuation, etc. only after posting.
I didn't alter anything that substantively altered or added to the meaning of what I said (when I have done that in the past I have marked the relevant text as an UPDATE so that the change gets noticed by anyone who may have already replied, but that wasn't necessary in this case), and I didn't edit anything any time after (or shortly before) you posted a reply. I find that once a comment has been up long enough for people to read it, even in-place corrections of gibberish to English increase confusion more than they decrease it.
If you saw an earlier version of an edited post, you might have found a nonsensical phrase or two (e.g. I do recall one post where I fixed an accidental "maybe maybe" to just "maybe", my first post had something like "which just the many of few examlpes", and I'm sure there were one or two superfluous apostrophes in some of the comments which I later removed), so it's conceivable you read something that was later edited. But if it was fully legible (more like, as legible as it is now ;-), then definitely nothing was edited. I can assure you that the qualifiers were there in the original postings.
Truth is HackerNews locks down comments pretty fast, so by the time I might want to change/add/remove to what I've said in a way that even subtly changes the meaning, it's way, way too late for an in place edit.
UPDATE: Murphy's Law strikes again. I remember one edit I did do that might be deemed more significant than the others I mentioned. In the original post where I said, "..it is maybe possible that a language that had standard ways of working with bytes say within the first decade of popularizing it just might be better suited for talking to hardware", I had originally used underbars instead of asterisks to emphasize certain words. Once I hit post my mistake became obvious and I went back and fixed it. I made the same mistake again when I wrote the comment where I was quoting myself with previous excerpts (so I changed "with italics" to "with highlights" and then promptly fixed the rest of it so that "with italics" was if anything finally the more accurate phrase ;-).
> You have the assumption that "Talking to hardware" necessarily involves dealing directly with binary protocols.
I wouldn't say that it necessarily involves dealing directly with binary protocols, but the issue tends to come up. Particularly when talking with hardware that can't just easily be upgraded, you often find that even "text" based protocols might require some manipulation at the byte level in order to decode them successfully.
But really, the binary protocol aspect was just a really obvious and simple example of the larger "square peg, round hole" aspect of using JavaScript for the task. Maybe JavaScript makes working with hardware more accessible than other choices, but I doubt it.
> why wouldn't you do the binary protocol stuff in a C module and expose a nice easy api in the scripting language as is the usual practice?
I'd argue once you've done that, you're done. The hard part about the "internet of things" is that devices generally don't have nice clean APIs, and that's what needs to change.
Once you have a nice API, you can embed a JVM in there and then support almost any language web developers might want (including JavaScript), quite cheaply and efficiently at that. Better still, just do a simple RESTful interface and leave it to the developer to do whatever they want.
Well, considering we're talking about a $90 product that already has an ARM Cortex-M processor in it, it seemed it'd be hard to argue with the "quite cheaply" argument: http://www.st.com/web/en/catalog/mmc/FM141/SC1169 (particularly since you can get free samples).
No, not really. My point wasn't about a checkbox item on a feature chart.
The point is the language has evolved for quite a long time in a direction very different from talking to hardware. Basic capabilities like the ability to work with bytes are a foundation that a lot of other constructs are built on top of, and adding it in late in the game doesn't undo all that came before it, or provide all that should be built on top of it.
It's not that you can't do this kind of work with JavaScript (obviously you can). It's just that it seems like a very bad fit (even if you ignore its history with browsers), when there were a lot of other choices that seem like they'd have made things easier for programmers getting started in this area.
Well, there are JVMs and native compilers for the embedded market, able to target less powerful boards, hence cheaper design per unit, with much more performance out of the box.