Julia receives DARPA award to accelerate electronics simulation (juliacomputing.com)
571 points by jonbaer on March 11, 2021 | 165 comments


A modern circuit simulator that has a fully featured programmable API and can be run from a real modern programming language would be a dream. Every simulator I have used has been pretty limited in that way, or at least those features were not well advertised. Even the insanely expensive ones. I mean the most expensive ones (like $100K license type deals, like Cadence) are if anything even less modern from a user interaction perspective.

My first academic paper (https://www.oxinabox.net/publications/White2015PsoTransistor...) was based on doing a search over parameters for the transistors being used in a circuit. The way that whole thing worked was by string-mangling the netlist file to update the parameters, triggering the simulator on the command line, getting it to output a CSV of the signals, parsing that CSV, detecting edges and measuring timing that way, and then throwing it all at a particle swarm optimizer to find new parameters. That sucks from a user-experience standpoint. It's a super cool way to solve problems though, and it would be so easy with a real API.
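For flavour, the old workflow was roughly the following (sketch only - the simulator, file names and threshold here are made up for illustration, not the actual setup from the paper):

    using DelimitedFiles, Printf

    function run_and_measure(width_nm)
        # string-mangle the netlist template to set the transistor width
        netlist = replace(read("inverter_template.cir", String),
                          "{WIDTH}" => @sprintf("%dn", width_nm))
        write("inverter_run.cir", netlist)

        # trigger the simulator on the command line (batch mode), assuming it
        # has been told to dump the signals of interest to out.csv
        run(`ngspice -b inverter_run.cir`)

        # parse the CSV and detect the first rising edge past 50% of a 1.8V rail
        data = readdlm("out.csv", ',', Float64; skipstart=1)
        t, vout = data[:, 1], data[:, 2]
        return t[findfirst(>(0.9), vout)]   # crude propagation-delay measurement
    end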

Further, with Julia being a differentiable programming language, rather than having to use a particle swarm optimizer to search, I could have been differentiating through the simulation and using some sophisticated gradient-based method like L-BFGS, etc.
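Something like this, as a rough sketch (Optim.jl with forward-mode AD; simulate_delay is a hypothetical stand-in for a circuit simulation written in plain differentiable Julia):

    using Optim

    # delay from a differentiable simulation plus a crude area penalty
    loss(widths) = simulate_delay(widths) + 0.01 * sum(abs2, widths)

    result      = optimize(loss, fill(100.0, 8), LBFGS(); autodiff = :forward)
    best_widths = Optim.minimizer(result)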

I hope that some of the general tools like this will be an outcome of this project.


A lot of semi companies have built their own simulation environments because the EDA vendors' provided tools are very limited in what they can do and difficult to use. Many users go to Python, Excel or MATLAB because the programming/math capabilities of the EDA tools are inadequate. Over time these home-grown environments become a burden, the developer disowns them, and they become a headache.

The EDA tools have no ecosystem you can hook into and they don't really care about the user trying to put the simulators into a flow to solve their problem. It is a bunch of point tools, each with its own embedded interpreter, that don't play together. I sure hope there is a plan to create a better set of tools so I can write custom netlist checks, do something novel (like get derivatives out of the simulator), work in memory (no slow disks), and run custom Julia checks during simulation. Julia is a much better match because running Python or MATLAB code within the simulator is way too slow. I'll keep watch for sure.


I had to leave a company over that. I had made a personal FPGA construct generator for various easily parameterizable modules. It worked pretty well but it was a mess and needed to be redone pretty badly; still, it was good enough. I shared it upon request with a couple of engineers and before I knew it there were probably 25 EEs using it. They wanted "more" and I told them it was a personal tool and they were more than welcome to extend it. This actually angered a few of them and they "reported" me to another manager a couple of levels higher. He told me that I had to maintain/extend it and to use as much time as it took. (I was a junior engineer at the time.) I told him I was there to be an electrical engineer and not a legacy code maintainer. He gave me an ultimatum and, being a person who doesn't like ultimatums, I told him I resigned on the spot. I gathered the few items I had on my desk, waved at a couple of buddies, and went out the door. I never regretted that decision. Moral (I guess?): be careful what you share :) .


If they didn't lay claim to the IP, you should polish it and offer it for sale, or use it as interview material at someplace developing similar technology.


First and foremost: I AM NOT A LAWYER.

With that out of the way...

They could argue that since it was a tool made to help with his job at the company, it's internally developed. If there are no clear grounds or easily presentable evidence (and even if there are!), he's out in the wild with a liability.

The only place where I can see him in the clear is if he had a repo going back to before his time at the company and could prove the tool was a personal project unrelated to the job. Even then, the company could still sue and burden him with legal fees/process until they tire him out and BINGO, now they own the IP.

The alternative could be starting a new project, completely open source from the start (probably with one of the more liberal licenses) and get crowdfunding to develop and maintain it. Assuming they are interested in doing that, of course.


Betcha it's Xilinx today, or a branch of Analog, or something like that. You get the idea.


Oh, the creation of modules has gotten much better with languages like SystemVerilog and SystemC :) . What I did was a very rudimentary effort in Perl (later ported to Python). Kind of like how people look like "Excel gods" at some companies when they take some mundane manual task, do some Excel scripting, and take a 3-day job down to 15 minutes. I just happened to be the only EE with the soul of a coder while I was there. I was under very heavy NDA at the time; I'm sure others were doing the same at their companies. tldr; it wasn't anything special and is much more common these days. Basically it eliminated a ton of copy-pasta and manual editing.


I can attest first-hand to the "headache" that comes from semi company simulation environments. Not only are they horribly outdated (in Perl/Tcl), but they're different at every company you work at. There's no gold standard because the standard that these EDA companies ought to be making doesn't exist.

There needs to be an open initiative between semi companies to create a standard simulation environment -- with compilers, unit-test frameworks, and all sorts of simulation (gate-level, analog/mixed signal, emulation, etc). Hell, just give me a free IDE plugin for SystemVerilog that actually works.

This lack of a standard seems to me like the critical path in hardware design. I'm trying to support projects to fix this like SVLS (A language server for SystemVerilog: https://github.com/dalance/svls) but these are all hard problems to solve. This industry is relatively niche and doesn't seem to have many engineers interested in FOSS.


Shameless plug: We are building this over at flux.ai

It's early days, but you can start playing with our beta


It's an absolute nightmare. Cadence added support for Matlab calculations on simulator outputs, but it's clunky and inconsistent. Don't even get me started on how long it takes to do basic calculations on numbers that should already be in memory...


My worst experience was when doing a simple min/max of each signal took 7x longer than the simulation itself. I'd be so happy to toss Tcl in the trash. I spent a long time debugging because Tcl's expr doesn't do -2^2 correctly. The error messages don't tell you the line number and I found no good way to debug. Things like that are just the tip of the iceberg of time wasted fighting with arcane tools. I suppose others have their own stories.


In Tcl's expr:

^ stands for bitwise XOR: so [expr {-2^2}] results in -4

* stands for exponentiation: so [expr {-2*2}] results in 4

Both seem correct to me, taking into account how integers are represented in binary (two's complement for the negative ones).

With regards to debugging, dynamic programming languages are different from their static counterparts, since much is delayed until runtime (as opposed to compilation time). But that also opens up possibilities (like introspection, the ability to intervene in scripts while they run, ...). It requires a different mindset.


I've always been astounded that much of the rage against Tcl seems to stem from the fact that it works as documented, rather than according to the rules of other languages.


Mmmn, HN stripped out half of the double asterisks, and now I've made it confusing myself... Too late to edit my comment above, but it needs to be like this:

** stands for exponentiation: so [expr {-2**2}] results in 4


We are building something along these lines that the DARPA work sits on top of, so if you have a current need, do feel free to reach out (email is in profile).


Talking about this - sorry for the slight OT - I really wish Linear would consider open-sourcing LTSpice.

I mean ... it's a really nice tool, they are giving it away for free anyway ... if they open-sourced it, the kind of feature you describe here (and which I've been sorely missing as well) would be a no-brainer to implement.

Not to mention all the other enhancements the community could and would bring to it.


That is an interesting optimization trick, essentially a brute force search over a parameter space in order to minimize silicon area, I am assuming this was for cells that were going to be replicated many times on the same die?


Something like that is the idea. Silicon area is what costs money. Use as little as possible while still being fast enough to beat the clock rate of your system. PSOs are way better than brute force, but yeah.

It was a purely academic work for a masters-level CS unit. I had just finished a masters-level electrical engineering unit on silicon circuit design, where the final project was to design an adder that minimized silicon used (and thus cost) while also being fast enough. And the hard bit is that you want big thick doped regions for high conductivity, but the bigger the area, the more parasitic capacitance. So there are some tricks to find good sizes, like progressive sizing and stuff. But AFAIK there is no actual answer, at least not one we ever learned. So a lot of trial and error went into it. It was a hard project.

And so then I did this CS unit where the project was "Do something interesting with a particle swarm optimizer". And I was like "let's solve this". And once I saw the results, I was like "this is actually really good", and so the lecturer and I wrote a paper about it.

It is a real problem: minimizing silicon area subject to speed. I bet the big integrated designers have their own tricks for it that I don't know about. To do it really well you need to minimize the real area, so you also need to solve layout (which is a cool cutting-and-packing problem). (And there are also nth-order effects, like running traces over things can cause slowdowns, because of electromagnetism.) I bet a bunch of folk on HN know this problem much better than I do though; there's probably something bad in my solution, but I think it illustrates the utility.


Very cool paper. Your observations in V.B.4 are pretty well understood in circuit design. If you've not heard of it, you might be interested in https://en.wikipedia.org/wiki/Logical_effort. Turns out the optimum scaling for propagation delay is e (natural log constant), but I don't know if I ever learned anything about the optimum for area.
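That result is easy to reproduce numerically; a rough sketch (ignoring parasitic delay, which is the idealization under which the optimum comes out at e):

    # delay of an N-stage inverter chain driving a total fan-out F = C_load/C_in,
    # with each stage scaled by f = F^(1/N) and parasitic delay ignored
    chain_delay(N, F) = N * F^(1 / N)

    F      = 1000.0
    N_best = argmin([chain_delay(N, F) for N in 1:12])   # 7 stages here
    f_best = F^(1 / N_best)                              # ≈ 2.68, close to e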

Now that everyone is using finfet processes, the layout part is pretty easy to solve because transistor widths have to be a certain number of fins and the layout is extremely regular.

One thing your analysis didn't include, which actually ends up being quite significant, is the extra capacitance caused by the wires between transistors. This changes the sizing requirements substantially.

I've done some custom logic cell design, and I always had to use a lot of trial and error, though generally I was concerned more with speed than area. I'm not sure exactly what the development process is at my current employer, but it seems like it's a lot of manual work. I'm guessing they set area targets based on experience and attempt to maximize speed where possible.

Ultimately, everything gets placed and routed by a computer anyways!


> Your observations in V.B.4 are pretty well understood in circuit design.

Indeed, I am actually surprised the paper doesn't include something like _"This is in line with the well-known result for progressive sizing [cites textbook]"_. It was my first paper; I was worse at writing things then. :-D

> One thing your analysis didn't include, which actually ends up being quite significant, is the extra capacitance caused by the wires between transistors. This changes the sizing requirements substantially.

Good point. And not easy to model in a SPICE-style simulator. I guess one could maybe introduce explicit capacitors and then compute capacitances by making some assumptions about layout.


> I guess one could maybe introduce explicit capacitors and then compute capacitances by making some assumptions about layout.

That is, in fact, exactly what we do! I think it would be pretty straightforward for your large buffer example - you can model it as a fixed capacitance at each output which corresponds to the routing between inverters, which would be the same for all sizes, plus some scaling capacitance that relates to the size of the transistor itself, which you already have.

The adder would be trickier, for sure. Regardless, in my experience, just adding a reasonable estimate is good enough to get you close in terms of sizing in schematics, then you fine tune the layout.


Thank you for writing it up and posting it here, one of the more interesting comment threads.


There was a simulator called HSIM that I used 20 years ago to do what you describe. It had a C API that I integrated with. It was more of a fast SPICE rather than equivalent to HSPICE or Spectre. I coupled it with a little mini-language I wrote to do easily scriptable regression tests for our system. I believe Synopsys bought the company.


Have you looked into Spectre's interactive mode? You can send SKILL commands to change parameters and rerun simulations. It's not documented, though.


Huh, did not expect our press release to end up on HN, but I'm the PI on this project, so AMA, I guess. Will jump into the questions already asked here and provide some context.


I've been doing analog integrated circuit design for a decade and I'm somewhat skeptical that this is practical. I hope you can convince me otherwise because this would be great to avoid multi week simulations.

From your other comments, it seems that the general principle is to create a simple model that captures most of the behavior then apply corrections on top of that.

Can you elaborate on how the model is trained? Is it just from netlist/models? Or are you running a regular transient simulation? Or do you need a special transient setup?

What happens when your circuit's operating point varies wildly during operation? Presumably you'd have to train a larger model over the entire range of operating points but it seems to me that it would require extensive simulation to just collect the operating point data, which somewhat defeats the purpose. You've also got simulation corners, where you'd have another massive set of permutations to generate models for...

My other big concern is accuracy. Chris commented that you're achieving these speed ups at 99% accuracy. Does that mean your results are within 1% of the "full" simulation? Or that the intrinsic simulation error is 1%? For the former, that result is extremely dependent on the particular simulation and desired output. I'm curious if you have preliminary results on real circuits? If it's the latter... That's not enough. Even for mediocre circuits you're looking at >60dB relative accuracy requirements. Potentially >100dB for some high performance applications. The default relative tolerance is 1e-3 and we often reduce that by an order of magnitude or two...

There are certainly mixed signal simulations where we care more about functionality than performance, so it could help there. But as a matter of practice we usually already have verified verilog models for all analog blocks anyways.

And lastly, I'm curious how much does this have to do specifically with Julia the programming language? Is it just an "all part of the family" kind of thing? There's no claim that something intrinsic in the language is providing speed up, right? It's just a conveniently easy way to implement?


> From your other comments, it seems that the general principle is to create a simple model that captures most of the behavior then apply corrections on top of that.

It's not applying corrections as in doing a Taylor expansion and then adding terms to it; it's basically a projection of the system. The basic details are in the CTESN paper, though I think there are improvements that haven't been published yet to make it work in this domain. Another point to note is that sometimes the "simpler" model actually has more equations, but the equations are of a form that is much faster to simulate, because you can relax some stiffness in the original set of equations.

> Can you elaborate on how the model is trained? Is it just from netlist/models? Or are you running a regular transient simulation? Or do you need a special transient setup?

The details are complicated, but the basic approach is the usual ML thing: You pick some highly parameterizable model and projection, and then take gradients to tweak the parameters until the projection reproduces what you want, except in this case it's obviously a continuous time problem and you don't need to feed in data, because you can just use the original simulation in your loss function. For some more advanced techniques you do benefit from having the whole simulator be differentiable even on the baseline, but conceptually it's not required and you could do it blackbox with a regular transient simulator.
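As a toy sketch of that loop (this is just the generic idea, not the actual CTESN machinery - full_simulation and the surrogate form here are made-up placeholders):

    using Optim

    ts        = range(0.0, 1e-6; length = 200)
    reference = full_simulation(ts)        # the expensive simulation, run once

    # cheap parameterized surrogate: a small closed-form model with learnable
    # coefficients standing in for the projected system
    surrogate(p, t) = p[1] * exp(-t / p[2]) + p[3]

    loss(p) = sum(abs2, surrogate.(Ref(p), ts) .- reference)

    res       = optimize(loss, [1.0, 1e-7, 0.0], LBFGS(); autodiff = :forward)
    p_trained = Optim.minimizer(res)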

> What happens when your circuit's operating point varies wildly during operation?

You can choose what parameters to fix and which to keep variable over the training process. It is probably true that in some (e.g. chaotic) models this means that the surrogate generation is too hard and will fail to converge for reasonable sizes of the model. We don't have enough experience yet to give you a clear answer of when this happens or not - figuring that out is part of the research effort here.

> My other big concern is accuracy. Chris commented that you're achieving these speed ups at 99% accuracy. Does that mean your results are within 1% of the "full" simulation?

I'm not sure I quite understand the distinction that you're drawing and measuring errors in a sensible way here is actually somewhat non-trivial. I believe the error rates Chris quoted are from a smooth distance metric between the signal generated by a full simulation and those generated by the surrogate.

These tradeoffs are tunable of course, so for a particular application you can decide how much accuracy loss you can live with. Of course, there will be infeasible points for this technique, but hopefully it's useful. I can't speak for DARPA, but if you look at the original call for proposals, you will see that they were asking for accuracy targets in the single-digit percent range, so we think at those targets the 1000x speed-up is more than achievable.

> There are certainly mixed signal simulations where we care more about functionality than performance, so it could help there. But as a matter of practice we usually already have verified verilog models for all analog blocks anyways.

One way to think of this is as automatic generation of those verilog models from the netlist for the mixed signal use case. It is certainly still at the research phase though - where it's applicable will highly depend on what the performance/accuracy trade-offs look like and how expensive the thing is to train.

> And lastly, I'm curious how much does this have to do specifically with Julia the programming language? Is it just an "all part of the family" kind of thing? There's no claim that something intrinsic in the language is providing speed up, right? It's just a conveniently easy way to implement?

We've spent many millions of dollars building some extremely sophisticated modeling & simulation tooling in Julia, most of it open source, so we get to piggyback on that here to get a very modern simulator with all kinds of bells and whistles "for free", including the surrogatization capabilities. We are expecting speedups on real-world problems just by using this framework for baseline simulation also, but I don't have data on that yet, so I don't have any concrete claims to make. As for the question whether such a speedup is intrinsic to the language, as always the answer is yes and no. Julia's design makes it extremely easy to write very high performance code. Both of those things are important. Being easy to use allows you to go further down the algorithmic rabbit hole before you start hitting the complexity limit :).


> improvements that haven't been published yet to make it work in this domain

Are you able to share what some of those are?

> You can choose what parameters to fix and which to keep variable over the training process

Can you explain more about the parametrization? Do your parameters correspond 1-to-1 with schematic parameters like transistor sizes or resistance/capacitance values? Or internal transistor model parameters? Or are they more abstract mathematical parameters?

> I'm not sure I quite understand the distinction that you're drawing and measuring errors in a sensible way here is actually somewhat non-trivial.

Circuit simulator vendors often market their accuracy in terms of "% SPICE accuracy", and what they mean is that if you run a simulation and measure some parameters (usually something like RMS noise voltage or signal-to-noise ratio), those measurement results will be within 1% of the results you'd get from running the same simulation with full-accuracy SPICE.

The other way of measuring simulator accuracy is in terms of dynamic range. For example if I have a noisy sine generator circuit where the signal has rms amplitude 1, and the noise is 1e-4 rms, I need to make sure the numerical noise of the simulator is much less than 1e-4.

The first is sort of relevant in your case as a comparison between surrogate and full simulation. The second is an absolute measurement of a single simulator's (or surrogate's) accuracy.
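To make the distinction concrete with made-up numbers:

    # "% SPICE accuracy": how close a derived measurement is to the full run
    spice_snr     = 72.3                      # dB, from full-accuracy SPICE
    surrogate_snr = 71.8                      # dB, same measurement, surrogate
    pct_accuracy  = 100 * (1 - abs(surrogate_snr - spice_snr) / spice_snr)  # ~99.3%

    # dynamic range: the simulator's own numerical noise floor must sit well
    # below the smallest physical signal you care about
    signal_rms, noise_rms = 1.0, 1e-4
    required_dynamic_range_db = 20 * log10(signal_rms / noise_rms)          # 80 dB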

> I believe the error rates Chris quoted are from a smooth distance metric between the signal generated by a full simulation and those generated by the surrogate.

This is interesting because it makes sense as an application-independent metric for measuring the accuracy of your surrogate. It's not immediately clear to me how this would translate to circuit performance accuracy in all cases. However, in one specific case - a Digital to Analog Converter - that 1% smooth distance error could be catastrophic for some performance metrics depending on how it behaves.

That might be an interesting thing for you to consider investigating through the course of your research.

> We are expecting speedups on real-world problems just by using this framework for baseline simulation

Very cool, and I'm looking forward to seeing some data there.

One last question - why did you choose this particular DARPA project? Was it any specific interest/relationship with the circuit design industry? Or did it just happen to be a very cool application of CTESN?


> Are you able to share what some of those are?

The papers are being written, so should be public in a few months, but I can't go into detail quite yet.

> Can you explain more about the parametrization? Do your parameters correspond 1-to-1 with schematic parameters like transistor sizes or resistance/capacitance values? Or internal transistor model parameters? Or are they more abstract mathematical parameters?

All of the above. We have a fully symbolic representation of the circuit, so any part of it can be replaced by a parameter. Transistor sizes and device values are a natural thing to do, but the system is generic.

> "% SPICE Accuracy"

Heh, we've actually found some SPICE implementations to have significant numerical issues when compared to our baseline simulator, which has some fancier integrators - I suppose that would make us worse on this metric ;).

> That might be an interesting thing for you to consider investigating through the course of your research.

Yup, characterizing error trade offs is one of the primary outcomes of this research.

> One last question - why did you choose this particular DARPA project? Was it any specific interest/relationship with the circuit design industry? Or did it just happen to be a very cool application of CTESN?

Circuit design was my first job, and I have a second desk with a soldering iron and test equipment, though it's collecting dust ;). I have a bit of a hypothesis that - between open source tooling becoming more mature, and more people getting into the chip design space for custom accelerators - we're on the cusp of a major overhaul in EDA tooling. I thought we'd be good at it, so I was advocating internally for us to start up a team in the domain. We have a bit of an "if you can get it funded, you can do it" attitude to those sorts of things, so I was trying to find the money to jumpstart it, and this is that.


> Heh, we've actually found some SPICE implementations to have significant numerical issues when compared to our baseline simulator, which has some fancier integrators - I suppose that would make us worse on this metric ;)

Hah. That's not surprising. Our vendors generally mean spectre when they talk about full accuracy, but I have a laundry list of grievances when it comes to spectre and its accuracy settings... What are you using as your baseline?

> Circuit design was my first job

Glad to hear. This industry needs a lot of modernization.

> I have a bit of a hypothesis that - between open source tooling becoming more mature, and more people getting into the chip design space for custom accelerators - we're on the cusp of a major overhaul in EDA tooling

I hope you're right. The current state of circuit EDA tooling is abysmal. I think on the analog/RF side, the ecosystem is so ancient and entrenched that it will take a herculean effort to make any real strides, especially because the golden age of analog/RF semi startups is over. But digital design is very much becoming open source, as you mentioned, and maybe that will eventually bleed over.

I wish all types of circuit design were more accessible to the world - lots of emerging economies could use it, but the up front cost is just so high. There's been a massive surge of software dev recently in the Middle East and Africa, and hopefully hardware dev follows suit.


Is there a good way of doing analytic analysis on circuits with a sensible GUI, if restricted to passive components? I have a few interesting problems that I'd like to explore analytically, but the best I've been able to find is OpenModelica piped into something like Maxima or sympy. Mathematica looks like it has a nice tool -- system builder -- but it costs a lot and I don't have a license.

Julia would of course be ideal for this, and I very much like where your project is going!


The underlying representation of the simulator we have is symbolic, so you can have it spit out a https://github.com/JuliaSymbolics/Symbolics.jl representation of the equations that make up the circuit and feed it into a symbolic integrator from there. Not a focus for us, but I don't see why it wouldn't work.
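Rough sketch of the idea (not our actual internal representation):

    using Symbolics

    @variables t R C Vin v(t)
    D = Differential(t)

    # KCL for a simple RC low-pass filter: C*dv/dt = (Vin - v)/R
    rc_eq = D(v) ~ (Vin - v) / (R * C)

    # from here you can manipulate it symbolically, e.g. substitute values
    rhs = substitute((Vin - v) / (R * C), Dict(R => 1e3, C => 1e-6))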


What was the technology replaced by Julia (i.e. the systems which were 100 or 1000 times slower)? Is the whole factor 1000 because of Julia/LLVM optimizations, or is there faster hardware, or was the old system an interpreter?


The 1000x goal here specifically refers to algorithmic accelerations. Depending on your baseline simulator, there may also be improvements because Julia's simulation capabilities are extremely fast, but those gains are multiplicative, since we're using our own simulator as a baseline for measuring the algorithmic improvements. Just to be clear though, the 1000x is a goal. We've demonstrated the same techniques at about 100x algorithmic improvement (for close to 1000x overall), but haven't quite gotten to that point yet in this domain. It is still a research project at this point. We are quite actively exploring what to do with just the simulator components though, because we think there's a good opportunity to replace some legacy tooling in this space even before factoring in the algorithmic improvements.


This is about funding for future work, not payment for achieved bonuses, it seems.


A long, long, long time ago, I wrote this (currently very unmaintained, and I have no interest in maintaining it) Julia project. I don't know if this is useful to you, but it's pretty clear that there is a LOT of potential for Julia in this domain, though if I'm not mistaken, the HDL stuff is one level above what you're doing in your project -- it would be nice if these sorts of things could be composable: https://github.com/interplanetary-robot/Verilog.jl


We're not doing design in Julia at the moment, just simulation of existing domains, so we're just reading in netlists from other tools. Eventually I do think Julia would be good at HDL, and I too have an abandoned HDL DSL (that I basically used for an FPGA demo and nothing else), but I don't think HDL is an area where Julia would be able to provide 100x improvements over existing tools (which is about where we want to be to start something new). Of course, if Julia takes over in the space, doing HDL in the same language as your simulations would have advantages :).


You mean quantifiable 100x improvements ;-). Chisel, which I would argue is the best HDL tool, still produces completely inscrutable Verilog, so if you don't like the way it does things you might have... a challenge rejiggering it. I imagine there's like a "100x" DX improvement you could generate between, say, best practices on CI, deployment, portability of design, running your design on accelerated hardware with ease, etc.


How was the process of applying to DARPA? What kind of effort was involved, and what was the experience like?


We have a fairly academic background, so we know how to write these grant proposals and have been doing it for a while. It's definitely a specialized skill, but there's no magic to it. You have to know how to do the paperwork and budgets though. This particular one was an extreme fast track program, so there were 30 days between each of program announcement, proposal due date, performer selection and final contract negotiation date. Nothing in government usually happens that fast - I was quite shocked that it actually happened in the stated time frame.


This seems very promising to me. One big part of circuit simulation is solving differential equations. That's the whole inductor, resistor, capacitor thing. Julia has hands down the best toolkit for differential equation solving. Most circuit simulators today are going to be using old methods invented in the '60s-'80s, neglecting state-of-the-art developments.

Someone made a blog post recently comparing the time to simulate with LTSpice vs writing and solving the system in Julia https://klaff.github.io/LTSvDEQ_1.jl.html - this is a very simple circuit, and they still got a 100x speed up. Sure, that is neglecting the time it takes to actually extract the differential equation from the circuit. But from what I hear that kind of thing is something this DARPA project will be working on. And sure, LTSpice isn't state of the art. But still I find this indicative and promising.
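For a sense of what "writing and solving the system in Julia" looks like, a minimal sketch of a series RLC step response (component values are arbitrary):

    using OrdinaryDiffEq

    # series RLC driven by a voltage step: dq/dt = i, L*di/dt = Vin - q/C - R*i
    function rlc!(du, u, p, t)
        R, L, C, Vin = p
        q, i  = u
        du[1] = i
        du[2] = (Vin - q / C - R * i) / L
    end

    p    = (10.0, 1e-3, 1e-6, 5.0)           # R, L, C, Vin
    prob = ODEProblem(rlc!, [0.0, 0.0], (0.0, 2e-3), p)
    sol  = solve(prob, Rodas5())             # stiff-capable solver
    vcap = [u[1] / p[3] for u in sol.u]      # capacitor voltage q/C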


As someone that actually uses LTSpice I’ve never considered speed to be an issue. The main draws for it are

1. It’s free, unlike Altium add-ons or OrCAD PSpice

2. Graphical, I’m happy to code things but code literacy varies widely among EEs. Much easier to share results when it looks like a schematic

3. Good enough component library. The time spent finding and inputting component parameters is gonna be way bigger than any savings on the actual computation

I mostly work on small embedded systems boards and use simulation to probe behavior of analog sub systems I’m concerned about, rather than simulating the whole board. Maybe more complex designs get more like CFD models where computation time is measured in hours or days. Would love to see someone use this as a backend for an alternative to the major spice programs, LTSpice UI isn’t exactly pleasant, and is unusable on Mac so it wouldn’t take a whole lot to get me to switch.


LTSpice is my go-to circuit simulator, but I have spent weeks of my waking life waiting on it to run simulations. My current demon is simulating a 1 MW DC power supply to validate current output smoothing solutions in low load scenarios. The topology is a delta-delta and delta-wye transformer, each feeding a summing bridge rectifier, the two rectifiers have their negative legs tied together and the positive legs are the outputs.

I only simulate one pulse (about 3 ms), but the simulation takes minutes to resolve the inrush. Tuning circuit impedances to match measurement is a real pain. At this point I'm just going to take many more direct measurements. If it was quick I would have written a script to scan through unknown parameters to maximize correlation with measurements, but that isn't reasonable when the simulation fails after several minutes of trying for most values.


Oof, that's rough. Don't know if it can handle your topology, but have you tried LTPowerCAD or TI's equivalent? They did the job when I was doing a boost converter design, but I don't remember how capable they were for more complex tasks.


1. Yeah, these are super expensive. Even the cheaper ones are close to a grand a license.

2. Definitely got to have a front-end. Writing netlists by hand is suffering.

3. Interestingly (to me), the components are more or less portable between them. With only a little manual rewriting I have translated components from OrCAD PSpice, to ISpice, to LTSpice. No idea on the licensing for that. (I suspect the IC manufacturers produce these.)

I think speed is very much a question of what kind of thing you are doing. I agree it often doesn't matter, and without your 3 points, it certainly doesn't matter.


The problem is a lot of the models you use are black-box characterized models and not differential equations. I don't mean Julia won't speed things up a lot and won't be helpful, but your point about modern sim tools using old techniques is not true. Simulating RLC in semiconductors is a huge business where customers have been paying a lot for faster solutions.


Fair enough. I can't say I've seen inside how these things are made. Julia definitely has cutting-edge DE solvers. I doubt LTSpice does, but OrCAD etc. might.


Not only that, but the LTSpice solution was incorrect, and the Julia solution was correct.


I wish BioJulia[1][2] (a bioinformatics ecosystem for the Julia language) would get similar attention. Currently, they seem largely underfunded[3]. The global pandemic highlighted the importance of such projects; I hope more and more people will participate in FOSS-based computational biology and medicine.

[1] https://biojulia.net/

[2] https://github.com/BioJulia/

[3] https://opencollective.com/biojulia


I'm one of the main developers of BioJulia. I believe our main issue is lack of developer manpower, and not necessarily lack of funds.

Of course, if we got enough money to actually employ a developer, that would be amazing. It's just not very realistic. Furthermore, having BioJulia be developed by working scientists has its advantages.

If you, or anyone else, is interested in BioJulia, do think about making a contribution to your favorite package, it would be very welcome. Developing in Julia is extremely satisfying, as you get so much bang for your buck, while still being able to create highly efficient code.


Can a programmer with zero knowledge of bioinformatics be of help too? Or do you need a bio background?


You don't need to have any particular skills except familiarity with Julia, but it's obviously an advantage to have a bio background - depending on what you're going to do.

Usually, the best packages come about when people are motivated to create something specific, for example if they think the status quo in some domain is not good enough.

I'm sure we can dig up a handful of old, badly maintained projects that could use some love. Off the top of my head, it would be nice to have

* Our Smith-Waterman algorithm micro-optimized. That's probably fairly easy to get started with if you're not a bio person

* A number of our parsers properly maintained again. We use finite state automata (https://github.com/BioJulia/Automa.jl) to create parsers, so that's for more advanced users

* Our scattered sources of k-mer analysis code consolidated. Another developer is rewriting our k-mer iterator protocol, but we need a big toolbox for k-mer counting, minhashing, calculating k-mer spectra, etc. (rough sketch of the naive counting version below). That's also very computer-sciency and not so much biological
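The naive version of k-mer counting, just to illustrate the task (the real BioJulia versions work on packed bit-encoded sequence types, not plain strings):

    function count_kmers(seq::AbstractString, k::Int)
        counts = Dict{String, Int}()
        for i in 1:(length(seq) - k + 1)
            kmer = seq[i:i+k-1]
            counts[kmer] = get(counts, kmer, 0) + 1
        end
        return counts
    end

    count_kmers("ACGTACGTAC", 3)   # Dict("ACG" => 2, "CGT" => 2, "GTA" => 2, ...)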

Feel free to get in touch on the Julia Slack, or send me an email :)


I'd like to second this question. I'm very interested in bioinformatics as a field, but no background. Would be happy to devote some free time but I wouldn't want to be counterproductive.


The way to do this is to find non-domain specific tasks in these projects, and make useful contributions to the team - and slowly learning as you go along. Website, CI, benchmarking, helping out new users, pointing out unclear docs, writing a tutorial as you learn, etc. are all great ways to get involved.


Thanks a lot, these are great suggestions.


Are there signs of wider BioJulia adoption? It looks like bio frameworks for Python, Go and even Rust are still more popular. Disclaimer: I'm just checking GitHub stars; I have no clue what metrics would be more appropriate.


I can't tell how many users we have, or who they are. Unless they directly interact by e.g. raising a GitHub issue, I won't know they exist.

In my broader experience, almost no bioinformaticians use Julia. I think we, as a field, are more conservative than e.g. physicists when it comes to technology. My old institute taught me Perl as the lingua franca of bioinformatics as late as 2015 (but switched to Python the year after).

I think we, as a field, have been consistently fairly poor at choosing our programming tools. Old bioinfo scripts are a cluttered mess of write-only spaghetti Perl. Most bioinformaticians I know don't use Biopython or BioPerl or anything similar, but rather create new programs or packages by either re-implementing the basics from scratch, or by duct-taping together static binaries and/or scripts through shell commands.

We will never get rid of having to use static binaries or external scripts, but I think BioJulia at least has a decent chance of stopping people from re-implementing the basics again and again, and providing a central "platform" that various external scripts communicate through (e.g. an old Perl script may produce DNA as a FASTA file, which can then be fed into BioJulia). The main issue is to have bioinformaticians understand that the current situation is problematic.

It doesn't take much to have a big impact, I think. If we had an ecosystem of the most basic data types (biosequences, kmers, phylogenetic trees and protein structures), a collection of well-known fundamental functions to operate on them, and parsers for the 20 most common formats, we would already have a very compelling ecosystem.


The section "SID Scalar and Vector Processor Synthesis" from https://en.wikipedia.org/wiki/VAX_9000 talks about this need to have better tools and even a bit about AI:

SID was an artificial intelligence rule-based system and expert system with over 1000 hand-written rules. In addition to logic gate creation, SID took the design to the wiring level, allocating loads to nets and providing parameters for place and route CAD tools. As the program ran, it generated and expanded its own rule-base to 384,000 low-level rules.[19][20] A complete synthesis run for the VAX 9000 took 3 hours.

Initially it was somewhat controversial but was accepted in order to reduce the overall VAX 9000 project budget. Some engineers refused to use it. Others compared their own gate-level designs to those created by SID, eventually accepting SID for the gate-level design job. Since SID rules were written by expert logic designers and with input from the best designers on the team, excellent results were achieved. As the project progressed and new rules were written, SID-generated results became equal to or better than manual results for both area and timing. For example, SID produced a 64-bit adder that was faster than the manually-designed one. Manually-designed areas averaged 1 bug per 200 gates, whereas SID-generated logic averaged 1 bug per 20,000 gates. After finding a bug, SID rules were corrected, resulting in 0 bugs on subsequent runs.[19] The SID-generated portion of the VAX 9000 was completed 2 years ahead of schedule, whereas other areas of the VAX 9000 development encountered implementation problems, resulting in a much delayed product release. Following the VAX 9000, SID was never used again. [Not sure why]


> Following the VAX 9000, SID was never used again.

It was the last machine that DEC made that did not use a microprocessor - presumably there was no comparable gate-level design effort left for SID to automate.


This is super interesting, thanks!


Fix the headline - it should say "Julia Computing Receives DARPA Award to Accelerate Electronics Simulation by 1,000x"

The money is going to a company.


Note that while Julia Computing and Julia are different entities, the former is the employer of most of the top contributors to Julia. Lots of this grant will probably go into paying for additional features for `DifferentialEquations.jl` or compiler work necessary to speed up some of this code.


I have no problem at all with the award, company, or its employees and founders - just the accuracy of the headline.


Apparently 4 of the 6 founders of Julia Computing are the 4 creators of Julia.


Yeah, I think most people on this site are going to first assume the language.


It's also weirdly editorialized, since the original title also says "Julia Computing"


I think the original hit the very small character limit on HN


Impossible. That has 84 characters. Also, since it doesn’t seem possible for a language to receive grant money, I don’t think there is much chance for a dangerous level of confusion.


LT-Spice is absolute trash and basically why I decided to leave EE for CS in college... among many other reasons! Essentially, because after learning all kinds of math my linear systems prof basically said "yeah, at some point you just have to simulate everything because the math you learned only applies maybe 60% of the time". Granted, I do not think I was exactly destined to be a great electrical engineer.

Electronics simulation is fascinating, especially given the AI models used to do this. Layout gets especially complex when RF traces/layers have to be considered or when you want an arrangement of traces to high-bandwidth components to all be the same length. Interaction between multi-layer vias is also insane (the guy who built the Ubertooth One Bluetooth hacking dongle has a great DEF CON talk on the subject).

The best analog I can come up with is the debate / discrepancies between the US and European weather simulation models. Fascinating space, I got to work a few feet away from the Julia team at the Harvard Launch Lab way back in the day at a college internship. Of the few interactions I had with their team they are great people and unbelievably brilliant. If any of you are reading this, my hat's off to your engineering abilities - I'm still impressed by the fact you guys identified an error in intel's x86 instruction sets and yeeted the issue in less than 24hrs.


Been using LTSpice for the last decade or so and am just fine simulating everything from buck converters to current-sense circuits to battery monitoring systems to H-bridges. Don't know what your beef is with LTSpice, but I still can't get around the fact that you quit your discipline for the lack of a better tool. If you didn't like what you had to deal with, why didn't you pivot to CS and invent a better simulation tool? Just saying.


I'm really just being cheeky, the writing was on the wall for me about three years in that EE wasn't for me.

Granted, I make a great living writing software and honestly have really benefitted from my 67% complete EE degree. Software ppl generally have zero idea how computers work / how to really leverage hardware bits to accelerate certain workloads. The ideal CS education for me is based in EE but also starts with both lisp and C. NOT Python. However, I was a horribly distracted student throughout college so I really should be the last person giving recs for coursework.


> still can’t get around the fact you quit your discipline for the lack of a better tool.

When people are at the very beginning of some path, they have almost no attachment to it and the smallest nudge one way or the other can change their course.

Think about how many people say, "If it wasn't for <random elementary school teacher> I would have never gotten into <field they became famous in>."


I got into electronics, which pretty much defined my career, at 14 when I went to technical college and the only reason I chose electronics was because I wouldn't get bullied as much! I struggled with it for the first few months and after I finally "got it" I fell in love with it!

I don't work as an Electronics engineer any more but I still have a significant interest.


I agree LT-Spice and a lot of scientific software have a terrible user experience. There is a massive opportunity in designing better UI and UX in the modeling/simulation space (Julia's market). I can't wait for the web to clean up this space. I believe modern JavaScript and more ergonomic low-level tech will help (i.e. Rust, better C#). I'm skeptical though.


You want the component parameters entered for you. A nice UI is secondary.

Also, LT-Spice being a native desktop app is great. No way they should lock it into the cloud now.


I enjoy seeing this sort of press for Julia. One thing I’ll say for the language is that it is good with elegant high level abstractions for productivity while at the same time being able to go as low-level as you need for optimization. My company uses Julia for our text structuring ML. From Go we communicate with the Julia binary using protobufs. I have no complaints and have enjoyed my experience with Julia.


Are you doing your NLP with Flux?


Nope, completely custom, with a big chunk written prior to Flux anyway, but I’ve seen Flux and I’ll be looking for opportunities to play with it in future projects for sure.


I'm having a lot of trouble understanding how "just add AI" is going to make electronics simulation 1000x faster.


Chris has posted the relevant papers in a sibling comment, but let me try to give an intuitive explanation. Say you have some sort of electronic device like an amplifier. It is made out of many individual transistors and parasitics, but its time evolution is overall guided mostly by the amplifier characteristics it's supposed to implement, plus boundary effects and corrections from the parasitics. So what you try to do is to learn a simpler system of equations that captures the time evolution, plus a (highly nonlinear, complicated) projection that recovers the signals of interest (if you don't care about reproducing interior signals, you can usually get away with a simpler model). Of course it's not obvious that this should actually work (though picking a model of the same size as the system plus an identity projection is always possible, so it'll always work for some size of the model being learned - just at that point you don't get a speedup), but our initial results show that it does seem to be very promising.


Since this is based on ML, I would have trouble trusting the results. Can you verify the results with standard mathematical methods? E.g. solve a linear system using AI, then compute the residual and meaningfully interpret it? Would it be possible to apply ML techniques iteratively, and let the error approach zero?


Sure, you can always just run the baseline simulation and compare error rates. You can also do fancier analyses to get some rough idea of robustness and maximum error rates (over your particular parameter domain).


Would this integrate with e.g. Measurements.jl?


The error propagation won't be through Measurements, because that blows up linearly. There are other approaches which give much more promising results.


From an implementation perspective of course, so you can get error propagation on the baseline simulator. For the actual ML part, you'd get Measurements complaining that it doesn't know how to propagate bounds through the surrogate. You can do some estimation of what that propagation would look like, but it's a research project.
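i.e. on the baseline side, this kind of thing already works today (toy example):

    using Measurements

    R = 1.0e3 ± 50.0        # 5% resistor
    C = 1.0e-6 ± 0.1e-6     # 10% capacitor
    τ = R * C               # uncertainty propagates automatically: ~1.0e-3 ± 1.1e-4 s
    f_3dB = 1 / (2π * τ)    # and keeps propagating through further arithmetic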


It seems that there are two related but distinct ways of simplifying a model:

1. projections / disregarding some dimension, followed by some model in this latent space, followed by an injection back into the native space. The model is simpler by virtue of operating in a lower dimensional space.

2. using a "reduced-order model" or "surrogate model", but still operating in the same input/output space. The model is simpler by virtue of, e.g. using fewer derivatives or delays.

And with 2, you can have the simple model e.g. arise from first principles, and learn a _residual_ correction on top of it.

And it sounds like the comment contains a bit of both. Am I getting that right?


A bit of both.


Do you think such an approach could extend to the final verification stage of a large post-extraction chip? My impression is that speedups are most sorely needed in this final sign off phase where the number of nodes explodes with parasitic R's and C's, especially in modern technologies. Simulation times in weeks seem necessary now for sufficient verification accuracy of analog/mixed signal chips.


Yes, we're looking at it. One of our benchmark cases for this work will likely be a post-extraction structure from one of the Sky130 chips (not that it wouldn't work in a more modern process, but the NDA issues become complicated while this is still a research project).


There are ways to surrogatize portions of a simulation, replacing large portions with ML-trained surrogates. There are already demonstrations in different domains (on highly stiff differential-algebraic equations) showing that you can get these kinds of speedups at around 99% accuracy.

https://arxiv.org/abs/2010.04004

https://arxiv.org/abs/2103.05244


> can get these kinds of speedups at around 1% accuracy

Shouldn't this be at around "at around 99% accuracy" or "within around 1% accuracy loss"?


Thanks, fixed.


The TLDR is that for a lot of high-dimensional PDE models, you get a better trade-off of speed vs accuracy by using a NN for the part of the model that is inefficient to calculate the full physics for. This is already having a lot of success in climate modeling, where NN-based solutions do a better job of dealing with some atmospheric effects than previous efforts.


Sounds similar to what Stockfish (the leading chess engine) does. In newer versions, position evaluation comes at least in part from a NN, which led to an improvement in playing strength.

edit: similar in approach, not as a 1:1 mapping. Replace a deterministic model with a faster, slightly fuzzy one.


This can be a useful connection, but I believe it's worth fleshing out in case it is misleading.

What's the alternative to using a NN for position evaluation? I can think of two:

1. Do minimax search until you have a winner or a stalemate. Then you have the exact value of the position. Well, this is the problem we're actually trying to solve to begin with, and it's also impractical to do for chess and any interesting game. This is what necessitates an approximation to position evaluation.

2. A human expert writes a position evaluation function. It determines a huge handful of features, a simple example being how many pieces I have - how many pieces you have, and some way to combine those features into a score.

In surrogate modeling, you can get ground-truth data to evaluate your approximation against. You're approximating another model that you can compute, it's just too slow for practical use.

In chess, we don't know THE position evaluation function. We can certainly get data about it, but we don't know it in the same way we know PDE models.

To be clear, I am not saying we _know_ the PDE models are accurate with respect to reality. That's the science part: determining whether the model - arising from empirical evidence, or from first principles that themselves arise from empirical evidence - actually summarizes the empirical evidence.


Chess position evaluation is always going to be somewhat subjective and statistical. Electronics simulation is neither; it's physics.


Call me skeptical, but claiming a 1000x improvement for circuit simulation is at least hyperbole, or outright lying. I would love to be proven wrong though. This given the fact that Julia still depends on third-party numerical libraries, e.g. OpenBLAS, which depends on faster numerical languages like Fortran.

If you want to see next-generation circuit simulation and automation that already works, check out JITX. The simulator is using their advanced LB Stanza language, which is similar to if not better than Julia[1]. This is the same team from Berkeley that proposed Chisel and FIRRTL.

Other promising efforts on next-generation digital and analog circuit design are MLIR-based CIRCT from the LLVM team and LLHD from ETH Zurich [2][3].

[1] http://lbstanza.org/
[2] https://github.com/llvm/circt
[3] http://llhd.io/


Those efforts are quite different, focusing primarily on the design side with a focus on digital, while our effort is on ML-accelerated simulation of continuous-time analog-domain problems. I've met Chris Lattner many times, and I'm pleased to see him work on tooling improvement in this space. He has a very good design sense for what compilers ought to look like. I've also met Patrick Li and we had an extensive discussion around the design space of multiple dispatch, which is of course Julia's core paradigm. I hadn't realized that Stanza was in use at JITX, but I'm glad to see it, since there's enormous room for PL-based improvements. They are very different efforts though, so I'm somewhat confused by the off-hand dismissal of our work without seeing any of the details.

One additional point: Julia depends on OpenBLAS not because Fortran is faster, but because doing the architecture tuning for all supported architectures and sizes is a bit of a pain, and for standard BLAS problems there's very little reason to switch. We do have pure Julia packages that outperform OpenBLAS, but nobody has gone through the effort of replacing the base usage of BLAS and completing the pure Julia packages to achieve 100% API coverage. There's just no good reason to as long as the vendor BLAS packages (or OpenBLAS) work fine.


Most of the methods I've mentioned, apart from FIRRTL, can be used for analog design. In fact, JITX's product in particular only supports ML-accelerated analog design, mainly at the circuit-board level at the moment, but nothing is stopping them from doing it for both analog and digital design later on.

Don't get me wrong, I'm not dismissing your work, I'm just dismissive of the outrageous 1000x claim; as they say, "extraordinary claims require extraordinary evidence" (ECREE). I believe Chris Lattner and Patrick Li will probably cringe to hear the 1000x claim ;-). Like I've mentioned before, I'd love to be proven wrong.

Regarding OpenBLAS, it will be very good to have a native Julia alternative as you've claimed. It is nothing new to be better than OpenBLAS, since the D language did it with probably 10x (100x?) less manpower than Julia has, several years back[1].

[1]http://blog.mir.dlang.io/glas/benchmark/openblas/2016/09/23/...


The pure Julia (sub)BLASes (they are incomplete right now) that benchmark the best right now are Octavian.jl and PaddedMatrices.jl. On Ryzen these BLASes are doing extremely well:

https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24#...

but also on Intel:

https://chriselrod.github.io/PaddedMatrices.jl/dev/arches/ca...

I personally wouldn't spend too much time on BLAS-limited applications though, and this kind of circuit modeling is not one of them as I describe in another post. Also, it's 1000x at 99% accuracy: it's essentially a form of automated model order reduction which allows you to choose a tolerance and get more speedup matching the original circuit to the given tolerance.


Also, the major point is that BLAS plays little to no role here. Algorithms which just hit BLAS are already suboptimal. There's a tearing step which reduces the problem to many subproblems, which are then handled more optimally by pure Julia numerical linear algebra libraries that greatly outperform OpenBLAS in the regime they operate in:

https://github.com/YingboMa/RecursiveFactorization.jl#perfor...
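
As a minimal sketch of what that swap looks like (assuming RecursiveFactorization's lu mirrors LinearAlgebra.lu; the matrix size is just illustrative of the small dense blocks tearing produces):

    using LinearAlgebra, RecursiveFactorization

    A = rand(50, 50)                    # small dense block
    b = rand(50)

    F = RecursiveFactorization.lu(A)    # pure Julia recursive LU, no OpenBLAS call
    x = F \ b                           # reuse the factorization for the solve

    @assert x ≈ lu(A) \ b               # matches the default BLAS-backed path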

And there are hooks in the differential equation solvers to not use OpenBLAS in many cases for this reason:

https://github.com/SciML/DiffEqBase.jl/blob/master/src/linea...

Instead, what this comes out to is more of a deconstructed KLU, except instead of collapsing everything into a single sparse linear solve, you can do semi-independent nonlinear solves, which then spawn parallel jobs of small semi-dense linear solves handled by these pure Julia linear algebra libraries.

And that's only a small fraction of the details. But at the end of the day, if someone is thinking "BLAS", they are already about an order of magnitude behind on speed. The algorithms to do this effectively are much more complex than that.


I am not sure why you are downvoted.

Google has been doing electronic with ML for some time now : https://ai.googleblog.com/2021/02/machine-learning-for-compu...

Python and TensorFlow work just fine for that task; the speed comes from LLVM and dedicated hardware such as TPUs. The language in which the high-level ML network/algorithm is specified doesn't matter much.

It is always good to have several paths explored, but I've been skeptical of the Julia hype since its beginning.


What it's doing is searching through circuit architectures, not accelerating the circuit simulation itself. If you have small, quick circuits that's fine, but if solving the circuit once is a hard enough problem (which is true for the problems we are targeting), then this would be infeasible with Google's approach, because it relies on an existing simulator. So it's just different methods targeting completely different sets of problems.


Well, that was just one public blog post. I would expect Google's R&D to go beyond what they publicly present.


This is a sign that Julia has arrived as a language. Kudos!


Thanks


Gate level sims would be sweet. Yosys has the CXXRTL backend that may nicely dovetail into this too.


Maybe of interest in that context:

https://github.com/ModiaSim/Modia.jl

The authors of that tool have a strong background in modeling and simulation of differential-algebraic equations. Not so much in designing DSLs, though, so there may be some technical oddities. But I expect the simulation aspect to be quite decent.


Modia.jl is great! Though note that there are other M&S DAE libraries in Julia which stray from the Modelica DSL syntax. For example, ModelingToolkit.jl can do it directly by modifying Julia code:

https://www.stochasticlifestyle.com/generalizing-automatic-d...

or it can work via a component-based modeling DSL:

https://mtk.sciml.ai/dev/tutorials/acausal_components/
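
For a flavor, here is a stripped-down sketch of the symbolic style (a hypothetical RC example; exact MTK macro and constructor names may vary by version):

    using ModelingToolkit, OrdinaryDiffEq

    @parameters t R C V
    @variables v(t)
    D = Differential(t)

    eqs = [D(v) ~ (V - v) / (R*C)]          # RC low-pass charging toward V

    @named rc = ODESystem(eqs, t)
    prob = ODEProblem(rc, [v => 0.0], (0.0, 10.0),
                      [R => 1.0, C => 1.0, V => 1.0])
    sol = solve(prob, Tsit5())              # v(t) -> V * (1 - exp(-t/(R*C)))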


Modia.jl is built by Hilding Elmqvist, Martin Otter, and their collaborators. These are some of the best folks in the world - Hilding's thesis in 1978 described equation-oriented modeling. It had to wait until computers became powerful enough in the 90s to become broadly usable.

https://people.inf.ethz.ch/fcellier/Res/Soft/Dymola_engl.htm...

We've had the benefit of learning from them and their presentations at JuliaCon and collaborating. More is coming. In the meanwhile - do see this JuliaCon talk by Hilding:

https://www.youtube.com/watch?v=hVg1eL1Qkws


Can anyone explain the potential benefits of accelerating electronics simulation?

Do they want to generate efficient FPGA programs?


Our particular project is about analog simulation, though DARPA is also funding surrogate modeling efforts in the digital domain. On the analog side in particular, one significant impetus is speeding up mixed-signal simulations. Digital simulators are much faster than analog simulators (in terms of seconds of circuit time simulated per second of wall-clock time), so if you want to do a mixed-signal simulation, you're generally running it at the speed of the analog simulator. For end-to-end simulation, it would be much nicer if you could bring the speed of the analog simulator up to be comparable to the digital one. Of course there is a question of how much accuracy you're paying (and we're still finding out what the tradeoff is), but in the kinds of simulations where this comes in, you're actually often OK with losing some accuracy: some noise always gets coupled into the analog portion, so your control electronics should be robust to slightly weird analog signals anyway.


I've had a lot of trouble with analog simulations in SPICE. They often fail to converge or run incredibly slowly. I do a lot of audio circuitry. The speedups addressed here could potentially make it possible to simulate audio circuits in real time: suppose you feed a signal in via an audio interface, pass it through your analog processing circuitry, and play it back in real time. The design cycle would be so much faster.


We are working on improvements to the baseline simulator also (or rather, we basically get them for free because they are part of our core simulation engine), so hopefully that should address some of the first-order usability issues in SPICE simulations. For these kinds of design applications, the ML-based speedup may or may not work, since you do have to spend time training the surrogate. You can often reuse pieces of it, but depending on what kind of manipulations you're doing to the model you're surrogatizing, it may or may not help if you're changing the circuit after every simulation.


Sorry if you've already clarified, but is this meant to replace or augment traditional SPICE simulators? I remember Ken Kundert mentioned that, even with the improvements of Spectre over SPICE-based simulators, it took things like SpectreRF's oscillator phase noise modeling to get analog designers to consider changing their ways. Their steadfast use of SPICE is "a form of Stockholm Syndrome," in his own words.


Our plan is to build both some extremely sophisticated analog design tooling that improves the state of the art (by leveraging our existing investments in modeling & simulation from other domains to build a clean simulator implementation) and then have the ML acceleration be an optional "one more thing" that can be used where applicable. Of course the first part is a major undertaking, so we're talking to potential customers to see what particular thing to start with that would really get them to consider using our system. That is independent of the DARPA-funded work though, which is particularly on the ML side (closely aligned of course).


Many sims use some variety of SPICE with a convergence algorithm a la Newton's method to discover the voltages and currents at the nodes. Simulating a system where every single component runs in parallel with every other component isn't easy.
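
As a toy illustration (made-up component values; real engines add damping/limiting and solve a full nonlinear system at every timestep), the per-node Newton iteration looks roughly like:

    # One node: voltage source Vs through resistor R into a diode to ground.
    function solve_node(Vs, R, Is, Vt; v0 = 0.7, tol = 1e-12, maxiter = 50)
        f(v)  = (Vs - v)/R - Is*(exp(v/Vt) - 1)    # KCL residual at the node
        df(v) = -1/R - (Is/Vt)*exp(v/Vt)           # its derivative (1x1 "Jacobian")
        v = v0
        for _ in 1:maxiter
            dv = f(v)/df(v)
            v -= dv                                # Newton update
            abs(dv) < tol && break
        end
        return v
    end

    v = solve_node(5.0, 1e3, 1e-14, 0.02585)       # settles around 0.69 V across the diode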


Speaker frequency response simulations will sometimes use an equivalent electronic circuit, so it could help design better speakers. Any time you interface digital circuitry with the real world, hybrid circuitry is used. Also, with this increased processing capacity, you could be looking at simulating the parasitic properties of components, not just idealized versions of them.


Because all modern computer chips and electronic systems (e.g. PCBs) rely on electrical simulation for design and build. Typically these are SPICE-based simulators, which are open source and quite old in their methods (you literally set convergence requirements and step times yourself).

FPGAs are already digital... meaning that although they (FPGAs) have to be simulated when they are designed, their purpose is digital computation. FPGA programs are, at heart, a discrete digital simplification of the underlying electrical operation, for the purpose of increasing reliability and allowing greater design complexity.


Open source? Isn't everyone using Spectre or Eldo?


which are built on BSD licensed SPICE developed at Berkeley: "SPICE (Simulation Program with Integrated Circuit Emphasis) is a general-purpose, open-source analog electronic circuit simulator."

https://en.wikipedia.org/wiki/SPICE

Also see HSPICE, pSPICE, LTspice...


Accelerate doesn't just mean 'run faster' but 'write models faster.' It's about reducing time to market and overhead for design verification, which is a massive time and money sink in the industry today.


1000x sounds nice, but can you trust the results?

Also, why doesn't the funding go to more general linear algebra (and related) solver software, which can then be used in many physics packages?


Note that DifferentialEquations.jl already has bindings for R and Python, so this has the potential to be used in many physics packages.


Having three horses in the race competing for the stated goal is a better bet: you can at least compare the pros and cons of their approaches. At the next iteration, fuse the best parts and re-compete towards the stated goal.


Is scientific computing getting some revival with the advent of quantum computers? From what I can see, the niche is relatively small and not well paid, with most jobs somehow tied to the public sector. Not sure how Julia factors into all of this. I don't think the programming language makes that big of a difference, ultimately. Very interesting field at the intersection of all my skills, but I'm hesitant to get into it.


Scientific computing has always been around, but as you allude, a bit in the background. It used to be MATLAB/Mathematica on your desktop, or a supercomputer that very few could get access to and program. With cloud computing, GPUs, the ability to get terabytes of RAM on a single compute node, and all the exciting developments in CPUs despite skirting the edge of Moore's Law, a lot of opportunities are showing up.

You can now simulate science in ways like never before. Today, the median scientist can easily rent a cluster of hundreds of nodes for a few hundred dollars an hour. It is increasingly the case that you can actually simulate entire products in silico before you do anything in the lab. SciML is a large part of that story because we are able to use ML to approximate science and speed it up even more.

I like to think about it as follows - 10x faster CPUs, 100x from GPUs when possible, 100x from ML when possible, 100x through easier access to parallel computing on cloud. So your best case speedup compared to a decade ago is easily 10^7x. Because of this huge space for improvement, we can easily find 1000x improvements in so many cases.

And this is what we as software engineers can do to change the world - by simulating science, building new batteries, designing new drugs, solving power infrastructure, getting climate right and its impact on our cities, food production, and so on and so forth.

Bret Victor captures this really well in his essay: http://worrydream.com/ClimateChange/ and at Julia Computing, we are doing a lot of what it outlines, and really grateful that ARPA-e and DARPA are funding all this hard science and improvements to Julia and its ecosystem.


> I don't think the programming language makes that big of a difference, ultimately

I mean, it should be clear Julia is a much higher-performance method of crunching data than Python, which is unquestionably the king of the data science pile, and therefore it factors into performance & cost.


You can also look at it in terms of man hours with respect to how much time it takes to implement and how much time it takes for others to read it. Coming from a computational physics background, one of the main reasons I love programming in Julia is because it allows me to write code with syntax very close to the underlying mathematics.
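
As a toy illustration of what "close to the math" means (a deliberately naive explicit Euler step, nothing to do with any particular package):

    using LinearAlgebra

    # Explicit Euler for u' = A*u, written essentially as on paper.
    function euler(A, u0, Δt, nsteps)
        u = copy(u0)
        for _ in 1:nsteps
            u += Δt * (A * u)        # uₙ₊₁ = uₙ + Δt ⋅ A ⋅ uₙ
        end
        return u
    end

    A = [0.0 1.0; -1.0 0.0]          # harmonic oscillator, x'' = -x in first-order form
    u = euler(A, [1.0, 0.0], 1e-3, 10_000)
    @show norm(u)                    # ≈ 1, up to the Euler discretization error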


Scientific computing is not done in Python. It is usually Fortran, C/C++, or MATLAB (which I think uses Fortran libraries behind the scenes). Python is only used for the data manipulation part, not for the core simulation.


There are hundreds of millions of dollars of funding available at the DOE for quantum computing research, but that is spread over all aspects of quantum computing, so it's not really enough to make significant headway anytime soon. Maybe at the $5-10B mark we could see rapid progress and start to think about scientific computing in a production context.


> ... getting some revival ...

Scientific computing is chugging along about the same as it always has, quantum computers aren't really that relevant yet.

> I don't think the programming language makes that big of a difference, ultimately.

I can see how you might think this, but it's really ahistorical.

Scientific computing has always been a niche because of the range of skills needed. To have any real success at it as a team, you needed to be good enough at numerical analysis to understand the implementation, a good enough programmer to write something like production code (e.g. not your typical lab code), and good enough at the science to do the right project.

In the old days you were basically looking for one person who could do all of this, and in Fortran 77. You can carve off the last requirement if you only work on tools for other people, but that still leaves you with two domains.

Fortran 77 basically limited the scope of project that was reasonable. Things like MATLAB essentially started as wrappers around good libraries (in F77) so that people could get some work done without spending all their time fighting that complexity. This had a massive impact on productivity globally.

The introduction of things like C++ allowed more complex programs to be built, for good or ill (and also led to improvements in Fortran), but a lot of the same problems remain in terms of managing the complexity.

Later, people added enough numerical libraries to Python to get real work done, and it started to eclipse MATLAB at least in some specific domains (mainly because it's free and open).

Neither MATLAB nor Python is a particularly good language for scientific programming, but they are accessible: a gazillion grad students shoot themselves in the foot less in Python than they would in Fortran or C++, and they iterate much faster.

In some ways systems like this have impact because they have reduced the necessary skill level across domains. There is always going to be room at the margins for a polymath, but a lot of people who aren't one can get things done much more easily now than a few decades ago. Now you may argue that nobody "does" scientific programming in Python, but that's a bit of a semantic flip; the core algorithms are all in C or something, but depending on the domain you may mostly be using Python wrappers to access them.

Julia is an attempt (not the first one) to define a language that is both approachable and interactive (important) but also well designed for numerics etc. It's a very interesting project for that reason.

I've obviously skipped a lot of important stuff, but the impact of languages, and particularly their accessibility, has been really significant, especially once we get past scientific programming for its own sake and into real applications.


Just curious, what does your current work involve?


I'm a math grad student studying numerics and scientific computing actually. But I do not see a future in it as it does not seem to pay well and the relevant jobs are scarce. So it seems risky to get into it as a career when I can pivot to a more profitable SWE role or do something with ML.


You could certainly choose a few paths that are likely to pay a higher salary than most of the scientific computing jobs you'd likely get. On the other hand, the pay for some of those scientific computing jobs is reasonable, and often the work is more interesting for someone with your background. It's worth bearing in mind that unless you are extremely unusual, you'll need a few years of training in industry as well before you are really running on all cylinders.

You'll really have to think about your priorities, but it sounds like you have a bit of time to do that.

Probably the worst case is "support programmer for a research lab", some people do well there if they love the lab and the work but it tends to combine poor pay with extremely limited options for professional growth.


1) If you are a US-born citizen and qualify for a security clearance, jobs aren't as scarce as you think.

2) You are right the pay is worse in the public / government side. I will say - you might not believe me now, but it really is true that once you make enough money to pay all your bills, suddenly more money is less motivating than the ability to work on interesting problems.

What I just said above supposes that most lab work is interesting, which is not always the case, but I still think it's better than the average SWE's workload at your average tech company, just based on my personal experience having done both at different times in my career.


How are Julia Computing and Julia the programming language related? Is it similar to R and RStudio in a way?



Thank you!


I love the goal, and the idea to use Julia. I hope the emphasis on machine learning and AI is for the purposes of the press release, and it's not forced into the core of the project. If it turns out to be a good fit, great. If not, skip it.


The funding is particularly for AI exploration, but we are building out a strong foundation of modern analog design tooling in Julia as well. If the AI works like we hope, it'll be a nice bonus for situations where it is applicable, but I think there is plenty of opportunity to provide value even without it.


I'm guessing that as long as you're not trying to plot the results, Julia is fast?


So the machines are gonna start building the machines?


That's the plan. Don't worry, what could go wrong?


With current EDA tools they already are :-)


Anyone know how much $ ?

This might give a nice boost to adoption of Julia. And hopefully this spending trickles down to further improve the language itself.


Whatever happened to Samsan Tech, they had a really impressive year and then just disappeared off the face of the earth.


So, at 1000x the current fastest speed, how close does simulation get to the speed of real circuits?


Depends on the time scales of the circuits (as long as your baseline simulator isn't dumb) ;). For example, consider LRC circuits that have an analytic solution - you can basically simulate them instantly. The primary goal here is to bring analog simulation to similar time scales as digital simulation on realistic problems.
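
For instance, a source-free series RLC has a textbook closed-form answer, so "simulating" it is just evaluating a formula (the component values below are made up):

    # Underdamped series RLC, initial capacitor voltage V0, zero initial current.
    R, L, C = 10.0, 1e-3, 1e-6
    V0 = 1.0

    α  = R / (2L)                        # damping rate
    ω0 = 1 / sqrt(L * C)                 # undamped natural frequency
    ωd = sqrt(ω0^2 - α^2)                # damped frequency (requires α < ω0)

    v(t) = V0 * exp(-α * t) * (cos(ωd * t) + (α / ωd) * sin(ωd * t))

    ts = range(0, 5e-4; length = 1000)   # the "simulation" is just evaluating v
    vs = v.(ts)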


Why not Haskell? There are a bunch of projects on that front, too.


Slow.


Why isn't Rust top at modeling and simulation yet?


What advantage does Rust have over Julia? I can think of a lot of disadvantages, but not many advantages.


This is about Julia Computing, the company, not Julia the language.


I mean, what is the Julia ecosystem focusing on that Rust (or similar languages) would not?


Scientific and technical computing.

---

Not saying you can't do this stuff in Rust. I mean, it's Turing complete.

But contrast the description of Rust: "A language empowering everyone to build reliable and efficient software."

vs. Julia's: "Julia is a high-level, high-performance dynamic language for technical computing."

Julia is for doing "technical computing": things like simulations, problems where you need to apply a bunch of math. It's easy to write math in Julia. The syntax is designed for it (look at code for manipulating matrices, for example). The semantics are designed for it too: multiple dispatch is exactly what you need for efficient linear algebra, specializing on each combination of matrix types.
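
As a rough illustration of that last point, the same solve call picks a different specialized method depending on the matrix type it's handed (standard LinearAlgebra wrapper types; sizes are arbitrary):

    using LinearAlgebra

    A     = rand(1000, 1000)
    Asym  = Symmetric(A + A')        # wrapper type carrying structure information
    Adiag = Diagonal(rand(1000))
    b     = rand(1000)

    # The same `\` dispatches to a different factorization for each type:
    x1 = A \ b        # generic dense solve (pivoted LU)
    x2 = Asym \ b     # symmetry-aware solve
    x3 = Adiag \ b    # O(n) elementwise division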


Thanks for clearing that up. I'm not familiar with Julia's syntax, but I don't doubt its advantages over Python. I see why I got downvoted; I think my question was more whether Rust (as a "metal" language) could really help make simulation tech like Julia better than the standards. I don't know.


If by "metal language" you mean a language that produces machine assembly, then Julia is also a metal language. Both Rust and Julia use the LLVM compiler infrastructure.


Maybe they are using Rust, then...


We're not. I do think Rust is a good language though and you may take my word that I have very strong opinions on language design :).


Julia is great, thank you for your work <3


I'm shocked :-)


Julia Computing exists to commercialize the Julia language which exists to solve precisely this kind of problem without resorting to a lower-level language. So, probably not.


I was joking!


Well, you never know with the fringe elements of the RESF. :)



