As someone who purchased their first M-series Mac this year (an M4 Pro), I've been thrilled to discover how well it handles local genAI tasks for producing text, code, and images. For example, openai/gpt-oss-20b runs locally quite well with 24GB of memory. If I had known beforehand how performant the Mac would be for these kinds of tasks, I probably would have purchased more RAM in order to load larger models. Performance for genAI is a function of the GPU, the number of GPU cores, and memory bandwidth. I think your biggest gains come from going from a base chip to a Pro/Max/Ultra version, with their greater number of GPU cores and higher memory bandwidth.
That's pretty much how all laser particle counters work... except the good ones use a fan and a chamber. Guess we'll have to wait and see how this compares to the reference sensors.
I think there is at least some plausible interpretation of this that points to more than marketing fluff.
You want to count particles per volume of air, so conventional sensors use a fan to maintain a constant volumetric flow and then count particles per second to infer particles per volume.
The way I interpret the above marketing language is that they use the optical sensor not only to count particles but also to measure the particle movement and infer airflow. So as long as there is some natural movement in the air, they can measure both particle count and volumetric flow, and thus infer particles per volume.
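If that reading is right, the arithmetic is roughly the following (a minimal sketch of my interpretation, not Bosch's published algorithm; the fixed optical cross-section and all names are assumptions for illustration):

    -- Volumetric flow through the sensing region, inferred from the
    -- particle speed the optics measure and the known cross-section
    -- of the illuminated volume.
    inferredFlow :: Double  -- mean particle speed, cm/s
                 -> Double  -- optical cross-section, cm^2
                 -> Double  -- volumetric flow, cm^3/s
    inferredFlow speed area = speed * area

    -- Particle concentration from a raw count over a time window.
    concentration :: Int     -- particles counted
                  -> Double  -- window length, s
                  -> Double  -- inferred flow, cm^3/s
                  -> Double  -- particles per cm^3
    concentration n dt flow = fromIntegral n / (dt * flow)

    main :: IO ()
    main = print (concentration 120 1.0 (inferredFlow 5.0 0.01))
    -- 120 counts in 1 s through 0.05 cm^3/s  =>  2400 particles/cm^3

The hard part, presumably, is estimating that particle speed reliably when the air barely moves.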
This is Bosch and not some random startup. It's surely a substantial technical breakthrough in integration and miniaturization, and, coming from Bosch, certainly enterprise- and clinical-grade ready.
A website in the US doesn't deliver anything to the UK, it hands off some packets to a router in the US. Why is the website responsible for what all the interconnecting routers do? If a person from the UK were to visit an adult bookstore in the US, the bookstore owner isn't at fault if the customer decides to move certain material across national boundaries.
Even non-physical numbers are problematic as a way to signal 'invalid'. I had a customer use -999 as a placeholder for 'invalid' data. Years later, somebody built a higher-level data product that averaged and combined that data with other products, without knowing to first remove those 'invalid' values. The resulting values were now all within physical limits, but very, very wrong. The best solution is to use IEEE NaN (https://en.wikipedia.org/wiki/NaN) so that your code blows up if you don't explicitly check for it.
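A toy illustration of that failure mode (the numbers are made up): fifty valid ~20 °C readings plus one -999 placeholder average out to roughly 0.02 °C, which looks physically plausible and is very wrong, while NaN poisons the result loudly:

    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    withSentinel, withNaN :: [Double]
    withSentinel = replicate 50 20.0 ++ [-999]
    withNaN      = replicate 50 20.0 ++ [0 / 0]  -- 0/0 is IEEE NaN

    main :: IO ()
    main = do
      print (mean withSentinel)  -- ~1.96e-2, looks like a real temperature
      print (mean withNaN)       -- NaN, impossible to mistake for data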
NaN is a sentinel value, just as much as 2,147,483,647 is
The only difference is that NaN is implemented in hardware. Taking advantage of that, however, requires using the hardware arithmetic that recognizes NaN, which restricts you to floating-point numbers, with all the problems that introduces.
If you have good language support and can afford the overhead, you want to replicate that behavior in the type system as some sort of tagged union:
data SentinelInt32 = NaN | Value Int32  -- Int32 from Data.Int
Or, more likely, by using the equivalent of Optional<T> that is part of your language's standard library.
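In Haskell that's just Maybe, and the propagation comes for free from the standard instances (a small sketch; addM is my name, nothing standard):

    import Control.Applicative (liftA2)
    import Data.Int (Int32)

    addM :: Maybe Int32 -> Maybe Int32 -> Maybe Int32
    addM = liftA2 (+)

    main :: IO ()
    main = do
      print (addM (Just 3) (Just 4))  -- Just 7
      print (addM (Just 3) Nothing)   -- Nothing: "invalid" propagates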
Of course, this means boxing all of your numbers. You could also do something like:
type SentinelInt32 = Int32
Then provide alternative arithmetic implementations that check for your sentinel value(s) and propagate them appropriately. This avoids the memory overhead, but still adds all the conditional overhead.
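A minimal sketch of what that looks like, using minBound as the reserved value (that choice and the names are mine, nothing standard):

    import Data.Int (Int32)

    type SentinelInt32 = Int32

    -- Reserve one value as "invalid".
    sentinel :: SentinelInt32
    sentinel = minBound

    -- Addition that checks for and propagates the sentinel, NaN-style.
    -- Every operation pays this branch: the conditional overhead above.
    addS :: SentinelInt32 -> SentinelInt32 -> SentinelInt32
    addS a b
      | a == sentinel || b == sentinel = sentinel
      | otherwise                      = a + b

    main :: IO ()
    main = do
      print (addS 3 4)         -- 7
      print (addS 3 sentinel)  -- -2147483648, i.e. still "invalid"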
999 or 9999 etc. are extremely common in traditional statistics, especially because there is no known good sentinel value.
In many cases I wished that they had used the type's maximum value as the sentinel, e.g. treating 255 as invalid for an unsigned byte and making only 0 to 254 normal numbers.
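If you're stuck consuming data coded that way, the least-bad option is usually to map the agreed codes to NaN at ingest, before anything gets averaged (a hypothetical cleanup step; 999/9999 are just the conventional codes mentioned above):

    decode :: Double -> Double
    decode x
      | x == 999 || x == 9999 = 0 / 0  -- IEEE NaN
      | otherwise             = x

    main :: IO ()
    main = print (map decode [21.5, 999, 23.0, 9999])
    -- [21.5,NaN,23.0,NaN]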