Hacker News | trailbits's comments

In WA state it's an extra $56 every time you renew for a Real ID.

The LG only has a 60Hz refresh rate; this Dell has 120Hz, so it seems to actually take advantage of the extra Thunderbolt bandwidth.


The default context window in Ollama or LM Studio is small, but you can easily quadruple the default size while running gpt-oss-20b on a 24GB Mac.
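One way to raise it in Ollama is a Modelfile that sets the `num_ctx` parameter (the model tag and context size here are illustrative, not a recommendation):

```
FROM gpt-oss:20b
PARAMETER num_ctx 16384
```

Then build and run the variant with `ollama create gpt-oss-20b-16k -f Modelfile` and `ollama run gpt-oss-20b-16k`. LM Studio exposes the same knob as a per-model context-length setting in its UI.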


As someone who purchased their first M-series Mac this year (M4 Pro), I've been thrilled to discover how well it does with local genAI tasks producing text, code, and images. For example, openai/gpt-oss-20b runs locally quite well with 24GB of memory. If I had known beforehand how performant the Mac would be for these kinds of tasks, I probably would have purchased more RAM in order to load larger models. Performance for genAI is a function of GPU generation, number of GPU cores, and memory bandwidth. I think your biggest gains come from going from a base chip to a Pro/Max/Ultra version, with more GPU cores and higher memory bandwidth.


I just create new orgs to group related repositories together. Works well for a small team.


That's pretty much how all laser particle counters work... except the good ones use a fan and a chamber. Guess we'll have to wait and see how this compares to the reference sensors.


Yep, I suspect this is all marketing fluff and no substance. I see a lot of superlatives but no substantial technical breakthrough here.


I think there is at least some plausible interpretation of this that points to more than marketing fluff.

You want to count particles per volume of air, so conventional sensors use a fan to have a constant volumetric flow and then count particles per second to infer particles per volume.

The way I interpret the above marketing language is that they use the optical sensor not only to count particles but also to measure the particle movement and infer airflow. So as long as there is some natural movement in the air, they can measure both particle count and volumetric flow, and thus infer particles per volume.
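The arithmetic behind both approaches can be sketched in a few lines; all the numbers below are made up for illustration, not from any Bosch datasheet:

```python
# Conventional fan-based counter: the fan fixes the volumetric flow,
# so concentration = count rate / flow rate.
count_rate = 120.0   # particles counted per second (hypothetical)
flow_rate = 0.1      # litres of air pulled through per second (hypothetical)

concentration = count_rate / flow_rate  # particles per litre

# Fanless variant, as interpreted above: infer the flow optically from
# the measured particle speed and a known sensing cross-section.
particle_speed = 0.02    # metres per second, from tracked particle motion (hypothetical)
cross_section = 5e-3     # sensing area in square metres (hypothetical)
inferred_flow = particle_speed * cross_section * 1000  # litres per second

concentration_fanless = count_rate / inferred_flow  # particles per litre
```

The catch, of course, is that with no fan the flow term comes entirely from whatever natural air movement happens to exist, so the denominator is noisy in exactly the way a fan is designed to prevent.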


This is Bosch and not some random startup. It's surely a substantial technical breakthrough in integration and miniaturization, and, coming from Bosch, certainly enterprise- and clinical-grade ready.


I'm pretty sure it is just marketing.


I also wonder how the sensor can stay clean without a fan. I suppose mounting it upside down would help. Other fanless designs require periodic cleaning.


The integration picture shows a transparent "optical cover" surface. I guess it's not meant to be used in highly contaminated areas.


A website in the US doesn't deliver anything to the UK; it hands off some packets to a router in the US. Why is the website responsible for what all the interconnecting routers do? If a person from the UK were to visit an adult bookstore in the US, the bookstore owner wouldn't be at fault if the customer decided to move certain material across national boundaries.


It does remove any incentive for a thief to steal a MacBook. They can't strip it for parts and sell those parts if they won't work.


Even non-physical numbers are problematic as 'invalid' signals. I had a customer use -999 as a placeholder for 'invalid' data. Years later, somebody built a higher-level data product that averaged and combined that data with other products, without knowing to first remove those 'invalid' values. The resulting values were all within physical limits, but very, very wrong. The best solution is to use IEEE NaN (https://en.wikipedia.org/wiki/NaN) so that your code blows up if you don't explicitly check for it.
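A toy Python illustration of that failure mode; the -999 convention is from the anecdote above, and the readings are invented:

```python
import math

readings = [20.1, 19.8, -999.0, 20.3]  # -999.0 is the 'invalid' placeholder

# Downstream code that doesn't know the convention silently folds it in,
# producing a number that looks like data but is garbage.
naive_mean = sum(readings) / len(readings)

# With NaN, the error propagates instead of hiding: any arithmetic
# involving NaN yields NaN, so the bad value can't masquerade as real.
nan_readings = [20.1, 19.8, float("nan"), 20.3]
nan_mean = sum(nan_readings) / len(nan_readings)
assert math.isnan(nan_mean)
```

The NaN result still has to be checked for, but at least it can never be mistaken for a valid measurement.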


NaN is a sentinel value, just as much as 2,147,483,647 is.

The only difference is that NaN is implemented in hardware. However, taking advantage of that requires using the hardware arithmetic that recognizes NaN, which restricts you to floating-point numbers and all the problems that introduces.

If you have good language support and can afford the overhead, you want to replicate that behavior in the type system as some sort of tagged union:

    data SentinelInt32 = NaN | Value Int32
Or, more likely, use the equivalent of Optional<T> from your language's standard library.

Of course, this means boxing all of your numbers. You could also do something like:

  type SentinelInt32 = Int32
Then provide alternative arithmetic implementations that check for your sentinel value(s) and propagate them appropriately. This avoids the memory overhead, but still adds all the conditional overhead.
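A minimal sketch of that alternative-arithmetic idea in Python; the choice of sentinel (the Int32 maximum mentioned above) and the function name are mine, not from the comment:

```python
INT32_MAX = 2_147_483_647  # reserve the maximum value as the sentinel

def sentinel_add(a: int, b: int) -> int:
    # Propagate the sentinel rather than computing a garbage sum,
    # mimicking NaN's contagious behaviour for integers.
    if a == INT32_MAX or b == INT32_MAX:
        return INT32_MAX
    return a + b
```

Every arithmetic operation needs a wrapper like this, which is exactly the conditional overhead the comment is pointing at; NaN gets the same check for free in the FPU.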


999 or 9999 etc. are extremely common sentinels in traditional statistics, especially because there is no known good sentinel value. In many cases I wished they had used the maximum value as a sentinel, e.g. take 255 for a short as invalid and make only -244 to +244 normal numbers.


As someone else who regularly uses 8.93 bit words for computations, I understand completely.

