Hacker News | deivid's comments

Would it be cheating to use the kernel's nolibc?


See another comment.

Using nolibc is fine when you compile it together with the kernel.

The parent article is about a C header that you can use to compile your program independently of the source files of the Linux kernel.

Even having the Linux kernel sources on your computer is not enough to compile a program that uses the syscalls directly, as the raw sources do not contain any suitable header. You must first configure and build the kernel, because header files are selected or auto-generated accordingly. That is enough for nolibc, which lives in the kernel source tree, but it would still be difficult to identify which header files could be used to compile an external program.

Moreover, including Linux kernel header files in your program is an extremely bad idea, because they are not stable. A minor version increase of the Linux kernel frequently breaks the "#include" directives of external programs (such as out-of-tree device drivers), because items are moved between headers, or some headers disappear and others appear.


That makes sense, I guess this was not a problem for the times I needed nolibc.

I do agree that trying to extract data/logic from Linux is a pain -- I've tried a few times to extract some of the eBPF verifier handling, but I end up pulling in most of the kernel along with it.


This is really well written, thanks for sharing.

I didn't understand the point of using Unikraft though, if you can boot Linux in much less than 150ms, with a far less exotic environment.


Hey! Co-founder of Unikraft here.

Unikraft aims to offer a Linux-compatible environment (so it feels familiar) with the ability to strip out unnecessary internal components in order to improve both boot-time/runtime performance and operational security.

Why would you need a memory allocator and garbage collector if you serve static content? Why would you need a scheduler if your app is run-to-completion?

Linux gives you the safety net of generality, and if you want to do anything remotely performant, you end up bypassing or hacking around it altogether.

In the article, Unikraft cold-boots in 150ms in an emulated environment (TCG). If it were running natively with hardware virtualization extensions, it could be even shorter, and without the need for snapshots, which means you don't need to store those separately either.


Unikraft is cool, I still have it in my 'todo' list to play around with sometime.

Linking the app with the 'kernel' seems pretty nice, would be cool to see what that looks like for a virtio-only environment.

Just wanted to point out that the 150ms is not snapshot-based; you can get <10ms cold boots for small VMs (128MB RAM; 2GB RAM moves you to the ~15ms range).


Security; it isn't only about memory footprint.


Which architecture can boot it in 150ms?!


Boot is a misleading term, but you can resume snapshotted VMs in single digit ms

(and without unikernels, though they certainly help)


You can boot a VM without snapshots in <10ms, you just need a minimal kernel.


I think "in a VM" was elided. It's easy to tune qemu + Linux to boot up a VM in 150ms (or much less in fact).

Real hardware is unfortunately limited by the time it takes to initialize firmware, some of which could be solved with open-source firmware, and some of which (e.g. RAM training) is not easily fixable.
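For context, a tuned invocation of that sort looks roughly like this. A sketch, not a tested recipe: it assumes you've already built a minimal bzImage from a trimmed .config, and that KVM is available.

```shell
# microvm is qemu's stripped-down, virtio-only machine type;
# with KVM, most of the remaining boot time is the guest kernel itself.
qemu-system-x86_64 \
    -M microvm \
    -enable-kvm -cpu host \
    -m 128M -smp 1 \
    -kernel bzImage \
    -append "console=ttyS0 reboot=t panic=-1" \
    -nographic
```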


Stripping away unused drivers (.config) and other "bloat" can get you surprisingly far.


And most importantly, as TFA mentions several times: stripping unused drivers (and even the ability to load drivers/modules) and bloat brings very real security benefits.

I know you were responding about the boot times but that's just the icing on the cake.


Mostly depends on how bloat correlates with attack surface, but you're right.

But 150ms? That's boot time for DOS or MINIX maybe (tiny kernels). 1s, sure.


FreeBSD did some work to boot in 25ms.

Source: https://www.theregister.com/2023/08/29/freebsd_boots_in_25ms...


You can do <10ms. I was working on getting it under 1ms, but my best was 3.5ms.



MicroVMs.


Because it will be slightly faster and you will use fewer resources? For a lot of use cases that probably doesn't matter, but for some it does.


Thanks for sharing my site!

I've been thinking about building a platform like this for a while, and it was quite fun to build.

Let me know if you have questions or ideas for new exercises.


This is really cool.

Are you planning to add "lessons" related to deployment? For example, using libbcc vs CO-RE?


I wanted to add all kind of exercises, but I'm not sure what's a good way of presenting a deployment exercise.

On libbcc specifically, I'm not sure it's worth it; CO-RE/BTF is where things are heading, and any reasonably new kernel (<5 years old) supports it.


Thanks for making this, looking forward to trying it out!


Downsizing from a 27" 5K to a 24" 4K, I could not find anything besides a new company called JAPANNEXT (they are French).


Yeah, I've tried their 24" 4K monitor; it was okay but not great, so I returned it. 24" is the max size I can tolerate with my short-sightedness while avoiding glasses for the monitor.


It's an interesting idea. I'm butchering TCC (Tiny C Compiler) for a side project/experiment, and using arenas sped it up 2x. This of course requires the memory limit to be specified in advance, but for my situation that's fine.


IMO eBPF is best viewed as a mechanism that allows you to load "arbitrary" code in specific kernel paths, while guaranteeing that the kernel won't hang or crash.

That's it. Though I said "arbitrary" because the program has to pass the verifier, which limits valid programs to those for which it can make those stability guarantees.


Building the Postgres server as a library. Some early success, but initdb and in-process restarts are much harder than expected.


Do you mind elaborating more about the use case? Postgres itself is heavily engineered around OS process boundaries for both correctness and resiliency.


I'm not sure it'll be a serious project, but the main goal is to use it in CI or dev, where setting up postgres is kind of a pain.

I got it to work already by setting up the global context in single-user mode (like postgres --single) and exposing bindings for SPI operations.

Yesterday night I got extensions working, but as this project builds as a static archive, the extensions also have to be part of the build. Both plpgsql and pgvector worked fine.

The bigger challenge is dealing with global state -- comparing the pre-start and post-shutdown state of the process memory, about 200 globals change state. I've been slowly making progress on getting restarts working.


I've been writing on my blog for 9 years. Still feel the same blockers you do on every new post.

For me, the main motivation is that I enjoy reading other people's blogs, and hopefully my posts give someone else a similar enjoyment.

I've made a few attempts to lower the bar (tags for "low effort", "short", and "shitpost" so far), but it feels like a crutch and hasn't worked long-term for me.


These screens look amazing, but $1500-2500 is a bit much. Any other screens with this monochrome CRT style?


I've had this project idea on my list for a while; I even implemented the software side (an option ROM for the PCI card), but the hardware side is quite difficult to get started on. My plan was to get an FPGA with a hard PCI core to do this, but I don't even know what to buy.

I got a cheap Tang Mega 238k, but I never managed to even get the PCI examples working (and couldn't even adjust the BAR settings).

