Indeed, it seems iVentoy uses NBD to present block storage devices (e.g., ISOs) to the PXE client. I just installed iVentoy 1.0.08 in my lab on a Debian 12 VM (via VMware 8). From there, I created a new VM and booted Linux Mint 21.1 via UEFI. The Linux Mint install went well with no noticeable issues.
Overall, first impressions are looking good. Found a few bugs that need to be worked out, including:
* UEFI boot requires iVentoy to have 2 vCPUs (see forums)
* Restarting the iVentoy script resets the IP Boot config (specifically the Subnet Mask and Gateway)
From their release page, iVentoy supports a bunch of Linux and Windows install images, and you can even inject custom auto-install scripts into the ISO for unattended installs. Very cool. Finally, the iVentoy discussion forum seems very active, and the developer seems engaged.
I will probably support the developer ($49) because he was able to leverage NBD to overcome some iPXE issues I struggled with for a long time. I know how much time/effort goes into making iPXE booting look seamless. Kudos to him/her/them.
It seems to be hot-patching the boot image, rewriting the kernel command line so the installer fetches the ISO from the iVentoy server.
I booted ubuntu-22.04.2-live-server-amd64.iso, and the kernel command line of the resulting installation environment has a "url" argument that points at the iVentoy server's URL for the ISO image.
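For illustration, here's roughly what that looks like and how you could spot it from inside the booted installer. The server address and path are made up; the real values come from your iVentoy instance, and on a live system you'd read `/proc/cmdline` instead of a variable:

```shell
# Hypothetical patched kernel command line (addresses/paths are invented):
cmdline='BOOT_IMAGE=/casper/vmlinuz ip=dhcp url=http://192.168.1.10/ubuntu-22.04.2-live-server-amd64.iso ---'

# Pull out the url= argument, same as you could do against /proc/cmdline:
for arg in $cmdline; do
  case "$arg" in
    url=*) echo "${arg#url=}" ;;
  esac
done
```

The Ubuntu live-server initrd understands `url=` natively (it downloads the ISO over HTTP and loop-mounts it), which is presumably why this trick works without any client-side software.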
I was not able to directly boot a homegrown Debian live ISO image, so I think they just support a broad set of specific ISOs that are known to work with iVentoy. It's not a generic "boot any ISO" solution.
I'm not totally familiar with this in the context of low-level boot.
I've used iSCSI (the SCSI protocol over TCP/IP) and AoE (likewise, but ATA over raw Ethernet), and FreeBSD's GEOM virtual storage stack has a network layer as well.
A "file server" takes a local filesystem (like ext4) and shares it with network clients. Client and server need to agree on the sharing protocol (SMB, for example), but the client need not understand the server's local, on-disk data layout. My Mac doesn't run ext4.
A network block device just serves up storage blocks, not files. It's sort of the inverse of the file server: here it's the *server* that need not understand the on-disk layout of the span of blocks it's sharing. The NBD client interprets the blocks it receives as a filesystem. I could install an iSCSI initiator on my Mac, share a volume from a Linux iSCSI server, and format the volume with APFS. I could even encrypt the volume, such that the server couldn't access the decrypted data on its own disk. It just serves up "raw" blocks.
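The same client-formats-it pattern works with Linux NBD. A rough sketch of the client side (hostname and export name are made up, this needs root and the `nbd` kernel module, and exact `nbd-client` flags vary a bit between versions):

```shell
# Load the NBD client driver so /dev/nbd0 exists
modprobe nbd

# Attach a remote export to a local block device
# (server hostname and export name "scratch" are hypothetical)
nbd-client nbd-server.example.com /dev/nbd0 -N scratch

# Put a filesystem on it; the server never learns what's inside the blocks
mkfs.ext4 /dev/nbd0
mount /dev/nbd0 /mnt
```

From the server's point of view, all of this is just reads and writes at byte offsets into whatever file or device backs the export.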
NBD is fairly similar to AoE or iSCSI: you can connect to an NBD export and then format it, as you suggest doing with iSCSI and APFS. One once-popular use case was building RAID-1 from a local partition and a remote partition over NBD in HA clusters.
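That RAID-1-over-NBD pattern would look roughly like this (device names and hostname are hypothetical; needs root, and these days DRBD is the more common tool for the job):

```shell
# Attach the remote partition as a local block device
nbd-client nbd-server.example.com /dev/nbd0 -N backup

# Mirror a local partition (hypothetical /dev/sda2) with the remote one
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/nbd0

# Use the mirror like any other disk
mkfs.ext4 /dev/md0
```

Every write then lands on both the local disk and the remote machine, giving you a crude network mirror out of stock kernel pieces.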
> On Linux, network block device (NBD) is a network protocol that can be used to forward a block device (typically a hard disk or partition) from one machine to a second machine.
I saw that iVentoy seems to have an NBD server, so it might just be mounting the ISO from the server via NBD.
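You can reproduce that idea by hand with stock tools, which may or may not be what iVentoy actually does internally. A sketch (paths and hostname are invented; the client side needs root):

```shell
# On the server: export the ISO read-only over NBD (default port 10809)
qemu-nbd --read-only --persistent --port 10809 ubuntu-22.04.2-live-server-amd64.iso

# On the client: attach the export and mount it like a local disc
nbd-client server.example.com 10809 /dev/nbd0
mount -o ro /dev/nbd0 /mnt
```

Since an ISO is just an ISO9660 filesystem image, serving its blocks over the network is enough for the client to mount it as if the disc were local.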