The default and oldest auth method for NFS, "sec=sys", was designed with the assumption that servers and clients are both trusted (sysadminned by the same people) and have the same set of user accounts. Servers enforce that client connections only come from privileged ports, and they trust whatever UID the client says it's using. This works in concert with the traditional UNIX restriction that only root can bind to ports under 1024, which includes initiating connections from a client-side port under 1024.
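As a sketch of where that low-port check actually lives on a typical Linux setup (hostnames and paths here are invented for illustration):

```shell
# Server side: the "secure" export option (the default in exportfs)
# rejects NFS requests that don't originate from a port below 1024.
#
#   /etc/exports:
#   /srv/data  *(rw,secure)

# Client side: "resvport" (also the default) binds the NFS connection
# to a reserved port, which is why mounting requires root.
mount -t nfs -o resvport nfs-server:/srv/data /mnt/data
```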
In this model, giving users root would let them su to arbitrary UIDs on the client and impersonate other users to the server. (Alternatively, it would let them run their own NFS client on a low port and do whatever they want.)
This does lend itself to a very simple and efficient design, since all you're doing is transmitting a single integer over the wire to identify yourself, and the whole connection is in plaintext, authenticated only by the source port. For the HPC / cluster computing use cases where NFS is popular, the efficiency and scalability of that scheme is important. There are better authentication methods (Kerberos, notably, which also adds optional encryption), and other ways to design your NFS architecture, but they're much more operationally complicated and commercial NAS devices tend to work best with the sec=sys approach. Also, public cloud NFS-as-a-service options tend to only support sec=sys (https://cloud.google.com/filestore/docs/access-control, https://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-p..., https://docs.microsoft.com/en-us/azure/storage/files/storage..., etc.).
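For comparison, switching security flavors is just a mount option; a hedged sketch, with the server name and export path made up:

```shell
# sec=sys: identity is whatever UID/GID the client asserts over a
# plaintext connection, gated only by the privileged source port.
mount -t nfs -o vers=4.2,sec=sys nas:/export/home /mnt/home

# sec=krb5p: per-user Kerberos authentication plus integrity checking
# and encryption -- at the operational cost of running a KDC, keeping
# keytabs on every client, and keeping clocks in sync.
mount -t nfs -o vers=4.2,sec=krb5p nas:/export/home /mnt/home
```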
We are trying to figure out how to solve this, as I mentioned, but when dealing with an organization that has decades of workflows and code assuming a traditional UNIX environment with shared multi-user machines, there's no instant solution to it. (In many cases our solution is going to be to stop using NFS and use something more object-storage shaped, which will also help us move to idiomatic public-cloud designs.)
Has anyone told them yet that just plugging your own computer into the network lets you get root anyway?
And yes you are right, there is no solution other than to stop using NFS. Maybe Samba with Kerberos domain-joined hosts, but still probably not a great solution.
Yes. That's why the NFS servers have IP ACLs, why the office networks have 802.1x to get onto the corporate VLAN, why access to the datacenters is physically restricted, and why getting to our cloud accounts requires authenticating to a bastion.
Setting up an IP ACL to known NFS clients is pretty straightforward and doesn't impact the performance characteristics of sec=sys.
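As a sketch, that ACL can be as simple as listing the client subnet in /etc/exports (addresses and paths invented):

```shell
# /etc/exports -- only the cluster subnet may mount; anything else is
# refused before the asserted AUTH_SYS UID is even looked at.
/srv/scratch  10.20.0.0/16(rw,sec=sys,root_squash,secure)
```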
(And you should be doing the remainder of those anyway - are you really telling me that in a non-NFS environment, you wouldn't mind an interviewee or guest plugging in their laptop and seeing what they can get to? There are no unauthenticated resources at all on your network?)
There are no unauthenticated resources on my network because all the resources are in the cloud. The only thing that's local is the network gear. There's still some security paranoia where security requires we make internal services not publicly routable, but I'm pushing for a zero-trust model (mainly because they have us using ZScaler, which is a piece of garbage).