How is "your own network, your own view of the file system, your own view of the process table, your own view of the user IDs, ..." the same as "processes"?
On modern Linux distros every process is running in a cgroup and namespace by default. So these days the main difference between a "container" and a regular process is that regular processes are all jumbled together in the same root namespace, and containers are in separate namespaces.
As for which distros put processes in a namespace and cgroup by default: any systemd-based distro does, since systemd places every service in its own cgroup. That includes at least CentOS 7 and Ubuntu 15, and those two on their own would qualify for "most".
To check whether your distro does this, run `cat /proc/1/cgroup`. It shows which cgroup hierarchies process 1 belongs to; PID 1 normally sits in the root cgroup of each hierarchy.
To check your namespaces, run `ls -l /proc/1/ns/`. You'll see one symlink per namespace type (net, pid, mnt, uts, ...), each pointing at an inode number that identifies that namespace; two processes are in the same namespace exactly when those inodes match.
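For instance, both checks can be run against your own shell as well as against PID 1 (a sketch; exact output varies by distro and kernel, and reading PID 1's entries may require root):

```shell
# Show which cgroup hierarchies this shell belongs to.
cat /proc/self/cgroup

# List the namespaces this shell is in. Each entry is a symlink
# like "net:[4026531992]"; the bracketed number is the inode that
# identifies the namespace.
ls -l /proc/self/ns/

# Compare this shell's network namespace against PID 1's.
# Matching inodes mean you share the root network namespace.
readlink /proc/self/ns/net /proc/1/ns/net 2>/dev/null
```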
I'm sure you could recompile your kernel to disable this behavior, but the default reality of modern Linux is that everything is already running in a "container".
Now the question is whether people want to take advantage of that reality and isolate processes from one another, or keep running everything in a way that lets any single process impact the whole system.
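Taking advantage of it can be as small as one command. As a sketch, util-linux's `unshare` drops a process into fresh namespaces; the flags below assume a reasonably recent util-linux, and unprivileged user namespaces are disabled on some kernels:

```shell
# Start a shell in new user + mount + PID namespaces, as a normal user.
# --map-root-user maps your UID to root inside the new user namespace;
# --mount-proc remounts /proc so ps sees only the new PID namespace.
unshare --user --map-root-user --pid --fork --mount-proc \
    sh -c 'id -u; ps -e'
# If user namespaces are enabled, `id -u` reports 0 here and
# ps lists only the processes started inside this namespace.
```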
I think that Plan 9's filesystem-based namespacing (and lack of a superuser) actually have a lot to offer for container-like solutions. Any Plan 9 user can set up namespacing of the network and of resources and spawn a process within that restricted namespace.
The whole process is much simpler, I think, than that of creating a Linux container (that's the whole reason Docker exists: to simplify & abstract something which isn't really inherently complex, but is accidentally complex).
Plan 9 certainly wasn't perfect, but it had some really high-quality ideas we still haven't assimilated in mainstream platforms.