Problem is, not only did we have decades of C code that unnecessarily assumed 8/16/32-bit types, but this all-the-world-is-a-VAX view is now baked into newer languages as well.
C is good for portability to this kind of machine. You can have a 36-bit int (for instance), with CHAR_BIT defined as 9, and so on.
With a little extra reasoning, you can make the code fit different machine sizes so that you use all the available bits.
In my experience, highly portable C is cleaner and easier to understand and maintain than C which riddles abstract logic with dependencies on the specific parameters of the abstract machine.
Sometimes the latter is a win, but not if that is your default modus operandi.
Another issue is that machine-specific code which assumes compiler and machine characteristics often has outright undefined behavior. It fails to distinguish what the language actually guarantees from assumptions like "this type is guaranteed to be 32 bits", "overflow is guaranteed to wrap around to a negative value", or "if we shift this value 32 bits or more, we get zero, so we are okay".
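To make the shift case concrete: in C, shifting by an amount greater than or equal to the type's width is undefined behavior, not zero. A hedged sketch of a portable helper (the name `shift_left` is my own) makes the intent explicit instead of relying on what one machine happens to do:

```c
#include <limits.h>

/* Shifting an N-bit value by N or more bits is undefined behavior in C;
 * it does NOT reliably produce zero. Guard the shift count explicitly. */
static unsigned long shift_left(unsigned long v, unsigned n)
{
    /* Width in bits, computed rather than assumed. */
    unsigned width = sizeof v * CHAR_BIT;
    return n >= width ? 0 : v << n;
}
```

The guarded version behaves identically everywhere, whereas the raw `v << n` with an over-wide `n` may yield `v` unchanged, zero, or anything else depending on the hardware shifter.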
There are programmers who are not stupid like this, but those are the ones who will tend to reach for portable coding.
Yep, I remember when I tried coding for some ATmega, I wondered "how big are int and uint?" and wanted the type names to always include the size, like uint8. But there is also the char type, which would have to become char8, which looks even crazier.
You'd define architecture-specific typedefs to deal with these cases in a portable way. The C standard already has types like int_fast8_t that are similar in principle.
See, why would you need an "architecture-specific typedef" in order to represent the day of the month, or the number of arguments to main, "in a portable way"?
int does it in a portable way already.
int is architecture specific too, and it's been "muddled" plenty due to backward compatibility concerns. Using typedefs throughout would be a cleaner choice if we were starting from scratch.