What really scares me is that the use of the preprocessor is still allowed inside modules. This might have been a chance to define a clean break, at least with #include and its shortcomings. A syntactically and semantically saner replacement for #define and #ifdef could have laid the groundwork for much-improved tool support. But if the preprocessor is dragged into modules wholesale (sans interaction between modules), the only gain is in language complexity.
I generally dislike the need to maintain separate header and implementation files. Maintaining both is time-consuming, and putting everything in headers is no panacea either. Now modules seem to add yet another type of interface definition to the mix that would need to be maintained after a project adopts modules.
How would you write different code for different architectures, or based on compilation flags, without preprocessor directives? Rust has attributes for specifying which version to compile, but C++ currently doesn't. However, I find #if __AVX512BW__ / #elif __AVX2__ / #elif __SSE2__ / #else / #endif to be easy and flexible, allowing only a subset of the code to vary by architecture. I also find that macros allow me to write much more concise, maintainable code.
It’s archaic and low level, but it’s also powerful and expressive. Replacing the CPP would probably just require a new language.
if constexpr only works for procedural code, not data members (e.g., use __m128i v[4] for SSE2, __m256 v[2] for AVX2, etc.). It could be templated, as long as there were a compile-time method besides the CPP to get architectural information, so that would be a (much more verbose) way forward.
That being said, if constexpr is great; iterating over either sparse or dense arrays in Blaze without splitting the code into two functions was mind-blowing when I discovered how easily I could do it.
That is why static if and version in D work very differently. Andrei Alexandrescu criticized if constexpr in C++ for being a poor copy of static if that misses the point, but that criticism was dismissed. Still, if constexpr is powerful enough to handle almost all cases of conditional compilation.
Yes, thank you. Not to mention that the parent comment neglects the fact that splitting out files just means you now have to worry about conditional inclusion or tighter coupling with your build system.
That approach has successfully turned a pile of spaghetti C preprocessor code into a manageable, secure server application deployed across AIX, HP-UX, Solaris, Linux and Windows NT/2000.
Yes, it was tightly coupled with Make, which took care of selecting the proper set of files to compile and link based on the platform.
The anti-module crowd seems to want to have it all, which to me is the same as the anti-exceptions and anti-RTTI crowd, and in that case modules are indeed dead on arrival.
> That approach has successfully turned a pile of spaghetti C preprocessor code into a manageable, secure server application deployed across AIX, HP-UX, Solaris, Linux and Windows NT/2000.
I have also written and deployed what you're denigrating as "preprocessor spaghetti code" across those exact platforms, with a lot of success.
What's your story for migrating the billions of lines of code that currently use the preprocessor over to modules?
One of the design goals of the module proposals is that there is a migration path from pure include to pure modules. The transition must not require a flag day.
In particular, a program must be able to handle a mix of modularized libraries and old-school libraries for the rest of eternity (it is not likely that C is going to move to modules anytime soon).
That would mean that it's impossible to mix old and new C++ codebases, which would make it very, very hard to port larger projects. It may even make it impossible if one uses a header-only 3rd-party library.
It would certainly restrict the ways in which code could be mixed or updated. But I do not think it would be as hard as you make it out to be. Let's say you can have either modules without includes, or more traditional translation units that use a preprocessor and can also import modules. Then you could port the code over one module at a time, couldn't you?
Not really. You can't wrap a translation unit that uses a 3rd party library into a module. That means every other translation unit that uses this unit also can't be a module and so forth.