What the Hec++k!

C++ has made a point of pride of thoroughly preserving backward compatibility, so that even the most antique and obsolete C++ source code compiles with at most a few warnings (if any). This may be good for someone, but it makes for some funny constructs. E.g.:

[[nodiscard]] bool isEmpty() const noexcept override;

[[nodiscard]] means that the value returned by the function is not meant to be ignored (well, you only get a warning, so it is still advice, not an order). This semantic (i.e. a function whose return value must not be ignored) makes sense: many errors, in the history of C and C++ programming, could have been avoided just by checking function return values. But by the time the committee decided to face this problem it was too late, and the option it picked was to add this [[nodiscard]] attribute.

override means that this function overrides a virtual function in the base class. Nothing wrong with that, but… why does override have to be placed after the argument list, where its visibility is minimal? Because the committee didn't feel right adding new keywords to the language (actually two keywords – override and final), fearing to break compatibility. So they decided that after the function argument list the language accepts an identifier (yes, any symbol-like identifier), which then gets interpreted during compilation to report an error in case it is neither override nor final.

noexcept marks the function as not throwing exceptions. Again, a more reasonable take would have been to have the programmer mark functions that can throw exceptions, since those are the problematic ones that have to be handled with care; not throwing should just be the norm. Yes, I know noexcept has some trickery to add conditions under which the function can or cannot throw, but that could have been done the other way around as well (i.e. when marking throwing functions).

Not to mention the use of the explicit keyword for constructors –

explicit Constructor( T const& ) noexcept;

Just to recap – as the language was originally designed – if a constructor can be called with a single argument (either because it has exactly one parameter, or because every parameter after the first has a default value), that constructor is used by the compiler to implicitly convert from the type of the argument to the type of the class.
That is, if an object of class T is required in a given context (e.g. as a function argument), then you may pass a value of type U, provided that class T has a constructor callable with a single argument of type U.

This is sometimes a nice tool, but unintended use can cause trouble, possibly in subtle ways. The explicit keyword (in the language since C++98) lets the programmer choose between implicit and explicit construction. Since the problems are with implicit conversion, it would have made much more sense if the programmer had to express her intent to pick the more dangerous option, not the safer one.

(BTW, I just discovered that C++20 adds another use case for the explicit keyword: explicit(condition), which makes a constructor explicit only when a compile-time boolean condition holds… This really baffles me)

And talking about implicits, next year should see the next Scala language version, which dares to break quite a bit of backward compatibility. I am so curious to see whether this will turn out to be a good or a bad idea. Python did this a few years ago and the move caused a lot of trouble and grumbling. Scala's leadership assures us that this will be nicely handled by automatic conversion and binary compatibility… we'll see.
