How to check for a feature-test macro

In the tests, we have the following usage:

#if TEST_STD_VER > 14
# if !defined(__cpp_lib_filesystem)
#  error "__cpp_lib_filesystem is not defined"
# elif __cpp_lib_filesystem < 201703L
#  error "__cpp_lib_filesystem has an invalid value"
# endif
#endif

I submit that that’s non-portable, because some standard libraries may not implement all the features (that’s the point of the feature-test macro), and this test should not fail.

In D55308, I am using the following form:

#if TEST_STD_VER > 17
LIBCPP_ASSERT(IS_DEFINED(__cpp_lib_char8_t)); // libc++ implements this
# if defined(__cpp_lib_char8_t)
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Basically it (unconditionally) checks that if the macro is defined, then it has a sane value.
Additionally, since we know that libc++ defines this, we check that.

An alternate formulation - w/o the IS_DEFINED trickery.

#if TEST_STD_VER > 17
# if !defined(__cpp_lib_char8_t)
   LIBCPP_STATIC_ASSERT(false, "__cpp_lib_char8_t is not defined");
# else
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Comments welcome.

– Marshall

After thinking about this overnight, I like this better than what I’ve got in D55308.

– Marshall

In the tests, we have the following usage:

#if TEST_STD_VER > 14
# if !defined(__cpp_lib_filesystem)
#  error "__cpp_lib_filesystem is not defined"
# elif __cpp_lib_filesystem < 201703L
#  error "__cpp_lib_filesystem has an invalid value"
# endif
#endif

I submit that that’s non-portable, because some standard libraries may not implement all the features (that’s the point of the feature-test macro), and this test should not fail.

Maybe this is lack of imagination on my part, but isn’t a test failure for an unimplemented standard library feature exactly the desired behavior?

In D55308, I am using the following form:

#if TEST_STD_VER > 17
LIBCPP_ASSERT(IS_DEFINED(__cpp_lib_char8_t)); // libc++ implements this
# if defined(__cpp_lib_char8_t)
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Basically it (unconditionally) checks that if the macro is defined, then it has a sane value.
Additionally, since we know that libc++ defines this, we check that.

An alternate formulation - w/o the IS_DEFINED trickery.

#if TEST_STD_VER > 17
# if !defined(__cpp_lib_char8_t)
   LIBCPP_STATIC_ASSERT(false, "__cpp_lib_char8_t is not defined");
# else
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Comments welcome.

A standard library implementation that partially supports a feature may define the feature test macro to a value lower than the standard one. Your revised approach would reject that. There are two things that I think are reasonable:

  1. Condition all your tests on the relevant feature test macros, and add

LIBCPP_STATIC_ASSERT(__cpp_lib_char8_t >= 201811L, "libc++ should implement this");

to make sure that libc++ in particular behaves as expected, or

  2. Write tests that test for conformance: the feature test macros should be correct and the functionality should be available. That way a standard library implementation that fails to implement a feature (and doesn’t advertise the feature via feature test macros) doesn’t get an “all tests pass” despite not implementing all of the standard library.

I’d lean towards option 2 myself, but maybe vendors of other stdlibs using the libc++ testsuite might have a different opinion.

In the tests, we have the following usage:

#if TEST_STD_VER > 14
# if !defined(__cpp_lib_filesystem)
#  error "__cpp_lib_filesystem is not defined"
# elif __cpp_lib_filesystem < 201703L
#  error "__cpp_lib_filesystem has an invalid value"
# endif
#endif

I submit that that’s non-portable, because some standard libraries may not implement all the features (that’s the point of the feature-test macro), and this test should not fail.

Maybe this is lack of imagination on my part, but isn’t a test failure for an unimplemented standard library feature exactly the desired behavior?

No. The test here is for the correct definition of the macro in question.
With the adoption of feature test macros, the committee has explicitly sanctioned the idea of incomplete implementations.

In D55308, I am using the following form:

#if TEST_STD_VER > 17
LIBCPP_ASSERT(IS_DEFINED(__cpp_lib_char8_t)); // libc++ implements this
# if defined(__cpp_lib_char8_t)
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Basically it (unconditionally) checks that if the macro is defined, then it has a sane value.
Additionally, since we know that libc++ defines this, we check that.

An alternate formulation - w/o the IS_DEFINED trickery.

#if TEST_STD_VER > 17
# if !defined(__cpp_lib_char8_t)
   LIBCPP_STATIC_ASSERT(false, "__cpp_lib_char8_t is not defined");
# else
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Comments welcome.

A standard library implementation that partially supports a feature may define the feature test macro to a value lower than the standard one. Your revised approach would reject that.

It would; I hadn’t considered that.

There are two things that I think are reasonable:

  1. Condition all your tests on the relevant feature test macros, and add

LIBCPP_STATIC_ASSERT(__cpp_lib_char8_t >= 201811L, "libc++ should implement this");

to make sure that libc++ in particular behaves as expected, or

  2. Write tests that test for conformance: the feature test macros should be correct and the functionality should be available. That way a standard library implementation that fails to implement a feature (and doesn’t advertise the feature via feature test macros) doesn’t get an “all tests pass” despite not implementing all of the standard library.

I’d lean towards option 2 myself, but maybe vendors of other stdlibs using the libc++ testsuite might have a different opinion.

I’d rather avoid that; long-term test failures are not in anyone’s best interest.
It leads to people ignoring the test results: “oh, yeah, we always have test failures.”

– Marshall

In the tests, we have the following usage:

#if TEST_STD_VER > 14
# if !defined(__cpp_lib_filesystem)
#  error "__cpp_lib_filesystem is not defined"
# elif __cpp_lib_filesystem < 201703L
#  error "__cpp_lib_filesystem has an invalid value"
# endif
#endif

I submit that that’s non-portable, because some standard libraries may not implement all the features (that’s the point of the feature-test macro), and this test should not fail.

Maybe this is lack of imagination on my part, but isn’t a test failure for an unimplemented standard library feature exactly the desired behavior?

No. The test here is for the correct definition of the macro in question.
With the adoption of feature test macros, the committee has explicitly sanctioned the idea of incomplete implementations.

Maybe I’m misunderstanding what you mean, but I don’t agree. An incomplete implementation is non-conforming, just as it always was. No part of the language or library is optional. All parts are required to be implemented, and all feature test macros are required to be defined, by a conforming implementation. The existence of feature test macros does not sanction the absence of features in a conforming implementation; it merely provides a mechanism by which a user of an implementation can query whether they’re using an incomplete (and therefore non-conforming) implementation, and if so, which parts are available and which parts are still work in progress.

In D55308, I am using the following form:

#if TEST_STD_VER > 17
LIBCPP_ASSERT(IS_DEFINED(__cpp_lib_char8_t)); // libc++ implements this
# if defined(__cpp_lib_char8_t)
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Basically it (unconditionally) checks that if the macro is defined, then it has a sane value.
Additionally, since we know that libc++ defines this, we check that.

An alternate formulation - w/o the IS_DEFINED trickery.

#if TEST_STD_VER > 17
# if !defined(__cpp_lib_char8_t)
   LIBCPP_STATIC_ASSERT(false, "__cpp_lib_char8_t is not defined");
# else
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Comments welcome.

A standard library implementation that partially supports a feature may define the feature test macro to a value lower than the standard one. Your revised approach would reject that.

It would; I hadn’t considered that.

There are two things that I think are reasonable:

  1. Condition all your tests on the relevant feature test macros, and add

LIBCPP_STATIC_ASSERT(__cpp_lib_char8_t >= 201811L, "libc++ should implement this");

to make sure that libc++ in particular behaves as expected, or

  2. Write tests that test for conformance: the feature test macros should be correct and the functionality should be available. That way a standard library implementation that fails to implement a feature (and doesn’t advertise the feature via feature test macros) doesn’t get an “all tests pass” despite not implementing all of the standard library.

I’d lean towards option 2 myself, but maybe vendors of other stdlibs using the libc++ testsuite might have a different opinion.

I’d rather avoid that; long-term test failures are not in anyone’s best interest.
It leads to people ignoring the test results: “oh, yeah, we always have test failures.”

Test failures on tests for features that you don’t implement seem like exactly the right behavior to me. Isn’t that already the status quo for the rest of libc++'s test suite? If someone doesn’t implement, say, std::unique_ptr, shouldn’t all the unique_ptr tests fail for their implementation?

Yes, although we have other examples where we’re more lenient. For example, we have plenty of machinery to make sure that the tests pass when you use an old libc++ dylib shipped with e.g. macosx10.7, even though it does not implement the full standard.

Feature test macros bring back the question of what to do for incomplete implementations to an extreme level of granularity. I’d much rather keep the libc++ test suite as close as possible to a conformance test suite, meaning it should fail if a standard library does not satisfy everything. We can then add some implementation-specific XFAILs to say “oh, we’re aware of this on that implementation”. My opinion is that all of this “being lenient to incomplete implementations” should be implemented in LIT, not in the test suite itself.

Louis

I think maybe something between my option 1 and option 2 above might work to split the difference. Divide the tests into two categories:

a) Functionality tests, conditioned on feature-test macros that indicate support for the tested feature
b) Feature-test macro tests that ensure the macros are at the current highest version

That way, a failure in the tests in category (a) indicate a serious bug: you’re advertising support for a feature that you don’t have. But a failure in the tests in category (b) indicate that your implementation is incomplete (and if no category (a) test fails, it’s correctly reporting its incompleteness to the user). Then you can XFAIL the category (b) tests for the implementations that don’t yet support whichever features.

In the tests, we have the following usage:

#if TEST_STD_VER > 14
# if !defined(__cpp_lib_filesystem)
#  error "__cpp_lib_filesystem is not defined"
# elif __cpp_lib_filesystem < 201703L
#  error "__cpp_lib_filesystem has an invalid value"
# endif
#endif

I submit that that’s non-portable, because some standard libraries may not implement all the features (that’s the point of the feature-test macro), and this test should not fail.

Maybe this is lack of imagination on my part, but isn’t a test failure for an unimplemented standard library feature exactly the desired behavior?

No. The test here is for the correct definition of the macro in question.
With the adoption of feature test macros, the committee has explicitly sanctioned the idea of incomplete implementations.

Maybe I’m misunderstanding what you mean, but I don’t agree. An incomplete implementation is non-conforming, just as it always was. No part of the language or library is optional. All parts are required to be implemented, and all feature test macros are required to be defined, by a conforming implementation.

I believe that this is a red herring. If all we were concerned about were “conforming implementations”, then there would be no need for feature test macros. All conforming implementations have implemented all features, so there’s no need to check.

But that’s not what the committee said in adopting SD-6. They said “We recognize that there are incomplete implementations, and we’re providing mechanisms for working with them”.

The existence of feature test macros does not sanction the absence of features in a conforming implementation; it merely provides a mechanism by which a user of an implementation can query whether they’re using an incomplete (and therefore non-conforming) implementation, and if so, which parts are available and which parts are still work in progress.

In D55308, I am using the following form:

#if TEST_STD_VER > 17
LIBCPP_ASSERT(IS_DEFINED(__cpp_lib_char8_t)); // libc++ implements this
# if defined(__cpp_lib_char8_t)
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Basically it (unconditionally) checks that if the macro is defined, then it has a sane value.
Additionally, since we know that libc++ defines this, we check that.

An alternate formulation - w/o the IS_DEFINED trickery.

#if TEST_STD_VER > 17
# if !defined(__cpp_lib_char8_t)
   LIBCPP_STATIC_ASSERT(false, "__cpp_lib_char8_t is not defined");
# else
#  if __cpp_lib_char8_t < 201811L
#   error "__cpp_lib_char8_t has an invalid value"
#  endif
# endif
#endif

Comments welcome.

A standard library implementation that partially supports a feature may define the feature test macro to a value lower than the standard one. Your revised approach would reject that.

It would; I hadn’t considered that.

There are two things that I think are reasonable:

  1. Condition all your tests on the relevant feature test macros, and add

LIBCPP_STATIC_ASSERT(__cpp_lib_char8_t >= 201811L, "libc++ should implement this");

to make sure that libc++ in particular behaves as expected, or

  2. Write tests that test for conformance: the feature test macros should be correct and the functionality should be available. That way a standard library implementation that fails to implement a feature (and doesn’t advertise the feature via feature test macros) doesn’t get an “all tests pass” despite not implementing all of the standard library.

I’d lean towards option 2 myself, but maybe vendors of other stdlibs using the libc++ testsuite might have a different opinion.

I’d rather avoid that; long-term test failures are not in anyone’s best interest.
It leads to people ignoring the test results: “oh, yeah, we always have test failures.”

Test failures on tests for features that you don’t implement seem like exactly the right behavior to me. Isn’t that already the status quo for the rest of libc++'s test suite? If someone doesn’t implement, say, std::unique_ptr, shouldn’t all the unique_ptr tests fail for their implementation?

Really? Do you expect that the clang test suite will fail ON EVERY TEST RUN because clang lacks support for concepts (as an example)? Or explicit(bool)?

I’m not seeing that reflected in the test matrix. I see lots of green for clang.

– Marshall

Green is not the same as “all tests pass”, it’s the same as “all tests that were expected to pass passed, and all tests that were expected to fail failed”. I would expect Clang’s test suite to be green. I would not expect all tests to pass. Specifically, if we had implemented tests for those new features prior to implementing the corresponding Clang support, it would make sense to me to XFAIL those tests.

If we wanted to share a testsuite between Clang and other compilers, the strategy I described in my previous message (condition tests under the relevant feature test macro, and have separate tests that all the feature test macros are defined appropriately, with XFAILs for compilers that don’t support the feature yet) seems entirely reasonable to me for that.

Yes, I think the strategy you outline makes the most sense too. It allows us to keep the test suite a conformance suite, which is what we want (I think?), while still allowing incomplete implementations.

This will either require enclosing all tests for features that might not be implemented inside #ifdef <feature-test-macro>, or doing it at the LIT level using one feature for each feature-test-macro and then using XFAIL: !<feature-test-macro>. And then in test/libcxx, we can add tests for all the feature test macros that we should be defining. Other implementations can do something similar when they run the test suite for their implementation.

I dislike this as it makes everything more complicated and more brittle, but it seems like this is the best way to go given the need to implement feature-test macros.

We should also see how other implementations that use our test suite feel about this (CCing them).

Louis

I only use your tests a few times a year, run by hand. If I get a load
of FAILs because I've not implemented something yet and you have,
that's not a problem. I don't expect to get 100% green when I run your
tests.

(+Casey who also works on this)

When running libcxx's tests against MSVC's STL, we have an MSVC-internal skipped_tests.txt (full content available if requested) which says:

# *** MISSING STL FEATURES ***
# C++20 P0122R7 "<span>"
upstream\test\std\containers\views\types.pass.cpp
[...]

# C++20 P0355R7 "<chrono> Calendars And Time Zones"
upstream\test\std\utilities\time\days.pass.cpp
[...]

Updating this requires minimal effort. It's more problematic when existing tests are modified to test new features, since we have to skip the entire test even if only a small part is affected.

Guarding tests with feature-test macros in their source code (as opposed to "the LIT level", which I believe is machinery that we don't use) would be ideal, I think.

The annoying thing is when features are added without corresponding feature-test macros. There's a bunch of those right now. We'll probably need the skip-tests approach for those.

STL

I don’t have context for the original issue, but our test selector does inspect some of the LIT control comments. Specifically, it skips tests with any of the comments:

// REQUIRES: c++11

// REQUIRES: c++11 || c++14

// REQUIRES: c++98 || c++03

// REQUIRES: c++98 || c++03 || c++11 || c++14

// UNSUPPORTED: c++14, c++17, c++2a

So if we’re talking about something that doesn’t touch those, it won’t affect our typical usage.

I don’t have context for the original issue, but our test selector does inspect some of the LIT control comments. Specifically, it skips tests with any of the comments:

// REQUIRES: c++11

// REQUIRES: c++11 || c++14

// REQUIRES: c++98 || c++03

// REQUIRES: c++98 || c++03 || c++11 || c++14

// UNSUPPORTED: c++14, c++17, c++2a

So if we’re talking about something that doesn’t touch those, it won’t affect our typical usage.

Why do you not use lit to run the test suite?

As I understand it, MSVC would prefer disabling tests that are not supported using #if !defined(<feature>). This can be made to work, but what I don’t like is that it removes visibility into what tests are passing and which ones are not supported. Right now, we can easily tell how many tests are unsupported by an implementation because those are marked by LIT as UNSUPPORTED. If we start using feature test macros to disable tests, we lose that.

Louis

+libcxx-dev …

“Why do you not use lit to run the test suite?”

I don’t work at Microsoft, but I have tried to smash MSVC and libc++ tests together repeatedly. Lit doesn’t make it easy. I’ve never gotten something nice enough to upstream. The libc++ lit infrastructure is heavily tuned towards gcc and clang, and doesn’t do a good job at isolating compiler differences. Instead, the logic is spread all over the place.

Here is one such quick-and-dirty attempt from a year and a half ago… https://github.com/ben-craig/libcxx/commit/fea52ceee736c97e74194b3f38660a7b0f73db83 , note that this is after shuffling around a whole bunch of code to attempt to put compiler configuration behind an “abstract interface”.

I didn’t feel like subjecting myself to that again, so when I made my freestanding branch tests, I just rolled my own ninja generator written in Ruby. Now I only rerun tests that weren’t successful, or have had a change. https://github.com/ben-craig/kernel_test_harness/tree/msvc_stl .

In addition to that, we didn’t use it because we wanted to plug the tests into our existing distributed test harness system (first Gauntlet, and now Contest) that we use to test all the other compiler and library bits.

Billy3