math.h on Windows

There is a compile error when compiling VC++'s math.h.

Apparently, the GNU extensions are still enabled by default for Windows VC++ builds, in particular “_Complex”.

This conflicts with the parameter name in:

_CRTIMP double __cdecl _cabs(__in struct _complex _Complex);

During my investigation, I tried turning off the GNU extensions when the Microsoft extensions are enabled, but this is problematic with a number of the tests, which use _Complex, and probably lots of other places. This raised the question in my mind as to whether the GNU extensions should be enabled in this case, but I figure it’s probably better that way, making the compiler more flexible.

Therefore, one way to make this work would be to change the parser a bit to detect this case, and treat the _Complex token as an identifier in this case. The error originates from line 410 of DeclSpec.cpp.

Would this be a reasonable solution?

I haven’t looked into how to do this yet, but if you think it’s a good idea, I’ll attempt it.

Actually, today there are two other errors in compiling this file, related to wchar_t being defined already. One solution is to define _WCHAR_T_DEFINED for the Visual Studio targets, which will disable the conflicting definition in crtdefs.h, and possibly elsewhere. The enclosed patch does this, which if you will approve it, I’ll check it in.

To make the tests less dependent on headers, I’ve also enclosed a mathtest.patch file for eliminating the math.h inclusion in the failed tests I’ve looked at so far.

-John

wchart.patch (951 Bytes)

mathtest.patch (1.41 KB)

Please split out test/CodeGen/const-init.c. I'd like someone else to review that, but the rest of mathtest.patch is fine.

Oh. Interesting. I just checked VC 9.0, and they renamed the parameter. So I guess one solution is to just focus on VC9 for now.

The wchar_t problem persists, however. So, may I check in my proposed wchar_t fix?

-John

Hold the fort. After moving to VC9 and updating, the wchar_t fix breaks a bunch of tests. I don’t know why yet. Sigh.

> There is a compile error when compiling VC++'s math.h.
>
> Apparently, the GNU extensions are still enabled by default for Windows VC++ builds, in particular "_Complex".
>
> This conflicts with the parameter name in:
>
> _CRTIMP double __cdecl _cabs(__in struct _complex _Complex);
>
> During my investigation, I tried turning off the GNU extensions when the Microsoft extensions are enabled, but this is problematic with a number of the tests, which use _Complex, and probably lots of other places. This raised the question in my mind as to whether the GNU extensions should be enabled in this case, but I figure it's probably better that way, making the compiler more flexible.

_Complex is currently a keyword in every mode. We could make it only be a keyword in C99 or GNU mode, then make sure that neither of these modes is turned on by default when Clang is built with VC++. That probably makes sense, since VC++ doesn't support C99 or many GNU extensions anyway.

We can always force tests into specific modes (turning off Microsoft extensions, turning on GNU extensions or C99, whatever), rather than trying to work around problems that are unlikely to show up in code that compiles with VC++.

> Therefore, one way to make this work would be to change the parser a bit to detect this case, and treat the _Complex token as an identifier in this case. The error originates from line 410 of DeclSpec.cpp.
>
> Would this be a reasonable solution?

I'd really rather not do this. We currently have one hack of this form, with __is_pod, because it occurs in a super-critical library (GNU libstdc++ <= 4.2), but we should try not to do that often. It's very obscure, and will lead to weird behavior. ("If I include <math.h>, I can no longer use complex types!")

> Actually, today there are two other errors in compiling this file, related to wchar_t being defined already. One solution is to define _WCHAR_T_DEFINED for the Visual Studio targets, which will disable the conflicting definition in crtdefs.h, and possibly elsewhere. The enclosed patch does this, which if you will approve it, I'll check it in.

What is the actual issue with the redefinition of wchar_t? Are they creating a typedef of wchar_t in their headers? Does the VC++ compiler implicitly define _WCHAR_T_DEFINED?

Regarding the patch itself: if VC++ is implicitly defining _WCHAR_T_DEFINED, then we should be doing so based on the Microsoft flag in the preprocessor-definitions, since it is effectively a Microsoft extension. Defining this information in the target probably isn't what we want.

> To make the tests less dependent on headers, I've also enclosed a mathtest.patch file for eliminating the math.h inclusion in the failed tests I've looked at so far.

These look good, thanks!

  - Doug

Doug,

> _Complex is currently a keyword in every mode. We could make it only be a keyword in C99 or GNU mode, then make sure that neither of these modes is turned on by default when Clang is built with VC++. That probably makes sense, since VC++ doesn’t support C99 or many GNU extensions anyway.
>
> We can always force tests into specific modes (turning off Microsoft extensions, turning on GNU extensions or C99, whatever), rather than trying to work around problems that are unlikely to show up in code that compiles with VC++.

How would you collectively force the tests?

I’m kinda feeling that since we’re down to just 21 failing tests, and _Complex isn’t a problem with VC 9.0, we could probably punt on it for now, and see how it goes with the rest of the tests. But let me know if you want me to pursue this.

> What is the actual issue with the redefinition of wchar_t? Are they creating a typedef of wchar_t in their headers? Does the VC++ compiler implicitly define _WCHAR_T_DEFINED?

This occurs in a few headers:

#ifndef _WCHAR_T_DEFINED
typedef unsigned short wchar_t;
#define _WCHAR_T_DEFINED
#endif

The enclosed wchar_t_fix2.patch seems to fix both the wchar_t issue, and another one I ran into with VC++ headers. I was side-tracked a bit with errors on wchar_t I didn’t understand, until I realized that wchar_t was only a C++ keyword.

I also ran into some new errors with the VC 9.0 headers on some #pragmas. Defining _CRT_SECURE_CPP_OVERLOAD_SECURE_NAMES avoids them, though it’s a bit of a hack. This symbol prevents the definition of some macros, some of which have errors, for example on __pragma statements like this:

__pragma(warning(push))

This is a Microsoft extension not currently supported, right? It seems kind of useful for macros.

You mentioned that the Targets.cpp was probably not the right place for the defines, so I moved them to InitPreprocessor.cpp. Is this the right place?

Also, could someone look at the enclosed patch for stdint.h? This also fixes some failing tests, since VC++ doesn’t have stdint.h. Who is point on this?

I can check in the patches you approve.

-John

wchar_t_fix2.patch (1.12 KB)

stdint.patch (427 Bytes)

I’m terribly sorry! In builtins.c I accidentally committed some experimental changes I added after making the patch, making that test fail. I’ve committed the fix.

-John

Doug,

>> _Complex is currently a keyword in every mode. We could make it only be a keyword in C99 or GNU mode, then make sure that neither of these modes is turned on by default when Clang is built with VC++. That probably makes sense, since VC++ doesn’t support C99 or many GNU extensions anyway.
>>
>> We can always force tests into specific modes (turning off Microsoft extensions, turning on GNU extensions or C99, whatever), rather than trying to work around problems that are unlikely to show up in code that compiles with VC++.
>
> How would you collectively force the tests?

I don’t think we can collectively force the tests; we could explicitly add -fno-extensions to the ones that matter.

> I’m kinda feeling that since we’re down to just 21 failing tests, and _Complex isn’t a problem with VC 9.0, we could probably punt on it for now, and see how it goes with the rest of the tests. But let me know if you want me to pursue this.

Let’s punt on it for now, then.

>> What is the actual issue with the redefinition of wchar_t? Are they creating a typedef of wchar_t in their headers? Does the VC++ compiler implicitly define _WCHAR_T_DEFINED?
>
> This occurs in a few headers:
>
> #ifndef _WCHAR_T_DEFINED
> typedef unsigned short wchar_t;
> #define _WCHAR_T_DEFINED
> #endif
>
> The enclosed wchar_t_fix2.patch seems to fix both the wchar_t issue, and another one I ran into with VC++ headers. I was side-tracked a bit with errors on wchar_t I didn’t understand, until I realized that wchar_t was only a C++ keyword.

The wchar_t bits look good. Please go ahead and commit!

> I also ran into some new errors with the VC 9.0 headers on some #pragmas. Defining _CRT_SECURE_CPP_OVERLOAD_SECURE_NAMES avoids them, though it’s a bit of a hack.

At the very least, please put a FIXME near that define, since we don’t want to keep it for long.

> This symbol prevents the definition of some macros, some of which have errors, for example on __pragma statements like this:
>
> __pragma(warning(push))
>
> This is a Microsoft extension not currently supported, right? It seems kind of useful for macros.

Right. There are two issues here, I guess. The first is that we don’t handle __pragma, although it looks like it’s similar “enough” to C99’s _Pragma that it wouldn’t be hard to implement this Microsoft extension. Second, we have the ability to push/pop diagnostic contexts (see the PragmaDiagnosticHandler in lib/Lex/Pragma.cpp) but, IIRC, not with that syntax. Again, it’s probably simple to implement, or we could live with the hack for a bit longer.

> You mentioned that the Targets.cpp was probably not the right place for the defines, so I moved them to InitPreprocessor.cpp. Is this the right place?

Yep!

> Also, could someone look at the enclosed patch for stdint.h? This also fixes some failing tests, since VC++ doesn’t have stdint.h. Who is point on this?

I’m not thrilled about using _M_IX86 and _M_X64 to detect what is really a library issue. I guess in the worst case we could have a configure-time check that determines whether we can #include_next <stdint.h>, but that’s… horrible.

Unless someone has a better idea… ?

  - Doug

>> Also, could someone look at the enclosed patch for stdint.h? This also fixes some failing tests, since VC++ doesn’t have stdint.h. Who is point on this?
>
> I’m not thrilled about using _M_IX86 and _M_X64 to detect what is really a library issue. I guess in the worst case we could have a configure-time check that determines whether we can #include_next <stdint.h>, but that’s… horrible.
>
> Unless someone has a better idea… ?

I used these symbols because they are implicitly defined to mimic VC++ in 32 and 64-bit mode respectively. Would substituting a clang-specific implicit symbol be more acceptable, e.g. “_CLANG_MSVC”?

Or, a stab in the dark, remove the #include, and internally switch the order of includes in the search path based on the Freestanding flag? But I don’t know the implications for the other headers Clang provides.

Or, we disable the 7 tests that include stdint.h on Windows (once we have a mechanism that can do that).

-John

>> Also, could someone look at the enclosed patch for stdint.h? This also
>> fixes some failing tests, since VC++ doesn't have stdint.h. Who is point on
>> this?
>
> I'm not thrilled about using _M_IX86 and _M_X64 to detect what is really a
> library issue. I guess in the worst case we could have a configure-time
> check that determines whether we can #include_next <stdint.h>, but
> that's... horrible.
>
> Unless someone has a better idea... ?

No configure-time check. :P

We could always invent a clang specific internal define for this,
which targets can set in their predefines. For example,
__STDC_INCLUDES_STDINT__ or so.

- Daniel

It seems to me that target information should be limited to information about the ABI, linker, assembler, and other parts of the toolchain, but not the (C or C++) standard library, since standard libraries are meant to be interchangeable.

However, I have an idea. Clang's <stdint.h> looks like this:

/* If we're hosted, fall back to the system's stdint.h, which might have
  * additional definitions.
  */
#if __STDC_HOSTED__
# include_next <stdint.h>
#else
// our own definitions of <stdint.h>
#endif

why not add new __has_include and __has_include_next preprocessor primitives, so that we can *really* detect whether we have a system <stdint.h>?

This also gets us closer to eliminating the need for configure. :)

  - Doug

> That (__has_include etc.) sounds good. I’m assuming that is for use in a #if?

Yes, just like __has_builtin and __has_feature.

It probably makes sense to allow the same forms as include directives, e.g.,

__has_include(<stdlib.h>)
__has_include(“stdlib.h”)

__has_include_next(<stdlib.h>)
__has_include_next(“stdlib.h”)

> Would you like me to take a stab at it?

That would be great!

  • Doug

This would be really really really cool. I'd start with the plain quoted version; we can deal with the angled string later, but it will be trickier.

-Chris

Hi Chris and Doug,

Here’s a patch with my initial stab at “__has_include” and “__has_include_next”, along with a test for it (has_include.c) and the modified stdint.h.

To reuse some code, I had to rearrange things a bit, moving a static function to Preprocessor, making some Preprocessor functions public, and adding a couple of accessors. I also broke out some of the code for “defined” into a separate function.

The code itself I mostly hacked up from the code for “defined” and “#include”. This included code for the angle brackets already. Also, because “defined” seemed to allow omitting the parentheses, I did it for “__has_include” and “__has_include_next” also. The “#include_next” code put out a warning if the including file was the primary file, so I left this in for “__has_include_next” also.

A couple of additional things that might need addressing:

Since these are Clang-specific keywords (I think), perhaps they should be disabled in some circumstances (i.e. depending on language mode), but I’m not sure how, and I haven’t yet looked much into how this is done elsewhere in the code.

Testing the error conditions was a bit tricky as the embedded “expected-error” annotations affect the preprocessor, and the recovery isn’t always clear. Also, whether or not to include an “#endif” was unclear. I tried the commented-out error conditions in the test, so they minimally seem to work, but the preprocessor error recovery in general might need some work. In some cases of including an “#endif” after an error condition, I would get an assert at PPLexerChange.cpp, line 269.

But if the main code for this is okay, I’d like to check it in, to make it easier for me to keep in sync.

-John

hasinclude.patch (19.5 KB)

has_include.c (2.29 KB)

Awesome, thanks for working on this John, some thoughts:

* these should be documented in the clang language extensions document.

+++ include/clang/Basic/TokenKinds.def (working copy)
@@ -83,6 +83,10 @@
  PPKEYWORD(assert)
  PPKEYWORD(unassert)

+// Clang Extensions.
+PPKEYWORD(__has_include)
+PPKEYWORD(__has_include_next)

Can this be implemented like __has_builtin is? This is actually registered as a macro (in lib/Lex/PPMacroExpansion.cpp) which allows people to do things like:

#if defined(__has_builtin) && __has_builtin(__builtin_foo)

for example.

__has_include "stdio.h"

I think it is best to require ()'s, for similarity with __has_builtin and to allow people without this feature to #define it away.

@@ -244,9 +244,14 @@
      return CurPPLexer == L;
    }

- /// getCurrentLexer - Return the current file lexer being lexed from. Note
+ /// getCurrentLexer - Return the current lexer being lexed from. Note
    /// that this ignores any potentially active macro expansions and _Pragma
    /// expansions going on at the time.
+ PreprocessorLexer *getCurrentLexer() const { return CurPPLexer; }

Thanks, Chris.

> Can this be implemented like __has_builtin is?

Oh, I totally missed this! Yes, I’ll redo it like this.

> This can be done by just not registering the ‘magic’ macros in no-extensions mode.

How is no-extensions mode detected? I was expecting a flag in LangOptions, but I don’t see it, and there is no -fno-extensions option. There is a -fbuiltin option and a NoBuiltin flag in LangOptions, but that seems to be for builtin functions, not preprocessor macros. Or perhaps it needs adding? I don’t see an equivalent in gcc. Or perhaps something more specific like “-fno-magic-macros=[0|1]” or “-fno-magic-macro=(macroname)”.

-John

I would follow the lead of __has_builtin. If it doesn't warn in some situation, your new thing shouldn't either :). Since it starts with __ I think we're in the clear here.

-Chris