>From: David Blaikie [mailto:dblaikie@gmail.com]
>> From: David Blaikie [mailto:dblaikie@gmail.com]
>> > > From: David Blaikie [mailto:dblaikie@gmail.com]
>> > > > > > >
>> > > > > > > From the debugger's standpoint, the functional concern is
>> > > > > > > that if you do something more real, like:
>> > > > > > >
>> > > > > > > typedef int A;
>> > > > > > > template <typename T>
>> > > > > > > struct S
>> > > > > > > {
>> > > > > > >   T my_t;
>> > > > > > > };
>> > > > > > >
>> > > > > > > I want to make sure that the type of my_t is given as "A"
>> > > > > > > not as "int". The reason for that is that it is not
>> > > > > > > uncommon to have data formatters that trigger off the
>> > > > > > > typedef name. This happens when you use some common
>> > > > > > > underlying type like "int" but the value has some special
>> > > > > > > meaning when it is formally an "A", and you want to use the
>> > > > > > > data formatters to give it an appropriate presentation.
>> > > > > > > Since the data formatters work by matching type name,
>> > > > > > > starting from the most specific on down, it is important
>> > > > > > > that the typedef name be preserved.
>> > > > > > >
>> > > > > > > However, it would be really odd to see:
>> > > > > > >
>> > > > > > > (lldb) expr -T -- my_s
>> > > > > > > (S<int>) $1 = {
>> > > > > > > (A) my_t = 5
>> > > > > > > }
>> > > > > > >
>> > > > > > > instead of:
>> > > > > > >
>> > > > > > > (lldb) expr -T -- my_s
>> > > > > > > (S<A>) $1 = {
>> > > > > > > (A) my_t = 5
>> > > > > > > }
>> > > > > > >
>> > > > > > > so I am in favor of presenting the template parameter type
>> > > > > > > with the most specific name it was given in the overall
>> > > > > > > template type name.
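
For concreteness, the formatter scenario described above would look
roughly like this in an LLDB session (the summary string and output are
illustrative, not captured from a real run):

```
(lldb) type summary add --summary-string "special A value: ${var}" A
(lldb) expr -T -- my_s
(S<A>) $1 = {
  (A) my_t = special A value: 5
}
```

If the DWARF only named the member's type "int", the summary keyed to
"A" would never fire.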
>> > > > > >
>> > > > > > OK, we get this wrong today. I’ll try to look into it.
>> > > > > >
>> > > > > > What’s your take on the debug info representation for the
>> > > > > > templated class type? The tentative patch introduces a
>> > > > > > typedef that declares S<A> as a typedef for S<int>. The
>> > > > > > typedef doesn’t exist in the code, thus I find it a bit of a
>> > > > > > lie to the debugger. I was more in favour of something like:
>> > > > > >
>> > > > > > DW_TAG_variable
>> > > > > >   DW_AT_type: -> DW_TAG_structure_type
>> > > > > >                    DW_AT_name: S<A>
>> > > > > >                    DW_AT_specification: -> DW_TAG_structure_type
>> > > > > >                                               DW_AT_name: S<int>
>> > > > > >
>> > > > > > This way the canonical type is kept in the debug information,
>> > > > > > and the declaration type is a real class type aliasing the
>> > > > > > canonical type. But I’m not sure debuggers can digest this
>> > > > > > kind of aliasing.
>> > > > > >
>> > > > > > Fred
>> > > > >
>> > > > > Why introduce the extra typedef? S<A> should have a template
>> > > > > parameter entry pointing to A which points to int. The info
>> > > > > should all be there without any extra stuff. Or if you think
>> > > > > something is missing, please provide a more complete example.
>> > > > My immediate concern here would be either loss of information or
>> > > > bloat when using that with type units (either bloat because each
>> > > > instantiation with differently spelled (but identical) parameters
>> > > > is treated as a separate type - or loss when the types are
>> > > > considered the same and all but one are dropped at link time)
>> > > You'll need to unpack that more because I'm not following the
>> > > concern. If the typedefs are spelled differently, don't they count
>> > > as different types? DWARF wants to describe the program as-written,
>> > > and there's no S<int> written in the program.
>> > >
>> > > Maybe not in this TU, but possibly in another TU? Or by the user.
>> > >
>> > > void func(S<int>);
>> > > ...
>> > > typedef int A;
>> > > S<A> s;
>> > > func(s); // calls the same function
>> > >
>> > > The user probably wants to be able to call func with S<int> or
>> > > S<A>
>> > Sure.
>> >
>> > > (and, actually, in theory, with S<B> where B is another typedef of
>> > > int, but that'll /really/ require DWARF consumer support and/or
>> > > new DWARF wording).
>> >
>> > Not DWARF wording. DWARF doesn't say when you can and can't call
>> > something; that's a debugger feature and therefore a debugger
>> > decision.
>> >
>> What I mean is we'd need some new DWARF to help explain which types
>> are equivalent (or the debugger would have to do a lot of spelunking
>> to try to find structurally equivalent types - "S<B>" and "S<A>", go
>> look through their DW_TAG_template_type_params, see if they are
>> typedefs to the same underlying type, etc...)
>> >
>> >
>> > > We can't emit these as completely independent types - it would be
>> > > verbose (every instantiation with different typedefs would be a
>> > > whole separate type in the DWARF, not deduplicated by type units,
>> > > etc) and wrong
>> >
>> > Yes, "typedef int A;" creates a synonym/alias not a new type, so
>> > S<A> and S<int> describe the same type from the C++ perspective, so
>> > you don't want two complete descriptions with different names,
>> > because that really would be describing them as separate types. What
>> > wrinkles my brow is having S<int> be the "real" description even
>> > though it isn't instantiated that way in the program. I wonder if it
>> > should be marked artificial... but if you do instantiate S<int> in
>> > another TU then you don't want that. Huh. It also seems weird to
>> > have this:
>> >   DW_TAG_typedef
>> >     DW_AT_name "S<A>"
>> >     DW_AT_type -> S<int>
>> > but I seem to be coming around to thinking that's the most viable
>> > way to have a single actual instantiated type, and still have the
>> > correct names of things
>*mostly* correct; this still loses "A" as the type of the data member.
>
>For the DW_TAG_template_type_parameter, you mean? No, it wouldn't.
>
> (as a side note, if you do actually have a data member (or any other
> mention) of the template parameter type, neither Clang nor GCC really
> get that 'right' - "template<typename T> struct foo { T t; }; foo<int>
> f;" - in both Clang and GCC, the type of the 't' member of foo<int> is
> a direct reference to the "int" DIE, not to the
> DW_TAG_template_type_parameter for "T" -> int)

Huh. And DWARF doesn't say you should point to the
template_type_parameter... I thought it did, but no. Okay, so nothing is
lost, but it feels desirable to me that uses of the template parameter
should cite it in the DWARF as well. But I guess we can leave that part
of the debate for another time.
>Crud.
>But I haven't come up with a way to get that back without basically
>instantiating S<A> and S<int> separately.
>
>> >
>> Yep - it's the only way I can think of giving this information in a
>> way that's likely to work with existing consumers. It would probably
>> be harmless to add DW_AT_artificial to the DW_TAG_typedef, if that's
>> any help to any debug info consumer.
>
>Hmmm no, S<A> is not the artificial name;
>
>It's not the artificial name, but it is an artificial typedef.

If the source only says S<A>, then the entire S<int> description is
artificial, because *that's not what the user wrote*. So both the
typedef and the class type are artificial. Gah. Let's forget artificial
here.
>some debuggers treat DW_AT_artificial
>as meaning "don't show this to the user."
>
>In some sense that's what I want - we never wrote the typedef in the
>source so I wouldn't want to see it rendered in the "list of typedefs"
>(or even probably in the list of types, maybe).
>
>But S<A> is the name we *do* want to
>show to the user.
>
>Maybe. Sometimes. But there could be many such aliases for the type.
>(& many more that were never written in the source code, but are still
>valid in the source language (every other typedef of int, every other
>way to name the int type (decltype, etc)))

But you *lose* cases where the typedef is the *same* *everywhere*. And
in many cases that typedef is a valuable thing, not the trivial rename
we've been bandying about. This is a more real example:

typedef int int4 __attribute__((ext_vector_type(4)));
template<typename T> struct TypeTraits {};
template<>
struct TypeTraits<int4> {
  static unsigned MysteryNumber;
};
unsigned TypeTraits<int4>::MysteryNumber = 3U;

Displaying "TypeTraits<int __attribute__((ext_vector_type(4)))>" is much
worse than "TypeTraits<int4>" (and not just because it's shorter).
More to the point, having the debugger *complain* when the user says
something like "ptype TypeTraits<int4>" is a problem.

Reducing debug-info size is a worthy goal, but don't degrade the
debugging experience to get there.
I'm not sure which part of what I've said seemed like a suggestion to
degrade the debugging experience to minimize debug info size (the
proposition that we should use a typedef or other alias on top of the
canonical type?). It wouldn't cause "ptype TypeTraits<int4>" to complain
- indeed, for GDB, ptyping a typedef gives /exactly/ the same output as
if you ptype the underlying type - it doesn't even mention that there's
a typedef involved:

typedef foo<int> fooA;
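
The session would look roughly like this (illustrative, abbreviated
output, not captured from a real run):

```
(gdb) ptype fooA
type = struct foo<int> {
    int t;
}
(gdb) ptype foo<int>
type = struct foo<int> {
    int t;
}
```

Note the typedef name fooA appears nowhere in its own ptype output, so a
DW_TAG_typedef on top of the canonical type wouldn't make GDB complain
about the aliased name.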