Discussion:
AfxMessageBox?
pvdg42
2006-11-14 03:21:43 UTC
Permalink
Looking at a VS 2005 MFC application and trying to use AfxMessageBox.

Took an example right out of the MSDN documentation:

AfxMessageBox("This is the message"); using default 0 arguments for 2nd and
3rd parameters.

Yields:

error C2665: 'AfxMessageBox' : none of the 2 overloads could convert all
the argument types
e:\2006_fall_students\foltz\sketcher\sketcher\sketcher\sketcherview.cpp 164

What am I doing wrong?
Mihai N.
2006-11-14 04:45:30 UTC
Permalink
Post by pvdg42
Looking at a VS 2005 MFC application and trying to use AfxMessageBox.
AfxMessageBox("This is the message"); using default 0 arguments for 2nd and
3rd parameters.
...
Post by pvdg42
What am I doing wrong?
Projects in VS2005 use Unicode by default.
Try AfxMessageBox( _T("This is the message") );
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
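
To see why the narrow literal fails, here is a rough sketch of the two
overloads involved, assuming a default VS 2005 project where _UNICODE is
defined (so LPCTSTR means const wchar_t*); these are simplified, not the
exact afxwin.h declarations:

int AfxMessageBox(LPCTSTR lpszText, UINT nType = MB_OK, UINT nIDHelp = 0);
int AfxMessageBox(UINT nIDPrompt, UINT nType = MB_OK, UINT nIDHelp = 0);

// A plain "..." literal is const char*; it converts to neither LPCTSTR
// (const wchar_t* in a Unicode build) nor UINT, hence error C2665.
AfxMessageBox(_T("This is the message"));  // _T() makes the literal match LPCTSTR
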
pvdg42
2006-11-14 05:13:16 UTC
Permalink
Post by Mihai N.
Projects in VS2005 use Unicode by default.
Try AfxMessageBox( _T("This is the message") );
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Doh!
Thanks very much. Works like a charm!
Mark Randall
2006-11-14 12:29:41 UTC
Permalink
Post by pvdg42
Doh!
Thanks very much. Works like a charm!
/me uses this thread to point out that _T( ) sucks and with the # of
platform extensions MS has you would think they could have put a single
character prefix in (like the L) that converted to unicode if relevant.

The _T( ) is just damned messy.
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
David Ching
2006-11-14 13:44:00 UTC
Permalink
Post by Mark Randall
/me uses this thread to point out that _T( ) sucks and with the # of
platform extensions MS has you would think they could have put a single
character prefix in (like the L) that converted to unicode if relevant.
The _T( ) is just damned messy.
I concur 100%. If I know my code is always going to be compiled UNICODE, I
just use L"" and be done with it, just because it is easier to type.

-- David
Mihai N.
2006-11-15 04:30:49 UTC
Permalink
Post by David Ching
Post by Mark Randall
/me uses this thread to point out that _T( ) sucks and with the # of
platform extensions MS has you would think they could have put a single
character prefix in (like the L) that converted to unicode if relevant.
The _T( ) is just damned messy.
I concur 100%. If I know my code is always going to be compiled UNICODE, I
just use L"" and be done with it, just because it is easier to type.
"Easier to type" is not an argument.
It is easier to not check for errors, not catch exceptions, not check buffer
overruns.
If you have better arguments, I am willing to listen :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Tom Serface
2006-11-15 06:07:36 UTC
Permalink
I think "easier to type" is appropriate so long as it still accomplishes the
task. For example, if you are sure you are never going to need a
non-Unicode version (and why would you anyway?) then the _T() stuff seems
like overkill...

Tom
Post by Mihai N.
Post by David Ching
Post by Mark Randall
/me uses this thread to point out that _T( ) sucks and with the # of
platform extensions MS has you would think they could have put a single
character prefix in (like the L) that converted to unicode if relevant.
The _T( ) is just damned messy.
I concur 100%. If I know my code is always going to be compiled UNICODE, I
just use L"" and be done with it, just because it is easier to type.
"Easier to type" is not an argument.
It is easier to not check for errors, not catch exceptions, not check buffer
overruns.
If you have better arguments, I am willing to listen :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Mihai N.
2006-11-16 05:39:57 UTC
Permalink
Post by Tom Serface
For example, if you are sure you are never going to need a
non-unicode version
You can never be sure of anything, and never say never :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
David Ching
2006-11-16 13:25:16 UTC
Permalink
Post by Mihai N.
Post by Tom Serface
For example, if you are sure you are never going to need a
non-unicode version
You can never be sure of anything, and never say never :-)
If the code relies on NT-based features and will never run on Win9x, then
yes, we can assume UNICODE will be present.

-- David
Tom Serface
2006-11-16 15:18:46 UTC
Permalink
OK, I can't argue with that... but being realistic I know that some of my
applications will always have to be Unicode... I know... never say always...
you win :o)

Tom
Post by Mihai N.
Post by Tom Serface
For example, if you are sure you are never going to need a
non-unicode version
You can never be sure of anything, and never say never :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Mihai N.
2006-11-19 04:48:37 UTC
Permalink
David, Tom,

You are right, you can tell if something will be Unicode only.
This is why the smiley (because I lost the "argument" :-)

But I have been using _T for a long time now, and I just find it cleaner
to go with _T than L.

It also bothers me a bit, because I know that MessageBox takes a char*
or a WCHAR* depending on whether it is Unicode or not.
So I want to see:
MessageBox( hWnd, _T("Hello"), ... );
or
MessageBoxA( hWnd, "Hello", ... );
or
MessageBoxW( hWnd, L"Hello", ... );

It bothers me to see MessageBox( hWnd, L"Hello", ... ), it feels sloppy.
Because MessageBox is a generic API, taking a generic text string.
Might be too pedantic, but it is a bit like indentation.
"It is ugly, but it compiles, and I save typing a lot of tabs" is not good
enough for me.

Fuzzy stuff like this breaks cross-platform (because of endianness, or
because wchar_t is 4 bytes on MacOSX/Unix/Linux).
Or might even break when you move from one version of OS to the next.

I had many projects move very easily from Win 3.1 to 95, or from ANSI to
Unicode, or even from Win to Mac OS X, exactly because I did not do
this kind of hand-waving about data types.


Is it mandatory? No. Is it good to do it? I think so.

Mihai
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
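
For the record, this is roughly how the Windows headers wire up the generic
name Mihai is describing (simplified; the real winuser.h declarations carry
more decoration):

int WINAPI MessageBoxA(HWND hWnd, LPCSTR  lpText, LPCSTR  lpCaption, UINT uType);
int WINAPI MessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uType);

#ifdef UNICODE
#define MessageBox  MessageBoxW   // the generic name resolves to the wide version
#else
#define MessageBox  MessageBoxA   // ...or to the ANSI version
#endif
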
David Ching
2006-11-23 00:39:01 UTC
Permalink
Post by Mihai N.
David, Tom,
You are right, you can tell if something will be Unicode only.
This is why the smiley (because I lost the "argument" :-)
But I have been using _T for a long time now, and I just find it cleaner
to go with _T than L.
It also bothers me a bit, because I know that MessageBox takes a char*
or a WCHAR* depending on whether it is Unicode or not.
MessageBox( hWnd, _T("Hello"), ... );
or
MessageBoxA( hWnd, "Hello", ... );
or
MessageBoxW( hWnd, L"Hello", ... );
It bothers me to see MessageBox( hWnd, L"Hello", ... ), it feels sloppy.
Because MessageBox is a generic API, taking a generic text string.
Well, we are all trying to do the best we can given the fact that wide strings
are an afterthought. All these ideas have tradeoffs. I wish they had made
a compiler switch that said "Treat literal strings as UNICODE" so that
"Hello" would be considered the same as L"Hello" if UNICODE was defined.
That would be like what they did with the rest of the Windows API where we
don't need to go out of our way to call MessageBoxW when we want the Unicode
one, if UNICODE is defined.

Why didn't they do this?

Thanks,
David
Mihai N.
2006-11-24 10:06:14 UTC
Permalink
Post by David Ching
Why didn't they do this?
Because "Hello" and L"Hello" are both standard C/C++.
Making "Hello" mean a Unicode string means the compiler
is not standard compliant.

The MS compilers moved towards being more standard compliant lately
(and that is a good thing).

Plus there are situations when one might want to use narrow strings
in a Unicode application. If "Hello" means Unicode, then what?
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
David Ching
2006-11-24 17:39:23 UTC
Permalink
Post by Mihai N.
Because "Hello" and L"Hello" are both standard C/C++.
Making "Hello" mean a Unicode string means the compiler
is not standard compliant.
The MS compilers moved towards being more standard compliant lately
(and that is a good thing).
I disagree. MS already has a compiler option to "treat chars as unsigned",
this new option "treat chars as wide" would be the same situation. And the
standards committees could make this a standard also.
Post by Mihai N.
Plus there are situations when one might want to use narrow strings
in a Unicode application. If "Hello" means Unicode, then what?
I suggest something like the ATL libraries have used XCHAR and YCHAR. These
change meaning depending on whether UNICODE is defined. XCHAR is like
TCHAR... YCHAR is the opposite of XCHAR, at least I think that's how it
works.

So if UNICODE is defined, "Hello" is Unicode, and Y"Hello" is ANSI. If
UNICODE is not defined, "Hello" is ANSI and Y"Hello" is Unicode. Both of
these are easier to type than _T("").


-- David
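
A rough sketch of the XCHAR/YCHAR idea being referred to here (the real
definitions live inside ATL's CStringT machinery; this only shows the shape
of it, assuming _UNICODE drives the choice):

#ifdef _UNICODE
typedef wchar_t XCHAR;   // the character type native to this build
typedef char    YCHAR;   // the "other" character type
#else
typedef char    XCHAR;
typedef wchar_t YCHAR;
#endif

The proposed Y"Hello" literal prefix, on the other hand, is hypothetical; no
such prefix exists in the compiler.
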
Mihai N.
2006-11-24 19:13:00 UTC
Permalink
Post by David Ching
I disagree. MS already has a compiler option to "treat chars as unsigned",
this new option "treat chars as wide" would be the same situation.
Not the same case. The standard does not specify if the chars are signed or
not.
Post by David Ching
And the standards committees could make this a standard also.
Ok then, MS is waiting for this to happen.
Post by David Ching
I suggest something like the ATL libraries have used XCHAR and YCHAR.
These
change meaning depending on whether UNICODE is defined. XCHAR is like
TCHAR... YCHAR is the opposite of XCHAR, at least I think that's how it
works.
So if UNICODE is defined, "Hello" is Unicode, and Y"Hello" is ANSI. If
UNICODE is not defined, "Hello" is ANSI and Y"Hello" is Unicode. Both of
these are easier to type than _T("").
This means that you have to be aware and write your code with Y,
instead of _T. Back to square 1.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
David Ching
2006-11-24 22:42:16 UTC
Permalink
Post by Mihai N.
Post by David Ching
I disagree. MS already has a compiler option to "treat chars as unsigned",
this new option "treat chars as wide" would be the same situation.
Not the same case. The standard does not specify if the chars are signed or
not.
Post by David Ching
And the standards committees could make this a standard also.
Ok then, MS is waiting for this to happen.
My point is that a large population of us Windows programmers could care
less about the C++ standards, we just want to write good Windows apps for our
customers, and our customers are demanding Unicode support. There is
nothing stopping Microsoft from enhancing the C++ compiler with proprietary
switches that would help us out, but they don't.

Instead they force us to go through our source code with a fine tooth comb
and surround all our strings with _T(""). Since almost all literals will be
surrounded with _T(""), they break a fundamental rule which is to make the
ordinary case EASY and the exception case HARD. To do so in the name of
supporting some standard a lot of us don't care about is kind of ridiculous.
Then again, the horse has already left the barn, so there's really nothing
we can do about it except save a little typing by using L"" instead of
_T("") when we can get away with it.
Post by Mihai N.
Post by David Ching
I suggest something like the ATL libraries have used XCHAR and YCHAR.
These
change meaning depending on whether UNICODE is defined. XCHAR is like
TCHAR... YCHAR is the opposite of XCHAR, at least I think that's how it
works.
So if UNICODE is defined, "Hello" is Unicode, and Y"Hello" is ANSI. If
UNICODE is not defined, "Hello" is ANSI and Y"Hello" is Unicode. Both of
these are easier to type than _T("").
This means that you have to be aware and write your code with Y,
instead of _T. Back to square 1.
Yes, all these compiler extensions would solve the problem elegantly:

"This undecorated string is ANSI if UNICODE is not defined, else it is a
WIDE string"
A"This is always an ANSI string"
W"This is always a WIDE string"


-- David
Mihai N.
2006-11-25 04:35:05 UTC
Permalink
Post by David Ching
My point is that a large population of us Windows programmers could care
less about the C++ standards
...
Post by David Ching
To do so in the name of supporting some standard a lot of us don't care
about is kind of ridiculous.
Too bad you don't care.
I am a Windows programmer, and I have always hated the bad standard
compatibility of the MS C++ compiler.
Now the MS compiler is getting more and more compatible, and I am very happy
with the direction.
So let's agree to disagree here.
Post by David Ching
Instead they force us to go through our source code with a fine tooth comb
and surround all our strings with _T("").
Since almost all literals will be
surrounded with _T(""), they break a fundamental rule which is to make the
ordinary case EASY and the exception case HARD.
Just write a Perl script to add _T(" ") around all your strings, if you are
so sure this does not break something else.
I am sure stuff will break, and No-Thinking-Programming does not work, in my
experience.
Post by David Ching
Then again, the horse has already left the barn, so there's really nothing
we can do about it except save a little typing by using L"" instead of
_T("") when we can get away with it.
Ok, whatever. You go ahead with your sloppy coding style and disregard for
standards, and I will go my useless way, wasting time on extra brackets and
quotes.
And let's just hope one of us will not have an interview with the other, at
some point :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
David Ching
2006-11-25 11:22:15 UTC
Permalink
Post by Mihai N.
Now the MS compiler is getting more and more compatible, and I am very happy
with the direction.
So let's agree to disagree here.
OK.
Post by Mihai N.
Just write a Perl script to add _T(" ") around all your strings, if you are
so sure this does not break something else.
I am sure stuff will break, and No-Thinking-Programming does not work, in my
experience.
I don't have the knowledge to write a Perl script. Of course stuff will
break, but when was the last time you manually added _T("") to a bunch of
existing code and got it right the first time? Windows promotes trial and
error programming; this _T("") business is yet another example of it.
Post by Mihai N.
Ok, whatever. You go ahead with your sloppy coding style and disregard for
standards, and I will go my useless way, wasting time on extra brackets and
quotes.
And let's just hope one of us will not have an interview with the other, at
some point :-)
Nicely put. For the record, I've only recently begun using L"" because only
recently have clients begun asking for code that only works on the Win2K/XP
and later platforms. Win9x is going by the wayside, thank God. I'm sure we
can agree on that sentiment! :-)

-- David
Mihai N.
2006-11-26 05:16:27 UTC
Permalink
Post by David Ching
I don't have the knowledge to write a Perl script.
Then try this, in Visual Studio:
Start "Find and Replace", then do this:
Find what: ".*"
Replace with: _T(\0)
Use: Regular expressions
This will not properly handle two strings on the same line:
MessageBox( ..., "Hello", "World", ...
will become
MessageBox( ..., _T("Hello", "World"), ...

You can also try (with the rest same as above)
Find what: "[^"]*"
This will not handle properly \" in the string:
"Press \"Ok\" to continue"
will become
_T("Press \")Ok\_T(" to continue")

Anyway, don't do global Replace All, confirm for each string.
Both options will cut a big chunk of your work, because a normal application
should not have too many strings in code anyway (all the messages are in the
resource file, right? :-)
Post by David Ching
Of course stuff will
break, but when was the last time you manually added _T("") to a bunch of
existing code and got it right the first time?
Although I have been using _T for a long time now, I had to do it about a year ago for
some old application that I don't own, but I had to help with.
And I got it quite ok, no problems.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
David Ching
2006-11-27 05:13:24 UTC
Permalink
Post by Mihai N.
Post by David Ching
I don't have the knowledge to write a Perl script.
Find what: ".*"
Replace with: _T(\0)
Use: Regular expressions
MessageBox( ..., "Hello", "World", ...
will become
MessageBox( ..., _T("Hello", "World"), ...
You can also try (with the rest same as above)
Find what: "[^"]*"
"Press \"Ok\" to continue"
will become
_T("Press \")Ok\_T(" to continue")
Anyway, don't do global Replace All, confirm for each string.
Hey, neat! I especially like using '\0' to specify the matching text. I
had forgotten about that.
Post by Mihai N.
Both options will cut a big chunk of your work, because a normal application
should not have too many strings in code anyway (all the messages are in the
resource file, right? :-)
Yeah sure. It's not only string literals, but also converting "char" to
TCHAR, LPCSTR to LPCTSTR, etc. The whole thing that I'm saying Microsoft
could have made easier with a compiler switch.

My point is not that the conversion can't be done, it's that they didn't
help us out at all. They gave us more tools to convert from Win16 to Win32
than they did to convert to TCHAR. Then they wonder why people are
reluctant to support it.


-- David
Mark Randall
2006-11-27 20:44:50 UTC
Permalink
*cry* I just want a _T prefix that doesn't need the additional
horrible brackets.
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Post by David Ching
Post by Mihai N.
Post by David Ching
I don't have the knowledge to write a Perl script.
Find what: ".*"
Replace with: _T(\0)
Use: Regular expressions
MessageBox( ..., "Hello", "World", ...
will become
MessageBox( ..., _T("Hello", "World"), ...
You can also try (with the rest same as above)
Find what: "[^"]*"
"Press \"Ok\" to continue"
will become
_T("Press \")Ok\_T(" to continue")
Anyway, don't do global Replace All, confirm for each string.
Hey, neat! I especially like using '\0' to specify the matching text. I
had forgotten about that.
Post by Mihai N.
Both options will cut a big chunk of your work, because a normal application
should not have too many strings in code anyway (all the messages are in the
resource file, right? :-)
Yeah sure. It's not only string literals, but also converting "char" to
TCHAR, LPCSTR to LPCTSTR, etc. The whole thing that I'm saying Microsoft
could have made easier with a compiler switch.
My point is not that the conversion can't be done, it's that they didn't
help us out at all. They gave us more tools to convert from Win16 to
Win32 than they did to convert to TCHAR. Then they wonder why people are
reluctant to support it.
-- David
Ajay Kalra
2006-11-27 20:50:38 UTC
Permalink
Post by Mark Randall
*cry* I just want a _T prefix that doesn't need the additional
horrible brackets.
That is your right and I am sure MSFT cannot take it away. If it does,
there may be mutiny.

---
Ajay
Mark Randall
2006-11-28 13:50:40 UTC
Permalink
Post by Ajay Kalra
That is your right and I am sure MSFT cannot take it away. If it does,
there may be mutiny.
I'm just saying, L doesn't need brackets and doesn't waste space - so why
should _T, when it would take very little time for them to add something that
works like _T but without the brackets? Why can there not be a pre-processor
directive that relates some symbol to L only when Unicode is defined?
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Tom Serface
2006-11-28 20:53:18 UTC
Permalink
If you're only talking about strings, couldn't you simply do your own macro
for this sort of thing? I know that wouldn't be a "standard", but certainly
doing a macro that puts an L in if UNICODE is set would be pretty simple.

Tom
Post by Mark Randall
Post by Ajay Kalra
That is your right and I am sure MSFT cannot take it away. If it does,
there may be mutiny.
I'm just saying, L doesn't need brackets and doesn't waste space - so why
should _T, when it would take very little time for them to add something that
works like _T but without the brackets? Why can there not be a pre-processor
directive that relates some symbol to L only when Unicode is defined?
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
David Ching
2006-11-29 02:21:24 UTC
Permalink
Post by Tom Serface
If you're only talking about strings, couldn't you simply do your own
macro for this sort of thing? I know that wouldn't be a "standard", but
certainly doing a macro that puts an L in if UNICODE is set would be
pretty simple.
No, _T() is a macro, that's why the parens are needed. L is a compiler
extension and that's what is needed to avoid the parens.

-- David
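
Concretely, the _T decoration is nothing more than a macro; simplified from
tchar.h:

#ifdef _UNICODE
#define __T(x)  L##x     // paste the L prefix onto the literal
#else
#define __T(x)  x        // leave the literal narrow
#endif
#define _T(x)    __T(x)
#define _TEXT(x) __T(x)
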
David Wilkinson
2006-11-29 12:08:51 UTC
Permalink
Post by David Ching
Post by Tom Serface
If you're only talking about strings, couldn't you simply do your own
macro for this sort of thing? I know that wouldn't be a "standard", but
certainly doing a macro that puts an L in if UNICODE is set would be
pretty simple.
No, _T() is a macro, that's why the parens are needed. L is a compiler
extension and that's what is needed to avoid the parens.
-- David
David:

Yes, and that would lead to non-portable code. A Bad Thing.

If you need to, the _T macro with parens can easily be defined in other
platforms.

I am so used to _T("") that strings look wrong to me if they are not
that way (perhaps because I spent several days fixing an app that did
not use it). I don't think I have ever used L"".

David Wilkinson
David Ching
2006-11-29 13:09:44 UTC
Permalink
Post by David Wilkinson
If you need to, the _T macro with parens can easily be defined in other
platforms.
I am so used to _T("") that strings look wrong to me if they are not that
way (perhaps because I spent several days fixing an app that did not use
it). I don't think I have ever used L"".
To tell the truth, in production code, I also use _T(""), except when I am
explicitly using LPWSTR, in which case I use the appropriate L"". But when
I'm hacking in debug traces, I use L"" because it's easier and I remove them
before shipping.

My question is, why didn't whoever put in L"" also put in T""? Why one
useful extension but not the other?

-- David
David Wilkinson
2006-11-29 13:56:23 UTC
Permalink
Post by David Ching
Post by David Wilkinson
If you need to, the _T macro with parens can easily be defined in other
platforms.
I am so used to _T("") that strings look wrong to me if they are not that
way (perhaps because I spent several days fixing an app that did not use
it). I don't think I have ever used L"".
To tell the truth, in production code, I also use _T(""), except when I am
explicitly using LPWSTR, in which case I use the appropriate L"". But when
I'm hacking in debug traces, I use L"" because it's easier and I remove them
before shipping.
My question is, why didn't whoever put in L"" also put in T""? Why one
useful extension but not the other?
David:

Because L"" is part of the C++ standard, and the whole TCHAR thing is a
Microsoft-specific feature.

David Wilkinson
David Ching
2006-11-29 16:46:47 UTC
Permalink
Post by David Wilkinson
Because L"" is part of the C++ standard, and the whole TCHAR thing is a
Microsoft-specific feature.
Ah, that's the problem then. Why didn't the C++ community recognize the
importance of a codebase to support both ANSI and Unicode and provide a
proper standard for that instead of making Microsoft come up with a solution
that because it's proprietary isn't as tightly implemented as it could have
been? And given that the C++ community didn't provide a standard for such a
common problem, why not innovate and make something more usable?

The more I see how the C++ standards process works, the more respect I have
for Microsoft to go on their own with C# and make something practical
(although I think their implementation falls down in several areas compared
to C++).

-- David
David Wilkinson
2006-11-29 19:07:10 UTC
Permalink
Post by David Ching
Post by David Wilkinson
Because L"" is part of the C++ standard, and the whole TCHAR thing is a
Microsoft-specific feature.
Ah, that's the problem then. Why didn't the C++ community recognize the
importance of a codebase to support both ANSI and Unicode and provide a
proper standard for that instead of making Microsoft come up with a solution
that because it's proprietary isn't as tightly implemented as it could have
been? And given that the C++ community didn't provide a standard for such a
common problem, why not innovate and make something more usable?
David:

Maybe because many Unix systems had the good sense to go with UTF-8. If
everybody had done that, I'm sure by now we would have a good UTF-8
aware version of std::string in the standard library, and we wouldn't
have need of either L"" or _T("").

But as it is, the whole TCHAR thing is not too hard to implement in
standard C++. Using the standard library you can do things like

typedef std::basic_string<TCHAR> tstring;

David Wilkinson
Ajay Kalra
2006-11-29 19:20:05 UTC
Permalink
Post by David Wilkinson
typedef std::basic_string<TCHAR> tstring;
We did use it like this. But you still need to use _T("") to define the
strings. We used L"" as well for all our COM based strings. There is no
way to get around these.

---
Ajay
David Ching
2006-11-29 19:21:13 UTC
Permalink
Post by David Wilkinson
Maybe because many Unix systems had the good sense to go with UTF-8. If
everybody had done that, I'm sure by now we would have a good UTF-8 aware
version of std::string in the standard library, and we wouldn't have need
of either L"" or _T("").
But as it is, the whole TCHAR thing is not too hard to implement in
standard C++. Using the standard library you can do things like
typedef std::basic_string<TCHAR> tstring;
Thanks for the explanation. The problem with using tstring is things like
the Win API doesn't take tstring's, and we're back to the same issue.

-- David
David Wilkinson
2006-11-29 19:52:01 UTC
Permalink
Post by David Ching
Post by David Wilkinson
typedef std::basic_string<TCHAR> tstring;
Thanks for the explanation. The problem with using tstring is things like
the Win API doesn't take tstring's, and we're back to the same issue.
David:

Yes, you have to use c_str(). But it's like _T(""), you get used to it.

When I first started using std::string I hated the fact that it didn't
have a const char* cast operator. But having been bitten by some of
CString's automatic conversion constructors, I came to the conclusion
that the Standard Library was right to exclude all such things.

David Wilkinson
Alexander Grigoriev
2006-11-30 04:47:57 UTC
Permalink
You can disable automatic constructors in VC7.1 (at least). Needs some macro
defined.
Post by David Wilkinson
Post by David Ching
Post by David Wilkinson
typedef std::basic_string<TCHAR> tstring;
Thanks for the explanation. The problem with using tstring is things
like the Win API doesn't take tstring's, and we're back to the same
issue.
Yes, you have to use c_str(). But it's like _T(""), you get used to it.
When I first started using std::string I hated the fact that it didn't
have a const char* cast operator. But having been bitten by some of
CString's automatic conversion constructors, I came to the conclusion that
the Standard Library was right to exclude all such things.
David Wilkinson
Mihai N.
2006-11-30 06:10:37 UTC
Permalink
Post by David Ching
Thanks for the explanation. The problem with using tstring is things like
the Win API doesn't take tstring's, and we're back to the same issue.
No problem.
tstring str;
MessageBox( hWnd, str.c_str(), ... );
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Mihai N.
2006-11-30 06:15:53 UTC
Permalink
Post by David Wilkinson
Maybe because many Unix systems had the good sense to go with UTF-8.
This is very debatable. I don't see it as good sense, but as laziness.
I can tell you that in the Unicode world (UTC and such) UTF-16 is regarded
as the better option for processing, and utf-8 better for transfer/storage.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
David Wilkinson
2006-11-30 09:10:36 UTC
Permalink
Post by Mihai N.
Post by David Wilkinson
Maybe because many Unix systems had the good sense to go with UTF-8.
This is very debatable. I don't see it as good sense, but as laziness.
I can tell you that in the Unicode world (UTC and such) UTF-16 is regarded
as the better option for processing, and utf-8 better for transfer/storage.
Hi Mihai:

We have discussed this before. And I'm sure we will again...

I would agree with you, except that unfortunately there are now
surrogate pairs in UTF-16. This means that any program that does string
manipulation assuming each wchar_t is a single character is technically
incorrect, and could fail. Microsoft 16-bit "Unicode" no longer has the
advantage that motivated its creation.

I confess that one reason I like UTF-8 is that it is backward compatible
with code that assumed all ASCII characters. Is this what you mean by lazy?

David Wilkinson
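
A small illustration of the surrogate-pair point, assuming a Windows Unicode
build where wchar_t is 16 bits:

#include <wchar.h>

// 'A', then U+10400 encoded as the surrogate pair D801 DC00, then 'B'
const wchar_t text[] = L"A\xD801\xDC00" L"B";
size_t units = wcslen(text);   // 4 UTF-16 code units...
// ...but only 3 user-perceived characters; code that treats every wchar_t
// as one character will miscount, or worse, split the pair.
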
Mihai N.
2006-11-30 11:06:09 UTC
Permalink
Post by David Wilkinson
We have discussed this before. And I'm sure we will again...
You are right, but maybe now is not the moment :-)
Post by David Wilkinson
I confess that one reason I like UTF-8 is that it is backward compatible
with code that assumed all ASCII characters. Is this what you mean by lazy?
I call laziness (or worse) the unwillingness to change anything, good or bad.
Backward compatibility is good, until it compromises the new functionality.

One of the best examples of laziness is the typical Linux/UNIX file system.
The system (and the kernel, and the drivers) doesn't care at all what
encoding the file names are in. "Just ASCIIZ." Not even the small decision
"it is UTF-8"!
Nope, whatever crap the application gives, the system takes.

Try this to see the results:
set LANG to ja_jp.euc-jp
create a Japanese file name
ls // ok
set LANG to ja_jp.shift_jis
ls // Japanese-looking crap
set LANG to ja_jp.utf-8
ls // crap
set LANG to ru_ru.koi8-r
ls // Russian-looking crap

And because a sequence of bytes representing a Japanese file name in EUC-JP
is usually an invalid UTF-8 sequence, a lot of file operations on such files
will just fail if the locale is set to UTF-8.
This means there is no way for an application to manipulate all the files on
disk. Not possible to do a search. Or a high-level backup (you have to get
down to bytes), and so on.

Imagine I give you 20000 text files and I say "they are all in random
encodings, about 100 different ones; you take them and make some sense of
them". Same problem here.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Tom Serface
2006-11-30 15:17:12 UTC
Permalink
Post by Mihai N.
Post by David Wilkinson
Maybe because many Unix systems had the good sense to go with UTF-8.
This is very debatable. I don't see it as good sense, but as laziness.
I can tell you that in the Unicode world (UTC and such) UTF-16 is regarded
as the better option for processing, and utf-8 better for
transfer/storage.
I fully agree with this (see my last post). Until recently MFC functions
(like CStdioFile) didn't even understand UTF-8. I think that will change as
MSFT wants to move people away from MBCS (ANSI) altogether.

Tom
unknown
2006-12-01 12:02:07 UTC
Permalink
Post by Tom Serface
Until recently MFC functions
(like CStdioFile) didn't even understand UTF-8. I think that will change as
MSFT wants to move people away from MBCS (ANSI) altogether.
Do you mean by this that the version of MFC in VS2005 has
decent/better support for Unicode ? This subject is of some importance
to me as we'll be using VS2005 for the next version of our system, and
Unicode is suddenly* very important because our management want to
enter the middle and far east markets.

The client side stuff is probably going to migrate from Delphi to C#
so that shouldn't be a problem (apart from the translation issues
inherent in the ghastly C# resource mess), but the server side stuff
is all C++/MFC, so I'm worrying about this right now.

* Of course the fact I've been warning them about our Ansi-centricness
for over five years gets conveniently forgotten at this point :-)


--
Bob Moore
http://bobmoore.mvps.org/
(this is a non-commercial site and does not accept advertising)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Do not reply via email unless specifically requested to do so.
Unsolicited email is NOT welcome and will go unanswered.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ajay Kalra
2006-12-01 14:30:16 UTC
Permalink
Post by unknown
Do you mean by this that the version of MFC in VS2005 has
decent/better support for Unicode ? This subject is of some importance
to me as we'll be using VS2005 for the next version of our system, and
Unicode is suddenly* very important because our management want to
enter the middle and far east markets.
Do you really have a choice (of not using VS2005)? We shipped a UNICODE
version of our app using VC6 (followed by VS2003, VS2005) and we did not
encounter any problems.

---
Ajay
Tom Serface
2006-12-01 16:11:47 UTC
Permalink
I've found the support for Unicode in 2005 to be better, but still not
complete. I ended up writing my own extension of CStdioFile that uses the
BOM in a file to determine the type of file to read in and just makes
everything Unicode (Windows type) in memory. I wish MFC had something like
this built in.

I don't think the standard CStdioFile reads the BOM in the file even in the
latest version, but I haven't used it for some time so I'm not sure. I
heard some people from MSFT (at a trade show) say they were going to support
UTF-8 and that eventually MSFT was interested in moving people away from
using ANSI altogether. That may have just been one employee's opinion.

Tom
Post by unknown
Post by Tom Serface
Until recently MFC functions
(like CStdioFile) didn't even understand UTF-8. I think that will change as
MSFT wants to move people away from MBCS (ANSI) altogether.
Do you mean by this that the version of MFC in VS2005 has
decent/better support for Unicode ? This subject is of some importance
to me as we'll be using VS2005 for the next version of our system, and
Unicode is suddenly* very important because our management want to
enter the middle and far east markets.
The client side stuff is probably going to migrate from Delphi to C#
so that shouldn't be a problem (apart from the translation issues
inherent in the ghastly C# resource mess), but the server side stuff
is all C++/MFC, so I'm worrying about this right now.
* Of course the fact I've been warning them about our Ansi-centricness
for over five years gets conveniently forgotten at this point :-)
--
Bob Moore
http://bobmoore.mvps.org/
(this is a non-commercial site and does not accept advertising)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Do not reply via email unless specifically requested to do so.
Unsolicited email is NOT welcome and will go unanswered.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
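
For what it's worth, a minimal sketch of the kind of BOM check Tom describes;
the enum and function names here are made up for illustration:

#include <cstddef>

enum TextEncoding { EncAnsi, EncUtf8, EncUtf16LE, EncUtf16BE };

TextEncoding DetectBom(const unsigned char* p, std::size_t len)
{
    if (len >= 3 && p[0] == 0xEF && p[1] == 0xBB && p[2] == 0xBF)
        return EncUtf8;                 // UTF-8 BOM
    if (len >= 2 && p[0] == 0xFF && p[1] == 0xFE)
        return EncUtf16LE;              // the usual Windows "Unicode" text file
    if (len >= 2 && p[0] == 0xFE && p[1] == 0xFF)
        return EncUtf16BE;
    return EncAnsi;                     // no BOM: fall back to ANSI/MBCS
}
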
Mihai N.
2006-12-02 06:15:21 UTC
Permalink
Post by unknown
Do you mean by this that the version of MFC in VS2005 has
decent/better support for Unicode ? This subject is of some importance
to me as we'll be using VS2005 for the next version of our system, and
Unicode is suddenly* very important because our management want to
enter the middle and far east markets.
The client side stuff is probably going to migrate from Delphi to C#
so that shouldn't be a problem (apart from the tranlsation issues
inherent in the ghastly C# resource mess), but the server side stuff
is all C++/MFC, so I'm worrying about this right now.
I would go with VS 2005 without a second thought.
The Unicode support in MFC 8.0 is slightly better, but there are other
advantages:
- The Unicode support in the IDE itself is better
(including in the RC editor, for the first time)
- The C++ compiler is more standard-compliant
- If you want .NET, then you need VS 2005 to target .NET 2.0 or 3.0
Older versions only support .NET 1.0/1.1
(and VS 6 does not support .NET at all)
- Vista is out; the good strategic move, if you want your client to run on
it, is to use VS2005. Any other version is not supported for Vista
development (as usual, not supported does not mean "does not work", but
it will be more work, and if something goes wrong, you are on your own)
- On the same page, the Orcas (Vista extensions for VS) are released for
2005 only

So, unlike the VS2002 to VS2003 move (which did not bring much),
VS2005 is really better (talking features). The stability might use some
improvement, but the SP1 is in beta, and I hope it will fix some of the
problems.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Tom Serface
2006-12-02 16:22:27 UTC
Permalink
Having Unicode .RC files is worth the price of update alone in my opinion.
Post by Mihai N.
I would go with VS 2005 without a second thought.
The Unicode support in MFC 8.0 is slightly better, but there are other
- The Unicode support in the IDE itself is better
(including in the RC editor, for the first time)
- The C++ compiler is more standard-compliant
- If you want .NET, then you need VS 2005 to target .NET 2.0 or 3.0
Older versions only support .NET 1.0/1.1
(and VS 6 does not support .NET at all)
- Vista is out; the good strategic move, if you want your client to run on
it, is to use VS2005. Any other version is not supported for Vista
development (as usual, not supported does not mean "does not work", but
it will be more work, and if something goes wrong, you are on your own)
- On the same page, the Orcas (Vista extensions for VS) are released for
2005 only
So, unlike the VS2002 to VS2003 move (which did not bring much),
VS2005 is really better (talking features). The stability might use some
improvement, but the SP1 is in beta, and I hope it will fix some of the
problems.
Mihai N.
2006-11-30 06:14:12 UTC
Permalink
Post by David Ching
Ah, that's the problem then. Why didn't the C++ community recognize the
importance of a codebase to support both ANSI and Unicode and provide a
proper standard for that instead of making Microsoft come up with a solution
that because it's proprietary isn't as tightly implemented as it could have
been? And given that the C++ community didn't provide a standard for such
a common problem, why not innovate and make something more usable?
Because most of the C++ community did not understand Unicode until very,
very recently.
A while ago I followed a discussion thread on Boost (about a Unicode
string implementation) and it was scary.

It looks like there is some traction lately, with the guys at the "high
levels" of the C++ standard.
I have no doubts they can grok Unicode, if they want, so let's hope they will
get it right.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Tom Serface
2006-11-30 15:16:11 UTC
Permalink
Mihai,
Post by Mihai N.
Because most of the C++ community did not understand Unicode until very, very
recently.
I'm not sure most of us understand it completely even now. I find out new
things (like about surrogate pairs) all the time.
Post by Mihai N.
A while ago I followed a discussion thread on Boost (about a Unicode
string implementation) and it was scary.
Indeed!
Post by Mihai N.
It looks like there is some traction lately, with the guys at the "high
levels" of the C++ standard.
I have no doubts they can grok Unicode, if they want, so let's hope they will
get it right.
It sure would be nice if there was more standard implementation. I wouldn't
mind if MBCS went away altogether so long as there was support for reading
and writing UTF-8 files that was more consistent. I certainly don't mind
using Unicode in memory.

Tom
Mihai N.
2006-12-01 07:02:36 UTC
Permalink
Post by Tom Serface
It sure would be nice if there was more standard implementation.
From what I understand, there is some traction in this direction for the
next version of the standard.
I know there are some smart guys, big names, working to add better Unicode
support in the standard.
It might take another 10 years until we get to use it, but it might happen
in our lifetime :-)
Post by Tom Serface
I wouldn't
mind if MBCS went away altogether so long as there was support for reading
and writing UTF-8 files that was more consistent. I certainly don't mind
using Unicode in memory.
UTF-8 is as Unicode as UTF-7, UTF-16 and UTF-32 :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Tom Serface
2006-12-01 16:08:40 UTC
Permalink
Mihai,

OK, point taken, and I apologize for my semantic (and somewhat
Windows-centric) slip :o)

More specifically, I like using UTF-16 (although the recent discovery about
pairs is annoying) in memory, but I like to store files using UTF-8 to make
them more compact. I also like to use UTF-8 for XML.

I've got most of this working in my code, there just isn't much "built-in"
to MFC to support this sort of thing.

Yes, I know there are different kinds of UTF-16 :o)

Tom
Post by Mihai N.
Post by Tom Serface
It sure would be nice if there was more standard implementation.
From what I understand, there is some traction in this direction for the
next version of the standard.
I know there are some smart guys, big names, working to add better Unicode
support in the standard.
It might take another 10 years until we get to use it, but it might happen
in our lifetime :-)
Post by Tom Serface
I wouldn't
mind if MBCS went away altogether so long as there was support for reading
and writing UTF-8 files that was more consistent. I certainly don't mind
using Unicode in memory.
UTF-8 is as Unicode as UTF-7, UTF-16 and UTF-32 :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Mihai N.
2006-12-02 06:06:11 UTC
Permalink
Post by Tom Serface
OK, point taken, and I apologize for my semantic
(and somewhat Windows-centric) slip :o)
Hey, no need to apologize :-)
It was just correcting the lingo (see also below :-)
Post by Tom Serface
More specifically, I like using UTF-16 (although the recent discovery about
pairs is annoying) in memory, but I like to store files using UTF-8 to make
them more compact. I also like to use UTF-8 for XML.
I've got most of this working in my code, there just isn't much "built-in"
to MFC to support this sort of thing.
I understand your pain.
For XML I see no problem. Depends what you use.
I am using Xerces and I can write UTF-8, no problem.
The XMLWriter in .NET can also write UTF-8.
I did not use MSXML (because it is not cross-platform, it is COM, and it
needs a real installation, compared with Xerces, which is only one DLL), but
I am quite sure it is also capable of writing UTF-8.

Now, if you do your own, I think you deserve the pain :-)
Reinventing the wheel, when good quality and free alternatives exist,
deserves to be punished :-)
Post by Tom Serface
Yes, I know there are different kinds of UTF-16 :o)
Well, it depends.
If you think of encoding forms, there is only one UTF-16.
If you think of encoding schemes, there are two: UTF-16BE and UTF-16LE.
Confusing, isn't it?
I keep promising myself to write something about the basic Unicode
terminology and concepts, but I don't seem to manage.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
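
To make the form/scheme distinction concrete, here is the same code point,
U+3042 (Hiragana A), sketched both ways:

unsigned short form_utf16   = 0x3042;           // encoding form: one 16-bit code unit
unsigned char  scheme_le[2] = { 0x42, 0x30 };   // UTF-16LE encoding scheme (byte order)
unsigned char  scheme_be[2] = { 0x30, 0x42 };   // UTF-16BE encoding scheme (byte order)
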
Tom Serface
2006-12-02 16:21:02 UTC
Permalink
I use Xerces and that's painful enough. I'm not writing my own :o)

You're right about the Unicode "scheme". There are also planes to consider,
and ... it's so confusing. Why can't everyone just speak English :o)

Tom
Post by Mihai N.
Post by Tom Serface
OK, point taken, and I apologize for my semantic
(and somewhat Windows-centric) slip :o)
Hey, no need to apologize :-)
It was just correcting the lingo (see also below :-)
Post by Tom Serface
More specifically, I like using UTF-16 (although the recent discovery about
pairs is annoying) in memory, but I like to store files using UTF-8 to make
them more compact. I also like to use UTF-8 for XML.
I've got most of this working in my code, there just isn't much "built-in"
to MFC to support this sort of thing.
I understand your pain.
For XML I see no problem. Depends what you use.
I am using Xerces and I can write UTF-8, no problem.
The XMLWriter in .NET can also write UTF-8.
I did not use MSXML (because it is not cross-platform, it is COM, and it
needs a real installation, compared with Xerces, which is only one DLL), but
I am quite sure it is also capable of writing UTF-8.
Now, if you do your own, I think you deserve the pain :-)
Reinventing the wheel, when good quality and free alternatives exist,
deserves to be punished :-)
Post by Tom Serface
Yes, I know there are different kinds of UTF-16 :o)
Well, it depends.
If you think of encoding forms, there is only one UTF-16.
If you think of encoding schemes, there are two: UTF-16BE and UTF-16LE.
Confusing, isn't it?
I keep promising myself to write something about the basic Unicode
terminology and concepts, but I don't seem to manage.
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Mark Randall
2006-12-02 17:04:55 UTC
Permalink
Post by Tom Serface
Why can't everyone just speak English :o)
Because France would nuke everyone for eroding its 'cultural values'
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
BobF
2006-12-02 17:16:00 UTC
Permalink
Post by Mark Randall
Post by Tom Serface
Why can't everyone just speak English :o)
Because France would nuke everyone for eroding its 'cultural values'
More likely France would surrender ...
Tom Serface
2006-12-02 21:00:24 UTC
Permalink
I'd be happy enough if everyone just spoke French :o) I just want everyone
to speak the same language...

Tom
Post by Mark Randall
Post by Tom Serface
Why can't everyone just speak English :o)
Because France would nuke everyone for eroding its 'cultural values'
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Mark Randall
2006-12-04 12:43:45 UTC
Permalink
Post by Tom Serface
I'd be happy enough if everyone just spoke French :o) I just want
everyone to speak the same language...
int i = 1; speak( ) << new fluent(L"C++"); for(;;) { while (a) { ; } }
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Mihai N.
2006-12-05 03:10:59 UTC
Permalink
Post by Tom Serface
I'd be happy enough if everyone just spoke French :o) I just want everyone
to speak the same language...
Ok. So let's agree:

1. We should all learn Chinese
You should learn thousands of characters in order to read/write,
have to deal with surrogates or with GB-18030 (up to 4 bytes/character),
and have to be able to hear/pronounce the tones (to speak/listen)

2. We should all learn Hindi.
Complex script, only supported (in Windows) thru Unicode.

Take your choice :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Tom Serface
2006-12-05 04:27:47 UTC
Permalink
Um, let's just stick with French. Although, I got my dogs from Taiwan so
they speak fluent chinese (of some sort).

Tom
Post by Mihai N.
Post by Tom Serface
I'd be happy enough if everyone just spoke French :o) I just want everyone
to speak the same language...
1. We should all learn Chinese
You should learn thousands of characters in order to read/write,
have to deal with surrogates or with GB-18030 (up to 4 bytes/character),
and have to be able to hear/pronounce the tones (to speak/listen)
2. We should all learn Hindi.
Complex script, only supported (in Windows) thru Unicode.
Take your choice :-)
--
Mihai Nita [Microsoft MVP, Windows - SDK]
http://www.mihai-nita.net
------------------------------------------
Replace _year_ with _ to get the real email
Mark Randall
2006-12-05 06:09:36 UTC
Permalink
Post by Tom Serface
Um, let's just stick with French. Although, I got my dogs from Taiwan so
they speak fluent chinese (of some sort).
Ok Class,

Welcome to your first French lesson - now, just to gauge how far along
people are, I'd like everyone who can speak French to raise their right hand
please. Yes, just stick it straight up in the air...

Yes, good. OK, that's good.

Now, for those with their right hand up, if any of you actually are French
let me just see you raise your left hand
too.............................................
Post by Tom Serface
:)
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Mark Randall
2006-12-11 18:42:41 UTC
Permalink
Well I thought it was funny :(
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Post by Mark Randall
Post by Tom Serface
Um, let's just stick with French. Although, I got my dogs from Taiwan so
they speak fluent chinese (of some sort).
Ok Class,
Welcome to your first French lesson - now, just to gauge how far along
people are, I'd like everyone who can speak French to raise their right hand
please. Yes, just stick it straight up in the air...
Yes, good. OK, that's good.
Now, for those with their right hand up, if any of you actually are French
let me just see you raise your left hand
too.............................................
Post by Tom Serface
:)
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Tom Serface
2006-12-11 18:45:39 UTC
Permalink
It was funny. I just didn't have any more snappy comebacks... :o)

Tom
Post by Mark Randall
Well I thought it was funny :(
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Tom Serface
2006-11-29 15:59:17 UTC
Permalink
Hi David,

The syntax you are describing here just works without putting anything. For
example, you can put L"" (that's a wide string) or just put ""
(that's what you're considering a T""). So the T isn't really needed. The nice
thing about the macros is that they fill in the blanks for you.

I admit that typing the extra ()'s in is a drag sometimes, but like David W.
I kind of just do it now without even thinking about it. At least the macro
is pretty flexible (you can use TCHAR or TEXT or _T or ... with strings or
chars). I think it is a great mechanism for those of us who want to be able
to compile our programs either way.

However, I can't argue with just using the L"" syntax if you never intend to
use anything except Unicode. We have several programs that would die
miserably without Unicode, and it seems kind of dumb to do all that typing
since it would never be used.

Tom
Post by David Ching
Post by David Wilkinson
If you need to, the _T macro with parens can easily be defined in other
platforms.
I am so used to _T("") that strings look wrong to me if they are not that
way (perhaps because I spent several days fixing an app that did not use
it). I don't think I have ever used L"".
To tell the truth, in production code, I also use _T(""), except when I am
explicitly using LPWSTR, in which case I use the appropriate L"". But
when I'm hacking in debug traces, I use L"" because it's easier and I
remove them before shipping.
My question is, why didn't whoever put in L"" also put in T""? Why one
useful extension but not the other?
-- David
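
A quick sketch of the flexibility Tom mentions; _T comes from tchar.h, TEXT
from the Windows headers, and both cover string and character literals:

#include <windows.h>
#include <tchar.h>

TCHAR   ch     = _T('x');        // char or wchar_t, depending on the build
LPCTSTR msg    = TEXT("hello");  // LPCSTR or LPCWSTR, depending on the build
TCHAR   buf[8] = _T("world");    // works for initializing arrays too
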
David Ching
2006-11-29 17:39:27 UTC
Permalink
Post by Tom Serface
The syntax you are describing here just works without putting anything.
For example, you can put L"" (that's a wide string) or just put ""
(that's what you're considering a T""). So the T isn't really needed. The nice
thing about the macros is that they fill in the blanks for you.
No, I don't think "" with nothing before it acts like _T(""). "" is forever
defined as an ANSI string. I was proposing T"" which would act like _T(""),
which is ANSI or WIDE, depending on whether UNICODE is defined.
Post by Tom Serface
I admit that typing the extra ()'s in is a drag sometimes, but like David
W. I kind of just do it now without even thinking about it. At least the
macro is pretty flexible (you can use TCHAR or TEXT or _T or ... with
strings or chars). I think it is a great mechanism for those of us who
want to be able to compile our programs either way.
Yes, I use _T("") because the flexibility is needed in a lot of cases
(although less so now that Win9x support is dying) and we can seriously
consider delivering purely UNICODE versions of our apps. The easiest I've
gotten is to use the Visual Assist plug-in and program a keyboard macro that
expands a word starting with

T

replacing it with

_T("")

and putting the cursor between the quotes. This makes it as easy to start
typing the string as if I were using L"" but it's not as easy when I'm done
because I physically need to move the cursor to the right of the right
paren.
Post by Tom Serface
However, I can't argue with just using the L"" syntax if you never intend
to use anything except Unicode. We have several programs that would die
miserably without Unicode, and it seems kind of dumb to do all that typing
since it would never be used.
Yes, I agree... but then all data types need to be e.g. LPWSTR instead of
LPTSTR for consistency, right?

Thanks,
David
Mark Randall
2006-11-16 15:08:08 UTC
Permalink
Post by Mihai N.
"Easier to type" is not an argument.
My friend Mr. Productivity says different.
--
- Mark Randall
http://www.temporal-solutions.co.uk
http://www.awportals.com
Joseph M. Newcomer
2006-11-16 17:39:58 UTC
Permalink
It is important to develop good programming style early and use it consistently.

One of the serious defects in programming style is an assumption that all strings are
8-bit characters. Any use of a string literal undecorated by _T() can be thought of as a
mistake (the only exception is the second argument to GetProcAddress which must be an
8-bit character string). Had you developed good programming habits, you would never have
written the line as shown; you would have written
AfxMessageBox(_T("This is the message"));
and it would have compiled properly.

This is because VS.NET 2005 assumes Unicode apps. Always program Unicode-aware. Always.
joe
Post by pvdg42
Looking at a VS 2005 MFC application and trying to use AfxMessageBox.
AfxMessageBox("This is the message"); using default 0 arguments for 2nd and
3rd parameters.
error C2665: 'AfxMessageBox' : none of the 2 overloads could convert all
the argument types
e:\2006_fall_students\foltz\sketcher\sketcher\sketcher\sketcherview.cpp 164
What am I doing wrong?
Joseph M. Newcomer [MVP]
email: ***@flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
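
The GetProcAddress exception Joe mentions, as a short sketch (the function
name below is made up for illustration); the export name parameter is
declared LPCSTR, so it stays a narrow literal even in a Unicode build:

#include <windows.h>
#include <tchar.h>

void LookUpExport()
{
    HMODULE hUser = ::LoadLibrary(_T("user32.dll"));   // module path is generic text
    if (hUser != NULL)
    {
        // the export name is always an 8-bit string, even when UNICODE is defined
        FARPROC proc = ::GetProcAddress(hUser, "MessageBoxW");
        // ... use proc ...
        ::FreeLibrary(hUser);
    }
}
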
Hans-J. Ude
2006-11-17 17:56:15 UTC
Permalink
Post by Joseph M. Newcomer
It is important to develop good programming style early and use it consistently.
One of the serious defects in programming style is an assumption that all strings are
8-bit characters. Any use of a string literal undecorated by _T() can be thought of as a
mistake (the only exception is the second argument to GetProcAddress which must be an
8-bit character string). Had you developed good programming habits, you would never have
written the line as shown; you would have written
AfxMessageBox(_T("This is the message"));
and it would have compiled properly.
This is because VS.NET 2005 assumes Unicode apps. Always program Unicode-aware. Always.
joe
I've learned that when I started making CE versions of some existing
programs. That's a Unicode-only environment. And it's better yet to
put everything that appears on the screen into the resources rather than into
the source code and use AfxMessageBox(IDS_MYMESSAGE). Otherwise the
program becomes hard to maintain and even harder to translate to other
languages at any time.

Hans
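
A sketch of the resource-based approach Hans describes, reusing his
IDS_MYMESSAGE id; the string itself lives in the STRINGTABLE of the .rc file:

// in the .rc file:
//   STRINGTABLE
//   BEGIN
//       IDS_MYMESSAGE "This is the message"
//   END

AfxMessageBox(IDS_MYMESSAGE);   // MFC loads the text from the resources
// or, when the text itself is needed:
CString msg;
msg.LoadString(IDS_MYMESSAGE);
AfxMessageBox(msg);
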
Joseph M. Newcomer
2006-11-19 04:36:40 UTC
Permalink
Yes. Using native-language strings in the source code is what I lump into the
"fundamental error" category. Even using commas can be fatal, depending on the
localization. I've only done a couple of really serious localized apps (where other people
dealt with the translation issues), but I've only begun to scratch the surface of
localization issues.
joe
Post by Hans-J. Ude
Post by Joseph M. Newcomer
It is important to develop good programming style early and use it consistently.
One of the serious defects in programming style is an assumption that all strings are
8-bit characters. Any use of a string literal undecorated by _T() can be thought of as a
mistake (the only exception is the second argument to GetProcAddress which must be an
8-bit character string). Had you developed good programming habits, you would never have
written the line as shown; you would have written
AfxMessageBox(_T("This is the message"));
and it would have compiled properly.
This is because VS.NET 2005 assumes Unicode apps. Always program Unicode-aware. Always.
joe
I've learned that when I started making CE versions of some existing
programs. That's a Unicode-only environment. And it's better yet to
put everything that appears on the screen into the resources rather than into
the source code and use AfxMessageBox(IDS_MYMESSAGE). Otherwise the
program becomes hard to maintain and even harder to translate to other
languages at any time.
Hans
Joseph M. Newcomer [MVP]
email: ***@flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm