Wednesday, April 11, 2007

std::min and std::max versus Visual Studio

A few months back I wrote a wicked-cool solution using boost to encode binary data as base64. Sadly, it would not compile, because Microsoft did something evil in Windows.h when they defined min() and max() as macros. I'd seen this problem and coded around it before, so I modified the boost code that broke:

C:\dev\sdk\boost_1_32_0\boost\archive\iterators>svn diff
Index: transform_width.hpp
===================================================================
--- transform_width.hpp (revision 201)
+++ transform_width.hpp (working copy)
@@ -142,7 +142,7 @@
}
else
bcount = BitsIn - m_displacement;
- unsigned int i = std::min(bcount, missing_bits);
+ unsigned int i = min(bcount, missing_bits);
// shift interesting bits to least significant position
unsigned int j = m_buffer >> (bcount - i);


But was this the most righteous solution?
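To see what that change actually does, here is a minimal sketch of the clash (not the boost code itself; the macro is written out by hand to stand in for the one windows.h generates):

#include <algorithm>

// Stand-in for the function-like macro that windows.h generates when
// NOMINMAX is not defined:
#define min(a,b) (((a) < (b)) ? (a) : (b))

unsigned int take_bits(unsigned int bcount, unsigned int missing_bits)
{
    // The preprocessor rewrites any call-looking use of min before the
    // compiler ever sees it, so
    //     unsigned int i = std::min(bcount, missing_bits);
    // turns into the ill-formed "std::(((bcount) < (missing_bits)) ? ...)"
    // and fails to compile.

    // Dropping the std:: qualifier, as in the diff above, simply lets the
    // macro do the work instead of the standard function template:
    unsigned int i = min(bcount, missing_bits);
    return i;
}
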

Just last week I had to code away from a righteous std::numeric_limits<int>::max() to the less righteous INT_MAX for the exact same reason. Today, this problem recurred. I had a choice between committing my change to boost, or powering through the problem. "OK, Microsoft, you've exceeded your kluge allocation."
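For the record, there is a common alternative I didn't take: extra parentheses keep a function-like macro from matching, so the real templates still get called. A quick sketch (not from my code):

#include <algorithm>
#include <limits>

// Imagine the windows.h min/max macros are in effect below.

void parenthesized_calls()
{
    unsigned int bcount = 5, missing_bits = 3;

    // A function-like macro only fires when its name is immediately
    // followed by '(' -- wrapping the name in parentheses defeats it:
    unsigned int i = (std::min)(bcount, missing_bits);
    int biggest = (std::numeric_limits<int>::max)();

    (void)i;
    (void)biggest;
}
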

The real problem isn't in boost, but in Visual Studio. This sent me googling, which turned up a link that said:

The Standard Library defines the two template functions std::min() and std::max() in the <algorithm> header. In general, you should use these template functions for calculating the min and max values of a pair. Unfortunately, Visual C++ does not define these function templates. This is because the names min and max clash with the traditional min and max macros defined in <windows.h>. As a workaround, Visual C++ defines two alternative templates with identical functionality called _cpp_min() and _cpp_max(). You can use them instead of std::min() and std::max(). To disable the generation of the min and max macros in Visual C++, #define NOMINMAX before #including <windows.h>.

Therefore, I REVERTED my change to transform_width.hpp. Since I do not #include <windows.h> directly, it doesn't matter where I #define NOMINMAX, so I put it in my project file's manifest of #defines.
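In code form, the fix amounts to something like this (a sketch; in my case the define lives in the project settings rather than in any one source file):

// NOMINMAX just has to be visible before windows.h is processed, wherever
// that inclusion actually happens -- a project-wide define achieves the
// same thing as this explicit #define.
#define NOMINMAX
#include <windows.h>   // no min/max macros are generated now
#include <algorithm>

int smaller(int a, int b)
{
    return std::min(a, b);   // compiles cleanly, no macro interference
}
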

It works and I feel more righteous.

2 comments:

Anonymous said...

I just had trouble using std::numeric_limits's max() - again - using Visual Studio 2005. Instead of just defining NOMINMAX, I investigated where windows.h actually comes from (I'm writing a service DLL that's meant to be portable). The first and most intrusive occurrence was in stdafx.h - VS.NET 2005 automatically puts it there when you create a project. They actually define WIN32_LEAN_AND_MEAN, which is at least half correct :-)

I deleted the whole block of code and recompiled. DllMain would fail to compile, so I put #include <windows.h> there and only there. But then the linker complained about a missing definition for SomeClass::GetMessageW. The name of this method is supposed to be GetMessage. Obviously, somewhere the header containing this class is included after windows.h, which happens to #define GetMessage GetMessageW. Because windows.h used to be included everywhere throughout the project, SomeClass::GetMessage was actually known to the compiler and linker as GetMessageW all along! Aaargh! Good thing this problem was noticed pre-release.

Since, except for the tiny main source file, there was no direct inclusion of windows.h, I used #ifdef and #error to identify the culpable third-party header - it was boost/thread/recursive_mutex.hpp - and put an #undef GetMessage after it. It's not pretty but it works for now. Sigh. Unfortunately, there is no way to be sure that this was the only name that got messed up by the defines in windows.h.
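Roughly what happened, with made-up names (SomeClass here is a hypothetical stand-in, not the real project):

// In a UNICODE build, winuser.h (pulled in by windows.h) does roughly:
//   #define GetMessage GetMessageW   (GetMessageA in an ANSI build)
// so any later declaration of a method called GetMessage is silently renamed.

#include <windows.h>

struct SomeClass {
    // With the macro active, the compiler actually sees GetMessageW here --
    // and in every caller that included windows.h before this header.
    const char* GetMessage() const;
};

// If the .cpp that defines SomeClass::GetMessage is compiled WITHOUT
// windows.h in scope, it defines plain GetMessage, the names no longer
// match, and the linker reports a missing SomeClass::GetMessageW.

// The stopgap described above: neutralize the macro right after the
// offending third-party include.
#undef GetMessage
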

Conclusion: Windows.h does not just do some evil things. Windows.h IS evil.
