Memcheck: a heavyweight memory checker

To use this tool, you may specify --tool=memcheck on the Valgrind
command line. You don't have to, though, since Memcheck is the default
tool.

Kinds of bugs that Memcheck can find

Memcheck is Valgrind's heavyweight memory checking tool. All
reads and writes of memory are checked, and calls to
malloc/new/free/delete are intercepted. As a result, Memcheck can detect
the following problems:

- Use of uninitialised memory
- Reading/writing memory after it has been free'd
- Reading/writing off the end of malloc'd blocks
- Reading/writing inappropriate areas on the stack
- Memory leaks -- where pointers to malloc'd blocks are lost forever
- Mismatched use of malloc/new/new[] vs free/delete/delete[]
- Overlapping src and dst pointers in memcpy() and related functions

Command-line flags specific to Memcheck

--leak-check=<no|summary|yes|full>

When enabled, search for memory leaks when the client
program finishes. A memory leak means a malloc'd block which has
not yet been free'd, but to which no pointer can be found. Such a
block can never be free'd by the program, since no pointer to it
exists. If set to summary, it says how many
leaks occurred. If set to full or
yes, it gives details of each individual
leak.

--show-reachable=<yes|no>

When disabled, the memory leak detector only shows blocks
for which it cannot find a pointer to at all, or it can only find
a pointer to the middle of. These blocks are prime candidates for
memory leaks. When enabled, the leak detector also reports on
blocks which it could find a pointer to. Your program could, at
least in principle, have freed such blocks before exit. Contrast
this to blocks for which no pointer, or only an interior pointer
could be found: they are more likely to indicate memory leaks,
because you do not actually have a pointer to the start of the
block which you can hand to free, even if you
wanted to.

--leak-resolution=<low|med|high>

When doing leak checking, this flag determines how willing
memcheck is to consider different backtraces to
be the same. When set to low, only the first
two entries need match. When med, four entries
have to match. When high, all entries need to
match.

For hardcore leak debugging, you probably want to use
--leak-resolution=high together with a large --num-callers value. Note
however that this can give an overwhelming amount of information,
which is why the defaults are 4 callers and low-resolution
matching.

Note that the --leak-resolution setting does not affect Memcheck's
ability to find
leaks. It only changes how the results are presented.

--freelist-vol=<number>

When the client program releases memory using
free (in C) or delete
(C++), that memory is not immediately made
available for re-allocation. Instead, it is marked inaccessible
and placed in a queue of freed blocks. The purpose is to defer as
long as possible the point at which freed-up memory comes back
into circulation. This increases the chance that
memcheck will be able to detect invalid
accesses to blocks for some significant period of time after they
have been freed.

This flag specifies the maximum total size, in bytes, of the
blocks in the queue. The default value is ten million bytes.
Increasing this increases the total amount of memory used by
memcheck but may detect invalid uses of freed
blocks which would otherwise go undetected.
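
For illustration, a hypothetical sketch of the kind of use-after-free
access that a larger freed-blocks queue makes more likely to be caught:

#include <stdlib.h>

int main(void)
{
    int *p = malloc(16 * sizeof(int));
    p[0] = 42;
    free(p);
    /* Invalid read: the block sits in Memcheck's queue of freed
       blocks, so this access is reported as a read of free'd memory. */
    return p[0];
}
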
--workaround-gcc296-bugs=<yes|no>

When enabled, assume that reads and writes some small
distance below the stack pointer are due to bugs in gcc 2.96, and
does not report them. The "small distance" is 256 bytes by
default. Note that gcc 2.96 is the default compiler on some ancient
Linux distributions (RedHat 7.X) and so you may need to use this
flag. Do not use it if you do not have to, as it can cause real
errors to be overlooked. A better alternative is to use a more
recent gcc/g++ in which this bug is fixed.

You may also need to use this flag when working with
gcc/g++ 3.X or 4.X on 32-bit PowerPC Linux. This is because
gcc/g++ generates code which occasionally accesses below the
stack pointer, particularly for floating-point to/from integer
conversions. This is in violation of the 32-bit PowerPC ELF
specification, which makes no provision for locations below the
stack pointer to be accessible.

--partial-loads-ok=<yes|no>

Controls how memcheck handles word-sized,
word-aligned loads from addresses for which some bytes are
addressable and others are not. When yes, such
loads do not produce an address error. Instead, loaded bytes
originating from illegal addresses are marked as uninitialised, and
those corresponding to legal addresses are handled in the normal
way.

When no, loads from partially invalid
addresses are treated the same as loads from completely invalid
addresses: an illegal-address error is issued, and the resulting
bytes are marked as initialised.

Note that code that behaves in this way is in violation of
the ISO C/C++ standards, and should be considered broken. If
at all possible, such code should be fixed. This flag should be
used only as a last resort.

--undef-value-errors=<yes|no>

Controls whether Memcheck detects
dangerous uses of undefined values. Set this to
no if you don't like seeing undefined value
errors; it also has the side effect of speeding
memcheck up somewhat.

--malloc-fill=<hexnumber>

Fills blocks allocated
by malloc,
new, etc, but not
by calloc, with the specified
byte. This can be useful when trying to shake out obscure
memory corruption problems. The allocated area is still
regarded by Memcheck as undefined -- this flag only affects its
contents.

--free-fill=<hexnumber>

Fills blocks freed
by free,
delete, etc, with the
specified byte. This can be useful when trying to shake out
obscure memory corruption problems. The freed area is still
regarded by Memcheck as not valid for access -- this flag only
affects its contents.
Explanation of error messages from Memcheck

Despite considerable sophistication under the hood, Memcheck can
only really detect two kinds of errors: use of illegal addresses, and
use of undefined values. Nevertheless, this is enough to help you
discover all sorts of memory-management problems in your code. This
section presents a quick summary of what error messages mean. The
precise behaviour of the error-checking machinery is described later,
in the section on Memcheck's checking machinery.

Illegal read / Illegal write errors

This happens when your program reads or writes memory at a place
which Memcheck reckons it shouldn't. In one reported example, the program did a
4-byte read at address 0xBFFFF0E0, somewhere within the system-supplied
library libpng.so.2.1.0.9, which was called from somewhere else in the
same library, called from line 326 of qpngio.cpp,
and so on.Memcheck tries to establish what the illegal address might relate
to, since that's often useful. So, if it points into a block of memory
which has already been freed, you'll be informed of this, and also where
the block was free'd at. Likewise, if it should turn out to be just off
the end of a malloc'd block, a common result of off-by-one errors in
array subscripting, you'll be informed of this fact, and also where the
block was malloc'd.

In the example above, Memcheck can't identify the address. Actually
the address is on the stack, but, for some reason, this is not a valid
stack address -- it is below the stack pointer and that isn't allowed.
In this particular case it's probably caused by gcc generating invalid
code, a known bug in some ancient versions of gcc.

Note that Memcheck only tells you that your program is about to
access memory at an illegal address. It can't stop the access from
happening. So, if your program makes an access which normally would
result in a segmentation fault, your program will still suffer the same
fate -- but you will get a message from Memcheck immediately prior to
this. In this particular example, reading junk on the stack is
non-fatal, and the program stays alive.
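
As a further illustration of the off-by-one case mentioned above, a
hypothetical sketch:

#include <stdlib.h>

int main(void)
{
    int *a = malloc(10 * sizeof(int));
    /* Off-by-one: valid indices are 0..9, so this write is one int
       past the end of the block. Memcheck reports an invalid write
       and says where the block was malloc'd. */
    a[10] = 0;
    free(a);
    return 0;
}
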
Use of uninitialised values

An uninitialised-value use error is reported when your program
uses a value which hasn't been initialised -- in other words, is
undefined. Here, the undefined value is used somewhere inside the
printf() machinery of the C library. This error was reported when
running the following small program:
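
(sketched here; the variable name and format string are illustrative:)

#include <stdio.h>

int main(void)
{
    int x;                  /* never initialised */
    printf("x = %d\n", x);  /* complaint arises inside printf, where
                               the value of x is actually examined */
    return 0;
}
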
It is important to understand that your program can copy around
junk (uninitialised) data as much as it likes. Memcheck observes this
and keeps track of the data, but does not complain. A complaint is
issued only when your program attempts to make use of uninitialised
data. In this example, x is uninitialised. Memcheck observes the value
being passed to _IO_printf and thence to
_IO_vfprintf, but makes no comment. However,
_IO_vfprintf has to examine the value of
x so it can turn it into the
corresponding ASCII string, and it is at this point that Memcheck
complains.

Sources of uninitialised data tend to be:

- Local variables in procedures which have not been initialised,
as in the example above.

- The contents of malloc'd blocks, before you write something
there. In C++, the new operator is a wrapper round malloc, so if
you create an object with new, its fields will be uninitialised
until you (or the constructor) fill them in.

Illegal frees

Memcheck keeps track of the blocks allocated by your program with
malloc/new, so it knows exactly whether or not the argument to
free/delete is legitimate. Here, the test program has freed the
same block twice. As with the illegal read/write errors, Memcheck
attempts to make sense of the address free'd. If, as here, the address
is one which has previously been freed, you will be told that -- making
duplicate frees of the same block easy to spot.
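
A hypothetical sketch of a double free of the kind described:

#include <stdlib.h>

int main(void)
{
    void *block = malloc(177);
    free(block);
    free(block);   /* invalid free: the block was already freed above */
    return 0;
}
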
When a block is freed with an inappropriate deallocation function

Memcheck also reports cases where, for example, a block allocated with
new[] has wrongly been deallocated with free. In C++ it's important to
deallocate memory in a
way compatible with how it was allocated. The deal is:

- If allocated with malloc, calloc, realloc, valloc or memalign, you
  must deallocate with free.
- If allocated with new[], you must deallocate with delete[].
- If allocated with new, you must deallocate with delete.

The worst thing is that on Linux apparently it doesn't matter if
you do mix these up, but the same program may then crash on a
different platform, Solaris for example. So it's best to fix it
properly. According to the KDE folks "it's amazing how many C++
programmers don't know this".

The reason behind the requirement is as follows. In some C++
implementations, delete[] must be used for
objects allocated by new[] because the compiler
stores the size of the array and the pointer-to-member to the
destructor of the array's content just before the pointer actually
returned. This implies a variable-sized overhead in what's returned
by new or new[].

Passing system call parameters with inadequate read/write permissions

Memcheck checks all parameters to system calls:

- It checks all the direct parameters themselves.
- Also, if a system call needs to read from a buffer provided by your
  program, Memcheck checks that the entire buffer is addressable and
  has valid data, i.e., it is readable.
- Also, if the system call needs to write to a user-supplied buffer,
  Memcheck checks that the buffer is addressable.

After the system call, Memcheck updates its tracked information to
precisely reflect any changes in memory permissions caused by the system
call.

Here's an example of two system calls with invalid parameters:
#include <stdlib.h>
#include <unistd.h>
int main( void )
{
  char* arr  = malloc(10);
  int*  arr2 = malloc(sizeof(int));
  write( 1 /* stdout */, arr, 10 );
  exit(arr2[0]);
}

You get complaints from Memcheck because the program has (a) tried to write uninitialised junk
from the malloc'd block to the standard output, and (b) passed an
uninitialised value to exit. Note that the first
error refers to the memory pointed to by
buf (not
buf itself), but the second error
refers directly to exit's argument
arr2[0].

Overlapping source and destination blocks

The following C library functions copy some data from one
memory block to another (or something similar):
memcpy(),
strcpy(),
strncpy(),
strcat(),
strncat().
The blocks pointed to by their src and
dst pointers aren't allowed to overlap.
Memcheck checks for this.
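
A hypothetical sketch of an overlapping copy that Memcheck would flag:

#include <string.h>

int main(void)
{
    char buf[32] = "some text to shuffle about";
    /* src and dst overlap, so Memcheck reports an overlap error in
       memcpy; memmove would be the correct choice here. */
    memcpy(buf + 1, buf, 20);
    return 0;
}
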
You don't want the two blocks to overlap because one of them could
get partially overwritten by the copying.

You might think that Memcheck is being overly pedantic reporting
this in the case where dst is less than
src. For example, the obvious way to
implement memcpy() is by copying from the first
byte to the last. However, the optimisation guides of some
architectures recommend copying from the last byte down to the first.
Also, some implementations of memcpy() zero
dst before copying, because zeroing the
destination's cache line(s) can improve performance.

In addition, for many of these functions, the POSIX standards
have wording along the lines "If copying takes place between objects
that overlap, the behavior is undefined." Hence overlapping copies
violate the standard.

The moral of the story is: if you want to write truly portable
code, don't make any assumptions about the language
implementation.

Memory leak detection

Memcheck keeps track of all memory blocks issued in response to
calls to malloc/calloc/realloc/new. So when the program exits, it knows
which blocks have not been freed.
If --leak-check is set appropriately, for each
remaining block, Memcheck scans the entire address space of the process,
looking for pointers to the block. Each block fits into one of the
three following categories.

- Still reachable: A pointer to the start of the block is found.
This usually indicates programming sloppiness. Since the block is
still pointed at, the programmer could, at least in principle, free
it before program exit. Because these are very common and arguably
not a problem, Memcheck won't report such blocks unless
--show-reachable=yes is specified.

- Possibly lost, or "dubious": A pointer to the interior of the
block is found. The pointer might originally have pointed to the
start and have been moved along, or it might be entirely unrelated.
Memcheck deems such a block as "dubious", because it's unclear
whether or not a pointer to it still exists.

- Definitely lost, or "leaked": The worst outcome is that no
pointer to the block can be found. The block is classified as
"leaked", because the programmer could not possibly have freed it at
program exit, since no pointer to it exists. This is likely a
symptom of having lost the pointer at some earlier point in the
program.

For each block mentioned, Memcheck will also tell you where the
block was allocated. It cannot tell you how or why the pointer to a
leaked block has been lost; you have to work that out for yourself. In
general, you should attempt to ensure your programs do not have any
leaked or dubious blocks at exit.

For example, one leak report might describe a simple case of a single
8-byte block that has been definitely lost, while another mentions both "direct"
and "indirect" leaks. The distinction is that a direct leak is a block
which has no pointers to it. An indirect leak is a block which is only
pointed to by other leaked blocks. Both kinds of leak are bad.

The precise area of memory in which Memcheck searches for pointers
is: all naturally-aligned machine-word-sized words found in memory
that Memcheck's records indicate to be both accessible and initialised.
Writing suppression files

The basic suppression format is described in the core Valgrind
documentation. The suppression-type (second) line should have the form
"Memcheck:<suppression-type>". The Memcheck suppression types are as
follows:

- Value1, Value2, Value4, Value8, Value16, meaning an
  uninitialised-value error when using a value of 1, 2, 4, 8 or 16
  bytes.
- Cond (or its old name, Value0), meaning use of an uninitialised CPU
  condition code.
- Addr1, Addr2, Addr4, Addr8, Addr16, meaning an invalid address during
  a memory access of 1, 2, 4, 8 or 16 bytes respectively.
- Jump, meaning a jump to an unaddressable location error.
- Param, meaning an invalid system call parameter error.
- Free, meaning an invalid or mismatching free.
- Overlap, meaning a src / dst overlap in memcpy() or a similar
  function.
- Leak, meaning a memory leak.

Param errors have an extra
information line at this point, which is the name of the offending
system call parameter. No other error kinds have this extra
line.

The first line of the calling context: for Value and Addr errors,
it is either the name of the function in which the error occurred, or,
failing that, the full path of the .so file or executable containing the
error location. For Free errors, it is the name of the function doing the
freeing (eg, free,
__builtin_vec_delete, etc). For Overlap errors, it is
the name of the function with the overlapping arguments (eg.
memcpy(), strcpy(),
etc).

Lastly, there's the rest of the calling context.
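
For illustration, a complete suppression entry following this format
might look like the sketch below; the suppression name and the two
calling-context lines are invented:

{
   hypothetical-libpng-read-suppression
   Memcheck:Addr4
   fun:png_read_row
   obj:/usr/lib/libpng.so.2.1.0.9
}
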
Details of Memcheck's checking machinery

Read this section if you want to know, in detail, exactly
what and how Memcheck is checking.

Valid-value (V) bits

It is simplest to think of Memcheck implementing a synthetic CPU
which is identical to a real CPU, except for one crucial detail. Every
bit (literally) of data processed, stored and handled by the real CPU
has, in the synthetic CPU, an associated "valid-value" bit, which says
whether or not the accompanying bit has a legitimate value. In the
discussions which follow, this bit is referred to as the V (valid-value)
bit.

Each byte in the system therefore has 8 V bits which follow it
wherever it goes. For example, when the CPU loads a word-size item (4
bytes) from memory, it also loads the corresponding 32 V bits from a
bitmap which stores the V bits for the process' entire address space.
If the CPU should later write the whole or some part of that value to
memory at a different address, the relevant V bits will be stored back
in the V-bit bitmap.

In short, each bit in the system has an associated V bit, which
follows it around everywhere, even inside the CPU. Yes, all the CPU's
registers (integer, floating point, vector and condition registers) have
their own V bit vectors.

Copying values around does not cause Memcheck to check for, or
report on, errors. However, when a value is used in a way which might
conceivably affect the outcome of your program's computation, the
associated V bits are immediately checked. If any of these indicate
that the value is undefined, an error is reported.

Here's an (admittedly nonsensical) example:
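
(sketched here with illustrative array sizes; a[] is deliberately
never initialised:)

int main(void)
{
    int i, j;
    int a[10], b[10];    /* a[] is never initialised */

    for (i = 0; i < 10; i++) {
        j = a[i];        /* junk is copied around, but never "used" */
        b[i] = j;
    }
    return 0;
}
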
Memcheck emits no complaints about this, since it merely copies
uninitialised values from a[] into
b[], and doesn't use them in a way which could
affect the behaviour of the program. However, if
the loop is changed to:
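
(a sketch, again with illustrative sizes and constants:)

#include <stdio.h>

int main(void)
{
    int i, j = 0;
    int a[10];           /* never initialised */

    for (i = 0; i < 10; i++) {
        j += a[i];       /* still no complaint here */
    }
    if (j == 77)         /* complaint: the condition depends on
                            undefined values */
        printf("hello there\n");
    return 0;
}
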
then Memcheck will complain, at the
if, that the condition depends on
uninitialised values. Note that it doesn't complain
at the j += a[i];, since at that point the
undefinedness is not "observable". It's only when a decision has to be
made as to whether or not to do the printf -- an
observable action of your program -- that Memcheck complains.

Most low level operations, such as adds, cause Memcheck to use the
V bits for the operands to calculate the V bits for the result. Even if
the result is partially or wholly undefined, it does not
complain.

Checks on definedness only occur in three places: when a value is used
to generate a memory address, when a control flow decision needs to be
made, and when a system call is detected; in that last case, Memcheck
checks the definedness of parameters as required.

If a check should detect undefinedness, an error message is
issued. The resulting value is subsequently regarded as well-defined.
To do otherwise would give long chains of error messages. In other
words, once Memcheck reports an undefined value error, it tries to
avoid reporting further errors derived from that same undefined
value.

This sounds overcomplicated. Why not just check all reads from
memory, and complain if an undefined value is loaded into a CPU
register? Well, that doesn't work well, because perfectly legitimate C
programs routinely copy uninitialised values around in memory, and we
don't want endless complaints about that. Here's the canonical example.
Consider a struct like this:
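
(field names chosen for illustration:)

struct S { int x; char c; };

int main(void)
{
    struct S s1, s2;
    s1.x = 42;
    s1.c = 'z';
    s2 = s1;   /* copies the whole struct, padding bytes included */
    return 0;
}
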
The question to ask is: how large is struct S,
in bytes? An int is 4 bytes and a
char one byte, so perhaps a struct
S occupies 5 bytes? Wrong. All non-toy compilers we know
of will round the size of struct S up to a whole
number of words, in this case 8 bytes. Not doing this forces compilers
to generate truly appalling code for accessing arrays of
struct S's on some architectures.

So s1 occupies 8 bytes, yet only 5 of them will
be initialised. For the assignment s2 = s1, gcc
generates code to copy all 8 bytes wholesale into s2
without regard for their meaning. If Memcheck simply checked values as
they came out of memory, it would yelp every time a structure assignment
like this happened. So the more complicated behaviour described above
is necessary. This allows gcc to copy
s1 into s2 any way it likes, and a
warning will only be emitted if the uninitialised values are later
used.

Valid-address (A) bits

Notice that the previous subsection describes how the validity of
values is established and maintained without having to say whether the
program does or does not have the right to access any particular memory
location. We now consider the latter question.

As described above, every bit in memory or in the CPU has an
associated valid-value (V) bit. In addition, all bytes in memory, but
not in the CPU, have an associated valid-address (A) bit. This
indicates whether or not the program can legitimately read or write that
location. It does not give any indication of the validity of the data
at that location -- that's the job of the V bits -- only whether or not
the location may be accessed.

Every time your program reads or writes memory, Memcheck checks
the A bits associated with the address. If any of them indicate an
invalid address, an error is emitted. Note that the reads and writes
themselves do not change the A bits, only consult them.

So how do the A bits get set/cleared? Like this:

- When the program starts, all the global data areas are
marked as accessible.

- When the program does malloc/new, the A bits for exactly the
area allocated, and not a byte more, are marked as accessible. Upon
freeing the area the A bits are changed to indicate
inaccessibility.

- When the stack pointer register (SP) moves
up or down, A bits are set. The rule is that the area from
SP up to the base of the stack is marked as
accessible, and below SP is inaccessible. (If
that sounds illogical, bear in mind that the stack grows down, not
up, on almost all Unix systems, including GNU/Linux.) Tracking
SP like this has the useful side-effect that the
section of stack used by a function for local variables etc is
automatically marked accessible on function entry and inaccessible
on exit.

- When doing system calls, A bits are changed appropriately.
For example, mmap
magically makes files appear in the process'
address space, so the A bits must be updated if mmap
succeeds.

- Optionally, your program can tell Memcheck about such changes
explicitly, using the client request mechanism described
above.

Putting it all together

Memcheck's checking machinery can be summarised as
follows:

- Each byte in memory has 8 associated V (valid-value) bits,
saying whether or not the byte has a defined value, and a single A
(valid-address) bit, saying whether or not the program currently has
the right to read/write that address.

- When memory is read or written, the relevant A bits are
consulted. If they indicate an invalid address, Memcheck emits an
Invalid read or Invalid write error.

- When memory is read into the CPU's registers, the relevant V
bits are fetched from memory and stored in the simulated CPU. They
are not consulted.

- When a register is written out to memory, the V bits for that
register are written back to memory too.

- When values in CPU registers are used to generate a memory
address, or to determine the outcome of a conditional branch, the V
bits for those values are checked, and an error emitted if any of
them are undefined.

- When values in CPU registers are used for any other purpose,
Memcheck computes the V bits for the result, but does not check
them.

- Once the V bits for a value in the CPU have been checked, they
are then set to indicate validity. This avoids long chains of
errors.

- When values are loaded from memory, Memcheck checks the A bits
for that location and issues an illegal-address warning if needed.
In that case, the V bits loaded are forced to indicate Valid,
despite the location being invalid.

This apparently strange choice reduces the amount of confusing
information presented to the user. It avoids the unpleasant
phenomenon in which memory is read from a place which is both
unaddressable and contains invalid values, and, as a result, you get
not only an invalid-address (read/write) error, but also a
potentially large set of uninitialised-value errors, one for every
time the value is used.

There is a hazy boundary case to do with multi-byte loads from
addresses which are partially valid and partially invalid. See
the description of the --partial-loads-ok flag for details.
Memcheck intercepts calls to malloc, calloc, realloc, valloc,
memalign, free, new, new[], delete and delete[]. The behaviour you get
is:

- malloc/new/new[]: the returned memory is marked as addressable
but not having valid values. This means you have to write to it
before you can read it.

- calloc: returned memory is marked both addressable and valid,
since calloc clears the area to zero.

- realloc: if the new size is larger than the old, the new
section is addressable but invalid, as with malloc. If the new size
is smaller, the dropped-off section is marked
as unaddressable. You may only pass to realloc a pointer previously
issued to you by malloc/calloc/realloc.

- free/delete/delete[]: you may only pass to these functions a
pointer previously issued to you by the corresponding allocation
function. Otherwise, Memcheck complains. If the pointer is indeed
valid, Memcheck marks the entire area it points at as unaddressable,
and places the block in the freed-blocks-queue. The aim is to defer
as long as possible reallocation of this block. Until that happens,
all attempts to access it will elicit an invalid-address error, as
you would hope.

Client Requests

The following client requests are defined in
memcheck.h.
See memcheck.h for exact details of their
arguments.

- VALGRIND_MAKE_MEM_NOACCESS,
VALGRIND_MAKE_MEM_UNDEFINED and
VALGRIND_MAKE_MEM_DEFINED.
These mark address ranges as completely inaccessible,
accessible but containing undefined data, and accessible and
containing defined data, respectively. Subsequent errors may
have their faulting addresses described in terms of these
blocks. Returns a "block handle". Returns zero when not run
on Valgrind.

- VALGRIND_MAKE_MEM_DEFINED_IF_ADDRESSABLE.
This is just like VALGRIND_MAKE_MEM_DEFINED but only
affects those bytes that are already addressable.

- VALGRIND_DISCARD: At some point you may
want Valgrind to stop reporting errors in terms of the blocks
defined by the previous three macros. To do this, the above macros
return a small-integer "block handle". You can pass this block
handle to VALGRIND_DISCARD. After doing so,
Valgrind will no longer be able to relate addressing errors to the
user-defined block associated with the handle. The permissions
settings associated with the handle remain in place; this just
affects how errors are reported, not whether they are reported.
Returns 1 for an invalid handle and 0 for a valid handle (although
passing invalid handles is harmless). Always returns 0 when not run
on Valgrind.

- VALGRIND_CHECK_MEM_IS_ADDRESSABLE and
VALGRIND_CHECK_MEM_IS_DEFINED: check immediately
whether or not the given address range has the relevant property,
and if not, print an error message. Also, for the convenience of
the client, returns zero if the relevant property holds; otherwise,
the returned value is the address of the first byte for which the
property is not true. Always returns 0 when not run on
Valgrind.

- VALGRIND_CHECK_VALUE_IS_DEFINED: a quick and easy
way to find out whether Valgrind thinks a particular value
(lvalue, to be precise) is addressable and defined. Prints an error
message if not. Returns no value.

- VALGRIND_DO_LEAK_CHECK: runs the memory
leak detector right now. Is useful for incrementally checking for
leaks between arbitrary places in the program's execution. Returns
no value.

- VALGRIND_COUNT_LEAKS: fills in the four
arguments with the number of bytes of memory found by the previous
leak check to be leaked, dubious, reachable and suppressed. Again,
useful in test harness code, after calling
VALGRIND_DO_LEAK_CHECK.

- VALGRIND_GET_VBITS and
VALGRIND_SET_VBITS: allow you to get and set the
V (validity) bits for an address range. You should probably only
set V bits that you have got with
VALGRIND_GET_VBITS. Only for those who really
know what they are doing.
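
A brief hypothetical sketch of how a program might use some of these
requests (assuming memcheck.h is on the include path):

#include <stdlib.h>
#include "memcheck.h"

int main(void)
{
    char *buf = malloc(64);

    /* Tell Memcheck the first 32 bytes now hold defined data, e.g.
       after they have been filled in by some custom mechanism. */
    VALGRIND_MAKE_MEM_DEFINED(buf, 32);

    /* Check a range immediately; an error is printed if any byte in
       it is undefined. */
    VALGRIND_CHECK_MEM_IS_DEFINED(buf, 32);

    /* Run the leak detector right now instead of waiting for exit. */
    VALGRIND_DO_LEAK_CHECK;

    free(buf);
    return 0;
}
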
Memory Pools: describing and working with custom allocators

Some programs use custom memory allocators, often for performance
reasons. Left to itself, Memcheck is unable to "understand" the
behaviour of custom allocation schemes and so may miss errors and
leaks in your program. What this section describes is a way to give
Memcheck enough of a description of your custom allocator that it can
make at least some sense of what is happening.

There are many different sorts of custom allocator, so Memcheck
attempts to reason about them using a loose, abstract model. We
use the following terminology when describing custom allocation
systems:

- Custom allocation involves a set of independent "memory pools".
Memcheck's notion of a memory pool consists of a single "anchor
address" and a set of non-overlapping "chunks" associated with the
anchor address.

- Typically a pool's anchor address is the address of a
book-keeping "header" structure.

- Typically the pool's chunks are drawn from a contiguous
"superblock" acquired through the system malloc() or mmap().

Keep in mind that the last two points above say "typically": the
Valgrind mempool client request API is intentionally vague about the
exact structure of a mempool. There is no specific mention made of
headers or superblocks. Nevertheless, picturing a book-keeping header
(whose address is the pool's anchor) alongside a superblock out of
which the pool's chunks are carved may help elucidate the intention of
the terms in the API.

Note that the header and the superblock may be contiguous or
discontiguous, and there may be multiple superblocks associated with a
single header; such variations are opaque to Memcheck. The API
only requires that your allocation scheme can present sensible values
of "pool", "addr" and "size".
Typically, before making client requests related to mempools, a client
program will have allocated such a header and superblock for their
mempool, and marked the superblock NOACCESS using the
VALGRIND_MAKE_MEM_NOACCESS client request.
When dealing with mempools, the goal is to maintain a particular
invariant condition: that Memcheck believes the unallocated portions
of the pool's superblock (including redzones) are NOACCESS. To
maintain this invariant, the client program must ensure that the
superblock starts out in that state; Memcheck cannot make it so, since
Memcheck never explicitly learns about the superblock of a pool, only
the allocated chunks within the pool.
Once the header and superblock for a pool are established and properly
marked, there are a number of client requests programs can use to
inform Memcheck about changes to the state of a mempool:

VALGRIND_CREATE_MEMPOOL(pool, rzB, is_zeroed):
This request registers the address "pool" as the anchor address
for a memory pool. It also provides a size "rzB", specifying how
large the redzones placed around chunks allocated from the pool
should be. Finally, it provides an "is_zeroed" flag that specifies
whether the pool's chunks are zeroed (more precisely: defined)
when allocated.
Upon completion of this request, no chunks are associated with the
pool. The request simply tells Memcheck that the pool exists, so that
subsequent calls can refer to it as a pool.
VALGRIND_DESTROY_MEMPOOL(pool):
This request tells Memcheck that a pool is being torn down. Memcheck
then removes all records of chunks associated with the pool, as well
as its record of the pool's existence. While destroying its records of
a mempool, Memcheck resets the redzones of any live chunks in the pool
to NOACCESS.
VALGRIND_MEMPOOL_ALLOC(pool, addr, size):
This request informs Memcheck that a "size"-byte chunk has been
allocated at "addr", and associates the chunk with the specified
"pool". If the pool was created with nonzero "rzB" redzones, Memcheck
will mark the "rzB" bytes before and after the chunk as NOACCESS. If
the pool was created with the "is_zeroed" flag set, Memcheck will mark
the chunk as DEFINED, otherwise Memcheck will mark the chunk as
UNDEFINED.
VALGRIND_MEMPOOL_FREE(pool, addr):
This request informs Memcheck that the chunk at "addr" should no
longer be considered allocated. Memcheck will mark the chunk
associated with "addr" as NOACCESS, and delete its record of the
chunk's existence.
VALGRIND_MEMPOOL_TRIM(pool, addr, size):
This request "trims" the chunks associated with "pool". The request
only operates on chunks associated with "pool". Trimming is formally
defined as:

- All chunks entirely inside the range [addr,addr+size) are preserved.
- All chunks entirely outside the range [addr,addr+size) are discarded,
  as though VALGRIND_MEMPOOL_FREE was called on them.
- All other chunks must intersect with the range [addr,addr+size);
  areas outside the intersection are marked as NOACCESS, as though they
  had been independently freed with VALGRIND_MEMPOOL_FREE.

This is a somewhat rare request, but can be useful in
implementing the type of mass-free operations common in custom
LIFO allocators.

VALGRIND_MOVE_MEMPOOL(poolA, poolB):
This request informs Memcheck that the pool previously anchored at
address "poolA" has moved to anchor address "poolB". This is a rare
request, typically only needed if you realloc() the header of
a mempool. No memory-status bits are altered by this request.

VALGRIND_MEMPOOL_CHANGE(pool, addrA, addrB, size):
This request informs Memcheck that the chunk previously allocated at
address "addrA" within "pool" has been moved and/or resized, and should
be changed to cover the region [addrB,addrB+size). This is a rare
request, typically only needed if you realloc() a superblock or wish
to extend a chunk without changing its memory-status bits.
No memory-status bits are altered by this request.
VALGRIND_MEMPOOL_EXISTS(pool):
This request informs the caller whether or not Memcheck is currently
tracking a mempool at anchor address "pool". It evaluates to 1 when
there is a mempool associated with that address, 0 otherwise. This is a
rare request, only useful in circumstances when client code might have
lost track of the set of active mempools.
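
A compressed, hypothetical sketch of how a toy bump allocator might
describe itself to Memcheck with these requests (the Pool structure and
allocation policy are invented for illustration, and error handling is
omitted):

#include <stdlib.h>
#include "memcheck.h"

#define SUPERBLOCK_SIZE 4096
#define REDZONE         16

typedef struct {            /* book-keeping header; its address is */
    char   *superblock;     /* used as the pool's anchor address   */
    size_t  used;
} Pool;

Pool *pool_new(void)
{
    Pool *p = malloc(sizeof(Pool));
    p->superblock = malloc(SUPERBLOCK_SIZE);
    p->used = 0;
    /* Unallocated superblock space must start out as NOACCESS. */
    VALGRIND_MAKE_MEM_NOACCESS(p->superblock, SUPERBLOCK_SIZE);
    /* Register the header address as the pool's anchor. */
    VALGRIND_CREATE_MEMPOOL(p, REDZONE, /*is_zeroed*/0);
    return p;
}

void *pool_alloc(Pool *p, size_t size)
{
    char *chunk = p->superblock + p->used + REDZONE;
    p->used += size + 2 * REDZONE;
    /* Describe the new chunk; the redzones around it stay NOACCESS. */
    VALGRIND_MEMPOOL_ALLOC(p, chunk, size);
    return chunk;
}

void pool_free(Pool *p, void *addr)
{
    /* Marks the chunk NOACCESS and forgets its existence. */
    VALGRIND_MEMPOOL_FREE(p, addr);
}

void pool_delete(Pool *p)
{
    VALGRIND_DESTROY_MEMPOOL(p);
    free(p->superblock);
    free(p);
}
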
Debugging MPI Parallel Programs with Valgrind

Valgrind supports debugging of distributed-memory applications
which use the MPI message passing standard. This support consists of a
library of wrapper functions for the
PMPI_* interface. When incorporated
into the application's address space, either by direct linking or by
LD_PRELOAD, the wrappers intercept
calls to PMPI_Send,
PMPI_Recv, etc. They then
use client requests to inform Valgrind of memory state changes caused
by the function being wrapped. This reduces the number of false
positives that Memcheck otherwise typically reports for MPI
applications.

The wrappers also take the opportunity to carefully check
size and definedness of buffers passed as arguments to MPI functions, hence
detecting errors such as passing undefined data to
PMPI_Send, or receiving data into a
buffer which is too small.

Unlike most of the rest of Valgrind, the wrapper library is subject to a
BSD-style license, so you can link it into any code base you like.
See the top of auxprogs/libmpiwrap.c
for license details.

Building and installing the wrappers

The wrapper library will be built automatically if possible.
Valgrind's configure script will look for a suitable
mpicc to build it with. This must be
the same mpicc you use to build the
MPI application you want to debug. By default, Valgrind tries
mpicc, but you can specify a
different one by using the configure-time flag
--with-mpicc=. Currently the
wrappers are only buildable with
mpiccs which are based on GNU
gcc or Intel's
icc.

Check that the configure script reports that it has found a usable
mpicc. If it says no, your
mpicc has failed to compile and link
a test MPI2 program.

If the configure test succeeds, continue in the usual way with
make and make
install. The final install tree should then contain
libmpiwrap.so.
Compile up a test MPI program (eg, MPI hello-world) and try
this:

LD_PRELOAD=<install-path>/libmpiwrap.so \
   mpirun [args] $prefix/bin/valgrind ./hello

You should see the wrapper library's starting banner printed,
repeated for every process in the group. If you do not see
these, there is a build/installation problem of some kind.

The MPI functions to be wrapped are assumed to be in an ELF
shared object with soname matching
libmpi.so*. This is known to be
correct at least for Open MPI and Quadrics MPI, and can easily be
changed if required.

Getting started

Compile your MPI application as usual, taking care to link it
using the same mpicc that your
Valgrind build was configured with.
Use the following basic scheme to run your application on Valgrind with
the wrappers engaged:

LD_PRELOAD=<install-path>/libmpiwrap.so \
mpirun [mpirun-args] \
$prefix/bin/valgrind [valgrind-args] \
[application] [app-args]

As an alternative to
LD_PRELOADing
libmpiwrap.so, you can simply link it
to your application if desired. This should not disturb native
behaviour of your application in any way.

Controlling the wrapper library

Environment variable
MPIWRAP_DEBUG is consulted at
startup. The default behaviour is to print a starting banner and then
be relatively quiet.

You can give a list of comma-separated options in
MPIWRAP_DEBUG. These are:

- verbose:
show entries/exits of all wrappers. Also show extra
debugging info, such as the status of outstanding
MPI_Requests resulting
from uncompleted MPI_Irecvs.

- quiet:
opposite of verbose, only print
anything when the wrappers want
to report a detected programming error, or in case of catastrophic
failure of the wrappers.

- warn:
by default, functions which lack proper wrappers
are not commented on, just silently
ignored. This option instead causes a warning to be printed for each unwrapped
function used, up to a maximum of three warnings per function.

- strict:
print an error message and abort the program if
a function lacking a wrapper is used.

If you want to use Valgrind's XML output facility
(--xml=yes), you should pass
quiet in
MPIWRAP_DEBUG so as to get rid of any
extraneous printing from the wrappers.

Abilities and limitations

Functions

All MPI2 functions except
MPI_Wtick,
MPI_Wtime and
MPI_Pcontrol have wrappers. The
first two are not wrapped because they return a
double, and Valgrind's
function-wrap mechanism cannot handle that (it could easily enough be
extended to). MPI_Pcontrol cannot be
wrapped as it has variable arity:
int MPI_Pcontrol(const int level, ...)

Most functions are wrapped with a default wrapper which does
nothing except complain or abort if it is called, depending on
settings in MPIWRAP_DEBUG listed
above. Beyond these defaults, a number of functions have "real",
do-something-useful wrappers.

A few functions such as
PMPI_Address are listed as
HAS_NO_WRAPPER. They have no wrapper
at all as there is nothing worth checking, and giving a no-op wrapper
would reduce performance for no reason.

Note that the wrapper library can itself generate large
numbers of calls to the MPI implementation, especially when walking
complex types. The most common functions called are
PMPI_Extent,
PMPI_Type_get_envelope,
PMPI_Type_get_contents, and
PMPI_Type_free.

Types

MPI-1.1 structured types are supported, and walked exactly.
The currently supported combiners are
MPI_COMBINER_NAMED,
MPI_COMBINER_CONTIGUOUS,
MPI_COMBINER_VECTOR,
MPI_COMBINER_HVECTOR, MPI_COMBINER_INDEXED,
MPI_COMBINER_HINDEXED and
MPI_COMBINER_STRUCT. This should
cover all MPI-1.1 types. The mechanism (function
walk_type) should extend easily to
cover MPI2 combiners.

MPI defines some named structured types
(MPI_FLOAT_INT,
MPI_DOUBLE_INT,
MPI_LONG_INT,
MPI_2INT,
MPI_SHORT_INT,
MPI_LONG_DOUBLE_INT) which are pairs
of some basic type and a C int.
Unfortunately the MPI specification makes it impossible to look inside
these types and see where the fields are. Therefore these wrappers
assume the types are laid out as struct { float val;
int loc; } (for
MPI_FLOAT_INT), etc, and act
accordingly. This appears to be correct at least for Open MPI 1.0.2
and for Quadrics MPI.

If strict is an option specified
in MPIWRAP_DEBUG, the application
will abort if an unhandled type is encountered. Otherwise, the
application will print a warning message and continue.

Some effort is made to mark/check memory ranges corresponding to
arrays of values in a single pass. This is important for performance
since asking Valgrind to mark/check any range, no matter how small,
carries quite a large constant cost. This optimisation is applied to
arrays of primitive types (double,
float,
int,
long, long
long, short,
char, and long
double on platforms where sizeof(long
double) == 8). For arrays of all other types, the
wrappers handle each element individually and so there can be a very
large performance cost.

Writing new wrappers

For the most part the wrappers are straightforward. The only
significant complexity arises with nonblocking receives.

The issue is that MPI_Irecv
specifies the recv buffer and returns immediately, giving a handle
(MPI_Request) for the transaction.
Later the user will have to poll for completion with
MPI_Wait etc, and when the
transaction completes successfully, the wrappers have to paint the
recv buffer. But the recv buffer details are not presented to
MPI_Wait -- only the handle is. The
library therefore maintains a shadow table which associates
uncompleted MPI_Requests with the
corresponding buffer address/count/type. When an operation completes,
the table is searched for the associated address/count/type info, and
memory is marked accordingly.

Access to the table is guarded by a (POSIX pthreads) lock, so as
to make the library thread-safe.

The table is allocated with
malloc and never
freed, so it will show up in leak
checks.

Writing new wrappers should be fairly easy. The source file is
auxprogs/libmpiwrap.c. If possible,
find an existing wrapper for a function of similar behaviour to the
one you want to wrap, and use it as a starting point. The wrappers
are organised in sections in the same order as the MPI 1.1 spec, to
aid navigation. When adding a wrapper, remember to comment out the
definition of the default wrapper in the long list of defaults at the
bottom of the file (do not remove it, just comment it out).

What to expect when using the wrappers

The wrappers should reduce Memcheck's false-error rate on MPI
applications. Because the wrapping is done at the MPI interface,
there will still potentially be a large number of errors reported in
the MPI implementation below the interface. The best you can do is
try to suppress them.

You may also find that the input-side (buffer
length/definedness) checks find errors in your MPI use, for example
passing too short a buffer to
MPI_Recv.

Functions which are not wrapped may increase the false
error rate. A possible approach is to run with
MPIWRAP_DEBUG containing
warn. This will show you functions
which lack proper wrappers but which are nevertheless used. You can
then write wrappers for them.
A known source of potential false errors is the
PMPI_Reduce family of functions, when
using a custom (user-defined) reduction function. In a reduction
operation, each node notionally sends data to a "central point" which
uses the specified reduction function to merge the data items into a
single item. Hence, in general, data is passed between nodes and fed
to the reduction function, but the wrapper library cannot mark the
transferred data as initialised before it is handed to the reduction
function, because all that happens "inside" the
PMPI_Reduce call. As a result you
may see false positives reported in your reduction function.