40% speedup on artificial programs that do nothing but call realloc(),
and about a 3-4% speedup on starting kpresenter-1.5.0 and
loading a 16-slide presentation.
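For reference, a minimal realloc-only microbenchmark of the kind
referred to above might look like this (a sketch, not the actual
program that was measured):

    #include <stdlib.h>

    int main(void)
    {
       char* p = NULL;
       /* essentially all time is spent inside realloc() */
       for (int i = 1; i <= 100000; i++) {
          p = realloc(p, i);
          if (p == NULL) return 1;
       }
       free(p);
       return 0;
    }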
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5880
use an MMX register (which is the same thing in disguise), since MMX
loads/stores are guaranteed to be the identity. This should fix
failures of this test on x86-linux.
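The idea is roughly the following (a sketch, assuming the test copies
64-bit values; the function name is made up):

    #include <mmintrin.h>

    /* Copy 64 bits through an MMX register.  Unlike an x87
       load/store pair, movq is the identity on every bit
       pattern, so the copy cannot perturb the value. */
    static void copy64_via_mmx(void* dst, const void* src)
    {
       __m64 t = *(const __m64*)src;   /* movq load  */
       *(__m64*)dst = t;               /* movq store */
       _mm_empty();                    /* reset FP/MMX state (emms) */
    }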
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5843
stores of char/short/int/int64/double at random offsets and hence
alignments in an array. The test is written so that the computation
also derives the expected V bits, and hence can check whether the
actual ones appear correct.
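The core of such a test might look like the following sketch
(illustrative only; the real test also tracks the expected V bits for
each byte):

    #include <stdlib.h>

    int main(void)
    {
       unsigned char a[1024];
       for (int i = 0; i < 100000; i++) {
          /* random offset, hence deliberately random alignment */
          int off = rand() % (int)(sizeof a - 8);
          switch (rand() % 5) {
             case 0: *(char*)     (a + off) = 1;   break;
             case 1: *(short*)    (a + off) = 2;   break;
             case 2: *(int*)      (a + off) = 3;   break;
             case 3: *(long long*)(a + off) = 4;   break;
             case 4: *(double*)   (a + off) = 5.0; break;
          }
       }
       return 0;
    }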
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5811
on PPC32 now but break it on the other platforms. Julian will commit a
change to ensure the 32-bit floats are copied through the FP regs on all
platforms to make the broken ones work again.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5808
Previously they were:
  noaccess, writable, readable, other
Now they are:
  noaccess, undefined, defined, partdefined
As a result, the following names:
  make_writable, make_readable,
  check_writable, check_readable, check_defined
have become:
  make_mem_undefined, make_mem_defined,
  check_mem_is_addressable, check_mem_is_defined, check_value_is_defined
(and likewise for the upper-case versions for client request macros).
The old MAKE_* and CHECK_* macros still work for backwards compatibility.
This is much better, because the old names were subtly misleading. For
example:
- "readable" really meant "readable and writable".
- "writable" really meant "writable and maybe readable, depending on how
the read value is used".
- "check_writable" really meant "check writable or readable"
The new names avoid these problems.
The recently-added macro which was called MAKE_DEFINED is now
MAKE_MEM_DEFINED_IF_ADDRESSABLE.
I also corrected the spelling of "addressable" in numerous places in
memcheck.h.
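Usage under the new scheme looks like this (illustrative only; the
include path assumes an installed Valgrind):

    #include <stdlib.h>
    #include <valgrind/memcheck.h>

    int main(void)
    {
       char* buf = malloc(64);
       VALGRIND_MAKE_MEM_UNDEFINED(buf, 64);       /* was VALGRIND_MAKE_WRITABLE  */
       VALGRIND_CHECK_MEM_IS_ADDRESSABLE(buf, 64); /* was VALGRIND_CHECK_WRITABLE */
       buf[0] = 'x';
       VALGRIND_CHECK_VALUE_IS_DEFINED(buf[0]);    /* was VALGRIND_CHECK_DEFINED  */
       VALGRIND_MAKE_MEM_DEFINED(buf, 64);         /* was VALGRIND_MAKE_READABLE  */
       free(buf);
       return 0;
    }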
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5802
Memcheck, replacing the 9-bits-per-byte shadow memory representation with a
2-bits-per-byte representation (plus possibly a little more on the side) by
taking advantage of the fact that extremely few memory bytes are partially
defined.
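To make the encoding concrete, here is a compilable sketch of the
scheme (names and layout are illustrative, not Memcheck's actual
internals):

    #include <stdint.h>

    /* The four per-byte states, in two bits.  Partially-defined
       bytes are rare, so their full 8 V bits can live in a small
       side table (the "little more on the side"; elided here). */
    enum { VA_NOACCESS = 0, VA_UNDEFINED   = 1,
           VA_DEFINED  = 2, VA_PARTDEFINED = 3 };

    /* Four 2-bit entries per shadow byte: 2 bits of shadow per
       byte of memory, versus 9 bits in the old representation. */
    static uint8_t shadow[1u << 16];   /* toy 256kB address range */

    static unsigned get_vabits2(uint32_t a)
    {
       return (shadow[a >> 2] >> ((a & 3) * 2)) & 3;
    }

    static void set_vabits2(uint32_t a, unsigned va)
    {
       unsigned sh = (a & 3) * 2;
       shadow[a >> 2] = (uint8_t)((shadow[a >> 2] & ~(3u << sh))
                                  | ((va & 3u) << sh));
    }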
For the SPEC2k benchmarks with "test" inputs, this speeds up Memcheck by a
(geometric mean) factor of 1.20, and reduces the size of shadow memory by a
(geometric mean) factor of 4.26.
At the same time, Addrcheck is removed. It hadn't worked for quite some
time, and with these improvements in Memcheck its raisons d'être have
shrivelled so much that it's not worth the effort to keep around. Hooray!
Nb: this code hasn't been tested on PPC. If things go wrong, look first in
the fast stack-handling functions (eg. mc_new_mem_stack_160,
MC_(helperc_MAKE_STACK_UNINIT)).
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5791
* test all wrapped-function arities from 0 to 12
* try hard to run both callers and callees out of integer registers,
so as to detect problems where the CALL_FN_* macros do not
properly save registers around the call (an illustrative
wrapper follows)
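A sketch of the kind of wrapper these tests exercise, for a
hypothetical two-argument function foo in the main executable (foo is
made up; the macros are the real ones from valgrind.h):

    #include "valgrind.h"

    /* Z-encoded soname NONE = the unnamed main executable. */
    int I_WRAP_SONAME_FN_ZU(NONE, foo)(int a, int b)
    {
       int    r;
       OrigFn fn;
       VALGRIND_GET_ORIG_FN(fn);
       CALL_FN_W_WW(r, fn, a, b);   /* must preserve regs correctly */
       return r;
    }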
This will cause the regtests to fail to build on all non-x86
platforms. Will fix shortly.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5747
For each byte in the range, if the byte is addressable, mark it as
initialised, but if it isn't addressable, leave it alone. So it's
like a version of make_readable which doesn't alter addressability.
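In pseudocode (the two helpers are stand-ins for Memcheck's internals;
their names are made up):

    #include <stddef.h>

    extern int  is_addressable(unsigned char* a);     /* assumed */
    extern void make_byte_defined(unsigned char* a);  /* assumed */

    /* For each byte in [base, base+len): if addressable, mark it
       defined; if not, leave it entirely alone. */
    void make_defined_if_addressable(unsigned char* base, size_t len)
    {
       for (size_t i = 0; i < len; i++)
          if (is_addressable(base + i))
             make_byte_defined(base + i);
    }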
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5736
it zeroes out that area (as a result of one of the mmaps) and the
program consequently goes into an infinite loop. Change the map sizes
to just one page to avoid that.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5616
to the extent needed to make ppc32 work.
* As a result, remove the replacements for glibc's floor/ceil fns on
ppc32/64, since vex can now correctly simulate the real ones.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5605