should be able to access the redzone.
So, when computing fp_min, subtract the redzone.
Currently, only amd64 and ppc64 have a non-zero redzone.
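A minimal sketch of the idea (assuming VG_STACK_REDZONE_SZB carries the
per-arch redzone size; the exact variable names in the unwinder may differ):

    /* Let the unwinder accept frame addresses in the redzone, i.e. up
       to VG_STACK_REDZONE_SZB bytes below the stack pointer.  The
       constant is non-zero only on amd64 and ppc64. */
    Addr fp_min = sp - VG_STACK_REDZONE_SZB;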
Regtested on amd64 and ppc64le, no regression.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15309
The interesting part is the demangled backtrace in the error message.
Suppress the memory allocation/blocks summary, which can differ slightly
depending on the underlying arch/libs.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15308
We want to disable Identical Code Folding for the tools' preload shared
objects to get better backtraces. For GCC 5.1, -fipa-icf is enabled by
default at -O2.
The optimization reduces code size but may disturb
stack unwinding by replacing a function with an equivalent
one that has a different name.
Add a configure check to see if GCC supports -fno-ipa-icf.
If it does then add the flag to AM_CFLAGS_PSO_BASE.
Without this, GCC will notice that some of the preload replacement
functions in vg_replace_strmem are identical and fold them all into one,
picking an arbitrary (existing) function name. This causes backtraces
showing completely unexpected function names.
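To illustrate the folding (hypothetical functions, not actual
vg_replace_strmem.c code):

    /* With -fipa-icf, GCC may notice that these two bodies are
       identical and keep only one copy, making both symbols resolve
       to the same code.  A backtrace through my_strlen_v1 can then
       show my_strlen_v2, or vice versa. */
    unsigned my_strlen_v1 ( const char* s )
    { unsigned n = 0; while (s[n]) n++; return n; }
    unsigned my_strlen_v2 ( const char* s )
    { unsigned n = 0; while (s[n]) n++; return n; }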
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15305
E.g. perf/memrw is improved by 2% to 3% with this patch.
The unwinding code on x86 tries to unwind using
either the %ebp chain or CFI unwinding.
If these 2 techniques fail, it tries to unwind
using FPO (PDB) debug info.
However, unless running wine or similar, there will never be
such FPO/PDB info.
The function VG_(use_FPO_info) is thus called for nothing
at each 'end of stack'. This function scans all the loaded debug info
to find one that has some FPO data, and finds nothing.
With this patch, the unwind code on x86 only calls VG_(use_FPO_info) if
some FPO/PDB info was loaded.
The fact that FPO/PDB info was loaded is cached and updated similarly to
the CFI cache: each time new debug info is loaded, the cached value is
refreshed using the debuginfo generation.
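Schematically (a sketch; any_di_has_FPO is a hypothetical helper standing
for the scan over the loaded debug info):

    /* Cache the answer to "is any FPO/PDB info loaded?".  It stays
       valid as long as no debug info was loaded or unloaded, i.e. as
       long as the debuginfo generation is unchanged. */
    static Bool fpo_info_present = False;
    static UInt fpo_gen_cached   = 0;

    static Bool have_FPO_info ( void )
    {
       UInt gen = VG_(debuginfo_generation)();
       if (gen != fpo_gen_cached) {       /* some di load/unload */
          fpo_gen_cached   = gen;
          fpo_info_present = any_di_has_FPO();
       }
       return fpo_info_present;
    }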
The patch also renames VG_(CF_info_generation)
to VG_(debuginfo_generation), as this generation changes for
any kind of load or unload of debug info, not only for CFI-based debug
info.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15293
On these platforms, implement read_/write_<type> by doing a direct
access, rather than calling a function that reads or writes
byte by byte.
For platforms that do not have efficient unaligned access,
or that do not support unaligned access at all, call the
readUAS_/writeUAS_<type> functions, which work as before.
Currently, direct access is activated only for x86 and amd64.
It is unclear which other platforms support unaligned access
(efficiently).
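Schematically, for one type (a sketch; readUAS_UInt stands for the
pre-existing byte-per-byte reader):

    static inline UInt read_UInt ( const void* data )
    {
    #  if defined(VGA_x86) || defined(VGA_amd64)
       /* these targets handle unaligned loads in hardware */
       return *(const UInt*)data;
    #  else
       /* byte per byte, works for any alignment, as before */
       return readUAS_UInt(data);
    #  endif
    }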
On unwind-intensive code (such as perf/memrw on amd64), this patch
gives up to a 5% improvement.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15290
slightly increases the performance. It also moderately improves
the number of cases where helgrind can provide the stack trace of the
old access (when using the same amount of memory for the OldRef entries).
The patch also provides a new helgrind monitor command to show
the recorded accesses for an address+len, and adds an optional
lock_address argument to the monitor command 'info locks', to show
the info about just that lock.
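For example (illustrative; the addresses are made up, and the command
name accesshistory is an assumption about the new command):

    (gdb) monitor accesshistory 0x8049a20 4
    (gdb) monitor info locks 0x8049b00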
Currently, OldRef entries are maintained in a sparseWA that points
to N entries, as specified by --conflict-cache-size=N.
Each entry (associated with an address) holds the last 5 accesses.
Old entries are recycled in exact LRU order.
But inside an entry, we could have one recent access and 4 very
old accesses that are kept 'alive' because a single thread keeps
accessing the shared address, keeping the whole entry recent.
The attached patch replaces the sparseWA that maintains the OldRef
entries with a hash table.
Each OldRef now also maintains only one single access for an address.
As an OldRef maintains only one access, all the entries are now
strictly in LRU order.
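Schematically, the new arrangement (illustrative declarations, not the
exact ones in libhb_core.c):

    /* One access per OldRef; entries are chained in a hash table keyed
       by the guest address, and the whole set is recycled in strict
       LRU order once --conflict-cache-size entries exist. */
    typedef struct OldRef {
       struct OldRef* ht_next; /* next OldRef on the same hash chain */
       UWord          ga;      /* guest address of the old access */
       /* thread, size, read/write flag and stack trace of the access */
    } OldRef;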
Memory used for OldRef
-----------------------
For the trunk, an OldRef has a size of 72 bytes (on 32-bit archs),
maintaining up to 5 accesses to the same address.
On 64-bit archs, an OldRef is 104 bytes.
With the patch, an OldRef has a size of 32 bytes (on 32-bit archs)
or 56 bytes (on 64-bit archs).
So, for one single access, the new code needs (on 32 bits)
32 bytes, while the trunk needs only 14.4 bytes (72 bytes / 5 accesses).
However, that is the worst case, assuming all 5 entries in the
accs array are used.
Looking at 2 big apps (one of them being firefox), we see that
very few OldRef entries have all 5 entries occupied.
On a firefox startup, of the 5x1,000,000 access slots, only
1,406,939 are used.
So, on average, the trunk in reality uses around 52 bytes per access
(72 bytes * 1,000,000 OldRef entries / 1,406,939 used slots).
The default value for --conflict-cache-size has been doubled to 2000000.
This ensures that the memory used for the OldRef entries is more or less
the same as in the trunk (104MB for OldRef entries).
Memory used for sparseWA versus hashtable
-----------------------------------------
Looking at 2 big apps (one of them being firefox), we see big
variations in the size of the WA: it can go in a few seconds from
10MB to 250MB, or can decrease back to 10MB.
This all depends on where the last N accesses were done: if well
localised, the WA will be small.
If the last N accesses were distributed over a big address space,
then the WA will be big: the last level of the WA (the biggest memory
consumer) uses slightly more than 1KB (2KB on 64 bits) for each
256-byte memory zone where there is an OldRef. So, in the worst case,
on 32 bits, we need over 1,000,000,000 bytes of sparseWA memory to
keep 1,000,000 OldRef entries (1,000,000 zones * 1KB).
The hash table has between 1 and 2 Words of overhead per OldRef
(as the chain array is roughly doubled each time the hash table is full).
So, unless the OldRef entries are extremely localised, the overhead of
the hash table will be significantly less.
With the patch, the core arena total alloc is:
  5299535/1201448632 totalloc-blocks/bytes
The trunk's is:
  6693111/3959050280 totalloc-blocks/bytes
(so, around 1.20GB versus 3.95GB).
This big difference is due to the fact that the sparseWA repetitively
allocates and then frees a Level0 or LevelN when the OldRef entries in
the region covered by the Level0/N have all been recycled.
In terms of CPU
---------------
With the patch, on amd64, a firefox startup seems slightly faster (around 1%).
The peak memory mmap-ed/used decreases by 200MB.
For a libreoffice test, the memory decreases by 230MB. CPU usage also
decreases slightly (1%).
In terms of correctness
-----------------------
The trunk could potentially show an access that is not the most
recent access to the raced-upon memory: the first OldRef entry
matching the raced-upon address was used, while a more recent access
could be present in a following OldRef entry. In other words, the
trunk only guaranteed finding the most recent access within an OldRef,
but not among the several OldRef entries that could cover the
raced-upon address.
So, assuming it is important to show the most recent access, this
patch ensures we really show the most recent access, even in the
presence of overlapping accesses.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15289
For bz#344936, procfs-non-linux.stderr.exp was renamed and split into
procfs-non-linux.stderr.exp-with-readlinkat and
procfs-non-linux.stderr.exp-without-readlinkat; add both to EXTRA_DIST.
Fixes make post-regtest-checks.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15271
The support assumed that if HWCAP2 is present, the system also supports
ISA2.07. That assumption is not correct, as we have found a few systems
(OSes) where the HWCAP2 entry is present but the ISA2.07 bit is not set.
This patch fixes the assertion test to specifically check the ISA2.07
support bit setting in HWCAP2 and in the vex_archinfo->hwcaps variable.
The setting for ISA2.07 support must be the same in both variables if
the HWCAP2 entry exists.
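Schematically, the corrected check (a sketch; have_hwcap2 and hwcap2 are
hypothetical locals holding the auxv data):

    /* Do not assume "HWCAP2 present => ISA 2.07 supported".  Instead,
       when HWCAP2 exists, require the ISA 2.07 bit to have the same
       value in HWCAP2 and in vex_archinfo->hwcaps. */
    if (have_hwcap2) {
       Bool hw  = (hwcap2 & PPC_FEATURE2_ARCH_2_07) != 0;
       Bool vex = (vex_archinfo->hwcaps & VEX_HWCAPS_PPC64_ISA2_07) != 0;
       vg_assert(hw == vex);
    }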
This patch updates Valgrind bugzilla 345695.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15257
Having a one-element free lineF cache avoids many PA (pool allocator)
calls. This seems to slightly improve a firefox startup (by a few %).
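The idea, as a sketch (lineF_pool and the helper names are illustrative;
VG_(allocEltPA)/VG_(freeEltPA) are the pool-allocator entry points):

    /* Keep the most recently freed lineF aside and hand it back on the
       next allocation, avoiding a pool-allocator round trip in the
       common free-then-alloc case. */
    static LineF* free_lineF_cache = NULL;

    static LineF* alloc_lineF ( void )
    {
       if (free_lineF_cache) {
          LineF* lf = free_lineF_cache;
          free_lineF_cache = NULL;
          return lf;
       }
       return VG_(allocEltPA)(lineF_pool);
    }

    static void free_lineF ( LineF* lf )
    {
       if (!free_lineF_cache) { free_lineF_cache = lf; return; }
       VG_(freeEltPA)(lineF_pool, lf);
    }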
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15254