The interesting part is the demangled backtrace in the error message.
Suppress the memory allocation/blocks summary which can differ slightly
depending on the underlying arch/libs.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15308
We want to disable Identical Code Folding for the tools' preload shared
objects to get better backtraces. For GCC 5.1, -fipa-icf is enabled by
default at -O2.
The optimization reduces code size, but may disturb unwind stacks by
replacing a function with an equivalent one that has a different name.
Add a configure check to see if GCC supports -fno-ipa-icf.
If it does, add the flag to AM_CFLAGS_PSO_BASE.
Without this, GCC will notice that some of the preload replacement
functions in vg_replace_strmem are identical and fold them all into one,
picking a random (existing) function name. This causes backtraces to show
completely unexpected function names.
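As an illustration only (this is not code from the Valgrind sources), two
byte-identical functions such as the ones below can be folded into a single
symbol by GCC 5.1 at -O2, so a backtrace through one may report the other's
name; compiling with -fno-ipa-icf keeps them distinct:

  /* Hypothetical example, not part of Valgrind.  With gcc-5 -O2
     (-fipa-icf), these identical functions may be merged, so unwinding
     through bar_copy can show foo_copy or vice versa; -fno-ipa-icf
     prevents the merge. */
  #include <string.h>

  void* foo_copy(void* dst, const void* src, size_t n)
  { return memcpy(dst, src, n); }

  void* bar_copy(void* dst, const void* src, size_t n)
  { return memcpy(dst, src, n); }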
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15305
For example, perf/memrw is improved by 2% to 3% with this patch.
The unwinding code on x86 tries to unwind using either the %ebp-chain
or CFI unwinding.
If these 2 techniques fail, it then tries to unwind using FPO (PDB)
debug info.
However, unless running wine or similar, there will never be any such
FPO/PDB info.
The function VG_(use_FPO_info) is thus called for nothing at each
'end of stack'. This function scans all the loaded DebugInfos looking
for one that has some FPO data, and finds nothing.
With this patch, the unwind code on x86 will only call VG_(use_FPO_info) if
some FPO/PDB info was loaded.
The fact that FPO/PDB info was loaded is cached and updated similarly to
the CFI cache: each time new debug info is loaded, the cached value is
refreshed using the debug info generation.
The patch also renames VG_(CF_info_generation) to
VG_(debuginfo_generation), as this generation is changed for any kind of
load or unload of debug info, not only for CFI-based debug info.
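A minimal sketch of the caching scheme, with hypothetical helper names
standing in for the real Valgrind internals:

  #include <stdbool.h>

  extern unsigned debuginfo_generation(void);  /* bumped on each (un)load  */
  extern bool     scan_for_FPO_info(void);     /* expensive scan of all
                                                  loaded debug info        */

  /* The x86 unwinder calls VG_(use_FPO_info) only when this returns true. */
  static bool FPO_info_present(void)
  {
     static unsigned cached_generation = 0;
     static bool     cached_present    = false;
     unsigned gen = debuginfo_generation();

     if (gen != cached_generation) {   /* debug info was (un)loaded: refresh */
        cached_generation = gen;
        cached_present    = scan_for_FPO_info();
     }
     return cached_present;
  }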
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15293
On these platforms, implement read_/write_<type> by doing a direct
access, rather than calling a function that reads or writes
'byte per byte'.
For platforms that do not have efficient unaligned access, or that do
not support unaligned access at all, call the functions
readUAS_/writeUAS_<type>, which work as before.
Currently, direct access is activated only for x86 and amd64.
It is unclear which other platforms support unaligned access
(efficiently).
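A rough sketch of the idea (names are illustrative, and the byte-per-byte
variant assumes little-endian encoding):

  #include <stdint.h>

  /* byte-per-byte reader: works on any platform, at any alignment */
  static uint32_t readUAS_UInt(const uint8_t* p)
  {
     return  (uint32_t)p[0]        | ((uint32_t)p[1] << 8)
          | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
  }

  #if defined(__i386__) || defined(__x86_64__)
  /* x86/amd64: unaligned accesses are supported and cheap, so read the
     value directly instead of going byte per byte. */
  static uint32_t read_UInt(const uint8_t* p)
  {
     return *(const uint32_t*)p;
  }
  #else
  /* other platforms: unchanged behaviour */
  static uint32_t read_UInt(const uint8_t* p)
  {
     return readUAS_UInt(p);
  }
  #endif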
On unwind-intensive code (such as perf/memrw on amd64), this patch
gives up to a 5% improvement.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15290
This patch slightly increases performance. It also moderately improves
the number of cases where helgrind can provide the stack trace of the
old access (when using the same amount of memory for the OldRef entries).
The patch also provides a new helgrind monitor command to show the
recorded accesses for an address+len, and adds an optional lock_address
argument to the monitor command 'info locks', to show the info about
just that lock.
Currently, OldRefs are maintained in a sparse WA that points to N
entries, as specified by --conflict-cache-size=N.
For each entry (associated with an address), we keep the last 5 accesses.
Old entries are recycled in exact LRU order.
But inside an entry, we could have one recent access and 4 very old
accesses that are kept 'alive' by a single thread repeatedly accessing
the address they share.
The attached patch replaces the sparse WA that maintains the OldRefs
with a hash table.
Each OldRef now also maintains only one single access for an address.
As an OldRef now maintains only one access, all the entries are now
strictly in LRU order.
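A rough sketch of the new arrangement (field names and helpers here are
hypothetical, not the actual helgrind code): each OldRef records a single
access, is found through a hash table keyed on the guest address, and sits
on a doubly-linked list kept in strict LRU order, so that the oldest entry
is recycled once --conflict-cache-size entries are in use.

  #include <stdint.h>

  typedef struct OldRef {
     uintptr_t      ga;        /* guest address, the hash key            */
     struct OldRef* ht_next;   /* hash-chain link                        */
     struct OldRef* lru_prev;  /* doubly-linked list in strict LRU order */
     struct OldRef* lru_next;  /*   (head = newest, tail = oldest)       */
     uint32_t       thrid;     /* accessing thread                       */
     uint32_t       szB_isW;   /* access size and read/write flag        */
     uint64_t       stack_id;  /* id of the recorded stack trace         */
  } OldRef;

  extern OldRef* ht_lookup(uintptr_t ga);          /* stand-ins for the   */
  extern OldRef* recycle_lru_tail_or_alloc(void);  /* real hash table and */
  extern void    ht_insert(OldRef* ref);           /* LRU list helpers    */
  extern void    lru_move_to_head(OldRef* ref);

  static void record_access(uintptr_t ga, uint32_t thrid,
                            uint32_t szB_isW, uint64_t stack_id)
  {
     OldRef* ref = ht_lookup(ga);
     if (ref == NULL) {
        /* cache full: the LRU tail (oldest access overall) is reused */
        ref = recycle_lru_tail_or_alloc();
        ref->ga = ga;
        ht_insert(ref);
     }
     ref->thrid    = thrid;     /* only the most recent access is kept */
     ref->szB_isW  = szB_isW;
     ref->stack_id = stack_id;
     lru_move_to_head(ref);
  }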
Memory used for OldRef
-----------------------
For the trunk, an OldRef has a size of 72 bytes (on 32 bits archs)
maintaining up to 5 accesses to the same address.
On 64 bits arch, an OldRef is 104 bytes.
With the patch, an OldRef has a size of 32 bytes (on 32 bits archs)
or 56 bytes (on 64 bits archs).
So, for one single access, the new code needs (on 32 bits)
32 bytes, while the trunk needs only 14.4 bytes.
However, that is the worst case for this comparison, as it assumes that
the 5 slots in the accs array are all used.
Looking at 2 big apps (one of them being firefox), we see that very few
OldRef entries have all 5 slots occupied.
On a firefox startup, of the 5x1,000,000 possible accesses, only
1,406,939 are actually used.
So, on average, the trunk in reality uses around 52 bytes per access
(1,000,000 OldRefs x 72 bytes / 1,406,939 used accesses).
The default value for --conflict-cache-size has been doubled to 2000000.
This ensures that the memory used for the OldRef is more or less the
same as the trunk (104Mb for OldRef entries).
Memory used for sparseWA versus hashtable
-----------------------------------------
Looking at 2 big apps (one of them being firefox), we see that there
are big variations in the size of the WA: it can go in a few seconds
from 10MB to 250MB, or decrease back to 10MB.
This all depends on where the last N accesses were done: if well
localised, the WA will be small.
If the last N accesses were distributed over a big address space, then
the WA will be big: the last level of the WA (the biggest memory
consumer) uses slightly more than 1KB (2KB on 64 bits) for each 256-byte
memory zone where there is an OldRef. So, in the worst case, on 32 bits,
we need more than 1GB of sparseWA memory to keep 1,000,000 OldRefs.
The hash table has between 1 and 2 Words of overhead per OldRef
(as the chain array is roughly doubled each time the hash table is full).
So, unless the OldRefs are extremely localised, the overhead of the
hash table will be significantly lower.
With the patch, the core arena total alloc is:
5299535/1201448632 totalloc-blocks/bytes
The trunk is
6693111/3959050280 totalloc-blocks/bytes
(so, around 1.20Gb versus 3.95Gb).
This big difference is due to the fact that the sparseWA repeatedly
allocates and then frees Level0 or LevelN nodes when the OldRefs in the
region covered by that Level0/N have all been recycled.
In terms of CPU
---------------
With the patch, on amd64, a firefox startup seems slightly faster (around 1%).
The peak memory mmaped/used decreases by 200Mb.
For a libreoffice test, the memory decreases by 230Mb. CPU also decreases
slightly (1%).
In terms of correctness:
-----------------------
The trunk could potentially show an access that is not the most recent
one to the raced-upon memory: the first OldRef entry matching the
raced-upon address was used, while a more recent access could be present
in a following OldRef entry. In other words, the trunk only guaranteed
to find the most recent access within an OldRef, but not among the
several OldRefs that could cover the raced-upon address.
So, assuming it is important to show the most recent access, this patch
ensures we really show the most recent access, even in the presence of
overlapping accesses.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15289
For bz#344936, procfs-non-linux.stderr.exp was renamed and split into
procfs-non-linux.stderr.exp-with-readlinkat and
procfs-non-linux.stderr.exp-without-readlinkat; add both to EXTRA_DIST.
Fixes make post-regtest-checks.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15271
The support assumed that if HWCAP2 is present, the system also supports
ISA2.07. That assumption is not correct, as we have found a few systems
(OSes) where the HWCAP2 entry is present but the ISA2.07 bit is not set.
This patch fixes the assertion test to specifically check the ISA2.07
support bit setting in the HWCAP2 and vex_archinfo->hwcaps variables.
The setting for the ISA2.07 support must be the same in both variables
if the HWCAP2 entry exists.
This patch updates Valgrind bugzilla 345695.
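A minimal sketch of the corrected check; the constant is the ISA 2.07 bit
as defined by the Linux kernel headers, while the function and parameter
names are illustrative rather than the actual Valgrind code:

  #include <stdbool.h>

  #define PPC_FEATURE2_ARCH_2_07  0x80000000u  /* ISA 2.07 bit in AT_HWCAP2 */

  /* Only meaningful if an AT_HWCAP2 entry was found in the auxiliary
     vector; in that case the ISA 2.07 bit in HWCAP2 must agree with the
     ISA 2.07 setting derived from vex_archinfo->hwcaps. */
  static bool isa207_settings_agree(bool have_hwcap2_entry,
                                    unsigned long hwcap2,
                                    bool vex_has_isa207)
  {
     if (!have_hwcap2_entry)
        return true;                            /* nothing to cross-check */
     bool hw_has_isa207 = (hwcap2 & PPC_FEATURE2_ARCH_2_07) != 0;
     return hw_has_isa207 == vex_has_isa207;
  }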
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15257
Having a one-element free lineF cache avoids many PA calls.
This seems to slightly improve (by a few %) a firefox startup.
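A minimal sketch of such a one-element cache in front of the pool
allocator; the names are stand-ins, with the pool calls corresponding to
VG_(allocEltPA)/VG_(freeEltPA):

  #include <stddef.h>

  typedef struct LineF LineF;
  extern LineF* pool_alloc_LineF(void);   /* stand-in for VG_(allocEltPA) */
  extern void   pool_free_LineF(LineF*);  /* stand-in for VG_(freeEltPA)  */

  static LineF* free_lineF_cache = NULL;  /* the one-element cache        */

  static LineF* get_lineF(void)
  {
     if (free_lineF_cache != NULL) {      /* reuse the parked lineF,      */
        LineF* lf = free_lineF_cache;     /* avoiding a PA call           */
        free_lineF_cache = NULL;
        return lf;
     }
     return pool_alloc_LineF();
  }

  static void put_lineF(LineF* lf)
  {
     if (free_lineF_cache == NULL)
        free_lineF_cache = lf;            /* park it for the next alloc   */
     else
        pool_free_LineF(lf);
  }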
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15254
Currently, each SecMap has an array of linesF, referenced by the linesZ
of the SecMap that need a lineF, via an index stored in dict[1].
When the array is full, its size is doubled.
The linesF array of a SecMap is freed when the SecMap is GC-ed.
The above strategy has the following consequences:
A. on average, 25% of the linesF are unused.
B. if a SecMap 'temporarily' needs linesF, but these linesF are
afterwards converted back to the normal lineZ representation, the linesF
will not be recovered unless the SecMap is GC-ed (i.e. fully marked
no-access).
The patch replaces the per-SecMap private linesF array with a pool
allocator of linesF shared between all SecMaps.
A lineZ that needs a lineF now points directly to its lineF (using a
pointer stored in dict[1]), instead of storing in dict[1] the index into
the SecMap's linesF array.
When a lineZ needs a lineF, it is allocated from the pool allocator.
When a lineZ no longer needs a lineF, it is returned to the
pool allocator.
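A rough sketch of this scheme; alloc_LineF_for_Z and clear_LineF_of_Z are
the function names mentioned further below, the structures are heavily
simplified, and the pool helpers stand in for VG_(allocEltPA)/
VG_(freeEltPA) on the shared LineF pool:

  #include <stdint.h>

  typedef struct { uint64_t w64s[8]; } LineF;  /* illustrative payload */
  typedef struct { uint64_t dict[4]; } LineZ;  /* heavily simplified   */

  extern LineF* pool_alloc_LineF(void);   /* stand-ins for the pool      */
  extern void   pool_free_LineF(LineF*);  /* allocator shared by SecMaps */

  /* lineZ needs a lineF: allocate one from the shared pool and store a
     direct pointer in dict[1] (no per-SecMap linesF array, no index). */
  static void alloc_LineF_for_Z(LineZ* lineZ)
  {
     LineF* lineF = pool_alloc_LineF();
     lineZ->dict[1] = (uint64_t)(uintptr_t)lineF;
  }

  /* lineZ no longer needs its lineF: return it to the shared pool. */
  static void clear_LineF_of_Z(LineZ* lineZ)
  {
     LineF* lineF = (LineF*)(uintptr_t)lineZ->dict[1];
     if (lineF != NULL) {
        pool_free_LineF(lineF);
        lineZ->dict[1] = 0;
     }
  }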
On a firefox startup, the above strategy reduces the memory for linesF
by about 42Mb. It seems that the more firefox is used (e.g. to visit
a few websites), the bigger the memory gain.
After opening the home page of valgrind, wikipedia and google, the memory
gain is about 94Mb:
trunk:
linesF: 392,181 allocd ( 203,934,120 bytes occupied) ( 173,279 used)
patch:
linesF: 212,966 allocd ( 109,038,592 bytes occupied) ( 170,252 used)
There are also fewer alloc/free operations in the core arena with the
patch:
trunk:
core : 810,680,320/ 802,291,712 max/curr mmap'd, 17/19 unsplit/split sb unmmap'd, 759,441,224/ 703,191,896 max/curr, 40631760/16376828248 totalloc-blocks/bytes, 188015696 searches 8 rzB
patch:
core : 701,628,416/ 690,753,536 max/curr mmap'd, 12/29 unsplit/split sb unmmap'd, 643,041,944/ 577,793,712 max/curr, 32050040/14056017712 totalloc-blocks/bytes, 174097728 searches 8 rzB
In terms of performance, no CPU impact was detected on a Firefox startup.
Note that we have no representative, reproducible (and preferably small)
perf test that makes heavy use of linesF. Firefox is a good heavy lineF
user, but it is far from reproducible and very far from small.
Theoretically, in terms of CPU performance, the patch might have some
small benefits here and there for read operations, as the lineF pointer
is retrieved directly from the lineZ, rather than via an indirection
through the linesF array.
For write operations, the patch might need a little bit more CPU, as we
replace an assignment setting the lineF inUse boolean to False (and then
probably back to True when the cacheline is written back) with a call to
the pool allocator's VG_(freeEltPA) (and then probably a call to
VG_(allocEltPA) when the cacheline is written back).
These PA functions are small, so the cost should be ok.
We might however still maintain in clear_LineF_of_Z the last cleared
lineF and re-use it in alloc_LineF_for_Z. It is not clear how many calls
to the PA functions would be avoided by this '1 elt cache' (and the
needed 'if elt == NULL' check in both clear_LineF_of_Z and
alloc_LineF_for_Z). This possible optimisation will be looked at later.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15253