Sync VEX/LICENSE.GPL with the top-level COPYING file. We used 3 different
addresses for writing to the FSF to receive a copy of the GPL. Replace
all the variants with the URL <http://www.gnu.org/licenses/>.
The following files might still have slightly different (L)GPL
copyright notices because they were derived from other programs:
- files under coregrind/m_demangle which come from libiberty:
cplus-dem.c, d-demangle.c, demangle.h, rust-demangle.c,
safe-ctype.c and safe-ctype.h
- coregrind/m_demangle/dyn-string.[hc] derived from GCC.
- coregrind/m_demangle/ansidecl.h derived from glibc.
- VEX files for FMA derived from glibc:
host_generic_maddf.h and host_generic_maddf.c
- files under coregrind/m_debuginfo derived from LZO:
lzoconf.h, lzodefs.h, minilzo-inl.c and minilzo.h
- files under coregrind/m_gdbserver derived from GDB:
gdb/signals.h, inferiors.c, regcache.c, regcache.h,
regdef.h, remote-utils.c, server.c, server.h, signals.c,
target.c, target.h and utils.c
Plus the following test files:
- none/tests/ppc32/testVMX.c derived from testVMX.
- ppc tests derived from QEMU: jm-insns.c, ppc64_helpers.h
and test_isa_3_0.c
- tests derived from bzip2 (with embedded GPL text in code):
hackedbz2.c, origin5-bz2.c, varinfo6.c
- tests derived from glibc: str_tester.c, pth_atfork1.c
- test derived from GCC libgomp: tc17_sembar.c
- performance tests derived from bzip2 or tinycc (with embedded GPL
text in code): bz2.c, test_input_for_tinycc.c and tinycc.c
When the element to allocate is bigger than the pool size, allocate
a dedicated pool just for this element.
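A minimal sketch of the idea (hypothetical names, not the actual
m_deduppoolalloc.c code): an element that does not fit in a normal pool
block gets a block of exactly its own size, and the current block is left
untouched for later, normal-sized elements.

    #include <stdlib.h>

    typedef struct {
       size_t poolSzB;       /* normal size of a pool block      */
       char*  curpool;       /* block currently being filled     */
       size_t curpool_used;  /* bytes already handed out from it */
    } Pool;

    static void* pool_alloc (Pool* p, size_t eltSzB)
    {
       if (eltSzB > p->poolSzB) {
          /* oversized element: give it a pool of its own (it would
             also be linked into the pool's list of blocks) */
          return malloc (eltSzB);
       }
       if (p->curpool == NULL || p->curpool_used + eltSzB > p->poolSzB) {
          p->curpool      = malloc (p->poolSzB);   /* new normal block */
          p->curpool_used = 0;
       }
       void* elt = p->curpool + p->curpool_used;
       p->curpool_used += eltSzB;
       return elt;
    }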
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15787
adler32 is not very good as a hash function.
sdbm_hash gives more distinct keys than adler32,
and in a large majority of cases, shorter chains.
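For reference, sdbm_hash as commonly published (a sketch for illustration;
the copy in the tree may differ in details such as types):

    #include <stddef.h>

    static unsigned int sdbm_hash (const unsigned char* buf, size_t n)
    {
       unsigned int h = 0;
       for (size_t i = 0; i < n; i++)
          h = buf[i] + (h << 6) + (h << 16) - h;   /* h = h * 65599 + c */
       return h;
    }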
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15142
that once an element has been allocated and added to the pool it must
not be modified afterwards. See the documentation in pub_tool_deduppoolalloc.h.
The rest of the patch is ripple.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14654
Testcases are now compiled with -Wcast-qual.
Introduce a CONST_CAST macro to work around it in the few spots
where a cast that drops type qualifiers is needed.
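A common way to define such a macro (a sketch; the actual CONST_CAST
definition in the tree may differ) is to cast through an integer type,
so the compiler never sees a pointer cast that drops qualifiers and
-Wcast-qual stays quiet only where the macro is used deliberately:

    /* assumes pointers fit in an unsigned long, as on the platforms
       valgrind targets */
    #define CONST_CAST(Type, Expr)  ((Type)(unsigned long)(Expr))

    static void example (void)
    {
       const char* ro = "read-only";
       char* rw = CONST_CAST(char*, ro);   /* no -Wcast-qual warning here */
       (void)rw;
    }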
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14652
This (a) is consistent with how the other containers are defined
and, more importantly, (b) allows the constification of the hash table API.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14639
As documented in pub_tool_deduppoolalloc.h, for a "numbering" pool, there is
no guarantee that the address of an element is stable if a new element is
inserted. But m_deduppoolalloc.c was itself not taking this 'no guarantee'
into account.
So, when the addresses of the elements are changed due to reallocation
of the only pool, apply an offset to the element addresses stored in
the dedup hash table.
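Illustrative sketch of the fix (hypothetical names, not the actual
m_deduppoolalloc.c code): when the single pool is reallocated and moves,
every element address stored in the hash table is shifted by the distance
the pool moved.

    #include <stddef.h>

    typedef struct { const void* elt; /* plus key, chain link, ... */ } HTNode;

    static void shift_stored_addresses (HTNode* nodes, size_t n_nodes,
                                        char* old_base, char* new_base)
    {
       ptrdiff_t offset = new_base - old_base;
       for (size_t i = 0; i < n_nodes; i++)
          nodes[i].elt = (const char*)nodes[i].elt + offset;
    }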
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14127
The dedup pool can now be used to allocate elements and identify
them with a number rather than an address.
This new feature is not used yet; it is intended to decrease
the memory needed to store the CFSI information.
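The idea, in a minimal sketch with hypothetical names (not the actual
interface): with fixed-size elements packed in a single pool, an element can
be identified by a 1-based index computed from its offset, and that index
can be stored in far fewer bits than a pointer.

    #include <stddef.h>

    typedef struct {
       char*  pool;     /* single contiguous pool of elements */
       size_t eltSzB;   /* fixed size of each element         */
    } NumberedPool;

    static unsigned int elt_to_number (const NumberedPool* p, const void* elt)
    {
       return 1 + (unsigned int)(((const char*)elt - p->pool) / p->eltSzB);
    }

    static const void* number_to_elt (const NumberedPool* p, unsigned int n)
    {
       return p->pool + (size_t)(n - 1) * p->eltSzB;
    }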
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14123
VG_(arena_realloc_shrink) only decreases the size of a block: it does not
change the address, does not need to allocate another block and copy the
memory, and (if the excess is big enough) makes that excess memory available
for other allocations.
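A generic illustration of what such a shrink does (not the m_mallocfree.c
code, and the real block layout differs): the block keeps its address, and
if the trimmed tail is big enough it becomes a separate free block that
other allocations can reuse.

    #include <stddef.h>

    typedef struct Block {
       size_t        szB;      /* payload size */
       int           in_use;
       struct Block* next;
    } Block;

    static void shrink_in_place (Block* b, size_t new_szB, size_t min_tail_szB)
    {
       size_t excess = b->szB - new_szB;
       if (excess >= sizeof(Block) + min_tail_szB) {
          /* carve the excess off the end and mark it as a free block */
          Block* tail  = (Block*)((char*)(b + 1) + new_szB);
          tail->szB    = excess - sizeof(Block);
          tail->in_use = 0;
          tail->next   = b->next;
          b->next      = tail;
          b->szB       = new_szB;
       }
       /* otherwise keep the block as-is: shrinking never moves it */
    }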
VG_(arena_realloc_shrink) is then used in debuginfo storage.c
(replacing an allocation + copy).
It is also used in the dedup pool, to reclaim the unused
memory of the last pool.
This also allows re-increasing the string pool size to the original
3.9.0 value of 64Kb. All this slightly decreases the peak and in-use
memory of dinfo.
VG_(arena_realloc_shrink) will also be used to implement (in another patch)
a dedup pool which "numbers" the allocated elements.
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14122
This adds a dedup pool allocator:
include/pub_tool_deduppoolalloc.h
coregrind/pub_core_deduppoolalloc.h
coregrind/m_deduppoolalloc.c
and uses it (currently only) for the strings in m_debuginfo/storage.c.
The idea is that such a dedup pool allocator will also be used for other
highly duplicated information (e.g. the DiCFSI information), where
significant gains can also be achieved.
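The string case in a minimal sketch (hypothetical names, with malloc/strdup
standing in for the pool blocks): before storing a string, an identical one
stored earlier is looked up by hash and, if found, shared.

    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 4096

    typedef struct Entry { const char* str; struct Entry* next; } Entry;
    static Entry* buckets[NBUCKETS];

    static const char* add_str_dedup (const char* s, unsigned int hash)
    {
       Entry** chain = &buckets[hash % NBUCKETS];
       for (Entry* e = *chain; e != NULL; e = e->next)
          if (strcmp (e->str, s) == 0)
             return e->str;            /* duplicate: share the earlier copy */
       Entry* e = malloc (sizeof *e);  /* new string: store it once         */
       e->str   = strdup (s);          /* the real allocator copies it into
                                          a pool block instead              */
       e->next  = *chain;
       *chain   = e;
       return e->str;
    }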
The dedup pool for strings also significantly decreases the memory
needed for the inline information read from the debug info (patch still
to be committed, see bug 278972).
When testing with a big executable (tacot_process),
this reduces the size of the dinfo arena from
trunk: 158941184/109760512 max/curr mmap'd, 156775944/107882728 max/curr,
to
ddup: 157892608/106614784 max/curr mmap'd, 156362160/101414712 max/curr
(so 3Mb less mmap-ed once debug info is read, 1Mb less mmap-ed in peak,
6Mb less allocated once debug info is read).
This is all gained on the strings, whose allocation changes from:
trunk: 17,434,704 in 266: di.storage.addStr.1
to
ddup: 10,966,608 in 750: di.storage.addStr.1
(6.5Mb less memory used by strings)
The gain in mmap-ed memory is smaller due to fragmentation.
Probably one could decrease the fragmentation by using a bigger
size for the dedup pool, but then we would lose memory on the last
allocated pool (and for small libraries, we often do not use much
of a big pool block).
A solution might be to increase the pool size but have a "shrink_block"
operation. To be looked at in the future.
In terms of performance, the startup of a big executable on an old Pentium
is not influenced significantly (something like 0.1 seconds out of a
15 second startup).
The dedup pool uses a hash table. The hash function currently used
is the VG_(adler32) checksum. It is reported (and visible also here)
that this checksum is not a very good hash function (many collisions).
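For reference, Adler-32 as defined by zlib (a sketch of the algorithm, not
the VG_(adler32) code): it keeps two running sums modulo 65521, and for
short inputs such as symbol-name strings those sums stay small, so only a
small fraction of the 32-bit key space is used and collisions are frequent.

    #include <stddef.h>

    static unsigned int adler32_ref (const unsigned char* buf, size_t n)
    {
       unsigned int a = 1, b = 0;    /* the two 16-bit sums */
       for (size_t i = 0; i < n; i++) {
          a = (a + buf[i]) % 65521;
          b = (b + a)      % 65521;
       }
       return (b << 16) | a;
    }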
To have statistics about collisions, use --stats -v -v -v
As an example of the collisions, on the strings in the debug info of the
memcheck tool on x86, one obtains:
--4789-- dedupPA:di.storage.addStr.1 9983 allocs (8174 uniq) 11 pools (4820 bytes free in last pool)
--4789-- nr occurences of chains of len N, N-plicated keys, N-plicated elts
--4789-- N: 0 : nr chain 6975, nr keys 0, nr elts 0
--4789-- N: 1 : nr chain 3670, nr keys 6410, nr elts 8174
--4789-- N: 2 : nr chain 1070, nr keys 226, nr elts 0
--4789-- N: 3 : nr chain 304, nr keys 100, nr elts 0
--4789-- N: 4 : nr chain 104, nr keys 84, nr elts 0
--4789-- N: 5 : nr chain 72, nr keys 42, nr elts 0
--4789-- N: 6 : nr chain 44, nr keys 34, nr elts 0
--4789-- N: 7 : nr chain 18, nr keys 13, nr elts 0
--4789-- N: 8 : nr chain 17, nr keys 8, nr elts 0
--4789-- N: 9 : nr chain 4, nr keys 6, nr elts 0
--4789-- N:10 : nr chain 9, nr keys 4, nr elts 0
--4789-- N:11 : nr chain 1, nr keys 0, nr elts 0
--4789-- N:13 : nr chain 1, nr keys 1, nr elts 0
--4789-- total nr of unique chains: 12289, keys 6928, elts 8174
which shows that of the 8174 different strings, only 6410 have
a unique hash value. As other examples, the N:13 line shows we have 13 strings
mapping to the same key, and the N:10 line shows we have 4 groups of 10 strings
mapping to the same key, etc.
So, adler32 is definitely a bad hash function.
Trials have been done with another hash function, giving a much lower
collision rate, so a better (but still fast) hash function would probably
be beneficial. To be looked at ...
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14029