there were a lot of loss records.  The fix was:

- Avoid the O(m * n) looping over the chunks when creating the loss
  records, by putting loss records into an OSet instead of a list, which
  makes duplicate detection for each chunk an O(log n) operation instead
  of an O(n) operation.

- Avoid the looping over loss records that was used to do a poor man's
  sort, which was O(n^2).  Instead, copy pointers to the loss records from
  the OSet into an array and sort it normally with VG_(ssort) (n log n,
  usually) before printing.  This approach is similar to that used in the
  patch Philippe attached to the bug report.

Other changes:

- Added Philippe's test programs in the new memcheck/perf directory.  It
  used to take 57s on my machine; now it takes 1.6s.

- Cleaned up massif/perf/Makefile.am to be consistent with other Makefiles.

- Improved some comments relating to VgHashTable and OSet.

- Avoided a redundant traversal of the hash table in VG_(HT_to_array),
  also identified by Philippe.

- Made memcheck/tests/mempool's results independent of the pointer size,
  and was thus able to remove its .stderr.exp64 file.


git-svn-id: svn://svn.valgrind.org/valgrind/trunk@9781
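
A minimal sketch of the deduplicate-then-sort scheme described above, written
in standard C rather than against Valgrind internals: POSIX tfind()/tsearch()
stand in for the OSet's O(log n) lookup, qsort() stands in for VG_(ssort),
and the LossRecord fields (state, allocated_at, num_blocks, szB) are
hypothetical placeholders for the real record key and totals.

/* Hedged sketch, not Valgrind code: POSIX tfind()/tsearch() stand in for
 * the OSet, qsort() stands in for VG_(ssort), and the LossRecord fields
 * below are hypothetical. */
#define _XOPEN_SOURCE 600
#include <search.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
   int    state;        /* hypothetical key field (e.g. leak kind)       */
   size_t allocated_at; /* hypothetical key field (allocation point id)  */
   size_t num_blocks;   /* accumulated: number of chunks in this record  */
   size_t szB;          /* accumulated: total bytes in those chunks      */
} LossRecord;

/* Two chunks belong to the same loss record iff their key fields match;
 * this comparison is what gives tfind()/tsearch() their O(log n) lookup. */
static int cmp_key(const void* a, const void* b)
{
   const LossRecord* x = a;
   const LossRecord* y = b;
   if (x->state != y->state) return x->state < y->state ? -1 : 1;
   if (x->allocated_at != y->allocated_at)
      return x->allocated_at < y->allocated_at ? -1 : 1;
   return 0;
}

/* Print order: largest records first. */
static int cmp_by_size(const void* a, const void* b)
{
   const LossRecord* x = *(LossRecord* const*)a;
   const LossRecord* y = *(LossRecord* const*)b;
   return (x->szB < y->szB) - (x->szB > y->szB);
}

static void*       record_tree = NULL;  /* stands in for the OSet          */
static LossRecord* records[1024];       /* flat pointer array (sketch size) */
static size_t      n_records = 0;

/* Per chunk: one O(log n) tree lookup, then either merge into the existing
 * record or create a new one.  No O(n) scan over a list of records. */
static void add_chunk(int state, size_t allocated_at, size_t szB)
{
   LossRecord   key   = { state, allocated_at, 0, 0 };
   LossRecord** found = tfind(&key, &record_tree, cmp_key);
   LossRecord*  lr;
   if (found == NULL) {
      lr  = malloc(sizeof(*lr));
      *lr = key;
      tsearch(lr, &record_tree, cmp_key);
      records[n_records++] = lr;
   } else {
      lr = *found;
   }
   lr->num_blocks += 1;
   lr->szB        += szB;
}

int main(void)
{
   /* Hypothetical leaked chunks: (state, allocation point, size). */
   add_chunk(0, 0x1000, 16);
   add_chunk(0, 0x1000, 16);    /* same key: merged, no duplicate record */
   add_chunk(1, 0x2000, 128);

   /* Copy-and-sort replaces the O(n^2) "poor man's sort". */
   qsort(records, n_records, sizeof(records[0]), cmp_by_size);
   for (size_t i = 0; i < n_records; i++)
      printf("%zu bytes in %zu blocks (key %d/%#zx)\n",
             records[i]->szB, records[i]->num_blocks,
             records[i]->state, records[i]->allocated_at);
   return 0;
}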