Nicholas Nethercote 991367c922 Merge in the COMPVBITS branch to the trunk. This is a big change to
Memcheck, replacing the 9-bits-per-byte shadow memory representation with a
2-bits-per-byte representation (with possibly a little more on the side) by
taking advantage of the fact that extremely few memory bytes are partially
defined.

For the SPEC2k benchmarks with "test" inputs, this speeds up Memcheck by a
(geometric mean) factor of 1.20, and reduces the size of shadow memory by a
(geometric mean) factor of 4.26.

At the same time, Addrcheck is removed.  It hadn't worked for quite some
time, and with these improvements in Memcheck its raisons d'être have
shrivelled so much that it's not worth the effort to keep it around.  Hooray!

Nb: this code hasn't been tested on PPC.  If things go wrong, look first in
the fast stack-handling functions (eg. mc_new_mem_stack_160,
MC_(helperc_MAKE_STACK_UNINIT)).


git-svn-id: svn://svn.valgrind.org/valgrind/trunk@5791
2006-03-27 11:37:07 +00:00

#include <stdlib.h>
#include "../memcheck.h"

struct n {
   struct n *l;
   struct n *r;
};

/* Allocate a node with the given left and right links. */
struct n *mk(struct n *l, struct n *r)
{
   struct n *n = malloc(sizeof(*n));
   n->l = l;
   n->r = r;
   return n;
}

/* Build a three-node cycle a -> c -> b -> a via the 'l' links. */
static struct n *mkcycle(void)
{
   register struct n *a, *b, *c;
   a = mk(0,0);
   b = mk(a,0);
   c = mk(b,0);
   a->l = c;
   return a;
}

int main(void)
{
   struct n *volatile c1, *volatile c2;

   /* two simple cycles */
   c1 = mkcycle();
   c2 = mkcycle();
   c1 = c2 = 0;
   //VALGRIND_DO_LEAK_CHECK;

   /* one cycle linked to another */
   c1 = mkcycle();
   c2 = mkcycle();
   /* This is to make sure we end up merging cliques; see
      mc_leakcheck.c */
   if (c1 < c2)
      c2->r = c1;
   else
      c1->r = c2;
   c1 = c2 = 0;
   //VALGRIND_DO_LEAK_CHECK;

   /* two linked cycles */
   c1 = mkcycle();
   c2 = mkcycle();
   c1->r = c2;
   c2->r = c1;
   c1 = c2 = 0;
   VALGRIND_DO_LEAK_CHECK;

   return 0;
}