Overhauled the docs. Removed all the HTML files and put in XML files as
converted by Donna. Hooked them into the build system so they are only
built when specifically asked for, and when doing "make dist".

They're not perfect;  in particular, there are the following problems:
- The plain-text FAQ should be built from FAQ.xml, but this is not
  currently done.  (The text FAQ has been left in for now.)

- The PS/PDF building doesn't work -- it fails with an incomprehensible
  error message which I haven't yet deciphered.

Nonetheless, I'm putting it in so others can see it.



git-svn-id: svn://svn.valgrind.org/valgrind/trunk@3153
Nicholas Nethercote 2004-11-30 10:43:45 +00:00
parent e62fd9ba79
commit 7a75a9f583
67 changed files with 12149 additions and 7431 deletions

COPYING

@ -55,7 +55,7 @@ patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
@ -110,7 +110,7 @@ above, provided that you also meet all of these conditions:
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
@ -168,7 +168,7 @@ access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
@ -225,7 +225,7 @@ impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
@ -278,7 +278,7 @@ PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest

COPYING.DOCS Normal file

@ -0,0 +1,398 @@
GNU Free Documentation License
Version 1.2, November 2002
Copyright (C) 2000,2001,2002 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible
for modifications made by others.
This License is a kind of "copyleft", which means that derivative
works of the document must themselves be free in the same sense. It
complements the GNU General Public License, which is a copyleft
license designed for free software.
We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free
program should come with manuals providing the same freedoms that the
software does. But this License is not limited to software manuals;
it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book. We recommend this License
principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that
contains a notice placed by the copyright holder saying it can be
distributed under the terms of this License. Such a notice grants a
world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein. The "Document", below,
refers to any such manual or work. Any member of the public is a
licensee, and is addressed as "you". You accept the license if you
copy, modify or distribute the work in a way requiring permission
under copyright law.
A "Modified Version" of the Document means any work containing the
Document or a portion of it, either copied verbatim, or with
modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of
the Document that deals exclusively with the relationship of the
publishers or authors of the Document to the Document's overall subject
(or to related matters) and contains nothing that could fall directly
within that overall subject. (Thus, if the Document is in part a
textbook of mathematics, a Secondary Section may not explain any
mathematics.) The relationship could be a matter of historical
connection with the subject or with related matters, or of legal,
commercial, philosophical, ethical or political position regarding
them.
The "Invariant Sections" are certain Secondary Sections whose titles
are designated, as being those of Invariant Sections, in the notice
that says that the Document is released under this License. If a
section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant. The Document may contain zero
Invariant Sections. If the Document does not identify any Invariant
Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed,
as Front-Cover Texts or Back-Cover Texts, in the notice that says that
the Document is released under this License. A Front-Cover Text may
be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy,
represented in a format whose specification is available to the
general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of
pixels) generic paint programs or (for drawings) some widely available
drawing editor, and that is suitable for input to text formatters or
for automatic translation to a variety of formats suitable for input
to text formatters. A copy made in an otherwise Transparent file
format whose markup, or absence of markup, has been arranged to thwart
or discourage subsequent modification by readers is not Transparent.
An image format is not Transparent if used for any substantial amount
of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain
ASCII without markup, Texinfo input format, LaTeX input format, SGML
or XML using a publicly available DTD, and standard-conforming simple
HTML, PostScript or PDF designed for human modification. Examples of
transparent image formats include PNG, XCF and JPG. Opaque formats
include proprietary formats that can be read and edited only by
proprietary word processors, SGML or XML for which the DTD and/or
processing tools are not generally available, and the
machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself,
plus such following pages as are needed to hold, legibly, the material
this License requires to appear in the title page. For works in
formats which do not have any title page as such, "Title Page" means
the text near the most prominent appearance of the work's title,
preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose
title either is precisely XYZ or contains XYZ in parentheses following
text that translates XYZ in another language. (Here XYZ stands for a
specific section name mentioned below, such as "Acknowledgements",
"Dedications", "Endorsements", or "History".) To "Preserve the Title"
of such a section when you modify the Document means that it remains a
section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document. These Warranty
Disclaimers are considered to be included by reference in this
License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has
no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no other
conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further
copying of the copies you make or distribute. However, you may accept
compensation in exchange for copies. If you distribute a large enough
number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and
you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have
printed covers) of the Document, numbering more than 100, and the
Document's license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover. Both covers must also clearly and legibly identify
you as the publisher of these copies. The front cover must present
the full title with all words of the title equally prominent and
visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve
the title of the Document and satisfy these conditions, can be treated
as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit
legibly, you should put the first ones listed (as many as fit
reasonably) on the actual cover, and continue the rest onto adjacent
pages.
If you publish or distribute Opaque copies of the Document numbering
more than 100, you must either include a machine-readable Transparent
copy along with each Opaque copy, or state in or with each Opaque copy
a computer-network location from which the general network-using
public has access to download using public-standard network protocols
a complete Transparent copy of the Document, free of added material.
If you use the latter option, you must take reasonably prudent steps,
when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated
location until at least one year after the last time you distribute an
Opaque copy (directly or through your agents or retailers) of that
edition to the public.
It is requested, but not required, that you contact the authors of the
Document well before redistributing any large number of copies, to give
them a chance to provide you with an updated version of the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under
the conditions of sections 2 and 3 above, provided that you release
the Modified Version under precisely this License, with the Modified
Version filling the role of the Document, thus licensing distribution
and modification of the Modified Version to whoever possesses a copy
of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct
from that of the Document, and from those of previous versions
(which should, if there were any, be listed in the History section
of the Document). You may use the same title as a previous version
if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities
responsible for authorship of the modifications in the Modified
Version, together with at least five of the principal authors of the
Document (all of its principal authors, if it has fewer than five),
unless they release you from this requirement.
C. State on the Title page the name of the publisher of the
Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications
adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice
giving the public permission to use the Modified Version under the
terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections
and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add
to it an item stating at least the title, year, new authors, and
publisher of the Modified Version as given on the Title Page. If
there is no section Entitled "History" in the Document, create one
stating the title, year, authors, and publisher of the Document as
given on its Title Page, then add an item describing the Modified
Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for
public access to a Transparent copy of the Document, and likewise
the network locations given in the Document for previous versions
it was based on. These may be placed in the "History" section.
You may omit a network location for a work that was published at
least four years before the Document itself, or if the original
publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications",
Preserve the Title of the section, and preserve in the section all
the substance and tone of each of the contributor acknowledgements
and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document,
unaltered in their text and in their titles. Section numbers
or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section
may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements"
or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no material
copied from the Document, you may at your option designate some or all
of these sections as invariant. To do this, add their titles to the
list of Invariant Sections in the Modified Version's license notice.
These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various
parties--for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a
standard.
You may add a passage of up to five words as a Front-Cover Text, and a
passage of up to 25 words as a Back-Cover Text, to the end of the list
of Cover Texts in the Modified Version. Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or
through arrangements made by) any one entity. If the Document already
includes a cover text for the same cover, previously added by you or
by arrangement made by the same entity you are acting on behalf of,
you may not add another; but you may replace the old one, on explicit
permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License
give permission to use their names for publicity for or to assert or
imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this
License, under the terms defined in section 4 above for modified
versions, provided that you include in the combination all of the
Invariant Sections of all of the original documents, unmodified, and
list them all as Invariant Sections of your combined work in its
license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and
multiple identical Invariant Sections may be replaced with a single
copy. If there are multiple Invariant Sections with the same name but
different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original
author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of
Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History"
in the various original documents, forming one section Entitled
"History"; likewise combine any sections Entitled "Acknowledgements",
and any sections Entitled "Dedications". You must delete all sections
Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents
released under this License, and replace the individual copies of this
License in the various documents with a single copy that is included in
the collection, provided that you follow the rules of this License for
verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute
it individually under this License, provided you insert a copy of this
License into the extracted document, and follow this License in all
other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate
and independent documents or works, in or on a volume of a storage or
distribution medium, is called an "aggregate" if the copyright
resulting from the compilation is not used to limit the legal rights
of the compilation's users beyond what the individual works permit.
When the Document is included in an aggregate, this License does not
apply to the other works in the aggregate which are not themselves
derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these
copies of the Document, then if the Document is less than one half of
the entire aggregate, the Document's Cover Texts may be placed on
covers that bracket the Document within the aggregate, or the
electronic equivalent of covers if the Document is in electronic form.
Otherwise they must appear on printed covers that bracket the whole
aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may
distribute translations of the Document under the terms of section 4.
Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the
original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions
of those notices and disclaimers. In case of a disagreement between
the translation and the original version of this License or a notice
or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to Preserve
its Title (section 1) will typically require changing the actual
title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except
as expressly provided for under this License. Any other attempt to
copy, modify, sublicense or distribute the Document is void, and will
automatically terminate your rights under this License. However,
parties who have received copies, or rights, from you under this
License will not have their licenses terminated so long as such
parties remain in full compliance.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions
of the GNU Free Documentation License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns. See
http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number.
If the Document specifies that a particular numbered version of this
License "or any later version" applies to it, you have the option of
following the terms and conditions either of that specified version or
of any later version that has been published (not as a draft) by the
Free Software Foundation. If the Document does not specify a version
number of this License, you may choose any version ever published (not
as a draft) by the Free Software Foundation.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of
the License in the document and put the following copyright and
license notices just after the title page:
Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.
If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of
free software license, such as the GNU General Public License,
to permit their use in free software.


@ -1,7 +1,7 @@
AUTOMAKE_OPTIONS = foreign 1.6 dist-bzip2
include $(top_srcdir)/Makefile.all.am
include $(top_srcdir)/Makefile.all.am
## include must be first for tool.h
## addrcheck must come after memcheck, for mac_*.o


@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = ac_main.html
EXTRA_DIST = ac-manual.xml


@ -0,0 +1,131 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="ac-manual" xreflabel="Addrcheck: a lightweight memory checker">
<title>Addrcheck: a lightweight memory checker</title>
<para>To use this tool, you must specify
<computeroutput>--tool=addrcheck</computeroutput> on the Valgrind
command line.</para>
<sect1>
<title>Kinds of bugs that Addrcheck can find</title>
<para>Addrcheck is a simplified version of the Memcheck tool
described in Section 3. It is identical in every way to
Memcheck, except for one important detail: it does not do the
undefined-value checks that Memcheck does. This means Addrcheck
is about twice as fast as Memcheck, and uses less memory.
Addrcheck can detect the following errors:</para>
<itemizedlist>
<listitem>
<para>Reading/writing memory after it has been free'd</para>
</listitem>
<listitem>
<para>Reading/writing off the end of malloc'd blocks</para>
</listitem>
<listitem>
<para>Reading/writing inappropriate areas on the stack</para>
</listitem>
<listitem>
<para>Memory leaks -- where pointers to malloc'd blocks are lost
forever</para>
</listitem>
<listitem>
<para>Mismatched use of malloc/new/new [] vs free/delete/delete []</para>
</listitem>
<listitem>
<para>Overlapping <computeroutput>src</computeroutput> and
<computeroutput>dst</computeroutput> pointers in
<computeroutput>memcpy()</computeroutput> and related
functions</para>
</listitem>
<listitem>
<para>Some misuses of the POSIX pthreads API</para>
</listitem>
</itemizedlist>
<para>Rather than duplicate much of the Memcheck docs here
(a.k.a. since I am a lazy b'stard), users of Addrcheck are
advised to read <xref linkend="mc-manual.bugs"/>. Some important
points:</para>
<itemizedlist>
<listitem>
<para>Addrcheck is exactly like Memcheck, except that all the
value-definedness tracking machinery has been removed.
Therefore, the Memcheck documentation which discusses
definedness ("V-bits") is irrelevant. The material on
addressability ("A-bits") is still relevant.</para>
</listitem>
<listitem>
<para>Addrcheck accepts the same command-line flags as
Memcheck, with the exception of ... (to be filled in).</para>
</listitem>
<listitem>
<para>Like Memcheck, Addrcheck will do memory leak checking
(internally, the same code does leak checking for both
tools). The only difference is how the two tools decide
which memory locations to consider when searching for
pointers to blocks. Memcheck will only consider 4-byte
aligned locations which are validly addressable and which
hold defined values. Addrcheck does not track definedness
and so cannot apply the last, "defined value",
criterion.</para>
<para>The result is that Addrcheck's leak checker may
"discover" pointers to blocks that Memcheck would not. So it
is possible that Memcheck could (correctly) conclude that a
block is leaked, yet Addrcheck would not conclude
that.</para>
<para>Whether or not this has any effect in practice is
unknown. I suspect not, but that is mere speculation at this
stage.</para>
</listitem>
</itemizedlist>
<para>Addrcheck is, therefore, a fine-grained address checker.
All it really does is check each memory reference to say whether
or not that location may validly be addressed. Addrcheck has a
memory overhead of one bit per byte of used address space. In
contrast, Memcheck has an overhead of nine bits per byte.</para>
<para>Due to laziness on the part of the implementor (Julian),
error messages from Addrcheck do not distinguish reads from
writes. So it will say, for example, "Invalid memory access of
size 4", whereas Memcheck would have said whether the access is a
read or a write. This could easily be remedied, if anyone is
particularly bothered.</para>
<para>Addrcheck is quite pleasant to use. It's faster than
Memcheck, and the lack of valid-value checks has another side
effect: the errors it does report are relatively easy to track
down, compared to the tedious and often confusing search
sometimes needed to find the cause of uninitialised-value errors
reported by Memcheck.</para>
<para>Because it is faster and lighter than Memcheck, our hope is
that Addrcheck is more suitable for less-intrusive, larger scale
testing than is viable with Memcheck. As of mid-November 2002,
we have experimented with running the KDE-3.1 desktop on
Addrcheck (the entire process tree, starting from
<computeroutput>startkde</computeroutput>). Running on a 512MB,
1.7 GHz P4, the result is nearly usable. The ultimate aim is
that it is fast and unintrusive enough that (e.g.) KDE sessions may
be monitored for addressing errors whilst people do real work with
their KDE desktop.</para>
<para>Addrcheck is a new experiment in the Valgrind world. We'd
be interested to hear your feedback on it.</para>
</sect1>
</chapter>


@ -1,103 +0,0 @@
<html>
<head>
<title>Addrcheck: a lightweight memory checker</title>
</head>
<body>
<a name="ac-top"></a>
<h2>5&nbsp; <b>Addrcheck</b>: a lightweight memory checker</h2>
To use this tool, you must specify <code>--tool=addrcheck</code>
on the Valgrind command line.
<h3>5.1&nbsp; Kinds of bugs that Addrcheck can find</h3>
Addrcheck is a simplified version of the Memcheck tool described
in Section 3. It is identical in every way to Memcheck, except for
one important detail: it does not do the undefined-value checks that
Memcheck does. This means Addrcheck is about twice as fast as
Memcheck, and uses less memory. Addrcheck can detect the following
errors:
<ul>
<li>Reading/writing memory after it has been free'd</li>
<li>Reading/writing off the end of malloc'd blocks</li>
<li>Reading/writing inappropriate areas on the stack</li>
<li>Memory leaks -- where pointers to malloc'd blocks are lost
forever</li>
<li>Mismatched use of malloc/new/new [] vs free/delete/delete []</li>
<li>Overlapping <code>src</code> and <code>dst</code> pointers in
<code>memcpy()</code> and related functions</li>
<li>Some misuses of the POSIX pthreads API</li>
</ul>
<p>
Rather than duplicate much of the Memcheck docs here (a.k.a. since I
am a lazy b'stard), users of Addrcheck are advised to read
the section on Memcheck. Some important points:
<ul>
<li>Addrcheck is exactly like Memcheck, except that all the
value-definedness tracking machinery has been removed. Therefore,
the Memcheck documentation which discusses definedness ("V-bits") is
irrelevant. The material on addressability ("A-bits") is still
relevant.
<p>
<li>Addrcheck accepts the same command-line flags as Memcheck, with
the exception of ... (to be filled in).
<p>
<li>Like Memcheck, Addrcheck will do memory leak checking (internally,
the same code does leak checking for both tools). The only
difference is how the two tools decide which memory locations
to consider when searching for pointers to blocks. Memcheck will
only consider 4-byte aligned locations which are validly
addressable and which hold defined values. Addrcheck does not
track definedness and so cannot apply the last, "defined value",
criterion.
<p>
The result is that Addrcheck's leak checker may "discover"
pointers to blocks that Memcheck would not. So it is possible
that Memcheck could (correctly) conclude that a block is leaked,
yet Addrcheck would not conclude that.
<p>
Whether or not this has any effect in practice is unknown. I
suspect not, but that is mere speculation at this stage.
</ul>
<p>
Addrcheck is, therefore, a fine-grained address checker. All it
really does is check each memory reference to say whether or not that
location may validly be addressed. Addrcheck has a memory overhead of
one bit per byte of used address space. In contrast, Memcheck has an
overhead of nine bits per byte.
<p>
Due to laziness on the part of the implementor (Julian), error
messages from Addrcheck do not distinguish reads from writes. So it
will say, for example, "Invalid memory access of size 4", whereas
Memcheck would have said whether the access is a read or a write.
This could easily be remedied, if anyone is particularly bothered.
<p>
Addrcheck is quite pleasant to use. It's faster than Memcheck, and
the lack of valid-value checks has another side effect: the errors it
does report are relatively easy to track down, compared to the
tedious and often confusing search sometimes needed to find the
cause of uninitialised-value errors reported by Memcheck.
<p>
Because it is faster and lighter than Memcheck, our hope is that
Addrcheck is more suitable for less-intrusive, larger scale testing
than is viable with Memcheck. As of mid-November 2002, we have
experimented with running the KDE-3.1 desktop on Addrcheck (the entire
process tree, starting from <code>startkde</code>). Running on a
512MB, 1.7 GHz P4, the result is nearly usable. The ultimate aim is
that it is fast and unintrusive enough that (e.g.) KDE sessions may be
monitored for addressing errors whilst people do real work with their
KDE desktop.
<p>
Addrcheck is a new experiment in the Valgrind world. We'd be
interested to hear your feedback on it.
</body>
</html>


@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = cg_main.html cg_techdocs.html
EXTRA_DIST = cg-manual.xml cg-tech-docs.xml

File diff suppressed because it is too large


@ -0,0 +1,560 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="cg-tech-docs" xreflabel="How Cachegrind works">
<title>How Cachegrind works</title>
<sect1 id="cg-tech-docs.profiling" xreflabel="Cache profiling">
<title>Cache profiling</title>
<para>Valgrind is a very nice platform for doing cache profiling
and other kinds of simulation, because it converts horrible x86
instructions into nice clean RISC-like UCode. For example, for
cache profiling we are interested in instructions that read and
write memory; in UCode there are only four instructions that do
this: <computeroutput>LOAD</computeroutput>,
<computeroutput>STORE</computeroutput>,
<computeroutput>FPU_R</computeroutput> and
<computeroutput>FPU_W</computeroutput>. By contrast, because of
the x86 addressing modes, almost every instruction can read or
write memory.</para>
<para>Most of the cache profiling machinery is in the file
<filename>vg_cachesim.c</filename>.</para>
<para>These notes are a somewhat haphazard guide to how
Valgrind's cache profiling works.</para>
</sect1>
<sect1 id="cg-tech-docs.costcentres" xreflabel="Cost centres">
<title>Cost centres</title>
<para>Valgrind gathers cache profiling information about every
instruction executed, individually. Each instruction has a
<command>cost centre</command> associated with it. There are two kinds of cost
centre: one for instructions that don't reference memory
(<computeroutput>iCC</computeroutput>), and one for instructions
that do (<computeroutput>idCC</computeroutput>):</para>
<programlisting><![CDATA[
typedef struct _CC {
ULong a;
ULong m1;
ULong m2;
} CC;
typedef struct _iCC {
/* word 1 */
UChar tag;
UChar instr_size;
/* words 2+ */
Addr instr_addr;
CC I;
} iCC;
typedef struct _idCC {
/* word 1 */
UChar tag;
UChar instr_size;
UChar data_size;
/* words 2+ */
Addr instr_addr;
CC I;
CC D;
} idCC; ]]></programlisting>
<para>Each <computeroutput>CC</computeroutput> has three fields
<computeroutput>a</computeroutput>,
<computeroutput>m1</computeroutput>,
<computeroutput>m2</computeroutput> for recording references,
level 1 misses and level 2 misses. Each of these is a 64-bit
<computeroutput>ULong</computeroutput>, because the numbers can get
very large, ie. greater than the 4.2 billion allowed by a 32-bit
unsigned int.</para>
<para>An <computeroutput>iCC</computeroutput> has one
<computeroutput>CC</computeroutput> for instruction cache
accesses. An <computeroutput>idCC</computeroutput> has two: one
for instruction cache accesses, and one for data cache
accesses.</para>
<para>The <computeroutput>iCC</computeroutput> and
<computeroutput>idCC</computeroutput> structs also store
unchanging information about the instruction:</para>
<itemizedlist>
<listitem>
<para>An instruction-type identification tag (explained
below)</para>
</listitem>
<listitem>
<para>Instruction size</para>
</listitem>
<listitem>
<para>Data reference size
(<computeroutput>idCC</computeroutput> only)</para>
</listitem>
<listitem>
<para>Instruction address</para>
</listitem>
</itemizedlist>
<para>Note that the data address is not one of the fields for
<computeroutput>idCC</computeroutput>. This is because for many
memory-referencing instructions the data address can change each
time it's executed (eg. if it uses register-offset addressing).
We have to give this item to the cache simulation in a different
way (see Instrumentation section below). Some memory-referencing
instructions do always reference the same address, but we don't
try to treat them specially in order to keep things simple.</para>
<para>Also note that there is only room for recording info about
one data cache access in an
<computeroutput>idCC</computeroutput>. So what about
instructions that do a read then a write, such as:</para>
<programlisting><![CDATA[
inc %(esi)]]></programlisting>
<para>In a write-allocate cache, as simulated by Valgrind, the
write cannot miss, since it immediately follows the read which
will drag the block into the cache if it's not already there. So
the write access isn't really interesting, and Valgrind doesn't
record it. This means that Valgrind doesn't measure memory
references, but rather memory references that could miss in the
cache. This behaviour is the same as that used by the AMD Athlon
hardware counters. It also has the benefit of simplifying the
implementation -- instructions that read and write memory can be
treated like instructions that read memory.</para>
</sect1>
<sect1 id="cg-tech-docs.ccstore" xreflabel="Storing cost-centres">
<title>Storing cost-centres</title>
<para>Cost centres are stored in a way that makes them very cheap
to look up, which is important since one is looked up for every
original x86 instruction executed.</para>
<para>Valgrind does JIT translations at the basic block level,
and cost centres are also set up and stored at the basic block
level. By doing things carefully, we store all the cost centres
for a basic block in a contiguous array, and lookup comes almost
for free.</para>
<para>Consider this part of a basic block (for exposition
purposes, pretend it's an entire basic block):</para>
<programlisting><![CDATA[
movl $0x0,%eax
movl $0x99, -4(%ebp)]]></programlisting>
<para>The translation to UCode looks like this:</para>
<programlisting><![CDATA[
MOVL $0x0, t20
PUTL t20, %EAX
INCEIPo $5
LEA1L -4(t4), t14
MOVL $0x99, t18
STL t18, (t14)
INCEIPo $7]]></programlisting>
<para>The first step is to allocate the cost centres. This
requires a preliminary pass to count how many x86 instructions
were in the basic block, and their types (and thus sizes). UCode
translations for single x86 instructions are delimited by the
<computeroutput>INCEIPo</computeroutput> instruction, the
argument of which gives the byte size of the instruction (note
that lazy INCEIP updating is turned off to allow this).</para>
<para>We can tell if an x86 instruction references memory by
looking for <computeroutput>LDL</computeroutput> and
<computeroutput>STL</computeroutput> UCode instructions, and thus
what kind of cost centre is required. From this we can determine
how many cost centres we need for the basic block, and their
sizes. We can then allocate them in a single array.</para>
<para>Consider the example code above. After the preliminary
pass, we know we need two cost centres, one
<computeroutput>iCC</computeroutput> and one
<computeroutput>idCC</computeroutput>. So we allocate an array to
store these which looks like this:
<programlisting><![CDATA[
|(uninit)| tag (1 byte)
|(uninit)| instr_size (1 byte)
|(uninit)| (padding) (2 bytes)
|(uninit)| instr_addr (4 bytes)
|(uninit)| I.a (8 bytes)
|(uninit)| I.m1 (8 bytes)
|(uninit)| I.m2 (8 bytes)
|(uninit)| tag (1 byte)
|(uninit)| instr_size (1 byte)
|(uninit)| data_size (1 byte)
|(uninit)| (padding) (1 byte)
|(uninit)| instr_addr (4 bytes)
|(uninit)| I.a (8 bytes)
|(uninit)| I.m1 (8 bytes)
|(uninit)| I.m2 (8 bytes)
|(uninit)| D.a (8 bytes)
|(uninit)| D.m1 (8 bytes)
|(uninit)| D.m2 (8 bytes)]]></programlisting>
<para>(We can see now why we need tags to distinguish between the
two types of cost centres.)</para>
<para>We also record the size of the array. We look up the debug
info of the first instruction in the basic block, and then stick
the array into a table indexed by filename and function name.
This makes it easy to dump the information quickly to file at the
end.</para>
</sect1>
<sect1 id="cg-tech-docs.instrum" xreflabel="Instrumentation">
<title>Instrumentation</title>
<para>The instrumentation pass has two main jobs:</para>
<orderedlist>
<listitem>
<para>Fill in the gaps in the allocated cost centres.</para>
</listitem>
<listitem>
<para>Add UCode to call the cache simulator for each
instruction.</para>
</listitem>
</orderedlist>
<para>The instrumentation pass steps through the UCode and the
cost centres in tandem. As each original x86 instruction's UCode
is processed, the appropriate gaps in the instruction's cost
centre are filled in, for example:</para>
<programlisting><![CDATA[
|INSTR_CC| tag (1 byte)
|5 | instr_size (1 byte)
|(uninit)| (padding) (2 bytes)
|i_addr1 | instr_addr (4 bytes)
|0 | I.a (8 bytes)
|0 | I.m1 (8 bytes)
|0 | I.m2 (8 bytes)
|WRITE_CC| tag (1 byte)
|7 | instr_size (1 byte)
|4 | data_size (1 byte)
|(uninit)| (padding) (1 byte)
|i_addr2 | instr_addr (4 bytes)
|0 | I.a (8 bytes)
|0 | I.m1 (8 bytes)
|0 | I.m2 (8 bytes)
|0 | D.a (8 bytes)
|0 | D.m1 (8 bytes)
|0 | D.m2 (8 bytes)]]></programlisting>
<para>(Note that this step is not performed if a basic block is
re-translated; see <xref linkend="cg-tech-docs.retranslations"/> for
more information.)</para>
<para>GCC inserts padding before the
<computeroutput>instr_addr</computeroutput> field so that it is
word aligned.</para>
<para>The instrumentation added to call the cache simulation
function looks like this (instrumentation is indented to
distinguish it from the original UCode):</para>
<programlisting><![CDATA[
MOVL $0x0, t20
PUTL t20, %EAX
PUSHL %eax
PUSHL %ecx
PUSHL %edx
MOVL $0x4091F8A4, t46 # address of 1st CC
PUSHL t46
CALLMo $0x12 # first cachesim function
CLEARo $0x4
POPL %edx
POPL %ecx
POPL %eax
INCEIPo $5
LEA1L -4(t4), t14
MOVL $0x99, t18
MOVL t14, t42
STL t18, (t14)
PUSHL %eax
PUSHL %ecx
PUSHL %edx
PUSHL t42
MOVL $0x4091F8C4, t44 # address of 2nd CC
PUSHL t44
CALLMo $0x13 # second cachesim function
CLEARo $0x8
POPL %edx
POPL %ecx
POPL %eax
INCEIPo $7]]></programlisting>
<para>Consider the first instruction's UCode. Each call is
surrounded by three <computeroutput>PUSHL</computeroutput> and
<computeroutput>POPL</computeroutput> instructions to save and
restore the caller-save registers. Then the address of the
instruction's cost centre is pushed onto the stack, to be the
first argument to the cache simulation function. The address is
known at this point because we are doing a simultaneous pass
through the cost centre array. This means the cost centre lookup
for each instruction is almost free (just the cost of pushing an
argument for a function call). Then the call to the cache
simulation function for non-memory-reference instructions is made
(note that the <computeroutput>CALLMo</computeroutput>
UInstruction takes an offset into a table of predefined
functions; it is not an absolute address), and the single
argument is <computeroutput>CLEAR</computeroutput>ed from the
stack.</para>
<para>The second instruction's UCode is similar. The only
difference is that, as mentioned before, we have to pass the
address of the data item referenced to the cache simulation
function too. This explains the <computeroutput>MOVL t14,
t42</computeroutput> and <computeroutput>PUSHL
t42</computeroutput> UInstructions. (Note that the seemingly
redundant <computeroutput>MOV</computeroutput>ing will probably
be optimised away during register allocation.)</para>
<para>Note that instead of storing unchanging information about
each instruction (instruction size, data size, etc) in its cost
centre, we could have passed in these arguments to the simulation
function. But this would slow the calls down (two or three extra
arguments pushed onto the stack). Also it would bloat the UCode
instrumentation by amounts similar to the space required for them
in the cost centre; bloated UCode would also fill the translation
cache more quickly, requiring more translations for large
programs and slowing them down more.</para>
</sect1>
<sect1 id="cg-tech-docs.retranslations"
xreflabel="Handling basic block retranslations">
<title>Handling basic block retranslations</title>
<para>The above description ignores one complication. Valgrind
has a limited size cache for basic block translations; if it
fills up, old translations are discarded. If a discarded basic
block is executed again, it must be re-translated.</para>
<para>However, we can't use this approach for profiling -- we
can't throw away cost centres for instructions in the middle of
execution! So when a basic block is translated, we first look
for its cost centre array in the hash table. If there is no cost
centre array, it must be the first translation, so we proceed as
described above. But if there is a cost centre array already, it
must be a retranslation. In this case, we skip the cost centre
allocation and initialisation steps, but still do the UCode
instrumentation step.</para>
</sect1>
<sect1 id="cg-tech-docs.cachesim" xreflabel="The cache simulation">
<title>The cache simulation</title>
<para>The cache simulation is fairly straightforward. It just
tracks which memory blocks are in the cache at the moment (it
doesn't track the contents, since that is irrelevant).</para>
<para>The interface to the simulation is quite clean. The
functions called from the UCode contain calls to the simulation
functions in the files
<filename>vg_cachesim_{I1,D1,L2}.c</filename>; these calls are
inlined so that only one function call is done per simulated x86
instruction. The file <filename>vg_cachesim.c</filename> simply
<computeroutput>#include</computeroutput>s the three files
containing the simulation, which makes plugging in new cache
simulations very easy -- you just replace the three files and
recompile.</para>
</sect1>
<sect1 id="cg-tech-docs.output" xreflabel="Output">
<title>Output</title>
<para>Output is fairly straightforward, basically printing the
cost centre for every instruction, grouped by files and
functions. Total counts (eg. total cache accesses, total L1
misses) are calculated when traversing this structure rather than
during execution, to save time; the cache simulation functions
are called so often that even one or two extra adds can make a
sizeable difference.</para>
<para>The input file has the following format:</para>
<programlisting><![CDATA[
file ::= desc_line* cmd_line events_line data_line+ summary_line
desc_line ::= "desc:" ws? non_nl_string
cmd_line ::= "cmd:" ws? cmd
events_line ::= "events:" ws? (event ws)+
data_line ::= file_line | fn_line | count_line
file_line ::= ("fl=" | "fi=" | "fe=") filename
fn_line ::= "fn=" fn_name
count_line ::= line_num ws? (count ws)+
summary_line ::= "summary:" ws? (count ws)+
count ::= num | "."]]></programlisting>
<para>Where:</para>
<itemizedlist>
<listitem>
<para><computeroutput>non_nl_string</computeroutput> is any
string not containing a newline.</para>
</listitem>
<listitem>
<para><computeroutput>cmd</computeroutput> is a command line
invocation.</para>
</listitem>
<listitem>
<para><computeroutput>filename</computeroutput> and
<computeroutput>fn_name</computeroutput> can be anything.</para>
</listitem>
<listitem>
<para><computeroutput>num</computeroutput> and
<computeroutput>line_num</computeroutput> are decimal
numbers.</para>
</listitem>
<listitem>
<para><computeroutput>ws</computeroutput> is whitespace.</para>
</listitem>
<listitem>
<para><computeroutput>nl</computeroutput> is a newline.</para>
</listitem>
</itemizedlist>
<para>The contents of the "desc:" lines are printed out at the top
of the summary. This is a generic way of providing
simulation-specific information, eg. giving the cache configuration
for the cache simulation.</para>
<para>Counts can be "." to represent "N/A", eg. the number of
write misses for an instruction that doesn't write to
memory.</para>
<para>The number of counts in each
<computeroutput>count_line</computeroutput> and the
<computeroutput>summary_line</computeroutput> should not exceed
the number of events in the
<computeroutput>events_line</computeroutput>. If the number in
a <computeroutput>count_line</computeroutput> is smaller, cg_annotate
treats the missing counts as "." entries.</para>
<para>A <computeroutput>file_line</computeroutput> changes the
current file name. A <computeroutput>fn_line</computeroutput>
changes the current function name. A
<computeroutput>count_line</computeroutput> contains counts that
pertain to the current filename/fn_name. A
<computeroutput>file_line</computeroutput> and a
<computeroutput>fn_line</computeroutput> must appear before any
<computeroutput>count_line</computeroutput>s to give the context
of the first <computeroutput>count_line</computeroutput>s.</para>
<para>Each <computeroutput>file_line</computeroutput> should be
immediately followed by a
<computeroutput>fn_line</computeroutput>. "fi="
<computeroutput>file_lines</computeroutput> are used to switch
filenames for inlined functions; "fe="
<computeroutput>file_lines</computeroutput> are similar, but are
put at the end of a basic block in which the file name hasn't
been switched back to the original file name. ("fi=" and "fe="
lines behave identically; they are distinguished only to aid
debugging.)</para>
</sect1>
<sect1 id="cg-tech-docs.summary"
xreflabel="Summary of performance features">
<title>Summary of performance features</title>
<para>Quite a lot of work has gone into making the profiling as
fast as possible. This is a summary of the important
features:</para>
<itemizedlist>
<listitem>
<para>The basic block-level cost centre storage allows almost
free cost centre lookup.</para>
</listitem>
<listitem>
<para>Only one function call is made per instruction
simulated; even this accounts for a sizeable percentage of
execution time, but it seems unavoidable if we want
flexibility in the cache simulator.</para>
</listitem>
<listitem>
<para>Unchanging information about an instruction is stored
in its cost centre, avoiding unnecessary argument pushing,
and minimising UCode instrumentation bloat.</para>
</listitem>
<listitem>
<para>Summary counts are calculated at the end, rather than
during execution.</para>
</listitem>
<listitem>
<para>The <computeroutput>cachegrind.out</computeroutput>
output files can contain huge amounts of information; file
format was carefully chosen to minimise file sizes.</para>
</listitem>
</itemizedlist>
</sect1>
<sect1 id="cg-tech-docs.annotate" xreflabel="Annotation">
<title>Annotation</title>
<para>Annotation is done by cg_annotate. It is a fairly
straightforward Perl script that slurps up all the cost centres,
and then runs through all the chosen source files, printing out
cost centres with them. It too has been carefully optimised.</para>
</sect1>
<sect1 id="cg-tech-docs.extensions" xreflabel="Similar work, extensions">
<title>Similar work, extensions</title>
<para>It would be relatively straightforward to do other
simulations and obtain line-by-line information about interesting
events. A good example would be branch prediction -- all
branches could be instrumented to interact with a branch
prediction simulator, using very similar techniques to those
described above.</para>
<para>In particular, cg_annotate would not need to change -- the
file format is such that it is not specific to the cache
simulation, but could be used for any kind of line-by-line
information. The only part of cg_annotate that is specific to
the cache simulation is the name of the input file
(<computeroutput>cachegrind.out</computeroutput>), although it
would be very simple to add an option to control this.</para>
</sect1>
</chapter>


@ -1,714 +0,0 @@
<html>
<head>
<title>Cachegrind: a cache-miss profiler</title>
</head>
<body>
<a name="cg-top"></a>
<h2>4&nbsp; <b>Cachegrind</b>: a cache-miss profiler</h2>
To use this tool, you must specify <code>--tool=cachegrind</code>
on the Valgrind command line.
<p>
Detailed technical documentation on how Cachegrind works is available
<A HREF="cg_techdocs.html">here</A>. If you want to know how
to <b>use</b> it, you only need to read this page.
<a name="cache"></a>
<h3>4.1&nbsp; Cache profiling</h3>
Cachegrind is a tool for doing cache simulations and annotating your source
line-by-line with the number of cache misses. In particular, it records:
<ul>
<li>L1 instruction cache reads and misses;
<li>L1 data cache reads and read misses, writes and write misses;
<li>L2 unified cache reads and read misses, writes and write misses.
</ul>
On a modern x86 machine, an L1 miss will typically cost around 10 cycles,
and an L2 miss can cost as much as 200 cycles. Detailed cache profiling can be
very useful for improving the performance of your program.<p>
Also, since one instruction cache read is performed per instruction executed,
you can find out how many instructions are executed per line, which can be
useful for traditional profiling and test coverage.<p>
Any feedback, bug-fixes, suggestions, etc, welcome.
<h3>4.2&nbsp; Overview</h3>
First off, as for normal Valgrind use, you probably want to compile with
debugging info (the <code>-g</code> flag). But by contrast with normal
Valgrind use, you probably <b>do</b> want to turn optimisation on, since you
should profile your program as it will be normally run.
The two steps are:
<ol>
<li>Run your program with <code>valgrind --tool=cachegrind</code> in front of
the normal command line invocation. When the program finishes,
Cachegrind will print summary cache statistics. It also collects
line-by-line information in a file
<code>cachegrind.out.<i>pid</i></code>, where <code><i>pid</i></code>
is the program's process id.
<p>
This step should be done every time you want to collect
information about a new program, a changed program, or about the
same program with different input.
</li><p>
<li>Generate a function-by-function summary, and possibly annotate
source files, using the supplied
<code>cg_annotate</code> program. Source files to annotate can be
specified manually on the command line, or
"interesting" source files can be annotated automatically with
the <code>--auto=yes</code> option. You can annotate C/C++
files or assembly language files equally easily.
<p>
This step can be performed as many times as you like for each
Step 1. You may want to do multiple annotations showing
different information each time.
</li><p>
</ol>
The steps are described in detail in the following sections.
<h3>4.3&nbsp; Cache simulation specifics</h3>
Cachegrind uses a simulation for a machine with a split L1 cache and a unified
L2 cache. This configuration is used for all (modern) x86-based machines we
are aware of. Old Cyrix CPUs had a unified I and D L1 cache, but they are
ancient history now.<p>
The more specific characteristics of the simulation are as follows.
<ul>
<li>Write-allocate: when a write miss occurs, the block written to
is brought into the D1 cache. Most modern caches have this
property.<p>
</li>
<p>
<li>Bit-selection hash function: the line(s) in the cache to which a
memory block maps is chosen by the middle bits M--(M+N-1) of the
byte address, where:
<ul>
<li>&nbsp;line size = 2^M bytes&nbsp;</li>
<li>(cache size / line size) = 2^N</li>
</ul>
</li>
<p>
<li>Inclusive L2 cache: the L2 cache replicates all the entries of
the L1 cache. This is standard on Pentium chips, but AMD
Athlons use an exclusive L2 cache that only holds blocks evicted
from L1. Ditto AMD Durons and most modern VIAs.</li>
</ul>
The cache configuration simulated (cache size, associativity and line size) is
determined automagically using the CPUID instruction. If you have an old
machine that (a) doesn't support the CPUID instruction, or (b) supports it in
an early incarnation that doesn't give any cache information, then Cachegrind
will fall back to using a default configuration (that of a model 3/4 Athlon).
Cachegrind will tell you if this happens. You can manually specify one, two or
all three levels (I1/D1/L2) of the cache from the command line using the
<code>--I1</code>, <code>--D1</code> and <code>--L2</code> options.
<p>
Other noteworthy behaviour:
<ul>
<li>References that straddle two cache lines are treated as follows:
<ul>
<li>If both blocks hit --&gt; counted as one hit</li>
<li>If one block hits, the other misses --&gt; counted as one miss</li>
<li>If both blocks miss --&gt; counted as one miss (not two)</li>
</ul>
</li>
<li>Instructions that modify a memory location (eg. <code>inc</code> and
<code>dec</code>) are counted as doing just a read, ie. a single data
reference. This may seem strange, but since the write can never cause a
miss (the read guarantees the block is in the cache) it's not very
interesting.
<p>
Thus it measures not the number of times the data cache is accessed, but
the number of times a data cache miss could occur.<p>
</li>
</ul>
If you are interested in simulating a cache with different properties, it is
not particularly hard to write your own cache simulator, or to modify the
existing ones in <code>vg_cachesim_I1.c</code>, <code>vg_cachesim_D1.c</code>,
<code>vg_cachesim_L2.c</code> and <code>vg_cachesim_gen.c</code>. We'd be
interested to hear from anyone who does.
<a name="profile"></a>
<h3>4.4&nbsp; Profiling programs</h3>
To gather cache profiling information about the program <code>ls -l</code>,
invoke Cachegrind like this:
<blockquote><code>valgrind --tool=cachegrind ls -l</code></blockquote>
The program will execute (slowly). Upon completion, summary statistics
that look like this will be printed:
<pre>
==31751== I refs: 27,742,716
==31751== I1 misses: 276
==31751== L2 misses: 275
==31751== I1 miss rate: 0.0%
==31751== L2i miss rate: 0.0%
==31751==
==31751== D refs: 15,430,290 (10,955,517 rd + 4,474,773 wr)
==31751== D1 misses: 41,185 ( 21,905 rd + 19,280 wr)
==31751== L2 misses: 23,085 ( 3,987 rd + 19,098 wr)
==31751== D1 miss rate: 0.2% ( 0.1% + 0.4%)
==31751== L2d miss rate: 0.1% ( 0.0% + 0.4%)
==31751==
==31751== L2 misses: 23,360 ( 4,262 rd + 19,098 wr)
==31751== L2 miss rate: 0.0% ( 0.0% + 0.4%)
</pre>
Cache accesses for instruction fetches are summarised first, giving the
number of fetches made (this is the number of instructions executed, which
can be useful to know in its own right), the number of I1 misses, and the
number of L2 instruction (<code>L2i</code>) misses.
<p>
Cache accesses for data follow. The information is similar to that of the
instruction fetches, except that the values are also shown split between reads
and writes (note each row's <code>rd</code> and <code>wr</code> values add up
to the row's total).
<p>
Combined instruction and data figures for the L2 cache follow that.
<h3>4.5&nbsp; Output file</h3>
As well as printing summary information, Cachegrind also writes
line-by-line cache profiling information to a file named
<code>cachegrind.out.<i>pid</i></code>. This file is human-readable, but is
best interpreted by the accompanying program <code>cg_annotate</code>,
described in the next section.
<p>
Things to note about the <code>cachegrind.out.<i>pid</i></code> file:
<ul>
<li>It is written every time Cachegrind
is run, and will overwrite any existing
<code>cachegrind.out.<i>pid</i></code> in the current directory (but
that won't happen very often because it takes some time for process ids
to be recycled).</li><p>
<li>It can be huge: <code>ls -l</code> generates a file of about
350KB. Browsing a few files and web pages with a Konqueror
built with full debugging information generates a file
of around 15 MB.</li>
</ul>
Note that older versions of Cachegrind used a log file named
<code>cachegrind.out</code> (i.e. no <code><i>.pid</i></code> suffix).
The suffix serves two purposes. Firstly, it means you don't have to
rename old log files that you don't want to overwrite. Secondly, and
more importantly, it allows correct profiling with the
<code>--trace-children=yes</code> option of programs that spawn child
processes.
<a name="profileflags"></a>
<h3>4.6&nbsp; Cachegrind options</h3>
Cache-simulation specific options are:
<ul>
<li><code>--I1=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><br>
<code>--D1=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><br>
<code>--L2=&lt;size&gt;,&lt;associativity&gt;,&lt;line_size&gt;</code><p>
[default: uses CPUID for automagic cache configuration]<p>
Manually specifies the I1/D1/L2 cache configuration, where
<code>size</code> and <code>line_size</code> are measured in bytes. The
three items must be comma-separated, but with no spaces, eg:
<blockquote>
<code>valgrind --tool=cachegrind --I1=65536,2,64</code>
</blockquote>
You can specify one, two or three of the I1/D1/L2 caches. Any level not
manually specified will be simulated using the configuration found in the
normal way (via the CPUID instruction, or failing that, via defaults).
</ul>
<a name="annotate"></a>
<h3>4.7&nbsp; Annotating C/C++ programs</h3>
Before using <code>cg_annotate</code>, it is worth widening your
window to be at least 120 characters wide if possible, as the output
lines can be quite long.
<p>
To get a function-by-function summary, run <code>cg_annotate
--<i>pid</i></code> in a directory containing a
<code>cachegrind.out.<i>pid</i></code> file. The <code>--<i>pid</i></code>
is required so that <code>cg_annotate</code> knows which log file to use when
several are present.
<p>
The output looks like this:
<pre>
--------------------------------------------------------------------------------
I1 cache: 65536 B, 64 B, 2-way associative
D1 cache: 65536 B, 64 B, 2-way associative
L2 cache: 262144 B, 64 B, 8-way associative
Command: concord vg_to_ucode.c
Events recorded: Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
Events shown: Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
Event sort order: Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
Threshold: 99%
Chosen for annotation:
Auto-annotation: on
--------------------------------------------------------------------------------
Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
--------------------------------------------------------------------------------
27,742,716 276 275 10,955,517 21,905 3,987 4,474,773 19,280 19,098 PROGRAM TOTALS
--------------------------------------------------------------------------------
Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw file:function
--------------------------------------------------------------------------------
8,821,482 5 5 2,242,702 1,621 73 1,794,230 0 0 getc.c:_IO_getc
5,222,023 4 4 2,276,334 16 12 875,959 1 1 concord.c:get_word
2,649,248 2 2 1,344,810 7,326 1,385 . . . vg_main.c:strcmp
2,521,927 2 2 591,215 0 0 179,398 0 0 concord.c:hash
2,242,740 2 2 1,046,612 568 22 448,548 0 0 ctype.c:tolower
1,496,937 4 4 630,874 9,000 1,400 279,388 0 0 concord.c:insert
897,991 51 51 897,831 95 30 62 1 1 ???:???
598,068 1 1 299,034 0 0 149,517 0 0 ../sysdeps/generic/lockfile.c:__flockfile
598,068 0 0 299,034 0 0 149,517 0 0 ../sysdeps/generic/lockfile.c:__funlockfile
598,024 4 4 213,580 35 16 149,506 0 0 vg_clientmalloc.c:malloc
446,587 1 1 215,973 2,167 430 129,948 14,057 13,957 concord.c:add_existing
341,760 2 2 128,160 0 0 128,160 0 0 vg_clientmalloc.c:vg_trap_here_WRAPPER
320,782 4 4 150,711 276 0 56,027 53 53 concord.c:init_hash_table
298,998 1 1 106,785 0 0 64,071 1 1 concord.c:create
149,518 0 0 149,516 0 0 1 0 0 ???:tolower@@GLIBC_2.0
149,518 0 0 149,516 0 0 1 0 0 ???:fgetc@@GLIBC_2.0
95,983 4 4 38,031 0 0 34,409 3,152 3,150 concord.c:new_word_node
85,440 0 0 42,720 0 0 21,360 0 0 vg_clientmalloc.c:vg_bogus_epilogue
</pre>
First up is a summary of the annotation options:
<ul>
<li>I1 cache, D1 cache, L2 cache: cache configuration. So you know the
configuration with which these results were obtained.</li><p>
<li>Command: the command line invocation of the program under
examination.</li><p>
<li>Events recorded: event abbreviations are:<p>
<ul>
<li><code>Ir </code>: I cache reads (ie. instructions executed)</li>
<li><code>I1mr</code>: I1 cache read misses</li>
<li><code>I2mr</code>: L2 cache instruction read misses</li>
<li><code>Dr </code>: D cache reads (ie. memory reads)</li>
<li><code>D1mr</code>: D1 cache read misses</li>
<li><code>D2mr</code>: L2 cache data read misses</li>
<li><code>Dw </code>: D cache writes (ie. memory writes)</li>
<li><code>D1mw</code>: D1 cache write misses</li>
<li><code>D2mw</code>: L2 cache data write misses</li>
</ul><p>
Note that total D1 misses are given by <code>D1mr</code> +
<code>D1mw</code>, and that total L2 misses are given by
<code>I2mr</code> + <code>D2mr</code> + <code>D2mw</code>.</li><p>
<li>Events shown: the events shown (a subset of events gathered). This can
be adjusted with the <code>--show</code> option.</li><p>
<li>Event sort order: the sort order in which functions are shown. For
example, in this case the functions are sorted from highest
<code>Ir</code> counts to lowest. If two functions have identical
<code>Ir</code> counts, they will then be sorted by <code>I1mr</code>
counts, and so on. This order can be adjusted with the
<code>--sort</code> option.<p>
Note that this dictates the order the functions appear. It is <b>not</b>
the order in which the columns appear; that is dictated by the "events
shown" line (and can be changed with the <code>--show</code> option).
</li><p>
<li>Threshold: <code>cg_annotate</code> by default omits functions
that cause very low numbers of misses to avoid drowning you in
information. In this case, cg_annotate shows summaries of the
functions that account for 99% of the <code>Ir</code> counts;
<code>Ir</code> is chosen as the threshold event since it is the
primary sort event. The threshold can be adjusted with the
<code>--threshold</code> option.</li><p>
<li>Chosen for annotation: names of files specified manually for annotation;
in this case none.</li><p>
<li>Auto-annotation: whether auto-annotation was requested via the
<code>--auto=yes</code> option. In this case no.</li><p>
</ul>
Then follows summary statistics for the whole program. These are similar
to the summary provided when running <code>valgrind --tool=cachegrind</code>.<p>
Then follows function-by-function statistics. Each function is
identified by a <code>file_name:function_name</code> pair. If a column
contains only a dot, it means the function never performs
that event (eg. the third row shows that <code>strcmp()</code>
contains no instructions that write to memory). The name
<code>???</code> is used if the file name and/or function name
could not be determined from debugging information. If most of the
entries have the form <code>???:???</code> the program probably wasn't
compiled with <code>-g</code>. If any code was invalidated (either due to
self-modifying code or unloading of shared objects) its counts are aggregated
into a single cost centre written as <code>(discarded):(discarded)</code>.<p>
It is worth noting that functions will come from three types of source files:
<ol>
<li> From the profiled program (<code>concord.c</code> in this example).</li>
<li>From libraries (eg. <code>getc.c</code>)</li>
<li>From Valgrind's implementation of some libc functions (eg.
<code>vg_clientmalloc.c:malloc</code>). These are recognisable because
the filename begins with <code>vg_</code>, and is probably one of
<code>vg_main.c</code>, <code>vg_clientmalloc.c</code> or
<code>vg_mylibc.c</code>.
</li>
</ol>
There are two ways to annotate source files -- by choosing them
manually, or with the <code>--auto=yes</code> option. To do it
manually, just specify the filenames as arguments to
<code>cg_annotate</code>. For example, the output from running
<code>cg_annotate concord.c</code> for our example produces the same
output as above followed by an annotated version of
<code>concord.c</code>, a section of which looks like:
<pre>
--------------------------------------------------------------------------------
-- User-annotated source: concord.c
--------------------------------------------------------------------------------
Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
[snip]
. . . . . . . . . void init_hash_table(char *file_name, Word_Node *table[])
3 1 1 . . . 1 0 0 {
. . . . . . . . . FILE *file_ptr;
. . . . . . . . . Word_Info *data;
1 0 0 . . . 1 1 1 int line = 1, i;
. . . . . . . . .
5 0 0 . . . 3 0 0 data = (Word_Info *) create(sizeof(Word_Info));
. . . . . . . . .
4,991 0 0 1,995 0 0 998 0 0 for (i = 0; i < TABLE_SIZE; i++)
3,988 1 1 1,994 0 0 997 53 52 table[i] = NULL;
. . . . . . . . .
. . . . . . . . . /* Open file, check it. */
6 0 0 1 0 0 4 0 0 file_ptr = fopen(file_name, "r");
2 0 0 1 0 0 . . . if (!(file_ptr)) {
. . . . . . . . . fprintf(stderr, "Couldn't open '%s'.\n", file_name);
1 1 1 . . . . . . exit(EXIT_FAILURE);
. . . . . . . . . }
. . . . . . . . .
165,062 1 1 73,360 0 0 91,700 0 0 while ((line = get_word(data, line, file_ptr)) != EOF)
146,712 0 0 73,356 0 0 73,356 0 0 insert(data-&gt;word, data-&gt;line, table);
. . . . . . . . .
4 0 0 1 0 0 2 0 0 free(data);
4 0 0 1 0 0 2 0 0 fclose(file_ptr);
3 0 0 2 0 0 . . . }
</pre>
(Although column widths are automatically minimised, a wide terminal is clearly
useful.)<p>
Each source file is clearly marked (<code>User-annotated source</code>) as
having been chosen manually for annotation. If the file was found in one of
the directories specified with the <code>-I</code>/<code>--include</code>
option, the directory and file are both given.<p>
Each line is annotated with its event counts. Events not applicable for a line
are represented by a `.'; this is useful for distinguishing between an event
which cannot happen, and one which can but did not.<p>
Sometimes only a small section of a source file is executed. To minimise
uninteresting output, Valgrind only shows annotated lines and lines within a
small distance of annotated lines. Gaps are marked with the line numbers so
you know which part of a file the shown code comes from, eg:
<pre>
(figures and code for line 704)
-- line 704 ----------------------------------------
-- line 878 ----------------------------------------
(figures and code for line 878)
</pre>
The amount of context to show around annotated lines is controlled by the
<code>--context</code> option.<p>
To get automatic annotation, run <code>cg_annotate --auto=yes</code>.
cg_annotate will automatically annotate every source file it can find that is
mentioned in the function-by-function summary. Therefore, the files chosen for
auto-annotation are affected by the <code>--sort</code> and
<code>--threshold</code> options. Each source file is clearly marked
(<code>Auto-annotated source</code>) as being chosen automatically. Any files
that could not be found are mentioned at the end of the output, eg:
<pre>
--------------------------------------------------------------------------------
The following files chosen for auto-annotation could not be found:
--------------------------------------------------------------------------------
getc.c
ctype.c
../sysdeps/generic/lockfile.c
</pre>
This is quite common for library files, since libraries are usually compiled
with debugging information, but the source files are often not present on a
system. If a file is chosen for annotation <b>both</b> manually and
automatically, it is marked as <code>User-annotated source</code>.
Use the <code>-I/--include</code> option to tell Valgrind where to look for
source files if the filenames found from the debugging information aren't
specific enough.
Beware that cg_annotate can take some time to digest large
<code>cachegrind.out.<i>pid</i></code> files, e.g. 30 seconds or more. Also
beware that auto-annotation can produce a lot of output if your program is
large!
<h3>4.8&nbsp; Annotating assembler programs</h3>
Valgrind can annotate assembler programs too, or annotate the
assembler generated for your C program. Sometimes this is useful for
understanding what is really happening when an interesting line of C
code is translated into multiple instructions.<p>
To do this, you just need to assemble your <code>.s</code> files with
assembler-level debug information. gcc doesn't do this, but you can
use the GNU assembler with the <code>--gstabs</code> option to
generate object files with this information, eg:
<blockquote><code>as --gstabs foo.s</code></blockquote>
You can then profile and annotate source files in the same way as for C/C++
programs.
<h3>4.9&nbsp; <code>cg_annotate</code> options</h3>
<ul>
<li><code>--<i>pid</i></code></li><p>
Indicates which <code>cachegrind.out.<i>pid</i></code> file to read.
Not actually an option -- it is required.
<li><code>-h, --help</code></li><p>
<li><code>-v, --version</code><p>
Help and version, as usual.</li>
<li><code>--sort=A,B,C</code> [default: order in
<code>cachegrind.out.<i>pid</i></code>]<p>
Specifies the events upon which the sorting of the function-by-function
entries will be based. Useful if you want to concentrate on eg. I cache
misses (<code>--sort=I1mr,I2mr</code>), or D cache misses
(<code>--sort=D1mr,D2mr</code>), or L2 misses
(<code>--sort=D2mr,I2mr</code>).</li><p>
<li><code>--show=A,B,C</code> [default: all, using order in
<code>cachegrind.out.<i>pid</i></code>]<p>
Specifies which events to show (and the column order). Default is to use
all present in the <code>cachegrind.out.<i>pid</i></code> file (and use
the order in the file).</li><p>
<li><code>--threshold=X</code> [default: 99%] <p>
Sets the threshold for the function-by-function summary. Functions are
shown that account for more than X% of the primary sort event. If
auto-annotating, also affects which files are annotated.
Note: thresholds can be set for more than one event by appending a colon
and a number to the events in the <code>--sort</code> option (no spaces,
though). E.g. if you want to see the functions that cover
99% of L2 read misses and 99% of L2 write misses, use this option:
<blockquote><code>--sort=D2mr:99,D2mw:99</code></blockquote>
</li><p>
<li><code>--auto=no</code> [default]<br>
<code>--auto=yes</code> <p>
When enabled, automatically annotates every file that is mentioned in the
function-by-function summary that can be found. Also gives a list of
those that couldn't be found.
<li><code>--context=N</code> [default: 8]<p>
Print N lines of context before and after each annotated line. Avoids
printing large sections of source files that were not executed. Use a
large number (eg. 10,000) to show all source lines.
</li><p>
<li><code>-I=&lt;dir&gt;, --include=&lt;dir&gt;</code>
[default: empty string]<p>
Adds a directory to the list in which to search for files. Multiple
-I/--include options can be given to add multiple directories.
</ul>
<h3>4.10&nbsp; Warnings</h3>
There are a couple of situations in which cg_annotate issues warnings.
<ul>
<li>If a source file is more recent than the
<code>cachegrind.out.<i>pid</i></code> file. This is because the
information in <code>cachegrind.out.<i>pid</i></code> is only recorded
with line numbers, so if the line numbers change at all in the source
(eg. lines added, deleted, swapped), any annotations will be
incorrect.<p>
<li>If information is recorded about line numbers past the end of a file.
This can be caused by the above problem, ie. shortening the source file
while using an old <code>cachegrind.out.<i>pid</i></code> file. If this
happens, the figures for the bogus lines are printed anyway (clearly
marked as bogus) in case they are important.</li><p>
</ul>
<h3>4.11&nbsp; Things to watch out for</h3>
Some odd things that can occur during annotation:
<ul>
<li>If annotating at the assembler level, you might see something like this:
<pre>
1 0 0 . . . . . . leal -12(%ebp),%eax
1 0 0 . . . 1 0 0 movl %eax,84(%ebx)
2 0 0 0 0 0 1 0 0 movl $1,-20(%ebp)
. . . . . . . . . .align 4,0x90
1 0 0 . . . . . . movl $.LnrB,%eax
1 0 0 . . . 1 0 0 movl %eax,-16(%ebp)
</pre>
How can the third instruction be executed twice when the others are
executed only once? As it turns out, it isn't. Here's a dump of the
executable, using <code>objdump -d</code>:
<pre>
8048f25: 8d 45 f4 lea 0xfffffff4(%ebp),%eax
8048f28: 89 43 54 mov %eax,0x54(%ebx)
8048f2b: c7 45 ec 01 00 00 00 movl $0x1,0xffffffec(%ebp)
8048f32: 89 f6 mov %esi,%esi
8048f34: b8 08 8b 07 08 mov $0x8078b08,%eax
8048f39: 89 45 f0 mov %eax,0xfffffff0(%ebp)
</pre>
Notice the extra <code>mov %esi,%esi</code> instruction. Where did this
come from? The GNU assembler inserted it to serve as the two bytes of
padding needed to align the <code>movl $.LnrB,%eax</code> instruction on
a four-byte boundary, but pretended it didn't exist when adding debug
information. Thus when Valgrind reads the debug info it thinks that the
<code>movl $0x1,0xffffffec(%ebp)</code> instruction covers the address
range 0x8048f2b--0x8048f33 by itself, and attributes the counts for the
<code>mov %esi,%esi</code> to it.<p>
</li>
<li>Inlined functions can cause strange results in the function-by-function
summary. If a function <code>inline_me()</code> is defined in
<code>foo.h</code> and inlined in the functions <code>f1()</code>,
<code>f2()</code> and <code>f3()</code> in <code>bar.c</code>, there will
not be a <code>foo.h:inline_me()</code> function entry. Instead, there
will be separate function entries for each inlining site, ie.
<code>foo.h:f1()</code>, <code>foo.h:f2()</code> and
<code>foo.h:f3()</code>. To find the total counts for
<code>foo.h:inline_me()</code>, add up the counts from each entry.<p>
The reason for this is that although the debug info output by gcc
indicates the switch from <code>bar.c</code> to <code>foo.h</code>, it
doesn't indicate the name of the function in <code>foo.h</code>, so
Valgrind keeps using the old one.<p>
<li>Sometimes, the same filename might be represented with a relative name
and with an absolute name in different parts of the debug info, eg:
<code>/home/user/proj/proj.h</code> and <code>../proj.h</code>. In this
case, if you use auto-annotation, the file will be annotated twice with
the counts split between the two.<p>
</li>
<li>Files with more than 65,535 lines cause difficulties for the stabs debug
info reader. This is because the line number in the <code>struct
nlist</code> defined in <code>a.out.h</code> under Linux is only a 16-bit
value. Valgrind can handle some files with more than 65,535 lines
correctly by making some guesses to identify line number overflows. But
some cases are beyond it, in which case you'll get a warning message
explaining that annotations for the file might be incorrect.<p>
</li>
<li>If you compile some files with <code>-g</code> and some without, some
events that take place in a file without debug info could be attributed
to the last line of a file with debug info (whichever one gets placed
before the non-debug-info file in the executable).<p>
</li>
</ul>
This list looks long, but these cases should be fairly rare.<p>
Note: stabs is not an easy format to read. If you come across bizarre
annotations that look like they might be caused by a bug in the stabs reader,
please let us know.<p>
<h3>4.12&nbsp; Accuracy</h3>
Valgrind's cache profiling has a number of shortcomings:
<ul>
<li>It doesn't account for kernel activity -- the effect of system calls on
the cache contents is ignored.</li><p>
<li>It doesn't account for other process activity (although this is probably
desirable when considering a single program).</li><p>
<li>It doesn't account for virtual-to-physical address mappings; hence the
entire simulation is not a true representation of what's happening in the
cache.</li><p>
<li>It doesn't account for cache misses not visible at the instruction level,
eg. those arising from TLB misses, or speculative execution.</li><p>
<li>Valgrind's custom threads implementation will schedule threads
differently to the standard one. This could warp the results for
threaded programs.
</li><p>
<li>The instructions <code>bts</code>, <code>btr</code> and <code>btc</code>
will incorrectly be counted as doing a data read if both the arguments
are registers, eg:
<blockquote><code>btsl %eax, %edx</code></blockquote>
This should only happen rarely.
</li><p>
<li>FPU instructions with data sizes of 28 and 108 bytes (e.g.
<code>fsave</code>) are treated as though they only access 16 bytes.
These instructions seem to be rare so hopefully this won't affect
accuracy much.
</li><p>
</ul>
Another thing worth noting is that the results are very sensitive. Changing the
size of the <code>valgrind.so</code> file, the size of the program being
profiled, or even the length of its name can perturb the results. Variations
will be small, but don't expect perfectly repeatable results if your program
changes at all.<p>
While these factors mean you shouldn't trust the results to be super-accurate,
hopefully they should be close enough to be useful.<p>
<h3>4.13&nbsp; Todo</h3>
<ul>
<li>Program start-up/shut-down calls a lot of functions that aren't
interesting and just complicate the output. Would be nice to exclude
these somehow.</li>
<p>
</ul>
</body>
</html>


@ -1,458 +0,0 @@
<html>
<head>
<style type="text/css">
body { background-color: #ffffff;
color: #000000;
font-family: Times, Helvetica, Arial;
font-size: 14pt}
h4 { margin-bottom: 0.3em}
code { color: #000000;
font-family: Courier;
font-size: 13pt }
pre { color: #000000;
font-family: Courier;
font-size: 13pt }
a:link { color: #0000C0;
text-decoration: none; }
a:visited { color: #0000C0;
text-decoration: none; }
a:active { color: #0000C0;
text-decoration: none; }
</style>
<title>How Cachegrind works</title>
</head>
<body bgcolor="#ffffff">
<a name="cg-techdocs">&nbsp;</a>
<h1 align=center>How Cachegrind works</h1>
<center>
Detailed technical notes for hackers, maintainers and the
overly-curious<br>
<p>
<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
<a
href="http://valgrind.kde.org">http://valgrind.kde.org</a><br>
<p>
Copyright &copy; 2001-2003 Nick Nethercote
<p>
</center>
<p>
<hr width="100%">
<h2>Cache profiling</h2>
Valgrind is a very nice platform for doing cache profiling and other kinds of
simulation, because it converts horrible x86 instructions into nice clean
RISC-like UCode. For example, for cache profiling we are interested in
instructions that read and write memory; in UCode there are only four
instructions that do this: <code>LOAD</code>, <code>STORE</code>,
<code>FPU_R</code> and <code>FPU_W</code>. By contrast, because of the x86
addressing modes, almost every instruction can read or write memory.<p>
Most of the cache profiling machinery is in the file
<code>vg_cachesim.c</code>.<p>
These notes are a somewhat haphazard guide to how Valgrind's cache profiling
works.<p>
<h3>Cost centres</h3>
Valgrind gathers cache profiling about every instruction executed,
individually. Each instruction has a <b>cost centre</b> associated with it.
There are two kinds of cost centre: one for instructions that don't reference
memory (<code>iCC</code>), and one for instructions that do
(<code>idCC</code>):
<pre>
typedef struct _CC {
ULong a;
ULong m1;
ULong m2;
} CC;
typedef struct _iCC {
/* word 1 */
UChar tag;
UChar instr_size;
/* words 2+ */
Addr instr_addr;
CC I;
} iCC;
typedef struct _idCC {
/* word 1 */
UChar tag;
UChar instr_size;
UChar data_size;
/* words 2+ */
Addr instr_addr;
CC I;
CC D;
} idCC;
</pre>
Each <code>CC</code> has three fields <code>a</code>, <code>m1</code>,
<code>m2</code> for recording references, level 1 misses and level 2 misses.
Each of these is a 64-bit <code>ULong</code> -- the numbers can get very large,
ie. greater than the 4.2 billion allowed by a 32-bit unsigned int.<p>
An <code>iCC</code> has one <code>CC</code> for instruction cache accesses. An
<code>idCC</code> has two: one for instruction cache accesses, and one for data
cache accesses.<p>
The <code>iCC</code> and <code>idCC</code> structs also store unchanging
information about the instruction:
<ul>
<li>An instruction-type identification tag (explained below)</li><p>
<li>Instruction size</li><p>
<li>Data reference size (<code>idCC</code> only)</li><p>
<li>Instruction address</li><p>
</ul>
Note that the data address is not one of the fields of <code>idCC</code>. This is
because for many memory-referencing instructions the data address can change
each time it's executed (eg. if it uses register-offset addressing). We have
to give this item to the cache simulation in a different way (see
Instrumentation section below). Some memory-referencing instructions do always
reference the same address, but we don't try to treat them specially in order to
keep things simple.<p>
Also note that there is only room for recording info about one data cache
access in an <code>idCC</code>. So what about instructions that do a read then
a write, such as:
<blockquote><code>inc %(esi)</code></blockquote>
In a write-allocate cache, as simulated by Valgrind, the write cannot miss,
since it immediately follows the read which will drag the block into the cache
if it's not already there. So the write access isn't really interesting, and
Valgrind doesn't record it. This means that Valgrind doesn't measure
memory references, but rather memory references that could miss in the cache.
This behaviour is the same as that used by the AMD Athlon hardware counters.
It also has the benefit of simplifying the implementation -- instructions that
read and write memory can be treated like instructions that read memory.<p>
<h3>Storing cost-centres</h3>
Cost centres are stored in a way that makes them very cheap to lookup, which is
important since one is looked up for every original x86 instruction
executed.<p>
Valgrind does JIT translations at the basic block level, and cost centres are
also setup and stored at the basic block level. By doing things carefully, we
store all the cost centres for a basic block in a contiguous array, and lookup
comes almost for free.<p>
Consider this part of a basic block (for exposition purposes, pretend it's an
entire basic block):
<pre>
movl $0x0,%eax
movl $0x99, -4(%ebp)
</pre>
The translation to UCode looks like this:
<pre>
MOVL $0x0, t20
PUTL t20, %EAX
INCEIPo $5
LEA1L -4(t4), t14
MOVL $0x99, t18
STL t18, (t14)
INCEIPo $7
</pre>
The first step is to allocate the cost centres. This requires a preliminary
pass to count how many x86 instructions were in the basic block, and their
types (and thus sizes). UCode translations for single x86 instructions are
delimited by the <code>INCEIPo</code> instruction, the argument of which gives
the byte size of the instruction (note that lazy INCEIP updating is turned off
to allow this).<p>
We can tell if an x86 instruction references memory by looking for
<code>LDL</code> and <code>STL</code> UCode instructions, and thus what kind of
cost centre is required. From this we can determine how many cost centres we
need for the basic block, and their sizes. We can then allocate them in a
single array.<p>
Consider the example code above. After the preliminary pass, we know we need
two cost centres, one <code>iCC</code> and one <code>idCC</code>. So we
allocate an array to store these which looks like this:
<pre>
|(uninit)| tag (1 byte)
|(uninit)| instr_size (1 byte)
|(uninit)| (padding) (2 bytes)
|(uninit)| instr_addr (4 bytes)
|(uninit)| I.a (8 bytes)
|(uninit)| I.m1 (8 bytes)
|(uninit)| I.m2 (8 bytes)
|(uninit)| tag (1 byte)
|(uninit)| instr_size (1 byte)
|(uninit)| data_size (1 byte)
|(uninit)| (padding) (1 byte)
|(uninit)| instr_addr (4 bytes)
|(uninit)| I.a (8 bytes)
|(uninit)| I.m1 (8 bytes)
|(uninit)| I.m2 (8 bytes)
|(uninit)| D.a (8 bytes)
|(uninit)| D.m1 (8 bytes)
|(uninit)| D.m2 (8 bytes)
</pre>
(We can see now why we need tags to distinguish between the two types of cost
centres.)<p>
We also record the size of the array. We look up the debug info of the first
instruction in the basic block, and then stick the array into a table indexed
by filename and function name. This makes it easy to dump the information
quickly to file at the end.<p>
<h3>Instrumentation</h3>
The instrumentation pass has two main jobs:
<ol>
<li>Fill in the gaps in the allocated cost centres.</li><p>
<li>Add UCode to call the cache simulator for each instruction.</li><p>
</ol>
The instrumentation pass steps through the UCode and the cost centres in
tandem. As each original x86 instruction's UCode is processed, the appropriate
gaps in the instruction's cost centre are filled in, for example:
<pre>
|INSTR_CC| tag (1 byte)
|5 | instr_size (1 byte)
|(uninit)| (padding) (2 bytes)
|i_addr1 | instr_addr (4 bytes)
|0 | I.a (8 bytes)
|0 | I.m1 (8 bytes)
|0 | I.m2 (8 bytes)
|WRITE_CC| tag (1 byte)
|7 | instr_size (1 byte)
|4 | data_size (1 byte)
|(uninit)| (padding) (1 byte)
|i_addr2 | instr_addr (4 bytes)
|0 | I.a (8 bytes)
|0 | I.m1 (8 bytes)
|0 | I.m2 (8 bytes)
|0 | D.a (8 bytes)
|0 | D.m1 (8 bytes)
|0 | D.m2 (8 bytes)
</pre>
(Note that this step is not performed if a basic block is re-translated; see
<a href="#retranslations">here</a> for more information.)<p>
GCC inserts padding before the <code>instr_size</code> field so that it is word
aligned.<p>
The instrumentation added to call the cache simulation function looks like this
(instrumentation is indented to distinguish it from the original UCode):
<pre>
MOVL $0x0, t20
PUTL t20, %EAX
PUSHL %eax
PUSHL %ecx
PUSHL %edx
MOVL $0x4091F8A4, t46 # address of 1st CC
PUSHL t46
CALLMo $0x12 # first cachesim function
CLEARo $0x4
POPL %edx
POPL %ecx
POPL %eax
INCEIPo $5
LEA1L -4(t4), t14
MOVL $0x99, t18
MOVL t14, t42
STL t18, (t14)
PUSHL %eax
PUSHL %ecx
PUSHL %edx
PUSHL t42
MOVL $0x4091F8C4, t44 # address of 2nd CC
PUSHL t44
CALLMo $0x13 # second cachesim function
CLEARo $0x8
POPL %edx
POPL %ecx
POPL %eax
INCEIPo $7
</pre>
Consider the first instruction's UCode. Each call is surrounded by three
<code>PUSHL</code> and <code>POPL</code> instructions to save and restore the
caller-save registers. Then the address of the instruction's cost centre is
pushed onto the stack, to be the first argument to the cache simulation
function. The address is known at this point because we are doing a
simultaneous pass through the cost centre array. This means the cost centre
lookup for each instruction is almost free (just the cost of pushing an
argument for a function call). Then the call to the cache simulation function
for non-memory-reference instructions is made (note that the
<code>CALLMo</code> UInstruction takes an offset into a table of predefined
functions; it is not an absolute address), and the single argument is
<code>CLEAR</code>ed from the stack.<p>
The second instruction's UCode is similar. The only difference is that, as
mentioned before, we have to pass the address of the data item referenced to
the cache simulation function too. This explains the <code>MOVL t14,
t42</code> and <code>PUSHL t42</code> UInstructions. (Note that the seemingly
redundant <code>MOV</code>ing will probably be optimised away during register
allocation.)<p>
Note that instead of storing unchanging information about each instruction
(instruction size, data size, etc) in its cost centre, we could have passed in
these arguments to the simulation function. But this would slow the calls down
(two or three extra arguments pushed onto the stack). Also it would bloat the
UCode instrumentation by amounts similar to the space required for them in the
cost centre; bloated UCode would also fill the translation cache more quickly,
requiring more translations for large programs and slowing them down more.<p>
<a name="retranslations"></a>
<h3>Handling basic block retranslations</h3>
The above description ignores one complication. Valgrind has a limited size
cache for basic block translations; if it fills up, old translations are
discarded. If a discarded basic block is executed again, it must be
re-translated.<p>
However, we can't use this approach for profiling -- we can't throw away cost
centres for instructions in the middle of execution! So when a basic block is
translated, we first look for its cost centre array in the hash table. If
there is no cost centre array, it must be the first translation, so we proceed
as described above. But if there is a cost centre array already, it must be a
retranslation. In this case, we skip the cost centre allocation and
initialisation steps, but still do the UCode instrumentation step.<p>
<h3>The cache simulation</h3>
The cache simulation is fairly straightforward. It just tracks which memory
blocks are in the cache at the moment (it doesn't track the contents, since
that is irrelevant).<p>
The interface to the simulation is quite clean. The functions called from the
UCode contain calls to the simulation functions in the files
<Code>vg_cachesim_{I1,D1,L2}.c</code>; these calls are inlined so that only
one function call is done per simulated x86 instruction. The file
<code>vg_cachesim.c</code> simply <code>#include</code>s the three files
containing the simulation, which makes plugging in new cache simulations
very easy -- you just replace the three files and recompile.<p>
<h3>Output</h3>
Output is fairly straightforward, basically printing the cost centre for every
instruction, grouped by files and functions. Total counts (eg. total cache
accesses, total L1 misses) are calculated when traversing this structure rather
than during execution, to save time; the cache simulation functions are called
so often that even one or two extra adds can make a sizeable difference.<p>
Input file has the following format:
<pre>
file ::= desc_line* cmd_line events_line data_line+ summary_line
desc_line ::= "desc:" ws? non_nl_string
cmd_line ::= "cmd:" ws? cmd
events_line ::= "events:" ws? (event ws)+
data_line ::= file_line | fn_line | count_line
file_line ::= ("fl=" | "fi=" | "fe=") filename
fn_line ::= "fn=" fn_name
count_line ::= line_num ws? (count ws)+
summary_line ::= "summary:" ws? (count ws)+
count ::= num | "."
</pre>
Where:
<ul>
<li><code>non_nl_string</code> is any string not containing a newline.</li><p>
<li><code>cmd</code> is a command line invocation.</li><p>
<li><code>filename</code> and <code>fn_name</code> can be anything.</li><p>
<li><code>num</code> and <code>line_num</code> are decimal numbers.</li><p>
<li><code>ws</code> is whitespace.</li><p>
<li><code>nl</code> is a newline.</li><p>
</ul>
The contents of the "desc:" lines are printed out at the top of the summary.
This is a generic way of providing simulation specific information, eg. for
giving the cache configuration for cache simulation.<p>
Counts can be "." to represent "N/A", eg. the number of write misses for an
instruction that doesn't write to memory.<p>
The number of counts in each <code>count_line</code> and the
<code>summary_line</code> should not exceed the number of events in the
<code>events_line</code>. If a <code>count_line</code> has fewer counts,
cg_annotate treats the missing ones as though they were "." entries. <p>
A <code>file_line</code> changes the current file name. A <code>fn_line</code>
changes the current function name. A <code>count_line</code> contains counts
that pertain to the current filename/fn_name. A "fl=" <code>file_line</code>
and a <code>fn_line</code> must appear before any <code>count_line</code>s to
give the context of the first <code>count_line</code>s.<p>
Each <code>file_line</code> should be immediately followed by a
<code>fn_line</code>. "fi=" <code>file_lines</code> are used to switch
filenames for inlined functions; "fe=" <code>file_lines</code> are similar, but
are put at the end of a basic block in which the file name hasn't been switched
back to the original file name. ("fi" and "fe" lines behave identically; they
are distinguished only to help debugging.)<p>
<h3>Summary of performance features</h3>
Quite a lot of work has gone into making the profiling as fast as possible.
This is a summary of the important features:
<ul>
<li>The basic block-level cost centre storage allows almost free cost centre
lookup.</li><p>
<li>Only one function call is made per instruction simulated; even this
accounts for a sizeable percentage of execution time, but it seems
unavoidable if we want flexibility in the cache simulator.</li><p>
<li>Unchanging information about an instruction is stored in its cost centre,
avoiding unnecessary argument pushing, and minimising UCode
instrumentation bloat.</li><p>
<li>Summary counts are calculated at the end, rather than during
execution.</li><p>
<li>The <code>cachegrind.out</code> output files can contain huge amounts of
information; file format was carefully chosen to minimise file
sizes.</li><p>
</ul>
<h3>Annotation</h3>
Annotation is done by cg_annotate. It is a fairly straightforward Perl script
that slurps up all the cost centres, and then runs through all the chosen
source files, printing out cost centres with them. It too has been carefully
optimised.
<h3>Similar work, extensions</h3>
It would be relatively straightforward to do other simulations and obtain
line-by-line information about interesting events. A good example would be
branch prediction -- all branches could be instrumented to interact with a
branch prediction simulator, using very similar techniques to those described
above.<p>
In particular, cg_annotate would not need to change -- the file format is such
that it is not specific to the cache simulation, but could be used for any kind
of line-by-line information. The only part of cg_annotate that is specific to
the cache simulation is the name of the input file
(<code>cachegrind.out</code>), although it would be very simple to add an
option to control this.<p>
</body>
</html>


@@ -356,6 +356,9 @@ AC_OUTPUT(
valgrind.spec
valgrind.pc
docs/Makefile
docs/lib/Makefile
docs/images/Makefile
docs/xml/Makefile
tests/Makefile
tests/vg_regtest
tests/unused/Makefile
@@ -371,7 +374,6 @@ AC_OUTPUT(
auxprogs/Makefile
coregrind/Makefile
coregrind/demangle/Makefile
coregrind/docs/Makefile
coregrind/amd64/Makefile
coregrind/arm/Makefile
coregrind/x86/Makefile


@@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = cc_main.html
EXTRA_DIST = cc-manual.xml


@@ -0,0 +1,50 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="cc-manual" xreflabel="CoreCheck">
<title>CoreCheck: a very simple error detector</title>
<para>CoreCheck is a very simple tool for Valgrind. It adds no
instrumentation to the program's code, and only reports the few
kinds of errors detected by Valgrind's core. It is mainly of use
for Valgrind's developers for debugging and regression
testing.</para>
<para>The errors detected are those found by the core when
<computeroutput>VG_(needs).core_errors</computeroutput> is set.
These include:</para>
<itemizedlist>
<listitem>
<para>Pthread API errors (many; e.g. unlocking a non-locked
mutex)</para>
</listitem>
<listitem>
<para>Silly arguments to <computeroutput>malloc()</computeroutput> et al
(e.g. negative size)</para>
</listitem>
<listitem>
<para>Invalid file descriptors to blocking syscalls
<computeroutput>read()</computeroutput> and
<computeroutput>write()</computeroutput></para>
</listitem>
<listitem>
<para>Bad signal numbers passed to
<computeroutput>sigaction()</computeroutput></para>
</listitem>
<listitem>
<para>Attempts to install signal handler for
<computeroutput>SIGKILL</computeroutput> or
<computeroutput>SIGSTOP</computeroutput></para>
</listitem>
</itemizedlist>
</chapter>


@@ -1,66 +0,0 @@
<html>
<head>
<style type="text/css">
body { background-color: #ffffff;
color: #000000;
font-family: Times, Helvetica, Arial;
font-size: 14pt}
h4 { margin-bottom: 0.3em}
code { color: #000000;
font-family: Courier;
font-size: 13pt }
pre { color: #000000;
font-family: Courier;
font-size: 13pt }
a:link { color: #0000C0;
text-decoration: none; }
a:visited { color: #0000C0;
text-decoration: none; }
a:active { color: #0000C0;
text-decoration: none; }
</style>
<title>Cachegrind</title>
</head>
<body bgcolor="#ffffff">
<a name="title"></a>
<h1 align=center>CoreCheck</h1>
<center>This manual was last updated on 2002-10-03</center>
<p>
<center>
<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
Copyright &copy; 2000-2004 Nicholas Nethercote
<p>
CoreCheck is licensed under the GNU General Public License,
version 2<br>
CoreCheck is a Valgrind tool that does very basic error checking.
</center>
<p>
<h2>1&nbsp; CoreCheck</h2>
CoreCheck is a very simple tool for Valgrind. It adds no instrumentation to
the program's code, and only reports the few kinds of errors detected by
Valgrind's core. It is mainly of use for Valgrind's developers for debugging
and regression testing.
<p>
The errors detected are those found by the core when
<code>VG_(needs).core_errors</code> is set. These include:
<ul>
<li>Pthread API errors (many; e.g. unlocking a non-locked mutex)<p>
<li>Silly arguments to <code>malloc()</code> et al (e.g. negative size)<p>
<li>Invalid file descriptors to blocking syscalls <code>read()</code> and
<code>write()</code><p>
<li>Bad signal numbers passed to <code>sigaction()</code><p>
<li>Attempts to install signal handler for <code>SIGKILL</code> or
<code>SIGSTOP</code> <p>
</ul>
<hr width="100%">
</body>
</html>


@@ -4,8 +4,8 @@ include $(top_srcdir)/Makefile.core-AM_CPPFLAGS.am
## When building, we are only interested in the current arch/OS/platform.
## But when doing 'make dist', we are interested in every arch/OS/platform.
## That's what DIST_SUBDIRS specifies.
SUBDIRS = $(VG_ARCH) $(VG_OS) $(VG_PLATFORM) demangle . docs
DIST_SUBDIRS = $(VG_ARCH_ALL) $(VG_OS_ALL) $(VG_PLATFORM_ALL) demangle . docs
SUBDIRS = $(VG_ARCH) $(VG_OS) $(VG_PLATFORM) demangle .
DIST_SUBDIRS = $(VG_ARCH_ALL) $(VG_OS_ALL) $(VG_PLATFORM_ALL) demangle .
AM_CPPFLAGS += -DVG_LIBDIR="\"$(valdir)"\" -I$(srcdir)/demangle \
-DKICKSTART_BASE=@KICKSTART_BASE@ \


@@ -1,2 +0,0 @@
Makefile.in
Makefile


@@ -1,3 +0,0 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = coregrind_core.html coregrind_intro.html coregrind_tools.html

File diff suppressed because it is too large


@@ -1,162 +0,0 @@
<a name="intro"></a>
<h2>1&nbsp; Introduction</h2>
<a name="intro-overview"></a>
<h3>1.1&nbsp; An overview of Valgrind</h3>
Valgrind is a flexible system for debugging and profiling Linux-x86
executables. The system consists of a core, which provides a synthetic
x86 CPU in software, and a series of tools, each of which performs some
kind of debugging, profiling, or similar task. The architecture is
modular, so that new tools can be created easily and without disturbing
the existing structure.
<p>
A number of useful tools are supplied as standard. In summary, these
are:
<ul>
<li><b>Memcheck</b> detects memory-management problems in your programs.
All reads and writes of memory are checked, and calls to
malloc/new/free/delete are intercepted. As a result, Memcheck can
detect the following problems:
<ul>
<li>Use of uninitialised memory</li>
<li>Reading/writing memory after it has been free'd</li>
<li>Reading/writing off the end of malloc'd blocks</li>
<li>Reading/writing inappropriate areas on the stack</li>
<li>Memory leaks -- where pointers to malloc'd blocks are lost
forever</li>
<li>Mismatched use of malloc/new/new [] vs free/delete/delete []</li>
<li>Overlapping <code>src</code> and <code>dst</code> pointers in
<code>memcpy()</code> and related functions</li>
<li>Some misuses of the POSIX pthreads API</li>
</ul>
<p>
Problems like these can be difficult to find by other means, often
lying undetected for long periods, then causing occasional,
difficult-to-diagnose crashes.
<p>
<li><b>Addrcheck</b> is a lightweight version of
Memcheck. It is identical to Memcheck except
for the single detail that it does not do any uninitialised-value
checks. All of the other checks -- primarily the fine-grained
address checking -- are still done. The downside of this is that
you don't catch the uninitialised-value errors that
Memcheck can find.
<p>
But the upside is significant: programs run about twice as fast as
they do on Memcheck, and a lot less memory is used. It
still finds reads/writes of freed memory, memory off the end of
blocks and in other invalid places, bugs which you really want to
find before release!
<p>
Because Addrcheck is lighter and faster than
Memcheck, you can run more programs for longer, and so you
may be able to cover more test scenarios. Addrcheck was
created because one of us (Julian) wanted to be able to
run a complete KDE desktop session with checking. As of early
November 2002, we have been able to run KDE-3.0.3 on a 1.7 GHz P4
with 512 MB of memory, using Addrcheck. Although the
result is not stellar, it's quite usable, and it seems plausible
to run KDE for long periods at a time like this, collecting up
all the addressing errors that appear.
<p>
<li><b>Cachegrind</b> is a cache profiler. It performs detailed simulation of
the I1, D1 and L2 caches in your CPU and so can accurately
pinpoint the sources of cache misses in your code. If you desire,
it will show the number of cache misses, memory references and
instructions accruing to each line of source code, with
per-function, per-module and whole-program summaries. If you ask
really nicely it will even show counts for each individual x86
instruction.
<p>
Cachegrind auto-detects your machine's cache configuration
using the <code>CPUID</code> instruction, and so needs no further
configuration info, in most cases.
<p>
Cachegrind is nicely complemented by Josef Weidendorfer's
amazing KCacheGrind visualisation tool (<A
HREF="http://kcachegrind.sourceforge.net">
http://kcachegrind.sourceforge.net</A>), a KDE application which
presents these profiling results in a graphical and
easier-to-understand form.
<p>
<li><b>Helgrind</b> finds data races in multithreaded programs.
Helgrind looks for
memory locations which are accessed by more than one (POSIX
p-)thread, but for which no consistently used (pthread_mutex_)lock
can be found. Such locations are indicative of missing
synchronisation between threads, and could cause hard-to-find
timing-dependent problems.
<p>
Helgrind ("Hell's Gate", in Norse mythology) implements the
so-called "Eraser" data-race-detection algorithm, along with
various refinements (thread-segment lifetimes) which reduce the
number of false errors it reports. It is as yet a somewhat
experimental tool, so your feedback is especially welcomed here.
<p>
Helgrind has been hacked on extensively by Jeremy
Fitzhardinge, and we have him to thank for getting it to a
releasable state.
</ul>
A number of minor tools (<b>corecheck</b>, <b>lackey</b> and
<b>Nulgrind</b>) are also supplied. These aren't particularly useful --
they exist to illustrate how to create simple tools and to help the
valgrind developers in various ways.
<p>
Valgrind is closely tied to details of the CPU, operating system and
to a lesser extent, the compiler and basic C libraries. This makes it
difficult to make it portable, so we have chosen at the outset to
concentrate on what we believe to be a widely used platform: Linux on
x86s. Valgrind uses the standard Unix <code>./configure</code>,
<code>make</code>, <code>make install</code> mechanism, and we have
attempted to ensure that it works on machines with kernel 2.2 or 2.4
and glibc 2.1.X, 2.2.X or 2.3.1. This should cover the vast majority
of modern Linux installations. Note that glibc-2.3.2+, with the
NPTL (Native POSIX Threads Library) package won't work. We hope to
be able to fix this, but it won't be easy.
<p>
Valgrind is licensed under the GNU General Public License, version
2. Read the file LICENSE in the source distribution for details. Some
of the PThreads test cases, <code>pth_*.c</code>, are taken from
"Pthreads Programming" by Bradford Nichols, Dick Buttlar &amp;
Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by O'Reilly
&amp; Associates, Inc.
<a name="intro-navigation"></a>
<h3>1.2&nbsp; How to navigate this manual</h3>
The Valgrind distribution consists of the Valgrind core, upon which are
built Valgrind tools, which do different kinds of debugging and
profiling. This manual is structured similarly.
<p>
First, we describe the Valgrind core, how to use it, and the flags it
supports. Then, each tool has its own chapter in this manual. You only
need to read the documentation for the core and for the tool(s) you
actually use, although you may find it helpful to be at least a little
bit familiar with what all tools do. If you're new to all this, you
probably want to run the Memcheck tool. If you want to write a new tool,
read <A HREF="coregrind_tools.html">this</A>.
<p>
Be aware that the core understands some command line flags, and the
tools have their own flags which they know about. This means
there is no central place describing all the flags that are accepted
-- you have to read the flags documentation both for
<A HREF="coregrind_core.html#core">Valgrind's core</A>
and for the tool you want to use.
<p>


@@ -1,735 +0,0 @@
<html>
<head>
<style type="text/css">
body { background-color: #ffffff;
color: #000000;
font-family: Times, Helvetica, Arial;
font-size: 14pt}
h4 { margin-bottom: 0.3em}
code { color: #000000;
font-family: Courier;
font-size: 13pt }
pre { color: #000000;
font-family: Courier;
font-size: 13pt }
a:link { color: #0000C0;
text-decoration: none; }
a:visited { color: #0000C0;
text-decoration: none; }
a:active { color: #0000C0;
text-decoration: none; }
</style>
<title>Valgrind</title>
</head>
<body bgcolor="#ffffff">
<a name="title">&nbsp;</a>
<h1 align=center>Valgrind Tools</h1>
<center>
A guide to writing new tools for Valgrind<br>
This guide was last updated on 20030520
</center>
<p>
<center>
<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
Nick Nethercote
<p>
Valgrind is licensed under the GNU General Public License,
version 2<br>
An open-source tool for supervising execution of Linux-x86 executables.
</center>
<p>
<hr width="100%">
<a name="contents"></a>
<h2>Contents of this manual</h2>
<h4>1&nbsp; <a href="#intro">Introduction</a></h4>
1.1&nbsp; <a href="#supexec">Supervised Execution</a><br>
1.2&nbsp; <a href="#tools">Tools</a><br>
1.3&nbsp; <a href="#execspaces">Execution Spaces</a><br>
<h4>2&nbsp; <a href="#writingatool">Writing a Tool</a></h4>
2.1&nbsp; <a href="#whywriteatool">Why write a tool?</a><br>
2.2&nbsp; <a href="#suggestedtools">Suggested tools</a><br>
2.3&nbsp; <a href="#howtoolswork">How tools work</a><br>
2.4&nbsp; <a href="#gettingcode">Getting the code</a><br>
2.5&nbsp; <a href="#gettingstarted">Getting started</a><br>
2.6&nbsp; <a href="#writingcode">Writing the code</a><br>
2.7&nbsp; <a href="#init">Initialisation</a><br>
2.8&nbsp; <a href="#instr">Instrumentation</a><br>
2.9&nbsp; <a href="#fini">Finalisation</a><br>
2.10&nbsp; <a href="#otherimportantinfo">Other important information</a><br>
2.11&nbsp; <a href="#wordsofadvice">Words of advice</a><br>
<h4>3&nbsp; <a href="#advancedtopics">Advanced Topics</a></h4>
3.1&nbsp; <a href="#suppressions">Suppressions</a><br>
3.2&nbsp; <a href="#documentation">Documentation</a><br>
3.3&nbsp; <a href="#regressiontests">Regression tests</a><br>
3.4&nbsp; <a href="#profiling">Profiling</a><br>
3.5&nbsp; <a href="#othermakefilehackery">Other makefile hackery</a><br>
3.6&nbsp; <a href="#interfaceversions">Core/tool interface versions</a><br>
<h4>4&nbsp; <a href="#finalwords">Final Words</a></h4>
<hr width="100%">
<a name="intro"></a>
<h2>1&nbsp; Introduction</h2>
<a name="supexec"></a>
<h3>1.1&nbsp; Supervised Execution</h3>
Valgrind provides a generic infrastructure for supervising the execution of
programs. This is done by providing a way to instrument programs in very
precise ways, making it relatively easy to support activities such as dynamic
error detection and profiling.<p>
Although writing a tool is not easy, and requires learning quite a few things
about Valgrind, it is much easier than instrumenting a program from scratch
yourself.
<a name="tools"></a>
<h3>1.2&nbsp; Tools</h3>
The key idea behind Valgrind's architecture is the division between its
``core'' and ``tools''.
<p>
The core provides the common low-level infrastructure to support program
instrumentation, including the x86-to-x86 JIT compiler, low-level memory
manager, signal handling and a scheduler (for pthreads). It also provides
certain services that are useful to some but not all tools, such as support
for error recording and suppression.
<p>
But the core leaves certain operations undefined, which must be filled by tools.
Most notably, tools define how program code should be instrumented. They can
also define certain variables to indicate to the core that they would like to
use certain services, or be notified when certain interesting events occur.
But the core takes care of all the hard work.
<p>
<a name="execspaces"></a>
<h3>1.3&nbsp; Execution Spaces</h3>
An important concept to understand before writing a tool is that there are
three spaces in which program code executes:
<ol>
<li>User space: this covers most of the program's execution. The tool is
given the code and can instrument it any way it likes, providing (more or
less) total control over the code.<p>
Code executed in user space includes all the program code, almost all of
the C library (including things like the dynamic linker), and almost
all parts of all other libraries.
</li><p>
<li>Core space: a small proportion of the program's execution takes place
entirely within Valgrind's core. This includes:<p>
<ul>
<li>Dynamic memory management (<code>malloc()</code> etc.)</li>
<li>Pthread operations and scheduling</li>
<li>Signal handling</li>
</ul><p>
A tool has no control over these operations; it never ``sees'' the code
doing this work and thus cannot instrument it. However, the core
provides hooks so a tool can be notified when certain interesting events
happen, for example when dynamic memory is allocated or freed, the
stack pointer is changed, or a pthread mutex is locked, etc.<p>
Note that these hooks only notify tools of events relevant to user
space. For example, when the core allocates some memory for its own use,
the tool is not notified of this, because it's not directly part of the
supervised program's execution.
</li><p>
<li>Kernel space: execution in the kernel. Two kinds:<p>
<ol>
<li>System calls: can't be directly observed by either the tool or the
core. But the core does have some idea of what happens to the
arguments, and it provides hooks for a tool to wrap system calls.
</li><p>
<li>Other: all other kernel activity (e.g. process scheduling) is
totally opaque and irrelevant to the program.
</li><p>
</ol>
</li><p>
It should be noted that a tool only has direct control over code executed in
user space. This is the vast majority of code executed, but it is not
absolutely all of it, so any profiling information recorded by a tool won't
be totally accurate.
</ol>
<a name="writingatool"></a>
<h2>2&nbsp; Writing a Tool</h2>
<a name="whywriteatool"></a>
<h3>2.1&nbsp; Why write a tool?</h3>
Before you write a tool, you should have some idea of what it should do. What
is it you want to know about your programs of interest? Consider some existing
tools:
<ul>
<li>memcheck: among other things, performs fine-grained validity and
    addressability checks of every memory reference performed by the program
    </li><p>
<li>addrcheck: performs lighter-weight addressability checks of every memory
    reference performed by the program</li><p>
<li>cachegrind: tracks every instruction and memory reference to simulate
instruction and data caches, tracking cache accesses and misses that
occur on every line in the program</li><p>
<li>helgrind: tracks every memory access and mutex lock/unlock to determine
if a program contains any data races</li><p>
<li>lackey: does simple counting of various things: the number of calls to a
particular function (<code>_dl_runtime_resolve()</code>); the number of
basic blocks, x86 instructions, and UCode instructions executed; the number
of branches executed and the proportion of those which were taken.</li><p>
</ul>
These examples give a reasonable idea of what kinds of things Valgrind can be
used for. The instrumentation can range from very lightweight (e.g. counting
the number of times a particular function is called) to very intrusive (e.g.
memcheck's memory checking).
<a name="suggestedtools"></a>
<h3>2.2&nbsp; Suggested tools</h3>
Here is a list of ideas we have had for tools that should not be too hard to
implement.
<ul>
<li>branch profiler: A machine's branch prediction hardware could be
simulated, and each branch annotated with the number of predicted and
mispredicted branches. Would be implemented quite similarly to
Cachegrind, and could reuse the <code>cg_annotate</code> script to
annotate source code.<p>
The biggest difficulty with this is the simulation; the chip-makers
are very cagey about how their chips do branch prediction. But
implementing one or more of the basic algorithms could still give good
information.
</li><p>
<li>coverage tool: Cachegrind can already be used for doing test coverage,
but it's massive overkill to use it just for that.<p>
It would be easy to write a coverage tool that records how many times
each basic block was executed. Again, the <code>cg_annotate</code>
script could be used for annotating source code with the gathered
information. However, <code>cg_annotate</code> is only designed for
working with single program runs. It could be extended relatively easily
to deal with multiple runs of a program, so that the coverage of a whole
test suite could be determined.<p>
In addition to the standard coverage information, such a tool could
record extra information that would help a user generate test cases to
exercise unexercised paths. For example, for each conditional branch,
the tool could record all inputs to the conditional test, and print these
out when annotating.<p>
<li>run-time type checking: A nice example of a dynamic checker is given
in this paper:
<blockquote>
Debugging via Run-Time Type Checking<br>
Alexey Loginov, Suan Hsi Yong, Susan Horwitz and Thomas Reps<br>
Proceedings of Fundamental Approaches to Software Engineering<br>
April 2001.
</blockquote>
Similar is the tool described in this paper:
<blockquote>
Run-Time Type Checking for Binary Programs<br>
Michael Burrows, Stephen N. Freund, Janet L. Wiener<br>
Proceedings of the 12th International Conference on Compiler Construction
(CC 2003)<br>
April 2003.
</blockquote>
These approaches can find quite a range of bugs, particularly in C and C++
programs, and could be implemented quite nicely as a Valgrind tool.<p>
Ways to speed up this run-time type checking are described in this paper:
<blockquote>
Reducing the Overhead of Dynamic Analysis<br>
Suan Hsi Yong and Susan Horwitz<br>
Proceedings of Runtime Verification '02<br>
July 2002.
</blockquote>
Valgrind's client requests could be used to pass information to a tool
about which elements need instrumentation and which don't.
</li><p>
</ul>
We would love to hear from anyone who implements these or other tools.
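To give a feel for the scale of the simulation work involved in the branch-profiler idea, here is a hypothetical sketch (Python, not Valgrind code) of one of the basic algorithms: a per-address two-bit saturating-counter predictor. A real tool would drive something like this from the instrumented branches:

```python
def simulate_branches(outcomes):
    """Per-address two-bit saturating-counter predictor; one of the basic
    algorithms mentioned above.  Hypothetical sketch, not Valgrind code."""
    state = {}                       # branch address -> counter in 0..3
    predicted = mispredicted = 0
    for addr, taken in outcomes:
        ctr = state.get(addr, 1)     # new branches start weakly not-taken
        if (ctr >= 2) == taken:
            predicted += 1
        else:
            mispredicted += 1
        # saturating update toward the actual outcome
        state[addr] = min(3, ctr + 1) if taken else max(0, ctr - 1)
    return predicted, mispredicted
```

Feeding the per-branch tallies into the cg_annotate-style output format would then give line-by-line prediction statistics.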
<a name="howtoolswork"></a>
<h3>2.3&nbsp; How tools work</h3>
Tools must define various functions for instrumenting programs that are called
by Valgrind's core, yet they must be implemented in such a way that they can be
written and compiled without touching Valgrind's core. This is important,
because one of our aims is to allow people to write and distribute their own
tools that can be plugged into Valgrind's core easily.<p>
This is achieved by packaging each tool into a separate shared object which is
then loaded ahead of the core shared object <code>valgrind.so</code>, using the
dynamic linker's <code>LD_PRELOAD</code> variable. Any functions defined in
the tool that share a name with a function defined in the core (such as
the instrumentation function <code>TL_(instrument)()</code>) override the
core's definition. Thus the core can call the necessary tool functions.<p>
This magic is all done for you; the shared object used is chosen with the
<code>--tool</code> option to the <code>valgrind</code> startup script. The
default tool used is <code>memcheck</code>, Valgrind's original memory checker.
<a name="gettingcode"></a>
<h3>2.4&nbsp; Getting the code</h3>
To write your own tool, you'll need to check out a copy of Valgrind from the
CVS repository, rather than using a packaged distribution. This is because it
contains several extra files needed for writing tools.<p>
To check out the code from the CVS repository, first login:
<blockquote><code>
cvs -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind login
</code></blockquote>
Then checkout the code. To get a copy of the current development version
(recommended for the brave only):
<blockquote><code>
cvs -z3 -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind co valgrind
</code></blockquote>
To get a copy of the stable released branch:
<blockquote><code>
cvs -z3 -d:pserver:anonymous@cvs.valgrind.sourceforge.net:/cvsroot/valgrind co -r <i>TAG</i> valgrind
</code></blockquote>
where <code><i>TAG</i></code> has the form <code>VALGRIND_X_Y_Z</code> for
version X.Y.Z.
<a name="gettingstarted"></a>
<h3>2.5&nbsp; Getting started</h3>
Valgrind uses GNU <code>automake</code> and <code>autoconf</code> for the
creation of Makefiles and configuration. But don't worry, these instructions
should be enough to get you started even if you know nothing about those
tools.<p>
In what follows, all filenames are relative to Valgrind's top-level directory
<code>valgrind/</code>.
<ol>
<li>Choose a name for the tool, and an abbreviation that can be used as a
short prefix. We'll use <code>foobar</code> and <code>fb</code> as an
example.
</li><p>
<li>Make a new directory <code>foobar/</code> which will hold the tool.
</li><p>
<li>Copy <code>none/Makefile.am</code> into <code>foobar/</code>.
Edit it by replacing all occurrences of the string
``<code>none</code>'' with ``<code>foobar</code>'' and the one
occurrence of the string ``<code>nl_</code>'' with ``<code>fb_</code>''.
It might be worth trying to understand this file, at least a little; you
might have to do more complicated things with it later on. In
particular, the name of the <code>vgtool_foobar_so_SOURCES</code> variable
determines the name of the tool's shared object, which determines what
name must be passed to the <code>--tool</code> option to use the tool.
</li><p>
<li>Copy <code>none/nl_main.c</code> into
<code>foobar/</code>, renaming it as <code>fb_main.c</code>.
Edit it by changing the lines in <code>TL_(pre_clo_init)()</code>
to something appropriate for the tool. These fields are used in the
startup message, except for <code>bug_reports_to</code> which is used
if a tool assertion fails.
</li><p>
<li>Edit <code>Makefile.am</code>, adding the new directory
<code>foobar</code> to the <code>SUBDIRS</code> variable.
</li><p>
<li>Edit <code>configure.in</code>, adding <code>foobar/Makefile</code> to the
<code>AC_OUTPUT</code> list.
</li><p>
<li>Run:
<pre>
autogen.sh
./configure --prefix=`pwd`/inst
make install</pre>
It should automake, configure and compile without errors, putting copies
of the tool's shared object <code>vgtool_foobar.so</code> in
<code>foobar/</code> and
<code>inst/lib/valgrind/</code>.
</li><p>
<li>You can test it with a command like
<pre>
inst/bin/valgrind --tool=foobar date</pre>
(almost any program should work; <code>date</code> is just an example).
The output should be something like this:
<pre>
==738== foobar-0.0.1, a foobarring tool for x86-linux.
==738== Copyright (C) 1066AD, and GNU GPL'd, by J. Random Hacker.
==738== Built with valgrind-1.1.0, a program execution monitor.
==738== Copyright (C) 2000-2003, and GNU GPL'd, by Julian Seward.
==738== Estimated CPU clock rate is 1400 MHz
==738== For more details, rerun with: -v
==738==
Wed Sep 25 10:31:54 BST 2002
==738==</pre>
The tool does nothing except run the program uninstrumented.
</li><p>
</ol>
These steps don't have to be followed exactly - you can choose different names
for your source files, and use a different <code>--prefix</code> for
<code>./configure</code>.<p>
Now that we've set up, built and tested the simplest possible tool, on to the
interesting stuff...
<a name="writingcode"></a>
<h3>2.6&nbsp; Writing the code</h3>
A tool must define at least these four functions:
<pre>
TL_(pre_clo_init)()
TL_(post_clo_init)()
TL_(instrument)()
TL_(fini)()
</pre>
Also, it must use the macro <code>VG_DETERMINE_INTERFACE_VERSION</code>
exactly once in its source code. If it doesn't, you will get a link error
explaining the problem. This macro is used to ensure the core/tool interface
used by the core and a plugged-in tool are binary compatible.
In addition, if a tool wants to use some of the optional services provided by
the core, it may have to define other functions.
<a name="init"></a>
<h3>2.7&nbsp; Initialisation</h3>
Most of the initialisation should be done in <code>TL_(pre_clo_init)()</code>.
Only use <code>TL_(post_clo_init)()</code> if a tool provides command line
options and must do some initialisation after option processing takes place
(``<code>clo</code>'' stands for ``command line options'').<p>
First of all, various ``details'' need to be set for a tool, using the
functions <code>VG_(details_*)()</code>. Some are compulsory, some aren't.
Some are used when constructing the startup message;
<code>detail_bug_reports_to</code> is used if <code>VG_(tool_panic)()</code> is
ever called, or if a tool assertion fails. Others have other uses.<p>
Second, various ``needs'' can be set for a tool, using the functions
<code>VG_(needs_*)()</code>. They are mostly booleans, and can be left
untouched (they default to <code>False</code>). They determine whether a tool
can do various things such as: record, report and suppress errors; process
command line options; wrap system calls; record extra information about
malloc'd blocks, etc.<p>
For example, if a tool wants the core's help in recording and reporting errors,
it must set the <code>tool_errors</code> need to <code>True</code>, and then
provide definitions of six functions for comparing errors, printing out errors,
reading suppressions from a suppressions file, etc. While writing these
functions requires some work, it's much less than doing error handling from
scratch because the core is doing most of the work. See the type
<code>VgNeeds</code> in <code>include/tool.h</code> for full details of all
the needs.<p>
Third, the tool can indicate which events in core it wants to be notified
about, using the functions <code>VG_(track_*)()</code>. These include things
such as blocks of memory being malloc'd, the stack pointer changing, a mutex
being locked, etc. If a tool wants to know about this, it should set the
relevant pointer in the structure to point to a function, which will be called
when that event happens.<p>
For example, if the tool wants to be notified when a new block of memory is
malloc'd, it should call <code>VG_(track_new_mem_heap)()</code> with an
appropriate function pointer, and the assigned function will be called each
time this happens.<p>
More information about ``details'', ``needs'' and ``trackable events'' can be
found in <code>include/tool.h</code>.<p>
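The track-events mechanism is essentially a table of optional callbacks that the core consults when events occur. The following is a self-contained toy model of that pattern (Python, hypothetical; not the real <code>VG_(track_*)()</code> interface):

```python
class CoreEvents:
    """Toy model of the VG_(track_*)() registration pattern: hooks default
    to None ("not tracked") and the core only calls the ones that are set.
    Hypothetical sketch, not the real Valgrind interface."""
    def __init__(self):
        self.new_mem_heap = None

    def track_new_mem_heap(self, fn):
        self.new_mem_heap = fn

    def fire_new_mem_heap(self, addr, size):
        # the core would invoke this on each heap allocation it performs
        if self.new_mem_heap is not None:
            self.new_mem_heap(addr, size)

allocated = []
core = CoreEvents()
core.track_new_mem_heap(lambda addr, size: allocated.append((addr, size)))
core.fire_new_mem_heap(0x1000, 64)
```

The point of the pattern is that unregistered events cost the tool nothing: the core simply skips hooks that were never set.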
<a name="instr"></a>
<h3>2.8&nbsp; Instrumentation</h3>
<code>TL_(instrument)()</code> is the interesting one. It allows you to
instrument <i>UCode</i>, which is Valgrind's RISC-like intermediate language.
UCode is described in the <a href="mc_techdocs.html">technical docs</a> for
Memcheck.
The easiest way to instrument UCode is to insert calls to C functions when
interesting things happen. See the tool ``Lackey''
(<code>lackey/lk_main.c</code>) for a simple example of this, or
Cachegrind (<code>cachegrind/cg_main.c</code>) for a more complex
example.<p>
A much more complicated way to instrument UCode, albeit one that might result
in faster instrumented programs, is to extend UCode with new UCode
instructions. This is recommended for advanced Valgrind hackers only! See
Memcheck for an example.
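The insert-calls-to-C-functions approach can be modelled abstractly: walk the instruction stream and splice a call in front of each instruction of interest. A toy model (Python, purely illustrative; real tools rewrite UCode, not strings):

```python
def instrument(code, is_interesting, hook):
    """Return a new instruction list with a call to `hook` spliced in
    before every interesting instruction.  Toy model of call-based
    instrumentation; not how UCode is actually represented."""
    out = []
    for insn in code:
        if is_interesting(insn):
            out.append(("call", hook, insn))
        out.append(insn)
    return out

trace = []
prog = ["add", "load", "store", "add"]
instrumented = instrument(prog, lambda i: i in ("load", "store"),
                          lambda insn: trace.append(insn))
# "executing" the instrumented program invokes the hooks:
for insn in instrumented:
    if isinstance(insn, tuple) and insn[0] == "call":
        insn[1](insn[2])
```

Lackey and Cachegrind do essentially this at the UCode level, with the hook being a C function compiled into the tool's shared object.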
<a name="fini"></a>
<h3>2.9&nbsp; Finalisation</h3>
This is where you can present the final results, such as a summary of the
information collected. Any log files should be written out at this point.
<a name="otherimportantinfo"></a>
<h3>2.10&nbsp; Other important information</h3>
Please note that the core/tool split infrastructure is quite complex and
not brilliantly documented. Here are some important points, but there are
undoubtedly many others that I should note but haven't thought of.<p>
The file <code>include/tool.h</code> contains all the types,
macros, functions, etc. that a tool should (hopefully) need, and is the only
<code>.h</code> file a tool should need to <code>#include</code>.<p>
In particular, you probably shouldn't use anything from the C library (there
are deep reasons for this, trust us). Valgrind provides an implementation of a
reasonable subset of the C library, details of which are in
<code>tool.h</code>.<p>
Similarly, when writing a tool, you shouldn't need to look at any of the code
in Valgrind's core, although doing so can sometimes help you understand
something.<p>
<code>tool.h</code> has a reasonable amount of documentation in it that
should hopefully be enough to get you going. But ultimately, the tools
distributed (Memcheck, Addrcheck, Cachegrind, Lackey, etc.) are probably the
best documentation of all, for the moment.<p>
Note that the <code>VG_</code> and <code>TL_</code> macros are used heavily.
These just prepend longer strings in front of names to avoid potential
namespace clashes. We strongly recommend using the <code>TL_</code> macro for
any global functions and variables in your tool, or writing a similar macro.<p>
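The macro trick can be sketched like this; the <code>vgToolInternal_</code> prefix is made up for illustration, and the real <code>VG_</code>/<code>TL_</code> definitions are in <code>include/tool.h</code>.

```c
/* Illustrative only: a namespacing macro in the style of VG_/TL_.
 * The actual expansion used by the core may differ. */
#define TL_(name)  vgToolInternal_##name

/* Written as TL_(clo_verbose), linked as vgToolInternal_clo_verbose,
 * so a tool's globals can't clash with the core's or a library's. */
static int TL_(clo_verbose) = 1;

static int TL_(get_verbose)(void) { return TL_(clo_verbose); }
```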
<a name="wordsofadvice"></a>
<h3>2.11&nbsp; Words of Advice</h3>
Writing and debugging tools is not trivial. Here are some suggestions for
solving common problems.<p>
If you are getting segmentation faults in C functions used by your tool, the
usual GDB command:
<blockquote><code>gdb <i>prog</i> core</code></blockquote>
usually gives the location of the segmentation fault.<p>
If you want to debug C functions used by your tool, you can attach GDB to
Valgrind with some effort; see the file <code>README_DEVELOPERS</code> in
CVS for instructions.<p>
GDB may be able to give you useful information. Note that by default
most of the system is built with <code>-fomit-frame-pointer</code>,
and you'll need to get rid of this to extract useful tracebacks from
GDB.<p>
If you just want to know whether a program point has been reached, using the
<code>OINK</code> macro (in <code>include/tool.h</code>) can be easier than
using GDB.<p>
If you are having problems with your UCode instrumentation, it's likely that
GDB won't be able to help at all. In this case, Valgrind's
<code>--trace-codegen</code> option is invaluable for observing the results of
instrumentation.<p>
The other debugging command line options can be useful too (run <code>valgrind
-h</code> for the list).<p>
<a name="advancedtopics"></a>
<h2>3&nbsp; Advanced Topics</h2>
Once a tool becomes more complicated, there are some extra things you may
want/need to do.
<a name="suppressions"></a>
<h3>3.1&nbsp; Suppressions</h3>
If your tool reports errors and you want to suppress some common ones, you can
add suppressions to the suppression files. The relevant files are
<code>valgrind/*.supp</code>; the final suppression file is aggregated from
these files by combining the relevant <code>.supp</code> files depending on the
versions of Linux, X and glibc on a system.
<p>
Suppression types have the form <code>tool_name:suppression_name</code>. The
<code>tool_name</code> here is the name you specify for the tool during
initialisation with <code>VG_(details_name)()</code>.
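A sketch of what an entry might look like in one of the <code>.supp</code> files, for the example tool <code>foobar</code> (the suppression name, error kind and stack frames here are all invented; copy the format of the existing entries):

```
{
   hypothetical-libX11-whinge
   foobar:some_error_kind
   fun:XSomeFunction
   obj:/usr/X11R6/lib/libX11.so.6.2
}
```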
<a name="documentation"></a>
<h3>3.2&nbsp; Documentation</h3>
If you are feeling conscientious and want to write some HTML documentation for
your tool, follow these steps (using <code>foobar</code> as the example tool
name again):
<ol>
<li>Make a directory <code>foobar/docs/</code>.
</li><p>
<li>Edit <code>foobar/Makefile.am</code>, adding <code>docs</code> to
the <code>SUBDIRS</code> variable.
</li><p>
<li>Edit <code>configure.in</code>, adding
<code>foobar/docs/Makefile</code> to the <code>AC_OUTPUT</code> list.
</li><p>
<li>Write <code>foobar/docs/Makefile.am</code>. Use
<code>memcheck/docs/Makefile.am</code> as an example.
</li><p>
<li>Write the documentation, putting it in <code>foobar/docs/</code>.
</li><p>
</ol>
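A hypothetical <code>foobar/docs/Makefile.am</code> might be as small as this (the file name is invented; <code>memcheck/docs/Makefile.am</code> remains the authoritative model):

```
docdir = $(datadir)/doc/valgrind
doc_DATA = fb_main.html
EXTRA_DIST = $(doc_DATA)
```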
<a name="regressiontests"></a>
<h3>3.3&nbsp; Regression tests</h3>
Valgrind has some support for regression tests. If you want to write
regression tests for your tool:
<ol>
<li>Make a directory <code>foobar/tests/</code>.
</li><p>
<li>Edit <code>foobar/Makefile.am</code>, adding <code>tests</code> to
the <code>SUBDIRS</code> variable.
</li><p>
<li>Edit <code>configure.in</code>, adding
<code>foobar/tests/Makefile</code> to the <code>AC_OUTPUT</code> list.
</li><p>
<li>Write <code>foobar/tests/Makefile.am</code>. Use
<code>memcheck/tests/Makefile.am</code> as an example.
</li><p>
<li>Write the tests, <code>.vgtest</code> test description files,
<code>.stdout.exp</code> and <code>.stderr.exp</code> expected output
files. (Note that Valgrind's output goes to stderr.) Some details
on writing and running tests are given in the comments at the top of the
testing script <code>tests/vg_regtest</code>.
</li><p>
<li>Write a filter for stderr results <code>foobar/tests/filter_stderr</code>.
It can call the existing filters in <code>tests/</code>. See
<code>memcheck/tests/filter_stderr</code> for an example; in particular
note the <code>$dir</code> trick that ensures the filter works correctly
from any directory.
</li><p>
</ol>
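A <code>.vgtest</code> description file is typically only a couple of lines; this sketch assumes a test program called <code>fb_test1</code> (see the comments at the top of <code>tests/vg_regtest</code> for the full set of recognised fields):

```
prog: fb_test1
vgopts: --tool=foobar
```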
<a name="profiling"></a>
<h3>3.4&nbsp; Profiling</h3>
To do simple tick-based profiling of a tool, include the line
<blockquote>
#include "vg_profile.c"
</blockquote>
in the tool somewhere, and rebuild (you may have to <code>make clean</code>
first). Then run Valgrind with the <code>--profile=yes</code> option.<p>
The profiler is stack-based; you can register a profiling event with
<code>VGP_(register_profile_event)()</code> and then use the
<code>VGP_PUSHCC</code> and <code>VGP_POPCC</code> macros to record time spent
doing certain things. New profiling event numbers must not overlap with the
core profiling event numbers. See <code>include/tool.h</code> for details
and Memcheck for an example.
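The push/pop cost-centre idea can be sketched with a simple stack (a toy model for illustration, not the real profiler): each tick is charged to whichever event is currently on top.

```c
/* Toy stand-ins for VGP_(register_profile_event)/VGP_PUSHCC/VGP_POPCC. */
#define MAX_PROFILE_EVENTS 10
#define MAX_DEPTH          16

static unsigned long ticks[MAX_PROFILE_EVENTS];
static int cc_stack[MAX_DEPTH];
static int cc_depth = 0;

#define VGP_PUSHCC(ev)  (cc_stack[cc_depth++] = (ev))
#define VGP_POPCC(ev)   (cc_depth--)

/* A profiling tick is charged to the event on top of the stack. */
static void tick(void)
{
    if (cc_depth > 0)
        ticks[cc_stack[cc_depth - 1]]++;
}
```

Tool event numbers must start above the core's, exactly as the text says; the array bound here stands in for that rule.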
<a name="othermakefilehackery"></a>
<h3>3.5&nbsp; Other makefile hackery</h3>
If you add any directories under <code>valgrind/foobar/</code>, you will
need to add an appropriate <code>Makefile.am</code> to it, and add a
corresponding entry to the <code>AC_OUTPUT</code> list in
<code>valgrind/configure.in</code>.<p>
If you add any scripts to your tool (see Cachegrind for an example) you need to
add them to the <code>bin_SCRIPTS</code> variable in
<code>valgrind/foobar/Makefile.am</code>.<p>
<a name="interfaceversions"></a>
<h3>3.6&nbsp; Core/tool interface versions</h3>
In order to allow for the core/tool interface to evolve over time, Valgrind
uses a basic interface versioning system. All a tool has to do is use the
<code>VG_DETERMINE_INTERFACE_VERSION</code> macro exactly once in its code.
If not, a link error will occur when the tool is built.
<p>
The interface version number has the form X.Y. Changes in Y indicate binary
compatible changes. Changes in X indicate binary incompatible changes. If
the core and tool have the same major version number X, they should work
together. If X doesn't match, Valgrind will abort execution with an
explanation of the problem.
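The X.Y rule above amounts to a one-line check, sketched here with an invented helper name (the real check is performed by the core via <code>VG_DETERMINE_INTERFACE_VERSION</code>):

```c
/* Same major version (X) => compatible; minor (Y) bumps don't matter. */
static int versions_compatible(int core_major, int core_minor,
                               int tool_major, int tool_minor)
{
    (void)core_minor;
    (void)tool_minor;   /* changes in Y are binary compatible */
    return core_major == tool_major;
}
```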
<p>
This approach was chosen so that if the interface changes in the future,
old tools won't work and the reason will be clearly explained, instead of
possibly crashing mysteriously. We have attempted to minimise the potential
for binary incompatible changes by means such as minimising the use of naked
structs in the interface.
<a name="finalwords"></a>
<h2>4&nbsp; Final Words</h2>
This whole core/tool business is under active development, although it's slowly
maturing.<p>
The first consequence of this is that the core/tool interface will continue
to change in the future; we have no intention of freezing it and then
regretting the inevitable stupidities. Hopefully most of the future changes
will be to add new features, hooks, functions, etc, rather than to change old
ones, which should cause a minimum of trouble for existing tools, and we've put
some effort into future-proofing the interface to avoid binary incompatibility.
But we can't guarantee anything. The versioning system should catch any
incompatibilities. Just something to be aware of.<p>
The second consequence of this is that we'd love to hear your feedback about
it:
<ul>
<li>If you love it or hate it</li><p>
<li>If you find bugs</li><p>
<li>If you write a tool</li><p>
<li>If you have suggestions for new features, needs, trackable events,
functions</li><p>
<li>If you have suggestions for making tools easier to write</li><p>
<li>If you have suggestions for improving this documentation</li><p>
<li>If you don't understand something</li><p>
</ul>
or anything else!<p>
Happy programming.


@ -1,3 +1,81 @@
docdir = $(datadir)/doc/valgrind
SUBDIRS = xml lib images
dist_doc_DATA = manual.html
EXTRA_DIST = README
##-------------------------------------------------------------------
## Below here is more ordinary make stuff...
##-------------------------------------------------------------------
docdir = ./
xmldir = $(docdir)xml
imgdir = $(docdir)images
libdir = $(docdir)lib
htmldir = $(docdir)html
printdir = $(docdir)print
XML_CATALOG_FILES = /etc/xml/catalog
# file to log print output to
LOGFILE = print.log
# validation stuff
XMLLINT = xmllint
LINT_FLAGS = --noout --xinclude --noblanks --postvalid
VALID_FLAGS = --dtdvalid http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd
XMLLINT_FLAGS = $(LINT_FLAGS) $(VALID_FLAGS)
# stylesheet processor
XSLTPROC = xsltproc
XSLTPROC_FLAGS = --nonet --xinclude
# stylesheets
XSL_HTML_CHUNK_STYLE = $(libdir)/vg-html-chunk.xsl
XSL_HTML_SINGLE_STYLE = $(libdir)/vg-html-single.xsl
XSL_FO_STYLE = $(libdir)/vg-fo.xsl
all-docs: html-docs print-docs
valid:
$(XMLLINT) $(XMLLINT_FLAGS) $(xmldir)/index.xml
# chunked html
html-docs:
	@echo "Generating html files..."; \
	export XML_CATALOG_FILES=$(XML_CATALOG_FILES); \
	mkdir -p $(htmldir); \
	/bin/rm -fr $(htmldir)/; \
	mkdir -p $(htmldir)/; \
	mkdir -p $(htmldir)/images; \
	cp $(libdir)/vg_basic.css $(htmldir)/; \
	cp $(imgdir)/*.png $(htmldir)/images; \
	$(XSLTPROC) $(XSLTPROC_FLAGS) -o $(htmldir)/ $(XSL_HTML_CHUNK_STYLE) $(xmldir)/index.xml
# pdf and postscript
print-docs:
	@echo "Generating pdf file: $(printdir)/index.pdf ..."; \
	export XML_CATALOG_FILES=$(XML_CATALOG_FILES); \
	mkdir -p $(printdir); \
	mkdir -p $(printdir)/images; \
	cp $(imgdir)/massif-graph-sm.png $(printdir)/images; \
	$(XSLTPROC) $(XSLTPROC_FLAGS) -o $(printdir)/index.fo $(XSL_FO_STYLE) $(xmldir)/index.xml; \
	( cd $(printdir); \
	  pdfxmltex index.fo &> $(LOGFILE); \
	  pdfxmltex index.fo &> $(LOGFILE); \
	  pdfxmltex index.fo &> $(LOGFILE); \
	  echo "Generating ps file: $(printdir)/index.ps"; \
	  pdftops index.pdf; \
	  rm *.log *.aux *.fo *.out )
# If the docs have been built, install them.  But don't worry if they have
# not -- developers often run 'make install' from a workspace that has not
# been 'make dist'-ified.
install-data-hook:
if test -r html ; then \
mkdir -p $(datadir)/doc/; \
cp -r html $(datadir)/doc/; \
fi
dist-hook: html-docs
cp -r html $(distdir)
distclean-local:
rm -rf html print

docs/README Normal file

@ -0,0 +1,166 @@
Valgrind Documentation
----------------------
This text assumes the following directory structure:
Distribution text files (eg. README):
valgrind/
Main /docs/ dir:
valgrind/docs/
Top-level XML files:
valgrind/docs/xml/
Tool specific XML docs:
valgrind/<toolname>/docs/
All images used in the docs:
valgrind/docs/images/
Stylesheets, catalogs, parsing/formatting scripts:
valgrind/docs/lib/
Some files of note:
docs/xml/index.xml: Top-level book-set wrapper
docs/xml/FAQ.xml: The FAQ
docs/xml/vg-entities.xml: Various strings, dates etc. used all over
docs/xml/xml_help.txt: Basic guide to common XML tags.
Overview
---------
The Documentation Set contains all books, articles,
etc. pertaining to Valgrind, and is designed to be built as:
- chunked html files
- PDF file
- PS file
The whole thing is a "book set", made up of multiple books (the user
manual, the FAQ, the tech-docs, the licenses). Each book could be
made individually, but the build system doesn't do that.
CSS: the style-sheet used by the docs is the same as that used by the
website (consistency is king). It might be worth doing a pre-build diff
to check whether the website stylesheet has changed.
The build process
-----------------
It's not obvious exactly when things get built, and so on. Here's an
overview:
- The HTML docs can be built manually by running 'make html-docs' in
valgrind/docs/. (Don't use 'make html'; that is a valid built-in
automake target, but does nothing.) Likewise for PDF/PS with 'make
print-docs'.
- 'make dist' puts the XML files into the tarball. It also builds the
HTML docs and puts them in too, in valgrind/docs/html/ (including
style sheets, images, etc).
- 'make install' installs the HTML docs in
$(install)/share/doc/valgrind/html/, if they are present. (They will
be present if you are installing from the result of a 'make dist'.
They might not be present if you are developing in a Subversion
workspace and have not built them.) It doesn't install the XML docs,
as they're not useful installed.
If the XML processing tools ever mature enough to become standard, we
could just build the docs from XML when doing 'make install', which
would be simpler.
The XML Toolchain
------------------
I spent some time on the docbook-apps list in order to ascertain
the most-useful / widely-available / least-fragile / advanced
toolchain. Basically, everything has problems of one sort or
another, so I ended up going with what I felt was the
least-problematical of the various options.
The maintainer is responsible for ensuring the following tools are
present on his system:
- xmllint: using libxml version 20607
- xsltproc: using libxml 20607, libxslt 10102 and libexslt 802
(NB: be sure to use a version based on libxml2
version 2.6.11 or later. There was a bug in
xml:base processing in versions before that.)
- pdfxmltex: pdfTeX (Web2C 7.4.5) 3.14159-1.10b
- pdftops: version 3.00
- DocBook: version 4.2
- bzip2
- lynx
A big problem is latency. Norman Walsh is constantly updating
DocBook, but the tools tend to lag behind somewhat. It is
important that the versions get on with each other. If you
decide to upgrade something, then it is your responsibility to
ascertain whether things still work nicely - this *cannot* be
assumed.
Print output: if make dies with an error, cat the log file for details.
If you see something like this:
! TeX capacity exceeded, sorry [pool size=436070]
then look at this:
http://lists.debian.org/debian-doc/2003/12/msg00020.html
and modify your texmf files accordingly.
Catalog Locations
------------------
oasis:
http://www.oasis-open.org/docbook/xml/4.2/catalog.xml
http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd
Suse 9.1:
/usr/share/xml/docbook/stylesheet/nwalsh/1.64.1/html/docbook.xsl
/usr/share/xml/docbook/schema/dtd/4.2/docbookx.dtd
/usr/share/xml/docbook/schema/dtd/4.2/catalog.xml
Notes:
------
- the end of file.xml must have only ONE newline after the last tag:
</book>
- pdfxmltex barfs if given a filename with an underscore in it
References:
----------
- samba have got all the stuff
http://websvn.samba.org/listing.php?rep=4&path=/trunk/&opt=dir&sc=1
excellent on-line howto reference:
- http://www.cogent.ca/
using automake with docbook:
- http://www.movement.uklinux.net/docs/docbook-autotools/index.html
Debugging catalog processing:
- http://xmlsoft.org/catalog.html#Declaring
xmlcatalog -v <catalog-file>
shell script to generate xml catalogs for docbook 4.1.2:
- http://xmlsoft.org/XSLT/docbook.html
configure.in re pdfxmltex
- http://cvs.sourceforge.net/viewcvs.py/logreport/service/configure.in?rev=1.325
some useful xsl stylesheets in cvs:
- http://cvs.sourceforge.net/viewcvs.py/perl-xml/perl-xml-faq/
TODO:
----
- get rid of blank pages in fo output
- concat titlepage + subtitle page in fo output
- generate an index for the user manual (??)
- fix tex so it does not run out of memory
- run through and check for not-linked hrefs: grep on 'http'
- run through and check for bad email addresses: grep on '@' etc.
- when we move to svn, change all refs to sourceforge.cvs
- go through and wrap refs+addresses in '<address>' tags

docs/images/Makefile.am Normal file

@ -0,0 +1,3 @@
EXTRA_DIST = \
home.png next.png prev.png up.png \
massif-graph-sm.png massif-graph.png

docs/images/home.png Normal file


docs/images/next.png Normal file

docs/images/prev.png Normal file

docs/images/up.png Normal file

docs/lib/Makefile.am Normal file

@ -0,0 +1,6 @@
EXTRA_DIST = \
vg-common.xsl \
vg-fo.xsl \
vg-html-chunk.xsl \
vg-html-single.xsl \
vg_basic.css

docs/lib/vg-common.xsl Normal file

@ -0,0 +1,45 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<!-- we like '1.2 Title' -->
<xsl:param name="section.autolabel" select="'1'"/>
<xsl:param name="section.label.includes.component.label" select="'1'"/>
<!-- Do not put 'Chapter' at the start of eg 'Chapter 1. Doing This' -->
<xsl:param name="local.l10n.xml" select="document('')"/>
<l:i18n xmlns:l="http://docbook.sourceforge.net/xmlns/l10n/1.0">
<l:l10n language="en">
<l:context name="title-numbered">
<l:template name="chapter" text="%n.&#160;%t"/>
</l:context>
</l:l10n>
</l:i18n>
<!-- don't generate sub-tocs for qanda sets -->
<xsl:param name="generate.toc">
set toc,title
book toc,title,figure,table,example,equation
chapter toc,title
section toc
sect1 toc
sect2 toc
sect3 toc
sect4 nop
sect5 nop
qandaset toc
qandadiv nop
appendix toc,title
article/appendix nop
<!-- article toc,title -->
article nop
preface toc,title
reference toc,title
</xsl:param>
<!-- center everything at the top of a titlepage -->
<xsl:attribute-set name="set.titlepage.recto.style">
<xsl:attribute name="align">center</xsl:attribute>
</xsl:attribute-set>
</xsl:stylesheet>

docs/lib/vg-fo.xsl Normal file

@ -0,0 +1,320 @@
<?xml version="1.0" encoding="UTF-8"?> <!-- -*- sgml -*- -->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:fo="http://www.w3.org/1999/XSL/Format" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/fo/docbook.xsl"/>
<xsl:import href="vg-common.xsl"/>
<!-- set indent = yes while debugging, then change to NO -->
<xsl:output method="xml" indent="no"/>
<!-- ensure only passivetex extensions are on -->
<xsl:param name="stylesheet.result.type" select="'fo'"/>
<!-- fo extensions: PDF bookmarks and index terms -->
<xsl:param name="use.extensions" select="'1'"/>
<xsl:param name="xep.extensions" select="0"/>
<xsl:param name="fop.extensions" select="0"/>
<xsl:param name="saxon.extensions" select="0"/>
<xsl:param name="passivetex.extensions" select="1"/>
<xsl:param name="tablecolumns.extension" select="'1'"/>
<!-- ensure we are using single sided -->
<xsl:param name="double.sided" select="'0'"/>
<!-- insert cross references to page numbers -->
<xsl:param name="insert.xref.page.number" select="1"/>
<!-- <?custom-pagebreak?> inserts a page break at this point -->
<xsl:template match="processing-instruction('custom-pagebreak')">
<fo:block break-before='page'/>
</xsl:template>
<!-- show links in color -->
<xsl:attribute-set name="xref.properties">
<xsl:attribute name="color">blue</xsl:attribute>
</xsl:attribute-set>
<!-- make pre listings indented a bit + a bg colour -->
<xsl:template match="programlisting | screen">
<fo:block start-indent="0.25in" wrap-option="no-wrap"
white-space-collapse="false" text-align="start"
font-family="monospace" background-color="#f2f2f9"
linefeed-treatment="preserve"
xsl:use-attribute-sets="normal.para.spacing">
<xsl:apply-templates/>
</fo:block>
</xsl:template>
<!-- workaround bug in passivetex fo output for itemizedlist -->
<xsl:template match="itemizedlist/listitem">
<xsl:variable name="id">
<xsl:call-template name="object.id"/></xsl:variable>
<xsl:variable name="itemsymbol">
<xsl:call-template name="list.itemsymbol">
<xsl:with-param name="node" select="parent::itemizedlist"/>
</xsl:call-template>
</xsl:variable>
<xsl:variable name="item.contents">
<fo:list-item-label end-indent="label-end()">
<fo:block>
<xsl:choose>
<xsl:when test="$itemsymbol='disc'">&#x2022;</xsl:when>
<xsl:when test="$itemsymbol='bullet'">&#x2022;</xsl:when>
<xsl:otherwise>&#x2022;</xsl:otherwise>
</xsl:choose>
</fo:block>
</fo:list-item-label>
<fo:list-item-body start-indent="body-start()">
<xsl:apply-templates/> <!-- removed extra block wrapper -->
</fo:list-item-body>
</xsl:variable>
<xsl:choose>
<xsl:when test="parent::*/@spacing = 'compact'">
<fo:list-item id="{$id}"
xsl:use-attribute-sets="compact.list.item.spacing">
<xsl:copy-of select="$item.contents"/>
</fo:list-item>
</xsl:when>
<xsl:otherwise>
<fo:list-item id="{$id}" xsl:use-attribute-sets="list.item.spacing">
<xsl:copy-of select="$item.contents"/>
</fo:list-item>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- workaround bug in passivetex fo output for orderedlist -->
<xsl:template match="orderedlist/listitem">
<xsl:variable name="id">
<xsl:call-template name="object.id"/></xsl:variable>
<xsl:variable name="item.contents">
<fo:list-item-label end-indent="label-end()">
<fo:block>
<xsl:apply-templates select="." mode="item-number"/>
</fo:block>
</fo:list-item-label>
<fo:list-item-body start-indent="body-start()">
<xsl:apply-templates/> <!-- removed extra block wrapper -->
</fo:list-item-body>
</xsl:variable>
<xsl:choose>
<xsl:when test="parent::*/@spacing = 'compact'">
<fo:list-item id="{$id}"
xsl:use-attribute-sets="compact.list.item.spacing">
<xsl:copy-of select="$item.contents"/>
</fo:list-item>
</xsl:when>
<xsl:otherwise>
<fo:list-item id="{$id}" xsl:use-attribute-sets="list.item.spacing">
<xsl:copy-of select="$item.contents"/>
</fo:list-item>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- workaround bug in passivetex fo output for variablelist -->
<xsl:param name="variablelist.as.blocks" select="1"/>
<xsl:template match="varlistentry" mode="vl.as.blocks">
<xsl:variable name="id">
<xsl:call-template name="object.id"/></xsl:variable>
<fo:block id="{$id}" xsl:use-attribute-sets="list.item.spacing"
keep-together.within-column="always"
keep-with-next.within-column="always">
<xsl:apply-templates select="term"/>
</fo:block>
<fo:block start-indent="0.5in" end-indent="0in"
space-after.minimum="0.2em"
space-after.optimum="0.4em"
space-after.maximum="0.6em">
<fo:block>
<xsl:apply-templates select="listitem"/>
</fo:block>
</fo:block>
</xsl:template>
<!-- workaround bug in passivetex fo output for revhistory -->
<xsl:template match="revhistory" mode="titlepage.mode">
<fo:block space-before="1.0em">
<fo:table table-layout="fixed" width="100%">
<fo:table-column column-number="1" column-width="33%"/>
<fo:table-column column-number="2" column-width="33%"/>
<fo:table-column column-number="3" column-width="34%"/>
<fo:table-body>
<fo:table-row>
<fo:table-cell number-columns-spanned="3" text-align="left">
<fo:block>
<xsl:call-template name="gentext">
<xsl:with-param name="key" select="'RevHistory'"/>
</xsl:call-template>
</fo:block>
</fo:table-cell>
</fo:table-row>
<xsl:apply-templates mode="titlepage.mode"/>
</fo:table-body>
</fo:table>
</fo:block>
</xsl:template>
<xsl:template match="revhistory/revision" mode="titlepage.mode">
<xsl:variable name="revnumber" select=".//revnumber"/>
<xsl:variable name="revdate" select=".//date"/>
<xsl:variable name="revauthor" select=".//authorinitials"/>
<xsl:variable name="revremark" select=".//revremark"/>
<fo:table-row>
<fo:table-cell text-align="left">
<fo:block>
<xsl:if test="$revnumber">
<xsl:call-template name="gentext">
<xsl:with-param name="key" select="'Revision'"/>
</xsl:call-template>
<xsl:call-template name="gentext.space"/>
<xsl:apply-templates select="$revnumber[1]" mode="titlepage.mode"/>
</xsl:if>
</fo:block>
</fo:table-cell>
<fo:table-cell text-align="left">
<fo:block>
<xsl:apply-templates select="$revdate[1]" mode="titlepage.mode"/>
</fo:block>
</fo:table-cell>
<fo:table-cell text-align="left">
<fo:block>
<xsl:apply-templates select="$revauthor[1]" mode="titlepage.mode"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
<xsl:if test="$revremark">
<fo:table-row>
<fo:table-cell number-columns-spanned="3" text-align="left">
<fo:block>
<xsl:apply-templates select="$revremark[1]" mode="titlepage.mode"/>
</fo:block>
</fo:table-cell>
</fo:table-row>
</xsl:if>
</xsl:template>
<!-- workaround bug in footers: force right-align w/two 80|30 cols -->
<xsl:template name="footer.table">
<xsl:param name="pageclass" select="''"/>
<xsl:param name="sequence" select="''"/>
<xsl:param name="gentext-key" select="''"/>
<xsl:choose>
<xsl:when test="$pageclass = 'index'">
<xsl:attribute name="margin-left">0pt</xsl:attribute>
</xsl:when>
</xsl:choose>
<xsl:variable name="candidate">
<fo:table table-layout="fixed" width="100%">
<fo:table-column column-number="1" column-width="80%"/>
<fo:table-column column-number="2" column-width="20%"/>
<fo:table-body>
<fo:table-row height="14pt">
<fo:table-cell text-align="left" display-align="after">
<xsl:attribute name="relative-align">baseline</xsl:attribute>
<fo:block>
<fo:block> </fo:block><!-- empty cell -->
</fo:block>
</fo:table-cell>
<fo:table-cell text-align="center" display-align="after">
<xsl:attribute name="relative-align">baseline</xsl:attribute>
<fo:block>
<xsl:call-template name="footer.content">
<xsl:with-param name="pageclass" select="$pageclass"/>
<xsl:with-param name="sequence" select="$sequence"/>
<xsl:with-param name="position" select="'center'"/>
<xsl:with-param name="gentext-key" select="$gentext-key"/>
</xsl:call-template>
</fo:block>
</fo:table-cell>
</fo:table-row>
</fo:table-body>
</fo:table>
</xsl:variable>
<!-- Really output a footer? -->
<xsl:choose>
<xsl:when test="$pageclass='titlepage' and $gentext-key='book'
and $sequence='first'">
<!-- no, book titlepages have no footers at all -->
</xsl:when>
<xsl:when test="$sequence = 'blank' and $footers.on.blank.pages = 0">
<!-- no output -->
</xsl:when>
<xsl:otherwise>
<xsl:copy-of select="$candidate"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- fix bug in headers: force right-align w/two 40|60 cols -->
<xsl:template name="header.table">
<xsl:param name="pageclass" select="''"/>
<xsl:param name="sequence" select="''"/>
<xsl:param name="gentext-key" select="''"/>
<xsl:choose>
<xsl:when test="$pageclass = 'index'">
<xsl:attribute name="margin-left">0pt</xsl:attribute>
</xsl:when>
</xsl:choose>
<xsl:variable name="candidate">
<fo:table table-layout="fixed" width="100%">
<xsl:call-template name="head.sep.rule">
<xsl:with-param name="pageclass" select="$pageclass"/>
<xsl:with-param name="sequence" select="$sequence"/>
<xsl:with-param name="gentext-key" select="$gentext-key"/>
</xsl:call-template>
<fo:table-column column-number="1" column-width="40%"/>
<fo:table-column column-number="2" column-width="60%"/>
<fo:table-body>
<fo:table-row height="14pt">
<fo:table-cell text-align="left" display-align="before">
<xsl:attribute name="relative-align">baseline</xsl:attribute>
<fo:block>
<fo:block> </fo:block><!-- empty cell -->
</fo:block>
</fo:table-cell>
<fo:table-cell text-align="center" display-align="before">
<xsl:attribute name="relative-align">baseline</xsl:attribute>
<fo:block>
<xsl:call-template name="header.content">
<xsl:with-param name="pageclass" select="$pageclass"/>
<xsl:with-param name="sequence" select="$sequence"/>
<xsl:with-param name="position" select="'center'"/>
<xsl:with-param name="gentext-key" select="$gentext-key"/>
</xsl:call-template>
</fo:block>
</fo:table-cell>
</fo:table-row>
</fo:table-body>
</fo:table>
</xsl:variable>
<!-- Really output a header? -->
<xsl:choose>
<xsl:when test="$pageclass = 'titlepage' and $gentext-key = 'book'
and $sequence='first'">
<!-- no, book titlepages have no headers at all -->
</xsl:when>
<xsl:when test="$sequence = 'blank' and $headers.on.blank.pages = 0">
<!-- no output -->
</xsl:when>
<xsl:otherwise>
<xsl:copy-of select="$candidate"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
<!--
pagebreaks in fo output:
- http://www.dpawson.co.uk/docbook/styling/fo.html#d1408e636
http://www.dpawson.co.uk/docbook/styling/fo.html
http://docbook.sourceforge.net/release/xsl/current/doc/fo/variablelist.as.blocks.html
alt. book to oreilly:
- http://www.ravelgrane.com/ER/doc/lx/book.html
tex memory:
- http://www.dpawson.co.uk/docbook/tools.html#d4e191
-->

docs/lib/vg-html-chunk.xsl Normal file

@ -0,0 +1,321 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/docbook.xsl"/>
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/chunk-common.xsl"/>
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/manifest.xsl"/>
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/chunk-code.xsl"/>
<xsl:import href="vg-common.xsl"/>
<!-- use 8859-1 encoding -->
<xsl:output method="html" encoding="ISO-8859-1" indent="yes"/>
<xsl:param name="use.id.as.filename" select="'1'"/>
<xsl:param name="chunker.output.indent" select="'yes'"/>
<!-- use our custom html stylesheet -->
<xsl:param name="html.stylesheet" select="'vg_basic.css'"/>
<!-- use our custom header -->
<xsl:template name="header.navigation">
<xsl:param name="prev" select="/foo"/>
<xsl:param name="next" select="/foo"/>
<xsl:param name="nav.context"/>
<xsl:variable name="home" select="/*[1]"/>
<xsl:variable name="up" select="parent::*"/>
<xsl:variable name="row1" select="$navig.showtitles != 0"/>
<xsl:variable name="row2" select="count($prev) &gt; 0
or (count($up) &gt; 0
and generate-id($up) != generate-id($home) )
or count($next) &gt; 0"/>
<div>
<!-- never show header nav stuff on title page -->
<xsl:if test="count($prev)>0">
<xsl:if test="$row1 or $row2">
<table class="nav" width="100%" cellspacing="3" cellpadding="3" border="0" summary="Navigation header">
<xsl:if test="$row2">
<tr>
<!-- prev -->
<td width="22px" align="center" valign="middle">
<xsl:if test="count($prev)>0">
<a accesskey="p">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$prev"/>
</xsl:call-template>
</xsl:attribute>
<img src="images/prev.png" width="18" height="21" border="0">
<xsl:attribute name="alt">
<xsl:call-template name="gentext">
<xsl:with-param name="key">nav-prev</xsl:with-param>
</xsl:call-template>
</xsl:attribute>
</img>
</a>
</xsl:if>
</td>
<!-- up -->
<xsl:if test="count($up)>0">
<td width="25px" align="center" valign="middle">
<a accesskey="u">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$up"/>
</xsl:call-template>
</xsl:attribute>
<img src="images/up.png" width="21" height="18" border="0">
<xsl:attribute name="alt">
<xsl:call-template name="gentext">
<xsl:with-param name="key">nav-up</xsl:with-param>
</xsl:call-template>
</xsl:attribute>
</img>
</a>
</td>
</xsl:if>
<!-- home -->
<xsl:if test="$home != . or $nav.context = 'toc'">
<td width="31px" align="center" valign="middle">
<a accesskey="h">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$home"/>
</xsl:call-template>
</xsl:attribute>
<img src="images/home.png" width="27" height="20" border="0">
<xsl:attribute name="alt">
<xsl:call-template name="gentext">
<xsl:with-param name="key">nav-home</xsl:with-param>
</xsl:call-template>
</xsl:attribute>
</img>
</a>
</td>
</xsl:if>
<!-- chapter|section heading -->
<th align="center" valign="middle">
<xsl:apply-templates select="$up" mode="object.title.markup"/>
<!--
<xsl:choose>
<xsl:when test="count($up) > 0 and generate-id($up) != generate-id($home)">
<xsl:apply-templates select="$up" mode="object.title.markup"/>
</xsl:when>
<xsl:otherwise>
<xsl:text>Valgrind User's Manual</xsl:text>
</xsl:otherwise>
</xsl:choose>
-->
</th>
<!-- next -->
<td width="22px" align="center" valign="middle">
<xsl:if test="count($next)>0">
<a accesskey="n">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$next"/>
</xsl:call-template>
</xsl:attribute>
<img src="images/next.png" width="18" height="21" border="0">
<xsl:attribute name="alt">
<xsl:call-template name="gentext">
<xsl:with-param name="key">nav-next</xsl:with-param>
</xsl:call-template>
</xsl:attribute>
</img>
</a>
</xsl:if>
</td>
</tr>
</xsl:if>
</table>
</xsl:if>
</xsl:if>
</div>
</xsl:template>
<!-- our custom footer -->
<xsl:template name="footer.navigation">
<xsl:param name="prev" select="/foo"/>
<xsl:param name="next" select="/foo"/>
<xsl:param name="nav.context"/>
<xsl:variable name="home" select="/*[1]"/>
<xsl:variable name="up" select="parent::*"/>
<xsl:variable name="row1" select="count($prev) &gt; 0
or count($up) &gt; 0
or count($next) &gt; 0"/>
<xsl:variable name="row2" select="($prev != 0)
or (generate-id($home) != generate-id(.)
or $nav.context = 'toc')
or ($chunk.tocs.and.lots != 0
and $nav.context != 'toc')
or ($next != 0)"/>
<div>
<xsl:if test="$row1 or $row2">
<br />
<table class="nav" width="100%" cellspacing="3" cellpadding="2" border="0" summary="Navigation footer">
<xsl:if test="$row1">
<tr>
<td rowspan="2" width="40%" align="left">
<xsl:if test="count($prev)>0">
<a accesskey="p">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$prev"/>
</xsl:call-template>
</xsl:attribute>
<xsl:text>&#060;&#060;&#160;</xsl:text>
<xsl:apply-templates select="$prev" mode="object.title.markup"/>
</a>
</xsl:if>
<xsl:text>&#160;</xsl:text>
</td>
<td width="20%" align="center">
<xsl:choose>
<xsl:when test="count($up)>0">
<a accesskey="u">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$up"/>
</xsl:call-template>
</xsl:attribute>
<xsl:call-template name="navig.content">
<xsl:with-param name="direction" select="'up'"/>
</xsl:call-template>
</a>
</xsl:when>
<xsl:otherwise>&#160;</xsl:otherwise>
</xsl:choose>
</td>
<td rowspan="2" width="40%" align="right">
<xsl:text>&#160;</xsl:text>
<xsl:if test="count($next)>0">
<a accesskey="n">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$next"/>
</xsl:call-template>
</xsl:attribute>
<xsl:apply-templates select="$next" mode="object.title.markup"/>
<xsl:text>&#160;&#062;&#062;</xsl:text>
</a>
</xsl:if>
</td>
</tr>
</xsl:if>
<xsl:if test="$row2">
<tr>
<td width="20%" align="center">
<xsl:choose>
<xsl:when test="$home != . or $nav.context = 'toc'">
<a accesskey="h">
<xsl:attribute name="href">
<xsl:call-template name="href.target">
<xsl:with-param name="object" select="$home"/>
</xsl:call-template>
</xsl:attribute>
<xsl:call-template name="navig.content">
<xsl:with-param name="direction" select="'home'"/>
</xsl:call-template>
</a>
<xsl:if test="$chunk.tocs.and.lots != 0 and $nav.context != 'toc'">
<xsl:text>&#160;|&#160;</xsl:text>
</xsl:if>
</xsl:when>
<xsl:otherwise>&#160;</xsl:otherwise>
</xsl:choose>
<xsl:if test="$chunk.tocs.and.lots != 0 and $nav.context != 'toc'">
<a accesskey="t">
<xsl:attribute name="href">
<xsl:apply-templates select="/*[1]" mode="recursive-chunk-filename"/>
<xsl:text>-toc</xsl:text>
<xsl:value-of select="$html.ext"/>
</xsl:attribute>
<xsl:call-template name="gentext">
<xsl:with-param name="key" select="'nav-toc'"/>
</xsl:call-template>
</a>
</xsl:if>
</td>
</tr>
</xsl:if>
</table>
</xsl:if>
</div>
</xsl:template>
<!-- We don't like tables with borders -->
<xsl:template match="revhistory" mode="titlepage.mode">
<xsl:variable name="numcols">
<xsl:choose>
<xsl:when test="//authorinitials">3</xsl:when>
<xsl:otherwise>2</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<table width="100%" border="0" summary="Revision history">
<tr>
<th align="left" colspan="{$numcols}">
<h3>Revision History</h3>
</th>
</tr>
<xsl:apply-templates mode="titlepage.mode">
<xsl:with-param name="numcols" select="$numcols"/>
</xsl:apply-templates>
</table>
</xsl:template>
<!-- don't put an expanded set-level TOC, only book titles -->
<xsl:template match="book" mode="toc">
<xsl:param name="toc-context" select="."/>
<xsl:choose>
<xsl:when test="local-name($toc-context) = 'set'">
<xsl:call-template name="subtoc">
<xsl:with-param name="toc-context" select="$toc-context"/>
<xsl:with-param name="nodes" select="foo"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:call-template name="subtoc">
<xsl:with-param name="toc-context" select="$toc-context"/>
<xsl:with-param name="nodes" select="part|reference
|preface|chapter|appendix
|article
|bibliography|glossary|index
|refentry
|bridgehead[$bridgehead.in.toc !=
0]"/>
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<!-- question and answer set mods -->
<xsl:template match="answer">
<xsl:variable name="deflabel">
<xsl:choose>
<xsl:when test="ancestor-or-self::*[@defaultlabel]">
<xsl:value-of select="(ancestor-or-self::*[@defaultlabel])[last()]
/@defaultlabel"/>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="$qanda.defaultlabel"/>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<tr class="{name(.)}">
<td><xsl:text>&#160;</xsl:text></td>
<td align="left" valign="top">
<xsl:apply-templates select="*[name(.) != 'label']"/>
</td>
</tr>
<tr><td colspan="2"><xsl:text>&#160;</xsl:text></td></tr>
</xsl:template>
</xsl:stylesheet>


@ -0,0 +1,63 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE xsl:stylesheet [ <!ENTITY vg-css SYSTEM "vg_basic.css"> ]>
<xsl:stylesheet
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:import href="http://docbook.sourceforge.net/release/xsl/current/html/docbook.xsl"/>
<xsl:import href="vg-common.xsl"/>
<!-- use 8859-1 encoding -->
<xsl:output method="html" encoding="ISO-8859-1" indent="yes"/>
<!-- we include the css directly when generating one large file -->
<xsl:template name="user.head.content">
<style type="text/css" media="screen">
<xsl:text>&vg-css;</xsl:text>
</style>
</xsl:template>
<!-- We don't like tables with borders -->
<xsl:template match="revhistory" mode="titlepage.mode">
<xsl:variable name="numcols">
<xsl:choose>
<xsl:when test="//authorinitials">3</xsl:when>
<xsl:otherwise>2</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<table width="100%" border="0" summary="Revision history">
<tr>
<th align="left" colspan="{$numcols}">
<h4>Revision History</h4>
</th>
</tr>
<xsl:apply-templates mode="titlepage.mode">
<xsl:with-param name="numcols" select="$numcols"/>
</xsl:apply-templates>
</table>
</xsl:template>
<!-- question and answer set mods -->
<xsl:template match="answer">
<xsl:variable name="deflabel">
<xsl:choose>
<xsl:when test="ancestor-or-self::*[@defaultlabel]">
<xsl:value-of select="(ancestor-or-self::*[@defaultlabel])[last()]
/@defaultlabel"/>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="$qanda.defaultlabel"/>
</xsl:otherwise>
</xsl:choose>
</xsl:variable>
<tr class="{name(.)}">
<td><xsl:text>&#160;</xsl:text></td>
<td align="left" valign="top">
<xsl:apply-templates select="*[name(.) != 'label']"/>
</td>
</tr>
<tr><td colspan="2"><xsl:text>&#160;</xsl:text></td></tr>
</xsl:template>
</xsl:stylesheet>

62
docs/lib/vg_basic.css Normal file

@ -0,0 +1,62 @@
/* default link colours */
a, a:link, a:visited, a:active { color: #74240f; }
a:hover { color: #888800; }
body {
color: #202020;
background-color: #ffffff;
}
body, td {
font-size: 90%;
line-height: 125%;
font-family: Arial, Geneva, Helvetica, sans-serif;
}
h1, h2, h3, h4 { color: #74240f; }
h3 { margin-bottom: 0.4em; }
pre { color: #3366cc; }
code, tt { color: #761596; }
pre.programlisting {
color: #000000;
padding: 0.5em;
background: #f2f2f9;
border: 1px solid #3366cc;
}
pre.screen {
color: #000000;
padding: 0.5em;
background: #eeeeee;
border: 1px solid #626262;
}
ul { list-style: url("images/li-brown.png"); }
.titlepage hr {
height: 1px;
border: 0px;
background-color: #7f7f7f;
}
/* header / footer nav tables */
table.nav {
color: #0f7355;
border: solid 1px #0f7355;
background: #edf7f4;
background-color: #edf7f4;
margin-bottom: 0.5em;
}
/* don't have underlined links in chunked nav menus */
table.nav a { text-decoration: none; }
table.nav a:hover { text-decoration: underline; }
table.nav td { font-size: 85%; }
/* yellow box just for massif blockquotes */
blockquote {
padding: 0.5em;
background: #fffbc9;
border: solid 1px #ffde84;
}


@ -1,125 +0,0 @@
<html>
<head>
<style type="text/css">
body { background-color: #ffffff;
color: #000000;
font-family: Times, Helvetica, Arial;
font-size: 14pt}
h4 { margin-bottom: 0.3em}
code { color: #000000;
font-family: Courier;
font-size: 13pt }
pre { color: #000000;
font-family: Courier;
font-size: 13pt }
a:link { color: #0000C0;
text-decoration: none; }
a:visited { color: #0000C0;
text-decoration: none; }
a:active { color: #0000C0;
text-decoration: none; }
</style>
<title>Valgrind</title>
</head>
<body bgcolor="#ffffff">
<a name="title">&nbsp;</a>
<h1 align=center>Valgrind, version 2.2.0</h1>
<center>This manual was last updated on 31 August 2004</center>
<p>
<center>
<a href="mailto:jseward@acm.org">jseward@acm.org</a>,
<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
Copyright &copy; 2000-2004 Julian Seward, Nick Nethercote
<p>
Valgrind is licensed under the GNU General Public License, version
2<br>
An open-source tool for debugging and profiling Linux-x86 executables.
</center>
<p>
<hr width="100%">
<a name="contents"></a>
<h2>Contents of this manual</h2>
<h4>1&nbsp; <a href="coregrind_intro.html#intro">Introduction</a></h4>
1.1&nbsp; <a href="coregrind_intro.html#intro-overview">
An overview of Valgrind</a><br>
1.2&nbsp; <a href="coregrind_intro.html#intro-navigation">
How to navigate this manual</a>
<h4>2&nbsp; <a href="coregrind_core.html#core">
Using and understanding the Valgrind core</a></h4>
2.1&nbsp; <a href="coregrind_core.html#core-whatdoes">
What it does with your program</a><br>
2.2&nbsp; <a href="coregrind_core.html#started">
Getting started</a><br>
2.3&nbsp; <a href="coregrind_core.html#comment">
The commentary</a><br>
2.4&nbsp; <a href="coregrind_core.html#report">
Reporting of errors</a><br>
2.5&nbsp; <a href="coregrind_core.html#suppress">
Suppressing errors</a><br>
2.6&nbsp; <a href="coregrind_core.html#flags">
Command-line flags for the Valgrind core</a><br>
2.7&nbsp; <a href="coregrind_core.html#clientreq">
The Client Request mechanism</a><br>
2.8&nbsp; <a href="coregrind_core.html#pthreads">
Support for POSIX pthreads</a><br>
2.9&nbsp; <a href="coregrind_core.html#signals">
Handling of signals</a><br>
2.10&nbsp; <a href="coregrind_core.html#install">
Building and installing</a><br>
2.11&nbsp; <a href="coregrind_core.html#problems">
If you have problems</a><br>
2.12&nbsp; <a href="coregrind_core.html#limits">
Limitations</a><br>
2.13&nbsp; <a href="coregrind_core.html#howworks">
How it works -- a rough overview</a><br>
2.14&nbsp; <a href="coregrind_core.html#example">
An example run</a><br>
2.15&nbsp; <a href="coregrind_core.html#warnings">
Warning messages you might see</a><br>
<h4>3&nbsp; <a href="mc_main.html#mc-top">
Memcheck: a heavyweight memory checker</a></h4>
<h4>4&nbsp; <a href="cg_main.html#cg-top">
Cachegrind: a cache-miss profiler</a></h4>
<h4>5&nbsp; <a href="ac_main.html#ac-top">
Addrcheck: a lightweight memory checker</a></h4>
<h4>6&nbsp; <a href="hg_main.html#hg-top">
Helgrind: a data-race detector</a></h4>
<h4>7&nbsp; <a href="ms_main.html#ms-top">
Massif: a heap profiler</a></h4>
<p>
The following is not part of the user manual. It describes how you can
write tools for Valgrind, in order to make new program supervision
tools.
<h4>8&nbsp; <a href="coregrind_tools.html">
Valgrind Tools</a></h4>
<p>
The following are not part of the user manual. They describe internal
details of how Valgrind works. Reading them may rot your brain. You
have been warned.
<h4>9&nbsp; <a href="mc_techdocs.html#mc-techdocs">
The design and implementation of Valgrind</a></h4>
<h4>10&nbsp; <a href="cg_techdocs.html#cg-techdocs">
How Cachegrind works</a></h4>
<hr width="100%">

674
docs/xml/FAQ.xml Normal file

@ -0,0 +1,674 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[ <!ENTITY % vg-entities SYSTEM "vg-entities.xml"> %vg-entities; ]>
<book id="FAQ" xreflabel="Valgrind FAQ">
<bookinfo>
<title>Valgrind FAQ</title>
</bookinfo>
<chapter id="faq.background" xreflabel="Background">
<title>Background</title>
<qandaset id="qset.background">
<qandaentry id="faq.pronounce">
<question>
<para>How do you pronounce "Valgrind"?</para>
</question>
<answer>
<para>The "Val" is as in the word "value". The "grind" is
pronounced with a short 'i' -- ie. "grinned" (rhymes with
"tinned") rather than "grined" (rhymes with "find").</para>
<para>Don't feel bad: almost everyone gets it wrong at
first.</para>
</answer>
</qandaentry>
<qandaentry id="faq.whence">
<question>
<para>Where does the name "Valgrind" come from?</para>
</question>
<answer>
<para>From Nordic mythology. Originally (before release) the
project was named Heimdall, after the watchman of the Nordic
gods. He could "see a hundred miles by day or night, hear the
grass growing, see the wool growing on a sheep's back" (etc).
This would have been a great name, but it was already taken by
a security package "Heimdal".</para> <para>Keeping with the
Nordic theme, Valgrind was chosen. Valgrind is the name of the
main entrance to Valhalla (the Hall of the Chosen Slain in
Asgard). Over this entrance there resides a wolf and over it
there is the head of a boar and on it perches a huge eagle,
whose eyes can see to the far regions of the nine worlds. Only
those judged worthy by the guardians are allowed to pass
through Valgrind. All others are refused entrance.</para>
<para>It's not short for "value grinder", although that's not a
bad guess.</para>
</answer>
</qandaentry>
</qandaset>
</chapter>
<chapter id="faq.installing"
xreflabel="Compiling, installing and configuring">
<title>Compiling, installing and configuring</title>
<qandaset id="qset.installing">
<qandaentry id="faq.make_dies">
<question>
<para>When I try building Valgrind, 'make' dies partway with
an assertion failure, something like this:
<screen>
% make: expand.c:489: allocated_variable_append:
Assertion 'current_variable_set_list->next != 0' failed.
</screen>
</para>
</question>
<answer>
<para>It's probably a bug in 'make'. Some, but not all,
instances of version 3.79.1 have this bug; see
<ulink url="http://www.mail-archive.com/bug-make@gnu.org/msg01658.html">www.mail-archive.com/bug-make@gnu.org/msg01658.html</ulink>.
Try upgrading to a more recent version of 'make'.
Alternatively, we have heard that unsetting the CFLAGS
environment variable avoids the problem.</para>
</answer>
</qandaentry>
</qandaset>
</chapter>
<chapter id="faq.abort"
xreflabel="Valgrind aborts unexpectedly">
<title>Valgrind aborts unexpectedly</title>
<qandaset id="qset.abort">
<qandaentry id="faq.exit_errors">
<question>
<para>Programs run OK on Valgrind, but at exit produce a bunch
of errors a bit like this:</para>
</question>
<answer><para>
<programlisting>
==20755== Invalid read of size 4
==20755== at 0x40281C8A: _nl_unload_locale (loadlocale.c:238)
==20755== by 0x4028179D: free_mem (findlocale.c:257)
==20755== by 0x402E0962: __libc_freeres (set-freeres.c:34)
==20755== by 0x40048DCC: vgPlain___libc_freeres_wrapper (vg_clientfuncs.c:585)
==20755== Address 0x40CC304C is 8 bytes inside a block of size 380 free'd
==20755== at 0x400484C9: free (vg_clientfuncs.c:180)
==20755== by 0x40281CBA: _nl_unload_locale (loadlocale.c:246)
==20755== by 0x40281218: free_mem (setlocale.c:461)
==20755== by 0x402E0962: __libc_freeres (set-freeres.c:34)
</programlisting>
and then die with a segmentation fault.</para>
<para>When the program exits, Valgrind runs the procedure
<literal>__libc_freeres()</literal> in glibc. This is a hook
for memory debuggers, so they can ask glibc to free up any
memory it has used. Doing that is needed to ensure that
Valgrind doesn't incorrectly report space leaks in glibc.</para>
<para>The problem is that running
<literal>__libc_freeres()</literal> in older glibc versions
causes this crash.</para>
<para>WORKAROUND FOR 1.1.X and later
versions of Valgrind: use the
<literal>--run-libc-freeres=no</literal> flag. You may then get
space leak reports for glibc-allocations (please _don't_ report
these to the glibc people, since they are not real leaks), but
at least the program runs.</para>
</answer>
</qandaentry>
<qandaentry id="faq.bugdeath">
<question>
<para>My (buggy) program dies like this:</para>
</question>
<answer>
<screen>
% valgrind: vg_malloc2.c:442 (bszW_to_pszW): Assertion 'pszW >= 0' failed.
</screen>
<para>If Memcheck (the memory checker) shows any invalid reads,
invalid writes or invalid frees in your program, the above may
happen. The reason is that your program may have trashed
Valgrind's low-level memory manager, which then dies with the
above assertion, or something similar. The cure is to fix your
program so that it doesn't do any illegal memory accesses. The
above failure will hopefully go away after that.</para>
</answer>
</qandaentry>
<qandaentry id="faq.msgdeath">
<question>
<para>My program dies, printing a message like this along the
way:</para>
</question>
<answer>
<screen>
% disInstr: unhandled instruction bytes: 0x66 0xF 0x2E 0x5
</screen>
<para>Older versions did not support some x86 instructions,
particularly SSE/SSE2 instructions. Try a newer Valgrind; we
now support almost all instructions. If it still happens with
newer versions and the failing instruction is an SSE/SSE2
instruction, you might be able to recompile your program
without it by using the
<computeroutput>-march</computeroutput> flag to gcc. Either way,
let us know and we'll try to fix it.</para>
<para>Another possibility is that your program has a bug and
erroneously jumps to a non-code address, in which case you'll
get a SIGILL signal. Memcheck/Addrcheck may issue a warning
just before this happens, but they might not if the jump
happens to land in addressable memory.</para>
</answer>
</qandaentry>
<qandaentry id="faq.defdeath">
<question>
<para>My program dies like this:</para>
</question>
<answer>
<screen>
% error: /lib/librt.so.1: symbol __pthread_clock_settime,
version GLIBC_PRIVATE not defined in file libpthread.so.0 with link time reference
</screen>
<para>This is a total swamp. Nevertheless there is a way out.
It's a problem which is not easy to fix. Really the problem is
that <filename>/lib/librt.so.1</filename> refers to some
symbols <literal>__pthread_clock_settime</literal> and
<literal>__pthread_clock_gettime</literal> in
<filename>/lib/libpthread.so</filename> which are not intended
to be exported, ie they are private.</para>
<para>The best solution is to ensure your program does not use
<filename>/lib/librt.so.1</filename>.</para>
<para>However ... since you're probably not using it directly,
or even knowingly, that's hard to do. You might instead be
able to fix it by playing around with
<filename>coregrind/vg_libpthread.vs</filename>. Things to
try:</para>
<para>Remove this:</para>
<programlisting>
GLIBC_PRIVATE {
__pthread_clock_gettime;
__pthread_clock_settime;
};
</programlisting>
<para>or maybe remove this:</para>
<programlisting>
GLIBC_2.2.3 {
__pthread_clock_gettime;
__pthread_clock_settime;
} GLIBC_2.2;
</programlisting>
<para>or maybe add this:</para>
<programlisting>
GLIBC_2.2.4 {
__pthread_clock_gettime;
__pthread_clock_settime;
} GLIBC_2.2;
GLIBC_2.2.5 {
__pthread_clock_gettime;
__pthread_clock_settime;
} GLIBC_2.2;
</programlisting>
<para>or some combination of the above. After each change you
need to delete <filename>coregrind/libpthread.so</filename> and
do <computeroutput>make &amp;&amp; make
install</computeroutput>.</para>
<para>I just don't know if any of the above will work. If you
can find a solution which works, I would be interested to hear
it.</para>
<para>To which someone replied:</para>
<screen>
I deleted this:
GLIBC_2.2.3 {
__pthread_clock_gettime;
__pthread_clock_settime;
} GLIBC_2.2;
and it worked.
</screen>
</answer>
</qandaentry>
</qandaset>
</chapter>
<chapter id="faq.unexpected"
xreflabel="Valgrind behaves unexpectedly">
<title>Valgrind behaves unexpectedly</title>
<qandaset id="qset.unexpected">
<qandaentry id="faq.no-output">
<question>
<para>I try running "valgrind my-program", but my-program runs
normally, and Valgrind doesn't emit any output at all.</para>
</question>
<answer>
<para><command>For versions prior to 2.1.1:</command></para>
<para>Valgrind doesn't work out-of-the-box with programs that
are entirely statically linked. It does a quick test at
startup, and if it detects that the program is statically
linked, it aborts with an explanation.</para>
<para>This test may fail in some obscure cases, eg. if you run
a script under Valgrind and the script interpreter is
statically linked.</para>
<para>If you still want static linking, you can ask gcc to link
certain libraries statically. Try the following options:</para>
<screen>
-Wl,-Bstatic -lmyLibrary1 -lotherLibrary -Wl,-Bdynamic
</screen>
<para>Just make sure you end with
<computeroutput>-Wl,-Bdynamic</computeroutput> so that libc is
dynamically linked.</para>
<para>If you absolutely cannot use dynamic libraries, you can
try statically linking together all the .o files in coregrind/,
all the .o files of the tool of your choice (eg. those in
memcheck/), and the .o files of your program. You'll end up
with a statically linked binary that runs permanently under
Valgrind's control. Note that we haven't tested this procedure
thoroughly.</para>
<para><command>For versions 2.1.1 and later:</command></para>
<para>Valgrind does now work with static binaries, although
beware that some of the tools won't operate as well as normal,
because they have access to less information about how the
program runs. Eg. Memcheck will miss some errors that it would
otherwise find. This is because Valgrind doesn't replace
malloc() and friends with its own versions. It's best if your
program is dynamically linked with glibc.</para>
</answer>
</qandaentry>
<qandaentry id="faq.slowthread">
<question>
<para>My threaded server process runs unbelievably slowly on
Valgrind. So slowly, in fact, that at first I thought it had
completely locked up.</para>
</question>
<answer>
<para>We are not completely sure about this, but one
possibility is that laptops with power management fool
Valgrind's timekeeping mechanism, which is (somewhat in error)
based on the x86 RDTSC instruction. A "fix" which is claimed
to work is to run some other cpu-intensive process at the same
time, so that the laptop's power-management clock-slowing does
not kick in. We would be interested in hearing more feedback
on this.</para>
<para>Another possible cause is that versions prior to 1.9.6
did not support threading on glibc 2.3.X systems well.
Hopefully the situation is much improved with 1.9.6 and later
versions.</para>
</answer>
</qandaentry>
<qandaentry id="faq.reports">
<question>
<para>My program uses the C++ STL and string classes. Valgrind
reports 'still reachable' memory leaks involving these classes
at the exit of the program, but there should be none.</para>
</question>
<answer>
<para>First of all: relax, it's probably not a bug, but a
feature. Many implementations of the C++ standard libraries
use their own memory pool allocators. Memory for quite a
number of destructed objects is not immediately freed and given
back to the OS, but kept in the pool(s) for later re-use. The
fact that the pools are not freed at the exit() of the program
causes Valgrind to report this memory as still reachable. The
behaviour of not freeing the pools at exit() could be called a
bug of the library, though.</para>
<para>Using gcc, you can force the STL to use malloc and to
free memory as soon as possible by globally disabling memory
caching. Beware! Doing so will probably slow down your
program, sometimes drastically.</para>
<itemizedlist>
<listitem>
<para>With gcc 2.91, 2.95, 3.0 and 3.1, compile all source
using the STL with <literal>-D__USE_MALLOC</literal>. Beware!
This is removed from gcc starting with version 3.3.</para>
</listitem>
<listitem>
<para>With 3.2.2 and later, you should export the environment
variable <literal>GLIBCPP_FORCE_NEW</literal> before running
your program.</para>
</listitem>
</itemizedlist>
<para>There are other ways to disable memory pooling: using the
<literal>malloc_alloc</literal> template with your objects (not
portable, but should work for gcc) or even writing your own
memory allocators. But all this goes beyond the scope of this
FAQ. Start by reading <ulink
url="http://gcc.gnu.org/onlinedocs/libstdc++/ext/howto.html#3">
http://gcc.gnu.org/onlinedocs/libstdc++/ext/howto.html#3</ulink>
if you absolutely want to do that. But beware:</para>
<orderedlist>
<listitem>
<para>there are currently changes underway for gcc which are
not totally reflected in the docs right now ("now" == 26 Apr
03)</para>
</listitem>
<listitem>
<para>allocators belong to the messier parts of the STL,
and people went to great lengths to make the STL portable
across platforms. Chances are good that your solution will
work on your platform, but not on others.</para>
</listitem>
</orderedlist>
</answer>
</qandaentry>
<qandaentry id="faq.unhelpful">
<question>
<para>The stack traces given by Memcheck (or another tool)
aren't helpful. How can I improve them?</para>
</question>
<answer>
<para>If they're not long enough, use
<literal>--num-callers</literal> to make them longer.</para>
<para>If they're not detailed enough, make sure you are
compiling with <literal>-g</literal> to add debug information.
And don't strip symbol tables (programs should be unstripped
unless you run 'strip' on them; some libraries ship
stripped).</para>
<para>Also, <literal>-fomit-frame-pointer</literal> and
<literal>-fstack-check</literal> can make stack traces
worse.</para>
<para>Some example sub-traces:</para>
<para>With debug information and unstripped (best):</para>
<programlisting>
Invalid write of size 1
at 0x80483BF: really (malloc1.c:20)
by 0x8048370: main (malloc1.c:9)
</programlisting>
<para>With no debug information, unstripped:</para>
<programlisting>
Invalid write of size 1
at 0x80483BF: really (in /auto/homes/njn25/grind/head5/a.out)
by 0x8048370: main (in /auto/homes/njn25/grind/head5/a.out)
</programlisting>
<para>With no debug information, stripped:</para>
<programlisting>
Invalid write of size 1
at 0x80483BF: (within /auto/homes/njn25/grind/head5/a.out)
by 0x8048370: (within /auto/homes/njn25/grind/head5/a.out)
by 0x42015703: __libc_start_main (in /lib/tls/libc-2.3.2.so)
by 0x80482CC: (within /auto/homes/njn25/grind/head5/a.out)
</programlisting>
<para>With debug information and -fomit-frame-pointer:</para>
<programlisting>
Invalid write of size 1
at 0x80483C4: really (malloc1.c:20)
by 0x42015703: __libc_start_main (in /lib/tls/libc-2.3.2.so)
by 0x80482CC: ??? (start.S:81)
</programlisting>
</answer>
</qandaentry>
</qandaset>
</chapter>
<chapter id="faq.notfound" xreflabel="Memcheck doesn't find my bug">
<title>Memcheck doesn't find my bug</title>
<qandaset id="qset.notfound">
<qandaentry id="faq.hiddenbug">
<question>
<para>I try running "valgrind --tool=memcheck my_program" and
get Valgrind's startup message, but I don't get any errors and
I know my program has errors.</para>
</question>
<answer>
<para>By default, Valgrind only traces the top-level process.
So if your program spawns children, they won't be traced by
Valgrind by default. Also, if your program is started by a
shell script, Perl script, or something similar, Valgrind will
trace the shell, or the Perl interpreter, or equivalent.</para>
<para>To trace child processes, use the
<literal>--trace-children=yes</literal> option.</para>
<para>If you are tracing large trees of processes, it can be
less disruptive to have the output sent over the network. Give
Valgrind the flag
<literal>--log-socket=127.0.0.1:12345</literal> (if you want
logging output sent to <literal>port 12345</literal> on
<literal>localhost</literal>). You can use the
valgrind-listener program to listen on that port:</para>
<programlisting>
valgrind-listener 12345
</programlisting>
<para>Obviously you have to start the listener process first.
See the Manual: <ulink url="http://www.valgrind.org/docs/bookset/manual-core.out2file.html">Directing output to file</ulink> for more details.</para>
</answer>
</qandaentry>
<qandaentry id="faq.overruns">
<question>
<para>Why doesn't Memcheck find the array overruns in this program?</para>
</question>
<answer>
<programlisting>
int static_array[5];    /* static (global) storage */

int main(void)
{
   int stack_array[5];  /* stack storage */
   static_array[5] = 0;
   stack_array [5] = 0;
   return 0;
}
</programlisting>
<para>Unfortunately, Memcheck doesn't do bounds checking on
static or stack arrays. We'd like to, but it's just not
possible to do in a reasonable way that fits with how Memcheck
works. Sorry.</para>
</answer>
</qandaentry>
<qandaentry id="faq.segfault">
<question>
<para>My program dies with a segmentation fault, but Memcheck
doesn't give any error messages before it, or none that look
related.</para>
</question>
<answer>
<para>One possibility is that your program accesses memory
with inappropriate permissions, such as writing to
read-only memory. Maybe your program is writing to a static
string like this:</para>
<programlisting>
char* s = "hello";
s[0] = 'j';
</programlisting>
<para>or something similar. Writing to read-only memory can
also apparently make LinuxThreads behave strangely.</para>
</answer>
</qandaentry>
</qandaset>
</chapter>
<chapter id="faq.misc"
xreflabel="Miscellaneous">
<title>Miscellaneous</title>
<qandaset id="qset.misc">
<qandaentry id="faq.writesupp">
<question>
<para>I tried writing a suppression but it didn't work. Can
you write my suppression for me?</para>
</question>
<answer>
<para>Yes! Use the
<computeroutput>--gen-suppressions=yes</computeroutput> feature
to spit out suppressions automatically for you. You can then
edit them if you like, eg. combining similar automatically
generated suppressions using wildcards like
<literal>'*'</literal>.</para>
<para>If you really want to write suppressions by hand, read
the manual carefully. Note particularly that C++ function
names must be <literal>_mangled_</literal>.</para>
</answer>
</qandaentry>
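For reference, a generated suppression is a brace-delimited block giving a name, the tool and error kind, and a stack pattern. A sketch of the shape (the frame names here are purely illustrative, not from any real report):

```
{
   my_suppression_name
   Memcheck:Leak
   fun:malloc
   fun:my_leaky_function
   fun:main
}
```

Wildcards like '*' can appear within the `fun:` patterns, which is how similar generated suppressions can be merged into one.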
<qandaentry id="faq.deflost">
<question>
<para>With Memcheck/Addrcheck's memory leak detector, what's
the difference between "definitely lost", "possibly lost",
"still reachable", and "suppressed"?</para>
</question>
<answer>
<para>The details are in the Manual:
<ulink url="http://www.valgrind.org/docs/bookset/mc-manual.leaks.html">Memory leak detection</ulink>.</para>
<para>In short:</para>
<itemizedlist>
<listitem>
<para>"definitely lost" means your program is leaking memory
-- fix it!</para>
</listitem>
<listitem>
<para>"possibly lost" means your program is probably leaking
memory, unless you're doing funny things with
pointers.</para>
</listitem>
<listitem>
<para>"still reachable" means your program is probably ok --
it didn't free some memory it could have. This is quite
common and often reasonable. Don't use
<computeroutput>--show-reachable=yes</computeroutput> if you
don't want to see these reports.</para>
</listitem>
<listitem>
<para>"suppressed" means that a leak error has been
suppressed. There are some suppressions in the default
suppression files. You can ignore suppressed errors.</para>
</listitem>
</itemizedlist>
</answer>
</qandaentry>
</qandaset>
</chapter>
<!-- template
<chapter id="faq."
xreflabel="xx">
<title>xx</title>
<qandaset id="qset.">
<qandaentry id="faq.deflost">
<question>
<para></para>
</question>
<answer>
<para></para>
</answer>
</qandaentry>
</qandaset>
</chapter>
-->
<chapter id="faq.help" xreflabel="How To Get Further Assistance">
<title>How To Get Further Assistance</title>
<para>Please read all of this section before posting.</para>
<para>If you think an answer is incomplete or inaccurate, please
e-mail <ulink url="mailto:&vg-vemail;">&vg-vemail;</ulink>.</para>
<para>Read the appropriate section(s) of the Manual(s):
<ulink url="http://www.valgrind.org/docs/">Valgrind
Documentation</ulink>.</para>
<para>Read the <ulink url="http://www.valgrind.org/docs/">Distribution Documents</ulink>.</para>
<para><ulink url="http://search.gmane.org">Search</ulink> the
<ulink url="http://news.gmane.org/gmane.comp.debugging.valgrind">valgrind-users</ulink> mailing list archives, using the group name
<computeroutput>gmane.comp.debugging.valgrind</computeroutput>.</para>
<para>Only when you have tried all of these things and are still stuck
should you post to the <ulink url="&vg-users-list;">valgrind-users
mailing list</ulink>. If you do, please read the following
carefully. Making a complete posting will greatly increase the chances
that an expert or fellow user reading it will have enough information
and motivation to reply.</para>
<para>Make sure you give full details of the problem,
including the full output of <computeroutput>valgrind
-v</computeroutput>, if applicable. Also say which Linux distribution
you're using (Red Hat, Debian, etc.) and its version number.</para>
<para>You are in little danger of making your posting too long
unless you include large chunks of valgrind's (unsuppressed)
output, so err on the side of giving too much information.</para>
<para>Clearly written subject lines and message bodies are appreciated,
too.</para>
<para>Finally, remember that, despite the fact that most of the
community are very helpful and responsive to emailed questions,
you are probably requesting help from unpaid volunteers, so you
have no guarantee of receiving an answer.</para>
</chapter>
</book>

10
docs/xml/Makefile.am Normal file

@ -0,0 +1,10 @@
EXTRA_DIST = \
index.xml \
FAQ.xml \
manual.xml manual-intro.xml manual-core.xml \
writing-tools.xml \
dist-docs.xml \
tech-docs.xml \
licenses.xml \
vg-entities.xml \
xml_help.txt

82
docs/xml/dist-docs.xml Normal file

@ -0,0 +1,82 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id="dist" xreflabel="Distribution Documents">
<bookinfo>
<title>Distribution Documents</title>
</bookinfo>
<!-- Nb: because these are all text files, we have to wrap them in suitable
XML. Hence the chapter/title stuff -->
<chapter id="dist.acknowledge" xreflabel="Acknowledgements">
<title>ACKNOWLEDGEMENTS</title>
<literallayout>
<xi:include href="../../ACKNOWLEDGEMENTS" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.authors" xreflabel="Valgrind Developers">
<title id="dist.authors.title">AUTHORS</title>
<literallayout>
<xi:include href="../../AUTHORS" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.install" xreflabel="Install">
<title>INSTALL</title>
<literallayout>
<xi:include href="../../INSTALL" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.news" xreflabel="News">
<title>NEWS</title>
<literallayout>
<xi:include href="../../NEWS" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.readme" xreflabel="Readme">
<title>README</title>
<literallayout>
<xi:include href="../../README" parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.readme-missing"
xreflabel="Readme Missing Syscall or Ioctl">
<title>README_MISSING_SYSCALL_OR_IOCTL</title>
<literallayout>
<xi:include href="../../README_MISSING_SYSCALL_OR_IOCTL"
parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.readme-packagers"
xreflabel="Readme Packagers">
<title>README_PACKAGERS</title>
<literallayout>
<xi:include href="../../README_PACKAGERS"
parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="dist.todo" xreflabel="Todo">
<title>TODO</title>
<literallayout>
<xi:include href="../../TODO"
parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
</book>

54
docs/xml/index.xml Normal file

@ -0,0 +1,54 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE set PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[
<!-- various strings, dates etc. common to all docs -->
<!ENTITY % vg-entities SYSTEM "vg-entities.xml"> %vg-entities;
]>
<set lang="en" id="index">
<setinfo>
<title>Valgrind Documentation</title>
<releaseinfo>&rel-type; &rel-version; &rel-date;</releaseinfo>
<copyright>
<year>&vg-lifespan;</year>
<holder>
<link linkend="dist.authors" endterm="dist.authors.title"></link>
</holder>
</copyright>
<legalnotice>
<para>Permission is granted to copy, distribute and/or
modify this document under the terms of the GNU Free
Documentation License, Version 1.2 or any later version
published by the Free Software Foundation; with no
Invariant Sections, with no Front-Cover Texts, and with no
Back-Cover Texts. A copy of the license is included in the
section entitled <xref linkend="license.gfdl"/>.</para>
</legalnotice>
</setinfo>
<!-- User Manual -->
<xi:include href="manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<!-- FAQ -->
<xi:include href="FAQ.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<!-- Technical Docs -->
<xi:include href="tech-docs.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<!-- Distribution Docs -->
<xi:include href="dist-docs.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<!-- GNU Licenses -->
<xi:include href="licenses.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</set>

29
docs/xml/licenses.xml Normal file

@ -0,0 +1,29 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id="licenses" xreflabel="GNU Licenses">
<bookinfo>
<title>GNU Licenses</title>
</bookinfo>
<chapter id="license.gpl" xreflabel=" The GNU General Public License">
<title>The GNU General Public License</title>
<literallayout>
<xi:include href="../../COPYING"
parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
<chapter id="license.gfdl" xreflabel="The GNU Free Documentation License">
<title>The GNU Free Documentation License</title>
<literallayout>
<xi:include href="../../COPYING.DOCS"
parse="text"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</literallayout>
</chapter>
</book>

1951
docs/xml/manual-core.xml Normal file

File diff suppressed because it is too large

199
docs/xml/manual-intro.xml Normal file

@ -0,0 +1,199 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="manual-intro" xreflabel="Introduction">
<title>Introduction</title>
<sect1 id="manual-intro.overview" xreflabel="An Overview of Valgrind">
<title>An Overview of Valgrind</title>
<para>Valgrind is a flexible system for debugging and profiling
Linux-x86 executables. The system consists of a core, which
provides a synthetic x86 CPU in software, and a series of tools,
each of which performs some kind of debugging, profiling, or
similar task. The architecture is modular, so that new tools can
be created easily and without disturbing the existing
structure.</para>
<para>A number of useful tools are supplied as standard. In
summary, these are:</para>
<orderedlist>
<listitem>
<para><command>Memcheck</command> detects memory-management
problems in your programs. All reads and writes of memory
are checked, and calls to malloc/new/free/delete are
intercepted. As a result, Memcheck can detect the following
problems:</para>
<itemizedlist>
<listitem>
<para>Use of uninitialised memory</para>
</listitem>
<listitem>
<para>Reading/writing memory after it has been
free'd</para>
</listitem>
<listitem>
<para>Reading/writing off the end of malloc'd
blocks</para>
</listitem>
<listitem>
<para>Reading/writing inappropriate areas on the
stack</para>
</listitem>
<listitem>
<para>Memory leaks -- where pointers to malloc'd
blocks are lost forever</para>
</listitem>
<listitem>
<para>Mismatched use of malloc/new/new [] vs
free/delete/delete []</para>
</listitem>
<listitem>
<para>Overlapping <computeroutput>src</computeroutput> and
<computeroutput>dst</computeroutput> pointers in
<computeroutput>memcpy()</computeroutput> and related
functions</para>
</listitem>
<listitem>
<para>Some misuses of the POSIX pthreads API</para>
</listitem>
</itemizedlist>
<para>Problems like these can be difficult to find by other
means, often lying undetected for long periods, then causing
occasional, difficult-to-diagnose crashes.</para>
</listitem>
<listitem>
<para><command>Addrcheck</command> is a lightweight version
of Memcheck. It is identical to Memcheck except for the
single detail that it does not do any uninitialised-value
checks. All of the other checks -- primarily the
fine-grained address checking -- are still done. The
downside of this is that you don't catch the
uninitialised-value errors that Memcheck can find.</para>
<para>But the upside is significant: programs run about twice
as fast as they do on Memcheck, and a lot less memory is
used. It still finds reads/writes of freed memory, memory
off the end of blocks and in other invalid places, bugs which
you really want to find before release!</para>
<para>Because Addrcheck is lighter and faster than Memcheck,
you can run more programs for longer, and so you may be able
to cover more test scenarios. Addrcheck was created because
one of us (Julian) wanted to be able to run a complete KDE
desktop session with checking. As of early November 2002, we
have been able to run KDE-3.0.3 on a 1.7 GHz P4 with 512 MB
of memory, using Addrcheck. Although the result is not
stellar, it's quite usable, and it seems plausible to run KDE
for long periods at a time like this, collecting up all the
addressing errors that appear.</para>
</listitem>
<listitem>
<para><command>Cachegrind</command> is a cache profiler. It
performs detailed simulation of the I1, D1 and L2 caches in
your CPU and so can accurately pinpoint the sources of cache
misses in your code. If you desire, it will show the number
of cache misses, memory references and instructions accruing
to each line of source code, with per-function, per-module
and whole-program summaries. If you ask really nicely it
will even show counts for each individual x86
instruction.</para>
<para>Cachegrind auto-detects your machine's cache
configuration using the
<computeroutput>CPUID</computeroutput> instruction, and so
needs no further configuration info, in most cases.</para>
<para>Cachegrind is nicely complemented by Josef
Weidendorfer's amazing KCacheGrind visualisation tool
(<ulink url="http://kcachegrind.sourceforge.net">http://kcachegrind.sourceforge.net</ulink>),
a KDE application which presents these profiling results in a
graphical and easier-to-understand form.</para>
</listitem>
<listitem>
<para><command>Helgrind</command> finds data races in
multithreaded programs. Helgrind looks for memory locations
which are accessed by more than one (POSIX p-)thread, but for
which no consistently used (pthread_mutex_)lock can be found.
Such locations are indicative of missing synchronisation
between threads, and could cause hard-to-find
timing-dependent problems.</para>
<para>Helgrind ("Hell's Gate", in Norse mythology) implements
the so-called "Eraser" data-race-detection algorithm, along
with various refinements (thread-segment lifetimes) which
reduce the number of false errors it reports. It is as yet
somewhat of an experimental tool, so your feedback is
especially welcomed here.</para>
<para>Helgrind has been hacked on extensively by Jeremy
Fitzhardinge, and we have him to thank for getting it to a
releasable state.</para>
</listitem>
</orderedlist>
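As an illustration of the overlap check mentioned above: copying between overlapping regions with memcpy() is what Memcheck flags, and memmove() is the defined way to do such a copy. A minimal sketch (shift_left is an invented helper):

```c
#include <string.h>

/* Shift the first 10 bytes of buf left by one position.  The source
   and destination regions overlap, so memmove() must be used here;
   doing this with memcpy() is undefined behaviour and is what
   Memcheck's overlap check reports. */
void shift_left(char *buf)
{
    memmove(buf, buf + 1, 9);
}
```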
<para>A number of minor tools (<command>Corecheck</command>,
<command>Lackey</command> and <command>Nulgrind</command>) are
also supplied. These aren't particularly useful -- they exist to
illustrate how to create simple tools and to help the valgrind
developers in various ways.</para>
<para>Valgrind is closely tied to details of the CPU and operating
system, and to a lesser extent, the compiler and basic C libraries.
This makes it hard to port, so we have chosen at the
outset to concentrate on what we believe to be a widely used
platform: Linux on x86s. Valgrind uses the standard Unix
<computeroutput>./configure</computeroutput>,
<computeroutput>make</computeroutput>, <computeroutput>make
install</computeroutput> mechanism, and we have attempted to
ensure that it works on machines with kernel 2.2 or 2.4 and glibc
2.1.X, 2.2.X or 2.3.1. This should cover the vast majority of
modern Linux installations. Note that glibc-2.3.2+, with the
NPTL (Native POSIX Threads Library) package, won't work. We hope
to be able to fix this, but it won't be easy.</para>
<para>Valgrind is licensed under the <xref linkend="license.gpl"/>,
version 2. Some of the PThreads test cases,
<computeroutput>pth_*.c</computeroutput>, are taken from
"Pthreads Programming" by Bradford Nichols, Dick Buttlar &amp;
Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by
O'Reilly &amp; Associates, Inc.</para>
</sect1>
<sect1 id="manual-intro.navigation" xreflabel="How to navigate this manual">
<title>How to navigate this manual</title>
<para>The Valgrind distribution consists of the Valgrind core,
upon which are built Valgrind tools, which do different kinds of
debugging and profiling. This manual is structured
similarly.</para>
<para>First, we describe the Valgrind core, how to use it, and
the flags it supports. Then, each tool has its own chapter in
this manual. You only need to read the documentation for the
core and for the tool(s) you actually use, although you may find
it helpful to be at least a little bit familiar with what all
tools do. If you're new to all this, you probably want to run
the Memcheck tool. If you want to write a new tool, read
<xref linkend="writing-tools"/>.</para>
<para>Be aware that the core understands some command line flags,
and the tools have their own flags which they know about. This
means there is no central place describing all the flags that are
accepted -- you have to read the flags documentation both for
<xref linkend="manual-core"/> and for the tool you want to
use.</para>
</sect1>
</chapter>

32
docs/xml/manual.xml Normal file

@ -0,0 +1,32 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id="manual" xreflabel="Valgrind User Manual">
<bookinfo>
<title>Valgrind User Manual</title>
</bookinfo>
<xi:include href="manual-intro.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="manual-core.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../memcheck/docs/mc-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../addrcheck/docs/ac-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../cachegrind/docs/cg-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../massif/docs/ms-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../helgrind/docs/hg-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../none/docs/nl-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../corecheck/docs/cc-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../lackey/docs/lk-manual.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</book>

18
docs/xml/tech-docs.xml Normal file

@ -0,0 +1,18 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<book id="tech-docs" xreflabel="Valgrind Technical Documentation">
<bookinfo>
<title>Valgrind Technical Documentation</title>
</bookinfo>
<xi:include href="../../memcheck/docs/mc-tech-docs.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="../../cachegrind/docs/cg-tech-docs.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
<xi:include href="writing-tools.xml" parse="xml"
xmlns:xi="http://www.w3.org/2001/XInclude" />
</book>

12
docs/xml/vg-entities.xml Normal file

@ -0,0 +1,12 @@
<!-- misc. strings -->
<!ENTITY vg-url "http://www.valgrind.org">
<!ENTITY vg-jemail "jseward@valgrind.org">
<!ENTITY vg-vemail "valgrind@valgrind.org">
<!ENTITY vg-lifespan "2000-2004">
<!ENTITY vg-users-list "http://lists.sourceforge.net/lists/listinfo/valgrind-users">
<!-- valgrind release + version stuff -->
<!ENTITY rel-type "Development release">
<!ENTITY rel-version "2.1.2">
<!ENTITY rel-date "July 18 2004">

1248
docs/xml/writing-tools.xml Normal file

File diff suppressed because it is too large

174
docs/xml/xml_help.txt Normal file

@ -0,0 +1,174 @@
<!-- -*- sgml -*- -->
----------------------------------------------
Docbook Reference Manual (1999):
- http://www.oreilly.com/catalog/docbook/
DocBook XSL: The Complete Guide (2002)
- http://www.sagehill.net/docbookxsl/index.html
DocBook elements (what tags are allowed where)
- http://www.oreilly.com/catalog/docbook/chapter/book/refelem.html
Catalogs:
- http://www.sagehill.net/docbookxsl/WriteCatalog.html
----------------------------------------------
xml to html markup transformations:
<programlisting> --> <pre class="programlisting">
<screen> --> <pre class="screen">
<computeroutput> --> <tt class="computeroutput">
<literal> --> <tt>
<emphasis> --> <i>
<command> --> <b class="command">
<blockquote> --> <div class="blockquote">
<blockquote class="blockquote">
Important: inside <screen> and <programlisting> blocks, do NOT
use 'html entities' in your markup, eg. '&lt;' If you *do* use
them, they will be output verbatim, which is not what you want.
----------------------------------------------
<ulink url="http://..">http://kcachegrind.sourceforge.net</ulink>
----------------------------------------------
<variablelist> --> <dl>
<varlistentry>
<term>TTF</term> --> <dt>
<listitem>TrueType fonts.</listitem> --> <dd>
</varlistentry>
</variablelist> --> <dl>
----------------------------------------------
<itemizedlist> --> <ul>
<listitem> --> <li>
<para>....</para>
<para>....</para>
</listitem> --> </li>
</itemizedlist> --> </ul>
----------------------------------------------
<orderedlist> --> <ol>
<listitem> --> <li>
<para>....</para>
<para>....</para>
</listitem> --> </li>
</orderedlist> --> </ol>
----------------------------------------------
To achieve this:
This is a paragraph of text before a list:
* some text
* some more text
and this is some more text after the list.
Do this:
<para>This is a paragraph of text before a list:</para>
<itemizedlist>
<listitem>
<para>some text</para>
</listitem>
<listitem>
<para>some more text</para>
</listitem>
</itemizedlist>
----------------------------------------------
To achieve this:
For further details, see <a href="clientreq">The Mechanism</a>
Do this:
Given:
<sect1 id="clientreq" xreflabel="The Mechanism">
<title>The Mechanism</title>
<para>...</para>
</sect1>
Then do:
For further details, see <xref linkend="clientreq"/>.
----------------------------------------------
To achieve this:
<p><b>Warning:</b> Only do this if ...</p>
Do this:
<formalpara>
<title>Warning:</title>
<para>Only do this if ...</para>
</formalpara>
Or this:
<para><command>Warning:</command> Only do this if ... </para>
----------------------------------------------
To achieve this:
<p>It uses the Eraser algorithm described in:<br />
<br />
Eraser: A Dynamic Data Race Detector for Multithreaded Programs<br />
Stefan Savage, Michael Burrows, Patrick Sobalvarro and Thomas Anderson<br />
ACM Transactions on Computer Systems, 15(4):391-411<br />
November 1997.<br />
</p>
Do this:
<literallayout>
It uses the Eraser algorithm described in:
Eraser: A Dynamic Data Race Detector for Multithreaded Programs
Stefan Savage, Michael Burrows, Patrick Sobalvarro and Thomas Anderson
ACM Transactions on Computer Systems, 15(4):391-411
November 1997.
</literallayout>
----------------------------------------------
To achieve this:
<pre>
/* Hook to delay things long enough so we can get the pid
and attach GDB in another shell. */
if (0) {
Int p, q;
for ( p = 0; p < 50000; p++ )
for ( q = 0; q < 50000; q++ ) ;
</pre>
Do this:
<programlisting><![CDATA[
/* Hook to delay things long enough so we can get the pid
and attach GDB in another shell. */
if (0) {
Int p, q;
for ( p = 0; p < 50000; p++ )
for ( q = 0; q < 50000; q++ ) ;
}]]></programlisting>
(do the same thing for <screen> tag)
----------------------------------------------
To achieve this:
where <i><code>TAG</code></i> has the ...
Do this:
where <emphasis><computeroutput>TAG</computeroutput></emphasis> has the ...
Note: you cannot put <emphasis> inside <computeroutput>, unfortunately.
----------------------------------------------
Any other helpful hints? Please tell us.


@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = hg_main.html
EXTRA_DIST = hg-manual.xml


@ -0,0 +1,57 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="hg-manual" xreflabel="Helgrind: a data-race detector">
<title>Helgrind: a data-race detector</title>
<para>Helgrind is a Valgrind tool for detecting data races in C
and C++ programs that use the Pthreads library.</para>
<para>To use this tool, you specify
<computeroutput>--tool=helgrind</computeroutput> on the Valgrind
command line.</para>
<para>It uses the Eraser algorithm described in:
<address>Eraser: A Dynamic Data Race Detector for Multithreaded Programs
Stefan Savage, Michael Burrows, Greg Nelson, Patrick Sobalvarro and Thomas Anderson
ACM Transactions on Computer Systems, 15(4):391-411
November 1997.
</address>
</para>
<para>We also incorporate significant improvements from this paper:
<address>Runtime Checking of Multithreaded Applications with Visual Threads
Jerry J. Harrow, Jr.
Proceedings of the 7th International SPIN Workshop on Model Checking of Software
Stanford, California, USA
August 2000
LNCS 1885, pp331--342
K. Havelund, J. Penix, and W. Visser, editors.
</address>
</para>
<para>Basically what Helgrind does is to look for memory
locations which are accessed by more than one thread. For each
such location, Helgrind records which of the program's
(pthread_mutex_)locks were held by the accessing thread at the
time of the access. The hope is to discover that there is indeed
at least one lock which is used by all threads to protect that
location. If no such lock can be found, then there is
(apparently) no consistent locking strategy being applied for
that location, and so a possible data race might result.</para>
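In code terms, the pattern Helgrind hopes to find is one lock consistently guarding every access to a shared location. A minimal sketch (names invented for illustration):

```c
#include <pthread.h>

static long counter;               /* location shared between threads */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Every access to `counter` happens with counter_lock held, so one
   lock consistently protects the location -- the situation Helgrind
   looks for.  Dropping the lock/unlock pair would turn the
   increments into a data race. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&counter_lock);
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

long run_two_workers(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```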
<para>Helgrind also allows for "thread segment lifetimes". If
the execution of two threads cannot overlap -- for example, if
your main thread waits on another thread with a
<computeroutput>pthread_join()</computeroutput> operation -- they
can both access the same variable without holding a lock.</para>
<para>There's a lot of other sophistication in Helgrind, aimed at
reducing the number of false reports, and at producing useful
error reports. We hope to have more documentation one
day...</para>
</chapter>


@ -1,60 +0,0 @@
<html>
<head>
<title>Helgrind: a data-race detector</title>
</head>
<a name="hg-top"></a>
<h2>6&nbsp; Helgrind: a data-race detector</h2>
To use this tool, you must specify <code>--tool=helgrind</code> on the
Valgrind command line.
<p>
Helgrind is a Valgrind tool for detecting data races in C and C++ programs
that use the Pthreads library.
<p>
It uses the Eraser algorithm described in
<blockquote>
Eraser: A Dynamic Data Race Detector for Multithreaded Programs<br>
Stefan Savage, Michael Burrows, Greg Nelson, Patrick Sobalvarro and
Thomas Anderson<br>
ACM Transactions on Computer Systems, 15(4):391-411<br>
November 1997.
</blockquote>
We also incorporate significant improvements from this paper:
<blockquote>
Runtime Checking of Multithreaded Applications with Visual Threads
Jerry J. Harrow, Jr.<br>
Proceedings of the 7th International SPIN Workshop on Model Checking of
Software<br>
Stanford, California, USA<br>
August 2000<br>
LNCS 1885, pp331--342<br>
K. Havelund, J. Penix, and W. Visser, editors.<br>
</blockquote>
<p>
Basically what Helgrind does is to look for memory locations which are
accessed by more than one thread. For each such location, Helgrind
records which of the program's (pthread_mutex_)locks were held by the
accessing thread at the time of the access. The hope is to discover
that there is indeed at least one lock which is used by all threads to
protect that location. If no such lock can be found, then there is
(apparently) no consistent locking strategy being applied for that
location, and so a possible data race might result.
<p>
Helgrind also allows for "thread segment lifetimes". If the execution of two
threads cannot overlap -- for example, if your main thread waits on another
thread with a <code>pthread_join()</code> operation -- they can both access the
same variable without holding a lock.
<p>
There's a lot of other sophistication in Helgrind, aimed at
reducing the number of false reports, and at producing useful error
reports. We hope to have more documentation one day...
</body>
</html>


@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = lk_main.html
EXTRA_DIST = lk-manual.xml

39
lackey/docs/lk-manual.xml Normal file

@ -0,0 +1,39 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="lk-manual" xreflabel="Lackey">
<title>Lackey: a very simple profiler</title>
<para>Lackey is a simple Valgrind tool that does some basic
program measurement. It adds quite a lot of simple
instrumentation to the program's code. It is primarily intended
to be of use as an example tool.</para>
<para>It measures three things:</para>
<orderedlist>
<listitem>
<para>The number of calls to
<computeroutput>_dl_runtime_resolve()</computeroutput>, the
function in glibc's dynamic linker that resolves function
lookups into shared objects.</para>
</listitem>
<listitem>
<para>The number of UCode instructions (UCode is Valgrind's
RISC-like intermediate language), x86 instructions, and basic
blocks executed by the program, and some ratios between the
three counts.</para>
</listitem>
<listitem>
<para>The number of conditional branches encountered and the
proportion of those taken.</para>
</listitem>
</orderedlist>
</chapter>


@ -1,68 +0,0 @@
<html>
<head>
<style type="text/css">
body { background-color: #ffffff;
color: #000000;
font-family: Times, Helvetica, Arial;
font-size: 14pt}
h4 { margin-bottom: 0.3em}
code { color: #000000;
font-family: Courier;
font-size: 13pt }
pre { color: #000000;
font-family: Courier;
font-size: 13pt }
a:link { color: #0000C0;
text-decoration: none; }
a:visited { color: #0000C0;
text-decoration: none; }
a:active { color: #0000C0;
text-decoration: none; }
</style>
<title>Cachegrind</title>
</head>
<body bgcolor="#ffffff">
<a name="title"></a>
<h1 align=center>Lackey</h1>
<center>This manual was last updated on 2002-10-03</center>
<p>
<center>
<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
Copyright &copy; 2002-2004 Nicholas Nethercote
<p>
Lackey is licensed under the GNU General Public License,
version 2<br>
Lackey is an example Valgrind tool that does some very basic program
measurement.
</center>
<p>
<h2>1&nbsp; Lackey</h2>
Lackey is a simple Valgrind tool that does some basic program measurement.
It adds quite a lot of simple instrumentation to the program's code. It is
primarily intended to be of use as an example tool.
<p>
It measures three things:
<ol>
<li>The number of calls to <code>_dl_runtime_resolve()</code>, the function
in glibc's dynamic linker that resolves function lookups into shared
objects.<p>
<li>The number of UCode instructions (UCode is Valgrind's RISC-like
intermediate language), x86 instructions, and basic blocks executed by the
program, and some ratios between the three counts.<p>
<li>The number of conditional branches encountered and the proportion of those
taken.<p>
</ol>
<hr width="100%">
</body>
</html>


@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = ms_main.html date.gif
EXTRA_DIST = ms-manual.xml

Binary file not shown.


465
massif/docs/ms-manual.xml Normal file

@ -0,0 +1,465 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="ms-manual" xreflabel="Massif: a heap profiler">
<title>Massif: a heap profiler</title>
<para>To use this tool, you must specify
<computeroutput>--tool=massif</computeroutput> on the Valgrind
command line.</para>
<sect1 id="ms-manual.spaceprof" xreflabel="Heap profiling">
<title>Heap profiling</title>
<para>Massif is a heap profiler, i.e. it measures how much heap
memory programs use. In particular, it can give you information
about:</para>
<itemizedlist>
<listitem><para>Heap blocks;</para></listitem>
<listitem><para>Heap administration blocks;</para></listitem>
<listitem><para>Stack sizes.</para></listitem>
</itemizedlist>
<para>Heap profiling is useful to help you reduce the amount of
memory your program uses. On modern machines with virtual
memory, this provides the following benefits:</para>
<itemizedlist>
<listitem><para>It can speed up your program -- a smaller
program will interact better with your machine's caches,
avoid paging, and so on.</para></listitem>
<listitem><para>If your program uses lots of memory, it will
reduce the chance that it exhausts your machine's swap
space.</para></listitem>
</itemizedlist>
<para>Also, there are certain space leaks that aren't detected by
traditional leak-checkers, such as Memcheck's. That's because
the memory isn't ever actually lost -- a pointer remains to it --
but it's not in use. Programs that have leaks like this can
unnecessarily increase the amount of memory they are using over
time.</para>
<sect2 id="ms-manual.heapprof"
xreflabel="Why Use a Heap Profiler?">
<title>Why Use a Heap Profiler?</title>
<para>Everybody knows how useful time profilers are for speeding
up programs. They are particularly useful because people are
notoriously bad at predicting where the bottlenecks in their
programs are.</para>
<para>But the story is different for heap profilers. Some
programming languages, particularly lazy functional languages
like <ulink url="http://www.haskell.org">Haskell</ulink>, have
quite sophisticated heap profilers. But there are few tools as
powerful for profiling C and C++ programs.</para>
<para>Why is this? Maybe it's because C and C++ programmers must
think that they know where the memory is being allocated. After
all, you can see all the calls to
<computeroutput>malloc()</computeroutput> and
<computeroutput>new</computeroutput> and
<computeroutput>new[]</computeroutput>, right? But, in a big
program, do you really know which heap allocations are being
executed, how many times, and how large each allocation is? Can
you give even a vague estimate of the memory footprint for your
program? Do you know this for all the libraries your program
uses? What about administration bytes required by the heap
allocator to track heap blocks -- have you thought about them?
What about the stack? If you are unsure about any of these
things, maybe you should think about heap profiling.</para>
<para>Massif can tell you these things.</para>
<para>Or maybe it's because it's relatively easy to add basic
heap profiling functionality into a program, to tell you how many
bytes you have allocated for certain objects, or similar. But
this information might be as simple as total counts for the
whole program's execution. What about space usage at different
points in the program's execution, for example? And
reimplementing heap profiling code for each project is a
pain.</para>
<para>Massif can save you this effort.</para>
</sect2>
</sect1>
<sect1 id="ms-manual.using" xreflabel="Using Massif">
<title>Using Massif</title>
<sect2 id="ms-manual.overview" xreflabel="Overview">
<title>Overview</title>
<para>First off, as for normal Valgrind use, you probably want to
compile with debugging info (the
<computeroutput>-g</computeroutput> flag). But, as opposed to
Memcheck, you probably <command>do</command> want to turn
optimisation on, since you should profile your program as it will
be normally run.</para>
<para>Then, run your program with <computeroutput>valgrind
--tool=massif</computeroutput> in front of the normal command
line invocation. When the program finishes, Massif will print
summary space statistics. It also creates a graph representing
the program's heap usage in a file called
<filename>massif.pid.ps</filename>, which can be read by any
PostScript viewer, such as Ghostview.</para>
<para>It also puts detailed information about heap consumption in
a file <filename>massif.pid.txt</filename> (text format) or
<filename>massif.pid.html</filename> (HTML format), where
<emphasis>pid</emphasis> is the program's process id.</para>
</sect2>
<sect2 id="ms-manual.basicresults" xreflabel="Basic Results of Profiling">
<title>Basic Results of Profiling</title>
<para>To gather heap profiling information about the program
<computeroutput>prog</computeroutput>, type:</para>
<screen><![CDATA[
% valgrind --tool=massif prog]]></screen>
<para>The program will execute (slowly). Upon completion,
summary statistics that look like this will be printed:</para>
<programlisting><![CDATA[
==27519== Total spacetime: 2,258,106 ms.B
==27519== heap: 24.0%
==27519== heap admin: 2.2%
==27519== stack(s): 73.7%]]></programlisting>
<para>All measurements are done in
<emphasis>spacetime</emphasis>, i.e. space (in bytes) multiplied
by time (in milliseconds). Note that because Massif slows a
program down a lot, the actual spacetime figure is fairly
meaningless; it's the relative values that are
interesting.</para>
<para>Which entries you see in the breakdown depends on the
command line options given. The above example measures all the
possible parts of memory:</para>
<itemizedlist>
<listitem><para>Heap: number of words allocated on the heap, via
<computeroutput>malloc()</computeroutput>,
<computeroutput>new</computeroutput> and
<computeroutput>new[]</computeroutput>.</para>
</listitem>
<listitem>
<para>Heap admin: each heap block allocated requires some
administration data, which lets the allocator track certain
things about the block. It is easy to forget about this, and
if your program allocates lots of small blocks, it can add
up. This value is an estimate of the space required for this
administration data.</para>
</listitem>
<listitem>
<para>Stack(s): the spacetime used by the program's stack(s).
(Threaded programs can have multiple stacks.) This includes
signal handler stacks.</para>
</listitem>
</itemizedlist>
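<para>To see why heap admin can matter, consider this sketch.
The 8-byte figure matches Massif's default
<computeroutput>--heap-admin</computeroutput> value; the real
overhead depends on the allocator used.</para>
<programlisting><![CDATA[
/* Many small blocks: per-block admin bytes rival the payload. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { NBLOCKS = 100000, PAYLOAD = 4, ADMIN = 8 };
    static char *blocks[NBLOCKS];
    int i;

    for (i = 0; i < NBLOCKS; i++)
        blocks[i] = malloc(PAYLOAD);

    printf("payload bytes: %d\n", NBLOCKS * PAYLOAD);
    printf("estimated admin bytes: %d\n", NBLOCKS * ADMIN);

    for (i = 0; i < NBLOCKS; i++)
        free(blocks[i]);
    return 0;
}]]></programlisting>
<para>Here the estimated administration space is twice the useful
payload, which Massif's "heap admin" figure makes visible.</para>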
</sect2>
<sect2 id="ms-manual.graphs" xreflabel="Spacetime Graphs">
<title>Spacetime Graphs</title>
<para>As well as printing summary information, Massif also
creates a file representing a spacetime graph,
<filename>massif.pid.hp</filename>. It will produce a file
called <filename>massif.pid.ps</filename>, which can be viewed in
a PostScript viewer.</para>
<para>Massif uses a program called
<computeroutput>hp2ps</computeroutput> to convert the raw data
into the PostScript graph. It's distributed with Massif, but
came originally from the
<ulink url="http://haskell.cs.yale.edu/ghc/">Glasgow Haskell
Compiler</ulink>. You shouldn't need to worry about this at all.
However, if the graph creation fails for any reason, Massif will
tell you, and will leave behind a file named
<filename>massif.pid.hp</filename>, containing the raw heap
profiling data.</para>
<para>Here's an example graph:</para>
<mediaobject id="spacetime-graph">
<imageobject>
<imagedata fileref="images/massif-graph-sm.png" format="PNG"/>
</imageobject>
<textobject>
<phrase>Spacetime Graph</phrase>
</textobject>
</mediaobject>
<para>The graph is broken into several bands. Most bands
represent a single line of your program that does some heap
allocation; each such band represents all the allocations and
deallocations done from that line. Up to twenty bands are shown;
less significant allocation sites are merged into "other" and/or
"OTHER" bands. The accompanying text/HTML file produced by
Massif has more detail about these heap allocation bands. Then
there are single bands for the stack(s) and heap admin
bytes.</para>
<formalpara>
<title>Note:</title>
<para>it's the height of a band that's important. Don't let the
ups and downs caused by other bands confuse you. For example,
the <computeroutput>read_alias_file</computeroutput> band in the
example has the same height all the time it's in existence.</para>
</formalpara>
<para>The triangles on the x-axis show each point at which a
memory census was taken. These aren't necessarily evenly spread;
Massif only takes a census when memory is allocated or
deallocated. The time on the x-axis is wallclock time, which is
not ideal because you can get different graphs for different
executions of the same program, due to random OS delays. But
it's not too bad, and it becomes less of a problem the longer a
program runs.</para>
<para>Massif takes censuses at an appropriate timescale; censuses
take place less frequently as the program runs for longer. There
is no point having more than 100-200 censuses on a single
graph.</para>
<para>The graphs give a good overview of where your program's
space use comes from, and how that varies over time. The
accompanying text/HTML file gives a lot more information about
heap use.</para>
</sect2>
</sect1>
<sect1 id="ms-manual.heapdetails"
xreflabel="Details of Heap Allocations">
<title>Details of Heap Allocations</title>
<para>The text/HTML file contains information to help interpret
the heap bands of the graph. It also contains a lot of extra
information about heap allocations that you don't see in the
graph.</para>
<para>Here's part of the information that accompanies the above
graph.</para>
<blockquote>
<literallayout>== 0 ===========================</literallayout>
<para>Heap allocation functions accounted for 50.8% of measured
spacetime</para>
<para>Called from:</para>
<itemizedlist>
<listitem id="a401767D1"><para>
<ulink url="#b401767D1">22.1%</ulink>: 0x401767D0:
_nl_intern_locale_data (in /lib/i686/libc-2.3.2.so)</para>
</listitem>
<listitem id="a4017C394"><para>
<ulink url="#b4017C394">8.6%</ulink>: 0x4017C393:
read_alias_file (in /lib/i686/libc-2.3.2.so)</para>
</listitem>
<listitem>
<para>... ... <emphasis>(several entries omitted)</emphasis></para>
</listitem>
<listitem>
<para>and 6 other insignificant places</para>
</listitem>
</itemizedlist>
</blockquote>
<para>The first part shows the total spacetime due to heap
allocations, and the places in the program where most memory was
allocated (Nb: if this program had been compiled with
<computeroutput>-g</computeroutput>, actual line numbers would be
given). These places are sorted, from most significant to least,
and correspond to the bands seen in the graph. Insignificant
sites (accounting for less than 0.5% of total spacetime) are
omitted.</para>
<para>That alone can be useful, but often isn't enough. What if
one of these functions was called from several different places
in the program? Which one of these is responsible for most of
the memory used? For
<computeroutput>_nl_intern_locale_data()</computeroutput>, this
question is answered by clicking on the
<ulink url="#b401767D1">22.1%</ulink> link, which takes us to the
following part of the file:</para>
<blockquote id="b401767D1">
<literallayout>== 1 ===========================</literallayout>
<para>Context accounted for <ulink url="#a401767D1">22.1%</ulink>
of measured spacetime</para>
<para><computeroutput> 0x401767D0: _nl_intern_locale_data (in
/lib/i686/libc-2.3.2.so)</computeroutput></para>
<para>Called from:</para>
<itemizedlist>
<listitem id="a40176F96"><para>
<ulink url="#b40176F96">22.1%</ulink>: 0x40176F95:
_nl_load_locale_from_archive (in
/lib/i686/libc-2.3.2.so)</para>
</listitem>
</itemizedlist>
</blockquote>
<para>At this level, we can see all the places from which
<computeroutput>_nl_load_locale_from_archive()</computeroutput>
was called such that it allocated memory at 0x401767D0. (We can
click on the top <ulink url="#a40176F96">22.1%</ulink> link to go back
to the parent entry.) At this level, we have moved beyond the
information presented in the graph. In this case, it is only
called from one place. We can again follow the link for more
detail, moving to the following part of the file.</para>
<blockquote>
<literallayout>== 2 ===========================</literallayout>
<para id="b40176F96">
Context accounted for <ulink url="#a40176F96">22.1%</ulink> of
measured spacetime</para>
<para><computeroutput> 0x401767D0: _nl_intern_locale_data (in
/lib/i686/libc-2.3.2.so)</computeroutput> <computeroutput>
0x40176F95: _nl_load_locale_from_archive (in
/lib/i686/libc-2.3.2.so)</computeroutput></para>
<para>Called from:</para>
<itemizedlist>
<listitem id="a40176185">
<para>22.1%: 0x40176184: _nl_find_locale (in
/lib/i686/libc-2.3.2.so)</para>
</listitem>
</itemizedlist>
</blockquote>
<para>In this way we can dig deeper into the call stack, to work
out exactly what sequence of calls led to some memory being
allocated. At this point, with a call depth of 3, the
information runs out (thus the address of the child entry,
0x40176184, isn't a link). We could rerun the program with a
greater <computeroutput>--depth</computeroutput> value if we
wanted more information.</para>
<para>Sometimes you will get a code location like this:</para>
<programlisting><![CDATA[
30.8% : 0xFFFFFFFF: ???]]></programlisting>
<para>The code address isn't really 0xFFFFFFFF -- that's
impossible. This is what Massif does when it can't work out what
the real code address is.</para>
<para>Massif produces this information in a plain text file by
default, or HTML with the
<computeroutput>--format=html</computeroutput> option. The plain
text version obviously doesn't have the links, but a similar
effect can be achieved by searching on the code addresses. (In
Vim, the '*' and '#' searches are ideal for this.)</para>
<sect2 id="ms-manual.accuracy" xreflabel="Accuracy">
<title>Accuracy</title>
<para>The information should be pretty accurate. Some
approximations made might cause some allocation contexts to be
attributed with less memory than they actually allocated, but the
amounts should be minuscule.</para>
<para>The heap admin spacetime figure is an approximation, as
described above. If anyone knows how to improve its accuracy,
please let us know.</para>
</sect2>
</sect1>
<sect1 id="ms-manual.options" xreflabel="Massif options">
<title>Massif options</title>
<para>Massif-specific options are:</para>
<itemizedlist>
<listitem>
<para><computeroutput>--heap=no</computeroutput></para>
<para><computeroutput>--heap=yes</computeroutput> [default]</para>
<para>When enabled, profile heap usage in detail. Without
it, the <filename>massif.pid.txt</filename> or
<filename>massif.pid.html</filename> file will be very
short.</para>
</listitem>
<listitem>
<para><computeroutput>--heap-admin=n</computeroutput>
[default: 8]</para>
<para>The number of admin bytes per block to use. This can
only be an estimate of the average, since it may vary. The
allocator used by <computeroutput>glibc</computeroutput>
requires somewhere between 4 and 15 bytes per block, depending
on various factors. It also requires admin space for freed
blocks, although Massif does not count this.</para>
</listitem>
<listitem>
<para><computeroutput>--stacks=no</computeroutput></para>
<para><computeroutput>--stacks=yes</computeroutput> [default]</para>
<para>When enabled, include stack(s) in the profile.
Threaded programs can have multiple stacks.</para>
</listitem>
<listitem>
<para><computeroutput>--depth=n</computeroutput>
[default: 3]</para>
<para>Depth of call chains to present in the detailed heap
information. Increasing it will give more information, but
Massif will run the program more slowly, using more memory,
and produce a bigger <computeroutput>.txt</computeroutput> /
<computeroutput>.hp</computeroutput> file.</para>
</listitem>
<listitem>
<para><computeroutput>--alloc-fn=name</computeroutput></para>
<para>Specify a function that allocates memory. This is
useful for functions that are wrappers to
<computeroutput>malloc()</computeroutput>, which can fill up
the context information uselessly (and give very
uninformative bands on the graph). Functions specified will
be ignored in contexts, i.e. treated as though they were
<computeroutput>malloc()</computeroutput>. This option can
be specified multiple times on the command line, to name
multiple functions.</para>
</listitem>
<listitem>
<para><computeroutput>--format=text</computeroutput> [default]</para>
<para><computeroutput>--format=html</computeroutput></para>
<para>Produce the detailed heap information in text or HTML
format. The file suffix used will be either
<computeroutput>.txt</computeroutput> or
<computeroutput>.html</computeroutput>.</para>
</listitem>
</itemizedlist>
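<para>As an illustration of the
<computeroutput>--alloc-fn</computeroutput> option, consider a
program that funnels all allocations through a wrapper. (The name
<computeroutput>xmalloc</computeroutput> here is hypothetical; use
whatever your wrapper is actually called.)</para>
<programlisting><![CDATA[
/* Typical malloc() wrapper.  Without --alloc-fn=xmalloc, Massif
   would attribute every allocation to the malloc() call below,
   hiding the real allocation sites. */
#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    char *buf = xmalloc(128);   /* charged to this line when
                                   --alloc-fn=xmalloc is given */
    buf[0] = '\0';
    printf("allocated %d bytes via xmalloc\n", 128);
    free(buf);
    return 0;
}]]></programlisting>
<para>Profiling this with <computeroutput>valgrind --tool=massif
--alloc-fn=xmalloc prog</computeroutput> attributes the spacetime
to the callers of <computeroutput>xmalloc</computeroutput> rather
than to the wrapper itself.</para>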
</sect1>
</chapter>



@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = mc_main.html mc_techdocs.html
EXTRA_DIST = mc-manual.xml mc-tech-docs.xml

1100
memcheck/docs/mc-manual.xml Normal file



@ -1,841 +0,0 @@
<html>
<head>
<title>Memcheck: a heavyweight memory checker</title>
</head>
<a name="mc-top"></a>
<h2>3&nbsp; <b>Memcheck</b>: a heavyweight memory checker</h2>
To use this tool, you must specify <code>--tool=memcheck</code> on the
Valgrind command line.
<h3>3.1&nbsp; Kinds of bugs that memcheck can find</h3>
Memcheck is Valgrind-1.0.X's checking mechanism bundled up into a tool.
All reads and writes of memory are checked, and calls to
malloc/new/free/delete are intercepted. As a result, memcheck can
detect the following problems:
<ul>
<li>Use of uninitialised memory</li>
<li>Reading/writing memory after it has been free'd</li>
<li>Reading/writing off the end of malloc'd blocks</li>
<li>Reading/writing inappropriate areas on the stack</li>
<li>Memory leaks -- where pointers to malloc'd blocks are lost
forever</li>
<li>Mismatched use of malloc/new/new [] vs free/delete/delete []</li>
<li>Overlapping <code>src</code> and <code>dst</code> pointers in
<code>memcpy()</code> and related functions</li>
<li>Some misuses of the POSIX pthreads API</li>
</ul>
<p>
<h3>3.2&nbsp; Command-line flags specific to memcheck</h3>
<ul>
<li><code>--leak-check=no</code> [default]<br>
<code>--leak-check=yes</code>
<p>When enabled, search for memory leaks when the client program
finishes. A memory leak means a malloc'd block, which has not
yet been free'd, but to which no pointer can be found. Such a
block can never be free'd by the program, since no pointer to it
exists. Leak checking is disabled by default because it tends
to generate dozens of error messages. </li><br><p>
<li><code>--show-reachable=no</code> [default]<br>
<code>--show-reachable=yes</code>
<p>When disabled, the memory leak detector only shows blocks for
which it cannot find a pointer to at all, or it can only find a
pointer to the middle of. These blocks are prime candidates for
memory leaks. When enabled, the leak detector also reports on
blocks which it could find a pointer to. Your program could, at
least in principle, have freed such blocks before exit.
Contrast this to blocks for which no pointer, or only an
interior pointer could be found: they are more likely to
indicate memory leaks, because you do not actually have a
pointer to the start of the block which you can hand to
<code>free</code>, even if you wanted to. </li><br><p>
<li><code>--leak-resolution=low</code> [default]<br>
<code>--leak-resolution=med</code> <br>
<code>--leak-resolution=high</code>
<p>When doing leak checking, determines how willing Memcheck is
to consider different backtraces to be the same. When set to
<code>low</code>, the default, only the first two entries need
match. When <code>med</code>, four entries have to match. When
<code>high</code>, all entries need to match.
<p>
For hardcore leak debugging, you probably want to use
<code>--leak-resolution=high</code> together with
<code>--num-callers=40</code> or some such large number. Note
however that this can give an overwhelming amount of
information, which is why the defaults are 4 callers and
low-resolution matching.
<p>
Note that the <code>--leak-resolution=</code> setting does not
affect Memcheck's ability to find leaks. It only changes how
the results are presented.
</li><br><p>
<li><code>--freelist-vol=&lt;number></code> [default: 1000000]
<p>When the client program releases memory using free (in C) or
delete (C++), that memory is not immediately made available for
re-allocation. Instead it is marked inaccessible and placed in
a queue of freed blocks. The purpose is to delay the point at
which freed-up memory comes back into circulation. This
increases the chance that Memcheck will be able to detect
invalid accesses to blocks for some significant period of time
after they have been freed.
<p>
This flag specifies the maximum total size, in bytes, of the
blocks in the queue. The default value is one million bytes.
Increasing this increases the total amount of memory used by
Memcheck but may detect invalid uses of freed blocks which would
otherwise go undetected.</li><br><p>
<li><code>--workaround-gcc296-bugs=no</code> [default]<br>
<code>--workaround-gcc296-bugs=yes</code> <p>When enabled,
      Memcheck assumes that reads and writes some small distance below the stack
pointer <code>%esp</code> are due to bugs in gcc 2.96, and does
not report them. The "small distance" is 256 bytes by default.
Note that gcc 2.96 is the default compiler on some popular Linux
distributions (RedHat 7.X, Mandrake) and so you may well need to
use this flag. Do not use it if you do not have to, as it can
cause real errors to be overlooked. Another option is to use a
gcc/g++ which does not generate accesses below the stack
pointer. 2.95.3 seems to be a good choice in this respect.
<p>
      Unfortunately (as of 27 Feb 02) it looks like g++ 3.0.4 has a similar
      bug, so you may need to use this flag with 3.0.4 as well.  A
      while later (early Apr 02) this was confirmed as a scheduling bug
      in g++-3.0.4.
</li><br><p>
<li><code>--partial-loads-ok=yes</code> [the default]<br>
<code>--partial-loads-ok=no</code>
<p>Controls how Memcheck handles word (4-byte) loads from
      addresses for which some bytes are addressable and others
      are not.  When <code>yes</code> (the default), such loads
      do not elicit an address error.  Instead, the loaded V bytes
      corresponding to the illegal addresses are marked as undefined, and
those corresponding to legal addresses are loaded from shadow
memory, as usual.
<p>
When <code>no</code>, loads from partially
invalid addresses are treated the same as loads from completely
invalid addresses: an illegal-address error is issued,
and the resulting V bytes indicate valid data.
</li><br><p>
<li><code>--cleanup=no</code><br>
<code>--cleanup=yes</code> [default]
<p><b>This is a flag to help debug valgrind itself. It is of no
      use to end-users.</b>  When enabled, various improvements are
applied to the post-instrumented intermediate code, aimed at
removing redundant value checks.</li><br>
<p>
</ul>
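<p>The delayed-reuse behaviour controlled by <code>--freelist-vol</code> can
be sketched with a toy model.  This is purely illustrative -- the names
(<code>queue_free</code>, <code>freelist_vol</code>) and the flat array are
invented for this sketch, and Memcheck's real queue holds the blocks
themselves, not just their sizes:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the freed-blocks queue: freed blocks are held back
   until their total size exceeds the --freelist-vol cap, and only then
   does the oldest block become available for reuse again. */
#define QUEUE_MAX 16            /* enough for this sketch */

static size_t queue[QUEUE_MAX]; /* sizes of queued blocks, oldest first */
static int    queue_len = 0;
static size_t queue_vol = 0;    /* total bytes currently held back */
static size_t freelist_vol = 1000000;  /* the configurable cap */

/* Called on free(): enqueue the block's size.  Returns how many old
   blocks had to be recycled to stay under the cap. */
int queue_free(size_t size)
{
    int recycled = 0;
    queue[queue_len++] = size;
    queue_vol += size;
    while (queue_vol > freelist_vol && queue_len > 0) {
        /* oldest block leaves the queue and becomes reusable */
        queue_vol -= queue[0];
        for (int i = 1; i < queue_len; i++)
            queue[i - 1] = queue[i];
        queue_len--;
        recycled++;
    }
    return recycled;
}
```

The design point is the FIFO order: the longer a freed block sits in the
queue, the longer stale pointers to it keep eliciting invalid-address errors
instead of silently hitting recycled memory.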
<a name="errormsgs"></a>
<h3>3.3&nbsp; Explanation of error messages from Memcheck</h3>
Despite considerable sophistication under the hood, Memcheck can only
really detect two kinds of errors, use of illegal addresses, and use
of undefined values. Nevertheless, this is enough to help you
discover all sorts of memory-management nasties in your code. This
section presents a quick summary of what error messages mean. The
precise behaviour of the error-checking machinery is described in
<a href="#machine">this section</a>.
<h4>3.3.1&nbsp; Illegal read / Illegal write errors</h4>
For example:
<pre>
Invalid read of size 4
at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
by 0x40B07FF4: read_png_image__FP8QImageIO (kernel/qpngio.cpp:326)
by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
Address 0xBFFFF0E0 is not stack'd, malloc'd or free'd
</pre>
<p>This happens when your program reads or writes memory at a place
which Memcheck reckons it shouldn't. In this example, the program did
a 4-byte read at address 0xBFFFF0E0, somewhere within the
system-supplied library libpng.so.2.1.0.9, which was called from
somewhere else in the same library, called from line 326 of
qpngio.cpp, and so on.
<p>Memcheck tries to establish what the illegal address might relate
to, since that's often useful. So, if it points into a block of
memory which has already been freed, you'll be informed of this, and
also where the block was free'd at. Likewise, if it should turn out
to be just off the end of a malloc'd block, a common result of
off-by-one-errors in array subscripting, you'll be informed of this
fact, and also where the block was malloc'd.
<p>In this example, Memcheck can't identify the address. Actually the
address is on the stack, but, for some reason, this is not a valid
stack address -- it is below the stack pointer, %esp, and that isn't
allowed. In this particular case it's probably caused by gcc
generating invalid code, a known bug in various flavours of gcc.
<p>Note that Memcheck only tells you that your program is about to
access memory at an illegal address. It can't stop the access from
happening. So, if your program makes an access which normally would
result in a segmentation fault, your program will still suffer the same
fate -- but you will get a message from Memcheck immediately prior to
this. In this particular example, reading junk on the stack is
non-fatal, and the program stays alive.
<h4>3.3.2&nbsp; Use of uninitialised values</h4>
For example:
<pre>
Conditional jump or move depends on uninitialised value(s)
at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
by 0x402E8476: _IO_printf (printf.c:36)
by 0x8048472: main (tests/manuel1.c:8)
by 0x402A6E5E: __libc_start_main (libc-start.c:129)
</pre>
<p>An uninitialised-value use error is reported when your program uses
a value which hasn't been initialised -- in other words, is undefined.
Here, the undefined value is used somewhere inside the printf()
machinery of the C library. This error was reported when running the
following small program:
<pre>
#include &lt;stdio.h>

int main()
{
  int x;
  printf ("x = %d\n", x);
}
</pre>
<p>It is important to understand that your program can copy around
junk (uninitialised) data to its heart's content. Memcheck observes
this and keeps track of the data, but does not complain. A complaint
is issued only when your program attempts to make use of uninitialised
data. In this example, x is uninitialised. Memcheck observes the
value being passed to _IO_printf and thence to _IO_vfprintf, but makes
no comment. However, _IO_vfprintf has to examine the value of x so it
can turn it into the corresponding ASCII string, and it is at this
point that Memcheck complains.
<p>Sources of uninitialised data tend to be:
<ul>
<li>Local variables in procedures which have not been initialised,
as in the example above.</li><p>
<li>The contents of malloc'd blocks, before you write something
there. In C++, the new operator is a wrapper round malloc, so
if you create an object with new, its fields will be
uninitialised until you (or the constructor) fill them in, which
is only Right and Proper.</li>
</ul>
<h4>3.3.3&nbsp; Illegal frees</h4>
For example:
<pre>
Invalid free()
at 0x4004FFDF: free (vg_clientmalloc.c:577)
by 0x80484C7: main (tests/doublefree.c:10)
by 0x402A6E5E: __libc_start_main (libc-start.c:129)
by 0x80483B1: (within tests/doublefree)
Address 0x3807F7B4 is 0 bytes inside a block of size 177 free'd
at 0x4004FFDF: free (vg_clientmalloc.c:577)
by 0x80484C7: main (tests/doublefree.c:10)
by 0x402A6E5E: __libc_start_main (libc-start.c:129)
by 0x80483B1: (within tests/doublefree)
</pre>
<p>Memcheck keeps track of the blocks allocated by your program with
malloc/new, so it can know exactly whether or not the argument to
free/delete is legitimate or not. Here, this test program has
freed the same block twice. As with the illegal read/write errors,
Memcheck attempts to make sense of the address free'd. If, as
here, the address is one which has previously been freed, you will
be told that -- making duplicate frees of the same block easy to spot.
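<p>The bookkeeping behind double-free detection amounts to keeping a table of
live allocations and treating any free of a pointer not in the table as an
error.  A minimal sketch -- <code>checked_malloc</code> and
<code>checked_free</code> are invented names, and Memcheck's real
implementation differs:

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_LIVE 64

static void *live[MAX_LIVE];  /* table of currently-live blocks */
static int   n_live = 0;

void *checked_malloc(size_t n)
{
    void *p = malloc(n);
    if (p && n_live < MAX_LIVE)
        live[n_live++] = p;
    return p;
}

/* Returns 0 on success, -1 for an invalid (e.g. double) free. */
int checked_free(void *p)
{
    for (int i = 0; i < n_live; i++) {
        if (live[i] == p) {
            live[i] = live[--n_live];  /* remove from the table */
            free(p);
            return 0;
        }
    }
    return -1;  /* not a live block: double free or wild pointer */
}
```

Freeing the same pointer twice fails the table lookup the second time, which
is exactly the situation reported in the log above.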
<h4>3.3.4&nbsp; When a block is freed with an inappropriate
deallocation function</h4>
In the following example, a block allocated with <code>new[]</code>
has wrongly been deallocated with <code>free</code>:
<pre>
Mismatched free() / delete / delete []
at 0x40043249: free (vg_clientfuncs.c:171)
by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd
at 0x4004318C: __builtin_vec_new (vg_clientfuncs.c:152)
by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
by 0x4C21788F: OLEFilter::convert(QCString const &amp;) (olefilter.cc:272)
</pre>
The following was told to me by the KDE 3 developers.  I didn't know
any of it myself. They also implemented the check itself.
<p>
In C++ it's important to deallocate memory in a way compatible with
how it was allocated. The deal is:
<ul>
<li>If allocated with <code>malloc</code>, <code>calloc</code>,
<code>realloc</code>, <code>valloc</code> or
<code>memalign</code>, you must deallocate with <code>free</code>.
<li>If allocated with <code>new[]</code>, you must deallocate with
<code>delete[]</code>.
<li>If allocated with <code>new</code>, you must deallocate with
<code>delete</code>.
</ul>
The worst thing is that on Linux apparently it doesn't matter if you
do muddle these up, and it all seems to work ok, but the same program
may then crash on a different platform, Solaris for example. So it's
best to fix it properly. According to the KDE folks "it's amazing how
many C++ programmers don't know this".
<p>
Pascal Massimino adds the following clarification:
<code>delete[]</code> must be associated with a
<code>new[]</code> because the compiler stores the size of the array
and the pointer-to-member to the destructor of the array's content
just before the pointer actually returned.  This implies a
variable-sized overhead in what's returned by <code>new</code> or
<code>new[]</code>.  It is rather surprising how robust compilers [Ed:
runtime-support libraries?] are to mismatches in
<code>new</code>/<code>delete</code> and
<code>new[]</code>/<code>delete[]</code>.
<h4>3.3.5&nbsp; Passing system call parameters with inadequate
read/write permissions</h4>
Memcheck checks all parameters to system calls, i.e.:
<ul>
  <li>It checks all the direct parameters themselves.
  <li>Also, if a system call needs to read from a buffer provided by your
      program, Memcheck checks that the entire buffer is addressable and
      contains valid data, i.e. it is readable.
  <li>Also, if the system call needs to write to a user-supplied buffer,
      Memcheck checks that the buffer is addressable.
</ul>
After the system call, Memcheck updates its administrative information to
precisely reflect any changes in memory permissions caused by the system call.
<p>Here's an example of two system calls with invalid parameters:
<pre>
#include &lt;stdlib.h>
#include &lt;unistd.h>
int main( void )
{
  char* arr  = malloc(10);
  int*  arr2 = malloc(sizeof(int));
  write( 1 /* stdout */, arr, 10 );
  exit(arr2[0]);
}
</pre>
<p>You get these complaints ...
<pre>
Syscall param write(buf) points to uninitialised byte(s)
at 0x25A48723: __write_nocancel (in /lib/tls/libc-2.3.3.so)
by 0x259AFAD3: __libc_start_main (in /lib/tls/libc-2.3.3.so)
by 0x8048348: (within /auto/homes/njn25/grind/head4/a.out)
Address 0x25AB8028 is 0 bytes inside a block of size 10 alloc'd
at 0x259852B0: malloc (vg_replace_malloc.c:130)
by 0x80483F1: main (a.c:5)
Syscall param exit(error_code) contains uninitialised byte(s)
at 0x25A21B44: __GI__exit (in /lib/tls/libc-2.3.3.so)
by 0x8048426: main (a.c:8)
</pre>
<p>... because the program has (a) tried to write uninitialised junk from
the malloc'd block to the standard output, and (b) passed an uninitialised
value to <code>exit</code>. Note that the first error refers to the memory
pointed to by <code>buf</code> (not <code>buf</code> itself), but the second
error refers to the argument <code>error_code</code> itself.
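<p>One plausible fix for the example above is simply to initialise both
blocks before the kernel sees them.  A sketch -- <code>run_fixed</code> is an
invented wrapper so the logic can be called as a function:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Corrected version of the example: both blocks are fully initialised
   before being handed to the kernel, so neither syscall complaint is
   produced.  Returns the would-be exit code. */
int run_fixed(void)
{
    char *arr  = malloc(10);
    int  *arr2 = malloc(sizeof(int));
    memset(arr, 'x', 10);               /* write() buffer now defined */
    *arr2 = 0;                          /* exit code now defined */
    ssize_t n = write(1 /* stdout */, arr, 10);
    int code = (n == 10) ? *arr2 : 1;
    free(arr);
    free(arr2);
    return code;
}
```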
<h4>3.3.6&nbsp; Overlapping source and destination blocks</h4>
The following C library functions copy some data from one memory block
to another (or something similar): <code>memcpy()</code>,
<code>strcpy()</code>, <code>strncpy()</code>, <code>strcat()</code>,
<code>strncat()</code>. The blocks pointed to by their <code>src</code> and
<code>dst</code> pointers aren't allowed to overlap. Memcheck checks
for this.
<p>
For example:
<pre>
==27492== Source and destination overlap in memcpy(0xbffff294, 0xbffff280, 21)
==27492== at 0x40026CDC: memcpy (mc_replace_strmem.c:71)
==27492== by 0x804865A: main (overlap.c:40)
==27492== by 0x40246335: __libc_start_main (../sysdeps/generic/libc-start.c:129)
==27492== by 0x8048470: (within /auto/homes/njn25/grind/head6/memcheck/tests/overlap)
==27492==
</pre>
<p>
You don't want the two blocks to overlap because one of them could get
partially trashed by the copying.
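<p>The overlap condition itself is simple to state: two <code>n</code>-byte
ranges overlap exactly when each one starts before the other ends.  A sketch
of the test (not Memcheck's actual code):

```c
#include <assert.h>
#include <stddef.h>

/* Returns 1 if [s, s+n) and [d, d+n) overlap, 0 otherwise -- the
   condition a memcpy()-style overlap check tests.  Casting to char*
   gives byte-granularity comparisons. */
int ranges_overlap(const void *s, const void *d, size_t n)
{
    const char *src = s, *dst = d;
    if (n == 0)
        return 0;
    return src < dst + n && dst < src + n;
}
```

In the log above the two addresses are 20 bytes apart and the length is 21,
so the ranges overlap by one byte -- enough to trigger the error.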
<a name="suppfiles"></a>
<h3>3.4&nbsp; Writing suppressions files</h3>
The basic suppression format was described in <a
href="coregrind_core.html#suppress">this section</a>.
<p>
The suppression (2nd) line should have the form:
<pre>
Memcheck:suppression_type
</pre>
Or, since some of the suppressions are shared with Addrcheck:
<pre>
Memcheck,Addrcheck:suppression_type
</pre>
<p>
The Memcheck suppression types are as follows:
<code>Value1</code>,
<code>Value2</code>,
<code>Value4</code>,
<code>Value8</code>,
<code>Value16</code>,
meaning an uninitialised-value error when
using a value of 1, 2, 4, 8 or 16 bytes.
Or
<code>Cond</code> (or its old name, <code>Value0</code>),
meaning use of an uninitialised CPU condition code. Or:
<code>Addr1</code>,
<code>Addr2</code>,
<code>Addr4</code>,
<code>Addr8</code>,
<code>Addr16</code>,
meaning an invalid address during a
memory access of 1, 2, 4, 8 or 16 bytes respectively. Or
<code>Param</code>,
meaning an invalid system call parameter error. Or
<code>Free</code>, meaning an invalid or mismatching free.  Or
<code>Overlap</code>, meaning a <code>src</code>/<code>dst</code>
overlap in <code>memcpy()</code> or a similar function.  Last but not least,
you can suppress leak reports with <code>Leak</code>.  Leak suppression was
added in valgrind-1.9.3, I believe.
<p>
The extra information line: for Param errors, it is the name of the
offending system call parameter.  No other error kinds have this extra
line.
<p>
The first line of the calling context: for Value and Addr errors, it is either
the name of the function in which the error occurred, or, failing that, the
full path of the .so file or executable containing the error location.  For
Free errors, it is the name of the function doing the freeing (eg,
<code>free</code>, <code>__builtin_vec_delete</code>, etc).  For Overlap
errors, it is the name of the function with the overlapping arguments (eg.
<code>memcpy()</code>, <code>strcpy()</code>, etc).
<p>
Lastly, there's the rest of the calling context.
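<p>Putting the pieces together, a complete suppression entry for a
hypothetical <code>Cond</code> error inside libpng might look like this (the
name on the first line is arbitrary, and the <code>obj:</code>/<code>fun:</code>
lines shown here are invented for illustration):

```
{
   libpng-cond-suppression
   Memcheck:Cond
   obj:/usr/lib/libpng.so.2.1.0.9
   fun:read_png_image__FP8QImageIO
}
```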
<p>
<a name="machine"></a>
<h3>3.5&nbsp; Details of Memcheck's checking machinery</h3>
Read this section if you want to know, in detail, exactly what and how
Memcheck is checking.
<a name="vvalue"></a>
<h4>3.5.1&nbsp; Valid-value (V) bits</h4>
It is simplest to think of Memcheck implementing a synthetic Intel x86
CPU which is identical to a real CPU, except for one crucial detail.
Every bit (literally) of data processed, stored and handled by the
real CPU has, in the synthetic CPU, an associated "valid-value" bit,
which says whether or not the accompanying bit has a legitimate value.
In the discussions which follow, this bit is referred to as the V
(valid-value) bit.
<p>Each byte in the system therefore has 8 V bits which follow
it wherever it goes. For example, when the CPU loads a word-size item
(4 bytes) from memory, it also loads the corresponding 32 V bits from
a bitmap which stores the V bits for the process' entire address
space. If the CPU should later write the whole or some part of that
value to memory at a different address, the relevant V bits will be
stored back in the V-bit bitmap.
<p>In short, each bit in the system has an associated V bit, which
follows it around everywhere, even inside the CPU. Yes, the CPU's
(integer and <code>%eflags</code>) registers have their own V bit
vectors.
<p>Copying values around does not cause Memcheck to check for, or
report on, errors. However, when a value is used in a way which might
conceivably affect the outcome of your program's computation, the
associated V bits are immediately checked. If any of these indicate
that the value is undefined, an error is reported.
<p>Here's an (admittedly nonsensical) example:
<pre>
int i, j;
int a[10], b[10];
for (i = 0; i &lt; 10; i++) {
  j = a[i];
  b[i] = j;
}
</pre>
<p>Memcheck emits no complaints about this, since it merely copies
uninitialised values from <code>a[]</code> into <code>b[]</code>, and
doesn't use them in any way. However, if the loop is changed to
<pre>
for (i = 0; i &lt; 10; i++) {
  j += a[i];
}
if (j == 77)
  printf("hello there\n");
</pre>
then Valgrind will complain, at the <code>if</code>, that the
condition depends on uninitialised values. Note that it
<b>doesn't</b> complain at the <code>j += a[i];</code>, since
at that point the undefinedness is not "observable". It's only
when a decision has to be made as to whether or not to do the
<code>printf</code> -- an observable action of your program -- that
Memcheck complains.
<p>Most low level operations, such as adds, cause Memcheck to
use the V bits for the operands to calculate the V bits for the
result. Even if the result is partially or wholly undefined,
it does not complain.
<p>Checks on definedness only occur in two places: when a value is
used to generate a memory address, and where a control-flow decision
needs to be made.  Also, when a system call is detected, Valgrind
checks the definedness of parameters as required.
<p>If a check should detect undefinedness, an error message is
issued. The resulting value is subsequently regarded as well-defined.
To do otherwise would give long chains of error messages. In effect,
we say that undefined values are non-infectious.
<p>This sounds overcomplicated. Why not just check all reads from
memory, and complain if an undefined value is loaded into a CPU register?
Well, that doesn't work well, because perfectly legitimate C programs routinely
copy uninitialised values around in memory, and we don't want endless complaints
about that. Here's the canonical example. Consider a struct
like this:
<pre>
struct S { int x; char c; };
struct S s1, s2;
s1.x = 42;
s1.c = 'z';
s2 = s1;
</pre>
<p>The question to ask is: how large is <code>struct S</code>, in
bytes? An int is 4 bytes and a char one byte, so perhaps a struct S
occupies 5 bytes? Wrong. All (non-toy) compilers we know of will
round the size of <code>struct S</code> up to a whole number of words,
in this case 8 bytes. Not doing this forces compilers to generate
truly appalling code for subscripting arrays of <code>struct
S</code>'s.
<p>So s1 occupies 8 bytes, yet only 5 of them will be initialised.
For the assignment <code>s2 = s1</code>, gcc generates code to copy
all 8 bytes wholesale into <code>s2</code> without regard for their
meaning. If Memcheck simply checked values as they came out of
memory, it would yelp every time a structure assignment like this
happened. So the more complicated semantics described above is
necessary. This allows gcc to copy <code>s1</code> into
<code>s2</code> any way it likes, and a warning will only be emitted
if the uninitialised values are later used.
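<p>The padding is easy to observe directly.  A small check, assuming a
typical ABI where an int is 4 bytes with 4-byte alignment
(<code>padding_bytes</code> is just an illustrative helper):

```c
#include <assert.h>
#include <stddef.h>

/* The struct from the text: 4 bytes of int, 1 byte of char, and (on
   typical ABIs) 3 bytes of padding so the size is a multiple of the
   int's alignment. */
struct S { int x; char c; };

/* Bytes in struct S that no member assignment ever initialises. */
size_t padding_bytes(void)
{
    return sizeof(struct S) - sizeof(int) - sizeof(char);
}
```

Those padding bytes are exactly what gets copied, uninitialised, by
<code>s2 = s1</code>.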
<p>One final twist to this story. The above scheme allows garbage to
pass through the CPU's integer registers without complaint. It does
this by giving the integer registers V tags, passing these around in
the expected way.  This is complicated and computationally expensive to
do, but necessary.  Memcheck is more simplistic about
floating-point loads and stores. In particular, V bits for data read
as a result of floating-point loads are checked at the load
instruction. So if your program uses the floating-point registers to
do memory-to-memory copies, you will get complaints about
uninitialised values. Fortunately, I have not yet encountered a
program which (ab)uses the floating-point registers in this way.
<a name="vaddress"></a>
<h4>3.5.2&nbsp; Valid-address (A) bits</h4>
Notice that the previous subsection describes how the validity of values
is established and maintained without having to say whether the
program does or does not have the right to access any particular
memory location. We now consider the latter issue.
<p>As described above, every bit in memory or in the CPU has an
associated valid-value (V) bit. In addition, all bytes in memory, but
not in the CPU, have an associated valid-address (A) bit. This
indicates whether or not the program can legitimately read or write
that location.  It does not give any indication of the validity of the
data at that location -- that's the job of the V bits -- only whether
or not the location may be accessed.
<p>Every time your program reads or writes memory, Memcheck checks the
A bits associated with the address. If any of them indicate an
invalid address, an error is emitted. Note that the reads and writes
themselves do not change the A bits, only consult them.
<p>So how do the A bits get set/cleared? Like this:
<ul>
<li>When the program starts, all the global data areas are marked as
accessible.</li><br>
<p>
<li>When the program does malloc/new, the A bits for exactly the
area allocated, and not a byte more, are marked as accessible.
Upon freeing the area the A bits are changed to indicate
inaccessibility.</li><br>
<p>
<li>When the stack pointer register (%esp) moves up or down, A bits
are set. The rule is that the area from %esp up to the base of
the stack is marked as accessible, and below %esp is
inaccessible. (If that sounds illogical, bear in mind that the
stack grows down, not up, on almost all Unix systems, including
GNU/Linux.) Tracking %esp like this has the useful side-effect
that the section of stack used by a function for local variables
etc is automatically marked accessible on function entry and
inaccessible on exit.</li><br>
<p>
<li>When doing system calls, A bits are changed appropriately. For
example, mmap() magically makes files appear in the process's
address space, so the A bits must be updated if mmap()
succeeds.</li><br>
<p>
<li>Optionally, your program can tell Valgrind about such changes
explicitly, using the client request mechanism described above.
</ul>
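<p>The A-bit map can be pictured as a bitmap with one bit per byte of the
address space.  A much-simplified, single-level sketch -- Valgrind's real
map is structured quite differently for efficiency, and these function names
are invented:

```c
#include <assert.h>

/* One A bit per byte of a small toy "address space". */
#define SPACE 4096

static unsigned char abits[SPACE / 8];

/* Mark [addr, addr+len) as accessible (on=1) or inaccessible (on=0),
   e.g. on malloc/free or when %esp moves. */
void set_addressable(unsigned addr, unsigned len, int on)
{
    for (unsigned a = addr; a < addr + len; a++) {
        if (on) abits[a >> 3] |=  (1u << (a & 7));
        else    abits[a >> 3] &= ~(1u << (a & 7));
    }
}

/* An access of 'len' bytes at 'addr' is legal only if every byte is. */
int is_addressable(unsigned addr, unsigned len)
{
    for (unsigned a = addr; a < addr + len; a++)
        if (!(abits[a >> 3] & (1u << (a & 7))))
            return 0;
    return 1;
}
```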
<a name="together"></a>
<h4>3.5.3&nbsp; Putting it all together</h4>
Memcheck's checking machinery can be summarised as follows:
<ul>
<li>Each byte in memory has 8 associated V (valid-value) bits,
saying whether or not the byte has a defined value, and a single
A (valid-address) bit, saying whether or not the program
currently has the right to read/write that address.</li><br>
<p>
<li>When memory is read or written, the relevant A bits are
consulted. If they indicate an invalid address, Valgrind emits
an Invalid read or Invalid write error.</li><br>
<p>
<li>When memory is read into the CPU's integer registers, the
relevant V bits are fetched from memory and stored in the
simulated CPU. They are not consulted.</li><br>
<p>
<li>When an integer register is written out to memory, the V bits
for that register are written back to memory too.</li><br>
<p>
<li>When memory is read into the CPU's floating point registers, the
relevant V bits are read from memory and they are immediately
checked. If any are invalid, an uninitialised value error is
emitted. This precludes using the floating-point registers to
copy possibly-uninitialised memory, but simplifies Valgrind in
that it does not have to track the validity status of the
floating-point registers.</li><br>
<p>
<li>As a result, when a floating-point register is written to
memory, the associated V bits are set to indicate a valid
value.</li><br>
<p>
<li>When values in integer CPU registers are used to generate a
memory address, or to determine the outcome of a conditional
branch, the V bits for those values are checked, and an error
emitted if any of them are undefined.</li><br>
<p>
<li>When values in integer CPU registers are used for any other
purpose, Valgrind computes the V bits for the result, but does
not check them.</li><br>
<p>
  <li>Once the V bits for a value in the CPU have been checked, they
are then set to indicate validity. This avoids long chains of
errors.</li><br>
<p>
  <li>When values are loaded from memory, Valgrind checks the A bits
for that location and issues an illegal-address warning if
needed. In that case, the V bits loaded are forced to indicate
Valid, despite the location being invalid.
<p>
This apparently strange choice reduces the amount of confusing
information presented to the user. It avoids the
unpleasant phenomenon in which memory is read from a place which
is both unaddressible and contains invalid values, and, as a
result, you get not only an invalid-address (read/write) error,
but also a potentially large set of uninitialised-value errors,
one for every time the value is used.
<p>
There is a hazy boundary case to do with multi-byte loads from
addresses which are partially valid and partially invalid. See
      the description of the flag <code>--partial-loads-ok</code> for details.
</li><br>
</ul>
Memcheck intercepts calls to malloc, calloc, realloc, valloc,
memalign, free, new and delete. The behaviour you get is:
<ul>
  <li>malloc/new: the returned memory is marked as addressable but not
having valid values. This means you have to write on it before
you can read it.</li><br>
<p>
  <li>calloc: returned memory is marked both addressable and valid,
since calloc() clears the area to zero.</li><br>
<p>
<li>realloc: if the new size is larger than the old, the new section
      is addressable but invalid, as with malloc.</li><br>
<p>
<li>If the new size is smaller, the dropped-off section is marked as
      unaddressable.  You may only pass to realloc a pointer
previously issued to you by malloc/calloc/realloc.</li><br>
<p>
<li>free/delete: you may only pass to free a pointer previously
issued to you by malloc/calloc/realloc, or the value
NULL. Otherwise, Valgrind complains. If the pointer is indeed
valid, Valgrind marks the entire area it points at as
      unaddressable, and places the block in the freed-blocks queue.
The aim is to defer as long as possible reallocation of this
block. Until that happens, all attempts to access it will
elicit an invalid-address error, as you would hope.</li><br>
</ul>
<a name="leaks"></a>
<h3>3.6&nbsp; Memory leak detection</h3>
Memcheck keeps track of all memory blocks issued in response to calls
to malloc/calloc/realloc/new. So when the program exits, it knows
which blocks are still outstanding -- have not been returned, in other
words. Ideally, you want your program to have no blocks still in use
at exit. But many programs do.
<p>For each such block, Memcheck scans the entire address space of the
process, looking for pointers to the block. One of three situations
may result:
<ul>
  <li>A pointer to the start of the block is found.  This usually
      indicates programming sloppiness; since the block is still
      pointed at, the programmer could, at least in principle, have
      freed it before program exit.</li><br>
<p>
<li>A pointer to the interior of the block is found. The pointer
might originally have pointed to the start and have been moved
along, or it might be entirely unrelated. Memcheck deems such a
block as "dubious", that is, possibly leaked,
because it's unclear whether or
not a pointer to it still exists.</li><br>
<p>
<li>The worst outcome is that no pointer to the block can be found.
The block is classified as "leaked", because the
programmer could not possibly have free'd it at program exit,
since no pointer to it exists. This might be a symptom of
having lost the pointer at some earlier point in the
program.</li>
</ul>
Memcheck reports summaries about leaked and dubious blocks.
For each such block, it will also tell you where the block was
allocated. This should help you figure out why the pointer to it has
been lost. In general, you should attempt to ensure your programs do
not have any leaked or dubious blocks at exit.
<p>The precise area of memory in which Memcheck searches for pointers
is: all naturally-aligned 4-byte words for which all A bits indicate
addressability and all V bits indicate that the stored value is
actually valid.
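<p>The classification of scanned words can be sketched as follows.  This is
simplified -- it ignores the A/V-bit requirement just described, and the
names are invented -- but it mirrors the start-pointer versus
interior-pointer distinction used to classify blocks:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

typedef struct { int start, interior; } scan_result;

/* Scan an array of naturally-aligned words, counting how many hold a
   value pointing at the start of the block [blk, blk+blk_size), and
   how many point into its interior. */
scan_result scan_words(const uintptr_t *words, size_t n_words,
                       uintptr_t blk, size_t blk_size)
{
    scan_result r = {0, 0};
    for (size_t i = 0; i < n_words; i++) {
        if (words[i] == blk)
            r.start++;      /* block is still (properly) pointed at */
        else if (words[i] > blk && words[i] < blk + blk_size)
            r.interior++;   /* "dubious": interior pointer only */
    }
    return r;
}
```

A block with at least one start pointer is merely "still reachable"; one
with only interior pointers is dubious; one with neither is leaked.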
<p>
<a name="clientreqs"></a>
<h3>3.7&nbsp; Client Requests</h3>
The following client requests are defined in <code>memcheck.h</code>. They
also work for Addrcheck. See <code>memcheck.h</code> for exact
details of their arguments.
<ul>
<li><code>VALGRIND_MAKE_NOACCESS</code>,
<code>VALGRIND_MAKE_WRITABLE</code> and
<code>VALGRIND_MAKE_READABLE</code>. These mark address
ranges as completely inaccessible, accessible but containing
undefined data, and accessible and containing defined data,
respectively. Subsequent errors may have their faulting
addresses described in terms of these blocks. Returns a
"block handle". Returns zero when not run on Valgrind.
<p>
<li><code>VALGRIND_DISCARD</code>: At some point you may want
Valgrind to stop reporting errors in terms of the blocks
defined by the previous three macros. To do this, the above
macros return a small-integer "block handle". You can pass
this block handle to <code>VALGRIND_DISCARD</code>. After
doing so, Valgrind will no longer be able to relate
addressing errors to the user-defined block associated with
the handle. The permissions settings associated with the
handle remain in place; this just affects how errors are
reported, not whether they are reported. Returns 1 for an
invalid handle and 0 for a valid handle (although passing
invalid handles is harmless). Always returns 0 when not run
on Valgrind.
<p>
<li><code>VALGRIND_CHECK_WRITABLE</code> and
<code>VALGRIND_CHECK_READABLE</code>: check immediately
whether or not the given address range has the relevant
property, and if not, print an error message. Also, for the
convenience of the client, returns zero if the relevant
property holds; otherwise, the returned value is the address
of the first byte for which the property is not true.
Always returns 0 when not run on Valgrind.
<p>
<li><code>VALGRIND_CHECK_DEFINED</code>: a quick and easy way
to find out whether Valgrind thinks a particular variable
      (lvalue, to be precise) is addressable and defined.  Prints
an error message if not. Returns no value.
<p>
<li><code>VALGRIND_DO_LEAK_CHECK</code>: run the memory leak detector
right now. Returns no value. I guess this could be used to
incrementally check for leaks between arbitrary places in the
program's execution. Warning: not properly tested!
<p>
<li><code>VALGRIND_COUNT_LEAKS</code>: fills in the four arguments with
the number of bytes of memory found by the previous leak check to
be leaked, dubious, reachable and suppressed. Again, useful in
test harness code, after calling <code>VALGRIND_DO_LEAK_CHECK</code>.
<p>
<li><code>VALGRIND_GET_VBITS</code> and
<code>VALGRIND_SET_VBITS</code>: allow you to get and set the V (validity)
bits for an address range. You should probably only set V bits that you
have got with <code>VALGRIND_GET_VBITS</code>. Only for those who really
know what they are doing.
<p>
</ul>

File diff suppressed because it is too large


@ -1,3 +1 @@
docdir = $(datadir)/doc/valgrind
dist_doc_DATA = nl_main.html
EXTRA_DIST = nl-manual.xml

none/docs/nl-manual.xml Normal file

@ -0,0 +1,22 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
<chapter id="nl-manual" xreflabel="Nulgrind">
<title>Nulgrind: the ``null'' tool</title>
<subtitle>A tool that does not very much at all</subtitle>
<para>Nulgrind is the minimal tool for Valgrind. It does no
initialisation or finalisation, and adds no instrumentation to
the program's code. It is mainly of use for Valgrind's
developers for debugging and regression testing.</para>
<para>Nonetheless you can run programs with Nulgrind. They will
run roughly 5 times more slowly than normal, for no useful
effect. Note that you need to use the option
<computeroutput>--tool=none</computeroutput> to run Nulgrind
(ie. not <computeroutput>--tool=nulgrind</computeroutput>).</para>
</chapter>


@ -1,57 +0,0 @@
<html>
<head>
<style type="text/css">
body { background-color: #ffffff;
color: #000000;
font-family: Times, Helvetica, Arial;
font-size: 14pt}
h4 { margin-bottom: 0.3em}
code { color: #000000;
font-family: Courier;
font-size: 13pt }
pre { color: #000000;
font-family: Courier;
font-size: 13pt }
a:link { color: #0000C0;
text-decoration: none; }
a:visited { color: #0000C0;
text-decoration: none; }
a:active { color: #0000C0;
text-decoration: none; }
</style>
<title>Cachegrind</title>
</head>
<body bgcolor="#ffffff">
<a name="title"></a>
<h1 align=center>Nulgrind</h1>
<center>This manual was last updated on 2002-10-02</center>
<p>
<center>
<a href="mailto:njn25@cam.ac.uk">njn25@cam.ac.uk</a><br>
Copyright &copy; 2000-2004 Nicholas Nethercote
<p>
Nulgrind is licensed under the GNU General Public License,
version 2<br>
Nulgrind is a Valgrind tool that does not very much at all.
</center>
<p>
<h2>1&nbsp; Nulgrind</h2>
Nulgrind is the minimal tool for Valgrind. It does no initialisation or
finalisation, and adds no instrumentation to the program's code. It is mainly
of use for Valgrind's developers for debugging and regression testing.
<p>
Nonetheless you can run programs with Nulgrind. They will run roughly 5-10
times more slowly than normal, for no useful effect. Note that you need to use
the option <code>--tool=none</code> to run Nulgrind (ie. not
<code>--tool=nulgrind</code>).
<hr width="100%">
</body>
</html>