Spelling fixes and misc tidying for the manual. (Brian Gough)

git-svn-id: svn://svn.valgrind.org/valgrind/trunk@7173
Julian Seward 2007-11-17 09:43:25 +00:00
parent 71dbabbf30
commit 5e2a8da202
10 changed files with 58 additions and 56 deletions

View File

@ -73,7 +73,7 @@ number.</para>
<para>Thus, the first cost line specifies that in line 15 of source file
"file.f" there is code belonging to function "main". While running, 90 CPU
cycles passed by, and 2 of the 14 instructions executed were floating point
operations. Similarily, the next line specifies that there were 12 instructions
operations. Similarly, the next line specifies that there were 12 instructions
executed in the context of function "main" which can be related to line 16 in
file "file.f", taking 20 CPU cycles. If a cost line specifies less event counts
than given in the "events" line, the rest is assumed to be zero. I.e., there
@ -93,8 +93,8 @@ called: profile data only contains sums.</para>
<para>The most important extension to the original format of Cachegrind is the
ability to specify call relationship among functions. More generally, you
specify assoziations among positions. For this, the second part of the
file also can contain assoziation specifications. These look similar to
specify associations among positions. For this, the second part of the
file also can contain association specifications. These look similar to
position specifications, but consist of 2 lines. For calls, the format
looks like
<screen>
@ -109,7 +109,7 @@ called function, and a "cfl=" specification if the function is in another
source file. The 2nd line looks like a regular cost line with the difference
that inclusive cost spent inside of the function call has to be specified.</para>
<para>Other assoziations which or for example (conditional) jumps. See the
<para>Other associations which or for example (conditional) jumps. See the
reference below for details.</para>
</sect2>
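For orientation, here is a hedged sketch of a complete minimal data file combining the pieces described above: a header, cost lines for two positions, and one call association. The file name, function names and counts are illustrative (not taken from the text), and the default line-number subpositions are assumed.

  events: Ir

  fl=file1.c
  fn=main
  16 20
  cfn=func1
  calls=1 50
  16 400

  fn=func1
  50 400

Read this as: line 16 of "main" costs 20 instruction fetches itself; "main" calls "func1" once, whose code starts at line 50; the call's inclusive cost of 400 is attributed back to line 16; and the body of "func1" accounts for those 400 fetches at line 50.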
@ -198,7 +198,7 @@ fl=(2)
fn=(3)
20 700</screen></para>
<para>As position specifications carry no information themself, but only change
<para>As position specifications carry no information themselves, but only change
the meaning of subsequent cost lines or associations, they can appear
everywhere in the file without any negative consequence. Especially, you can
define name compression mappings directly after the header, and before any cost
@ -225,7 +225,7 @@ fn=(1)
<title>Subposition Compression</title>
<para>If a Callgrind data file should hold costs for each assembler instruction
of a program, you specify subpostion "instr" in the "positions:" header line,
of a program, you specify subposition "instr" in the "positions:" header line,
and each cost line has to include the address of some instruction. Addresses
are allowed to have a size of 64bit to support 64bit architectures. Thus,
repeating similar, long addresses for almost every line in the data file can
@ -239,7 +239,7 @@ also for line numbers; both addresses and line numbers are called "subpositions"
of the last cost line, and starts with a "+" to specify a positive difference,
a "-" to specify a negative difference, or consists of "*" to specify the same
subposition. Because absolute subpositions always are positive (ie. never
prefixed by "-"), any relative specification is non-ambigous; additionally,
prefixed by "-"), any relative specification is non-ambiguous; additionally,
absolute and relative subposition specifications can be mixed freely.
Assume the following example (subpositions can always be specified
as hexadecimal numbers, beginning with "0x"):
@ -292,7 +292,7 @@ Fetches", this can be specified the header line
<screen>event: Ir : Instruction Fetches
events: Ir Dr</screen></para>
<para>In this example, "Dr" itself has no long name assoziated. The order of
<para>In this example, "Dr" itself has no long name associated. The order of
"event:" lines and the "events:" line is of no importance. Additionally,
inherited event types can be introduced for which no raw data is available, but
which are calculated from given types. Suppose the last example, you could add
@ -339,7 +339,7 @@ for "Ir and "Dr".</para>
| ('#' NoNewLineChar*)
| CostLine
| PositionSpecification
| AssoziationSpecification</screen>
| AssociationSpecification</screen>
<screen>CostLine := SubPositionList Costs?</screen>
<screen>SubPositionList := (SubPosition+ Space+)+</screen>
<screen>SubPosition := Number | "+" Number | "-" Number | "*"</screen>
@ -349,7 +349,7 @@ for "Ir and "Dr".</para>
<screen>CostPosition := "ob" | "fl" | "fi" | "fe" | "fn"</screen>
<screen>CalledPosition := " "cob" | "cfl" | "cfn"</screen>
<screen>PositionName := ( "(" Number ")" )? (Space* NoNewLineChar* )?</screen>
<screen>AssoziationSpecification := CallSpezification
<screen>AssociationSpecification := CallSpecification
| JumpSpecification</screen>
<screen>CallSpecification := CallLine "\n" CostLine</screen>
<screen>CallLine := "calls=" Space* Number Space+ SubPositionList</screen>
@ -433,7 +433,7 @@ for "Ir and "Dr".</para>
</listitem>
<listitem>
<para><computeroutput>events: event type abbrevations</computeroutput> [Cachegrind]</para>
<para><computeroutput>events: event type abbreviations</computeroutput> [Cachegrind]</para>
<para>A list of short names of the event types logged in this file.
The order is the same as in cost lines. The first event type is the
second or third number in a cost line, depending on the value of

View File

@ -283,7 +283,7 @@ callgrind.out.<emphasis>pid</emphasis>.<emphasis>part</emphasis>-<emphasis>threa
and <option><xref linkend="opt.dump-after"/>=funcprefix</option>.
To zero cost counters before entering a function, use
<option><xref linkend="opt.zero-before"/>=funcprefix</option>.
The prefix method for specifying function names was choosen to
The prefix method for specifying function names was chosen to
ease the use with C++: you don't have to specify full
signatures.</para> <para>You can specify these options multiple
times for different function prefixes.</para>
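As a usage sketch, with the function prefix "MyClass::" and the program name purely illustrative:

  valgrind --tool=callgrind --zero-before=MyClass:: --dump-after=MyClass:: ./myprogram

This zeroes the counters whenever a function whose name starts with the prefix is entered, and writes a dump when such a function returns, which roughly isolates the cost of those calls.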
@ -412,10 +412,10 @@ callgrind.out.<emphasis>pid</emphasis>.<emphasis>part</emphasis>-<emphasis>threa
cut off uninteresting areas.</para>
<para>Despite the meaningless of inclusive costs in cycles, the big
drawback for visualization motivates the possibility to temporarely
drawback for visualization motivates the possibility to temporarily
switch off cycle detection in KCachegrind, which can lead to
misguiding visualization. However, often cycles appear because of
unlucky superposition of independant call chains in a way that
unlucky superposition of independent call chains in a way that
the profile result will see a cycle. Neglecting uninteresting
calls with very small measured inclusive cost would break these
cycles. In such cases, incorrect handling of cycles by not detecting
@ -436,7 +436,7 @@ callgrind.out.<emphasis>pid</emphasis>.<emphasis>part</emphasis>-<emphasis>threa
symbol explosion. The latter imposes large memory requirement for Callgrind
with possible out-of-memory conditions, and big profile data files.</para>
<para>A further possibility to avoid cycles in Callgrinds profile data
<para>A further possibility to avoid cycles in Callgrind's profile data
output is to simply leave out given functions in the call graph. Of course, this
also skips any call information from and to an ignored function, and thus can
break a cycle. Candidates for this typically are dispatcher functions in event
@ -619,7 +619,7 @@ be executed. For interactive control use
</term>
<listitem>
<para>Dump profile data every &lt;count&gt; basic blocks.
Whether a dump is needed is only checked when Valgrinds internal
Whether a dump is needed is only checked when Valgrind's internal
scheduler is run. Therefore, the minimum setting useful is about 100000.
The count is a 64-bit value to make long dump periods possible.
</para>
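For example, assuming the option described here is Callgrind's --dump-every-bb (the option name itself is not visible in this hunk), a hedged invocation with a count comfortably above the ~100000 minimum mentioned above might be:

  valgrind --tool=callgrind --dump-every-bb=10000000 ./myprogram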

View File

@ -193,7 +193,7 @@ collect2: ld returned 1 exit status
much more slowly, but should detect the use of the out-of-date
code.</para>
<para>Alternativaly, if you have the source code to the JIT compiler
<para>Alternatively, if you have the source code to the JIT compiler
you can insert calls to the
<computeroutput>VALGRIND_DISCARD_TRANSLATIONS</computeroutput>
client request to mark out-of-date code, saving you from using
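A minimal sketch of that client request in a hypothetical JIT's code-release path; the function name and its arguments are invented for illustration:

  #include <valgrind/valgrind.h>

  /* Called by the (hypothetical) JIT whenever it frees or overwrites a
     region of generated code, so Valgrind drops its stale translations. */
  static void jit_release_code(void *start, unsigned long len)
  {
      VALGRIND_DISCARD_TRANSLATIONS(start, len);
      /* ... then actually recycle or overwrite the region ... */
  }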
@ -555,7 +555,7 @@ int main(void)
<para>As for eager reporting of copies of uninitialised memory values,
this has been suggested multiple times. Unfortunately, almost all
programs legitimately copy uninitialise memory values around (because
programs legitimately copy uninitialised memory values around (because
compilers pad structs to preserve alignment) and eager checking leads to
hundreds of false positives. Therefore Memcheck does not support eager
checking at this time.</para>

View File

@ -113,7 +113,7 @@ uninitialised value errors, or missing uninitialised value errors. We have
looked in detail into fixing this, and unfortunately the result is that
doing so would give a further significant slowdown in what is already a slow
tool. So the best solution is to turn off optimisation altogether. Since
this often makes things unmanagably slow, a reasonable compromise is to use
this often makes things unmanageably slow, a reasonable compromise is to use
<computeroutput>-O</computeroutput>. This gets you the majority of the
benefits of higher optimisation levels whilst keeping relatively small the
chances of false positives or false negatives from Memcheck. Also, you
@ -422,7 +422,7 @@ distribution, provides some good examples.</para>
<listitem>
<para>Second line: name of the tool(s) that the suppression is for
(if more than one, comma-separated), and the name of the suppression
itself, separated by a colon (Nb: no spaces are allowed), eg:</para>
itself, separated by a colon (n.b.: no spaces are allowed), eg:</para>
<programlisting><![CDATA[
tool_name1,tool_name2:suppression_name]]></programlisting>
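Putting the pieces together, a complete illustrative suppression entry might look like the sketch below: the first line inside the braces is the suppression's descriptive name, the second is the tool-and-suppression-name line described above (here the Memcheck "Cond" kind), and the fun: frame names are made up.

  {
     ignore-this-known-warning
     Memcheck:Cond
     fun:some_library_function
     fun:main
  }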
@ -530,7 +530,7 @@ freely mix <computeroutput>obj:</computeroutput> and
<para>As mentioned above, Valgrind's core accepts a common set of flags.
The tools also accept tool-specific flags, which are documented
seperately for each tool.</para>
separately for each tool.</para>
<para>You invoke Valgrind like this:</para>
@ -732,7 +732,7 @@ categories.</para>
causes the log file name to be qualified using the contents of the
environment variable <computeroutput>$VAR</computeroutput>. This
is useful when running MPI programs. For further details, see
<link linkend="manual-core.comment">Section 2.3 "The Commentary"</link>
<link linkend="manual-core.comment">the commentary</link>
in the manual.
</para>
</listitem>
@ -751,7 +751,7 @@ categories.</para>
be used in conjunction with the
<computeroutput>valgrind-listener</computeroutput> program. For
further details, see
<link linkend="manual-core.comment">Section 2.3 "The Commentary"</link>
<link linkend="manual-core.comment">the commentary</link>
in the manual.</para>
</listitem>
</varlistentry>
@ -890,7 +890,7 @@ that can report errors, e.g. Memcheck, but not Cachegrind.</para>
</listitem>
</varlistentry>
<varlistentry id="opt.gen-suppressions" xreflabel="--gen-supressions">
<varlistentry id="opt.gen-suppressions" xreflabel="--gen-suppressions">
<term>
<option><![CDATA[--gen-suppressions=<yes|no|all> [default: no] ]]></option>
</term>
@ -1096,7 +1096,7 @@ need to use these.</para>
<para>The GNU C library (<function>libc.so</function>), which is
used by all programs, may allocate memory for its own uses.
Usually it doesn't bother to free that memory when the program
ends - there would be no point, since the Linux kernel reclaims
ends&mdash;there would be no point, since the Linux kernel reclaims
all process resources when a process exits anyway, so it would
just slow things down.</para>
@ -1418,7 +1418,7 @@ tool-specific macros).</para>
<term><command><computeroutput>VALGRIND_DESTROY_MEMPOOL</computeroutput>:</command></term>
<listitem>
<para>This should be used in conjunction with
<computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>
<computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>.
Again, see the comments in <filename>valgrind.h</filename> for
information on how to use it.</para>
</listitem>
@ -1428,7 +1428,7 @@ tool-specific macros).</para>
<term><command><computeroutput>VALGRIND_MEMPOOL_ALLOC</computeroutput>:</command></term>
<listitem>
<para>This should be used in conjunction with
<computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>
<computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>.
Again, see the comments in <filename>valgrind.h</filename> for
information on how to use it.</para>
</listitem>
@ -1438,7 +1438,7 @@ tool-specific macros).</para>
<term><command><computeroutput>VALGRIND_MEMPOOL_FREE</computeroutput>:</command></term>
<listitem>
<para>This should be used in conjunction with
<computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>
<computeroutput>VALGRIND_CREATE_MEMPOOL</computeroutput>.
Again, see the comments in <filename>valgrind.h</filename> for
information on how to use it.</para>
</listitem>
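A hedged sketch of how the mempool requests above fit together for a hypothetical arena allocator; the pool layout, offsets and sizes are invented:

  #include <stdlib.h>
  #include <valgrind/valgrind.h>

  int main(void)
  {
      /* The pool "handle" is simply an address Valgrind uses to identify it. */
      char *pool = malloc(4096);
      VALGRIND_CREATE_MEMPOOL(pool, /*rzB=*/0, /*is_zeroed=*/0);

      /* Hand out a 128-byte chunk from the arena and describe it to Memcheck. */
      char *obj = pool + 64;
      VALGRIND_MEMPOOL_ALLOC(pool, obj, 128);
      obj[0] = 'x';                      /* ordinary use of the chunk */

      VALGRIND_MEMPOOL_FREE(pool, obj);  /* the chunk becomes unaddressable */
      VALGRIND_DESTROY_MEMPOOL(pool);    /* forget the pool's bookkeeping   */
      free(pool);
      return 0;
  }

The (pool, rzB, is_zeroed) arguments to VALGRIND_CREATE_MEMPOOL follow the macro's signature in valgrind.h, which is where the text sends you for the details.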
@ -1505,8 +1505,8 @@ tool-specific macros).</para>
<term><command><computeroutput>VALGRIND_STACK_CHANGE(id, start, end)</computeroutput>:</command></term>
<listitem>
<para>Changes a previously registered stack. Informs
Valgrind that the previously registerer stack with stack id
<computeroutput>id</computeroutput> has changed it's start and end
Valgrind that the previously registered stack with stack id
<computeroutput>id</computeroutput> has changed its start and end
values. Use this if your user-level thread package implements
stack growth.</para>
</listitem>
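A sketch of the intended use in a hypothetical user-level thread package; the names and the realloc-based growth strategy are illustrative:

  #include <stdlib.h>
  #include <valgrind/valgrind.h>

  static char     *stack_mem;
  static unsigned  stack_id;

  void thread_stack_create(size_t size)
  {
      stack_mem = malloc(size);
      /* register the [start, end) range of the new stack and keep its id */
      stack_id = VALGRIND_STACK_REGISTER(stack_mem, stack_mem + size);
  }

  void thread_stack_grow(size_t new_size)
  {
      stack_mem = realloc(stack_mem, new_size);
      /* tell Valgrind the previously registered stack now spans the new range */
      VALGRIND_STACK_CHANGE(stack_id, stack_mem, stack_mem + new_size);
  }

VALGRIND_STACK_REGISTER returns the id that VALGRIND_STACK_CHANGE (and VALGRIND_STACK_DEREGISTER) later take.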
@ -1548,7 +1548,7 @@ concurrency, critical race, locking, or similar, bugs.</para>
<para>Your program will use the native
<computeroutput>libpthread</computeroutput>, but not all of its facilities
will work. In particular, synchonisation of processes via shared-memory
will work. In particular, synchronisation of processes via shared-memory
segments will not work. This relies on special atomic instruction sequences
which Valgrind does not emulate in a way which works between processes.
Unfortunately there's no way for Valgrind to warn when this is happening,
@ -1599,7 +1599,7 @@ will create a core dump in the usual way.</para>
<title>Function wrapping</title>
<para>
Valgrind versions 3.2.0 and above and can do function wrapping on all
Valgrind versions 3.2.0 and above can do function wrapping on all
supported targets. In function wrapping, calls to some specified
function are intercepted and rerouted to a different, user-supplied
function. This can do whatever it likes, typically examining the
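The wrapping idiom itself looks roughly like the following sketch, wrapping a hypothetical function "int foo(int)" in the main executable; the printf calls are only there to show where the wrapper runs:

  #include <stdio.h>
  #include <valgrind/valgrind.h>

  /* Wrapper for: int foo(int x), defined in the main executable (soname NONE). */
  int I_WRAP_SONAME_FNNAME_ZU(NONE, foo)(int x)
  {
      int    result;
      OrigFn fn;
      VALGRIND_GET_ORIG_FN(fn);               /* locate the real foo          */
      printf("wrapper: foo(%d) called\n", x);
      CALL_FN_W_W(result, fn, x);             /* call it: word result, 1 word arg */
      printf("wrapper: foo returned %d\n", result);
      return result;
  }

CALL_FN_W_W is the variant for a word-sized result and one word-sized argument; valgrind.h provides similar macros for other arities.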
@ -2197,7 +2197,7 @@ following constraints:</para>
programs behave as if they had been run on a machine with 64-bit IEEE
floats, for example PowerPC. On amd64 FP arithmetic is done by
default on SSE2, so amd64 looks more like PowerPC than x86 from an FP
perspective, and there are far fewer noticable accuracy differences
perspective, and there are far fewer noticeable accuracy differences
than with x86.</para>
<para>Rounding: Valgrind does observe the 4 IEEE-mandated rounding
@ -2212,7 +2212,7 @@ following constraints:</para>
negative number, etc), division by zero, overflow, underflow,
inexact (loss of precision).</para>
<para>For each exception, two courses of action are defined by 754:
<para>For each exception, two courses of action are defined by IEEE754:
either (1) a user-defined exception handler may be called, or (2) a
default action is defined, which "fixes things up" and allows the
computation to proceed without throwing an exception.</para>

View File

@ -153,7 +153,7 @@ and profiling. This manual is structured similarly.</para>
it supports. Then, each tool has its own chapter in this manual. You
only need to read the documentation for the core and for the tool(s) you
actually use, although you may find it helpful to be at least a little
bit familar with what all tools do. If you're new to all this, you probably
bit familiar with what all tools do. If you're new to all this, you probably
want to run the Memcheck tool. The final chapter explains how to write a
new tool.</para>

View File

@ -79,7 +79,7 @@ leaks.</para>
<listitem>
<para><option>callgrind</option> adds call graph tracing to cachegrind. It can be
used to get call counts and inclusive cost for each call happening in your
program. In addition to cachegrind, callgrind can annotate threads separatly,
program. In addition to cachegrind, callgrind can annotate threads separately,
and every instruction of disassembler output of your program with the number of
instructions executed and cache misses incurred.</para>
</listitem>

View File

@ -1,6 +1,7 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[ <!ENTITY % vg-entities SYSTEM "../../docs/xml/vg-entities.xml"> %vg-entities; ]>
<chapter id="hg-manual" xreflabel="Helgrind: thread error detector">
@ -320,7 +321,7 @@ sections below explain them. Here we merely note their presence:</para>
points of these two threads, so you can see which threads it is
referring to.</para>
</listitem>
<listitem><para>Helgrind tries to provide an explaination of why the
<listitem><para>Helgrind tries to provide an explanation of why the
race exists: "<computeroutput>Location 0x601034 has never been
protected by any lock</computeroutput>".</para>
</listitem>
@ -878,7 +879,7 @@ of false data-race errors.</para>
<para>Make sure your application, and all the libraries it uses,
use the POSIX threading primitives. Helgrind needs to be able to
see all events pertaining to thread creation, exit, locking and
other syncronisation events. To do so it intercepts many POSIX
other synchronisation events. To do so it intercepts many POSIX
pthread_ functions.</para>
<para>Do not roll your own threading primitives (mutexes, etc)

View File

@ -1,6 +1,7 @@
<?xml version="1.0"?> <!-- -*- sgml -*- -->
<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd">
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[ <!ENTITY % vg-entities SYSTEM "../../docs/xml/vg-entities.xml"> %vg-entities; ]>
<chapter id="ms-manual" xreflabel="Massif: a heap profiler">

View File

@ -167,7 +167,7 @@ the following problems:</para>
<listitem>
<para>Controls how <constant>memcheck</constant> handles word-sized,
word-aligned loads from addresses for which some bytes are
addressible and others are not. When <varname>yes</varname>, such
addressable and others are not. When <varname>yes</varname>, such
loads do not produce an address error. Instead, loaded bytes
originating from illegal addresses are marked as uninitialised, and
those corresponding to legal addresses are handled in the normal
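As a hedged illustration of the case being described, assuming a 64-bit host so that a long is a word-sized, word-aligned load (malloc guarantees the alignment):

  #include <stdlib.h>

  int main(void)
  {
      char *p = malloc(6);        /* bytes 0..5 are addressable            */
      long  w = *(long *)p;       /* aligned word-sized load over bytes    */
                                  /* 0..7, of which 6 and 7 are not        */
                                  /* addressable                           */
      free(p);
      return (int)(w & 0);        /* w & 0 is fully defined, so no          */
                                  /* uninitialised-value error here         */
  }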
@ -418,12 +418,12 @@ permissions</title>
</listitem>
<listitem>
<para>Also, if a system call needs to read from a buffer provided by
your program, Memcheck checks that the entire buffer is addressible
your program, Memcheck checks that the entire buffer is addressable
and has valid data, ie, it is readable.</para>
</listitem>
<listitem>
<para>Also, if the system call needs to write to a user-supplied
buffer, Memcheck checks that the buffer is addressible.</para>
buffer, Memcheck checks that the buffer is addressable.</para>
</listitem>
</itemizedlist>
</para>
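A hedged sketch of the read-from-buffer checks described in this list (file descriptor 1 assumed open, as usual):

  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      char *buf = malloc(10);   /* addressable, but uninitialised            */
      write(1, buf, 10);        /* the syscall reads the buffer: Memcheck    */
                                /* checks it is addressable and initialised, */
                                /* and complains about the uninitialised data */
      write(1, buf, 20);        /* the last 10 bytes are not addressable:    */
                                /* Memcheck complains about that as well     */
      free(buf);
      return 0;
  }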
@ -937,7 +937,7 @@ follows:</para>
<para>This apparently strange choice reduces the amount of confusing
information presented to the user. It avoids the unpleasant
phenomenon in which memory is read from a place which is both
unaddressible and contains invalid values, and, as a result, you get
unaddressable and contains invalid values, and, as a result, you get
not only an invalid-address (read/write) error, but also a
potentially large set of uninitialised-value errors, one for every
time the value is used.</para>
@ -958,24 +958,24 @@ is:</para>
<itemizedlist>
<listitem>
<para>malloc/new/new[]: the returned memory is marked as addressible
<para>malloc/new/new[]: the returned memory is marked as addressable
but not having valid values. This means you have to write to it
before you can read it.</para>
</listitem>
<listitem>
<para>calloc: returned memory is marked both addressible and valid,
<para>calloc: returned memory is marked both addressable and valid,
since calloc clears the area to zero.</para>
</listitem>
<listitem>
<para>realloc: if the new size is larger than the old, the new
section is addressible but invalid, as with malloc.</para>
section is addressable but invalid, as with malloc.</para>
</listitem>
<listitem>
<para>If the new size is smaller, the dropped-off section is marked
as unaddressible. You may only pass to realloc a pointer previously
as unaddressable. You may only pass to realloc a pointer previously
issued to you by malloc/calloc/realloc.</para>
</listitem>
@ -983,7 +983,7 @@ is:</para>
<para>free/delete/delete[]: you may only pass to these functions a
pointer previously issued to you by the corresponding allocation
function. Otherwise, Memcheck complains. If the pointer is indeed
valid, Memcheck marks the entire area it points at as unaddressible,
valid, Memcheck marks the entire area it points at as unaddressable,
and places the block in the freed-blocks-queue. The aim is to defer
as long as possible reallocation of this block. Until that happens,
all attempts to access it will elicit an invalid-address error, as
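The states described in this list can be summarised with a short hedged sketch; the comments simply restate the behaviour described above:

  #include <stdlib.h>

  int main(void)
  {
      char *a = malloc(10);     /* bytes 0..9: addressable, values undefined   */
      char *b = calloc(10, 1);  /* bytes 0..9: addressable and valid (zeroed)  */

      a = realloc(a, 20);       /* bytes 10..19: addressable, values undefined */
      a = realloc(a, 5);        /* dropped bytes 5..19: now unaddressable      */

      free(a);                  /* whole block unaddressable, placed in the    */
      free(b);                  /* freed-blocks queue to catch stale accesses  */
      return 0;
  }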

View File

@ -708,7 +708,7 @@ follows:</para>
do stores and loads of V bits to/from the sparse array which
keeps track of V bits in memory, and
<computeroutput>VGM_(handle_esp_assignment)</computeroutput>,
which messes with memory addressibility resulting from
which messes with memory addressability resulting from
changes in <computeroutput>%ESP</computeroutput>.</para>
</listitem>
@ -1185,7 +1185,7 @@ express the instrumentation. The former group contains:</para>
<listitem>
<para><computeroutput>LEA1</computeroutput> and
<computeroutput>LEA2</computeroutput> are not strictly
necessary, but allow faciliate better translations. They
necessary, but facilitate better translations. They
record the fancy x86 addressing modes in a direct way, which
allows those amodes to be emitted back into the final
instruction stream more or less verbatim.</para>
@ -1302,7 +1302,7 @@ uopcodes are as follows:</para>
from the synthesised shadow memory that Valgrind maintains.
In fact they do more than that, since they also do
address-validity checks, and emit complaints if the
read/written addresses are unaddressible.</para>
read/written addresses are unaddressable.</para>
</listitem>
<listitem>
@ -1716,7 +1716,7 @@ transformations are done:</para>
because it is vital the instrumenter always has an up-to-date
<computeroutput>%ESP</computeroutput> value available,
<computeroutput>%ESP</computeroutput> changes affect
addressibility of the memory around the simulated stack
addressability of the memory around the simulated stack
pointer.</para>
<para>The implication of the above paragraph is that the
@ -2594,9 +2594,9 @@ if elements are used before they get new values.</para>
VALGRIND_MAKE_READABLE(addr, len)]]></programlisting>
<para>and also, to check that memory is
addressible/initialised,</para>
addressable/initialised,</para>
<programlisting><![CDATA[
VALGRIND_CHECK_ADDRESSIBLE(addr, len)
VALGRIND_CHECK_ADDRESSABLE(addr, len)
VALGRIND_CHECK_INITIALISED(addr, len)]]></programlisting>
<para>I then include in my sources a header defining these
@ -2691,7 +2691,7 @@ the error.</para>
run it on post-CPP'd C/C++ source. The parser/prettyprinter
is probably not as hard as it sounds; I would write it in Haskell,
a powerful functional language well suited to doing symbolic
computation, with which I am intimately familar. There is
computation, with which I am intimately familiar. There is
already a C parser written in Haskell by someone in the
Haskell community, and that would probably be a good starting
point.</para>