This ioctl argument struct has never had such a member.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15386
More recent Xen toolstacks use this for the SID_TO_CONTEXT operation
only, even when XSM is not in use.
XSM is actually an abstraction layer, of which the only current
implementation is FLASK. So this blindly assumes that the backend is
FLASK. Should another XSM backend be invented then we will have to
sort out how to detect the correct one.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15384
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15381
XENMEM_machphys_compat_mfn_list is functionally identical to
XENMEM_machphys_mfn_list but returns a different list from Xen.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15379
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15373
The XEN_DOMCTL_[gs]et_vcpu_msrs work similarly to the other get/set pairs,
taking a vcpu, buffer and size. A query with a buffer of NULL is a request
for the maximum size.
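As a hedged illustration of that query pattern (the wrapper name get_vcpu_msrs
and the element size below are hypothetical; the real interface and struct
layout are defined by the Xen domctl headers, not here):

  #include <stdint.h>
  #include <stdlib.h>

  /* Hypothetical wrapper around the XEN_DOMCTL_get_vcpu_msrs hypercall; on
   * success it stores up to *count MSR records in buf and updates *count. */
  int get_vcpu_msrs(uint32_t domid, uint32_t vcpu, void *buf, uint32_t *count);

  static void *fetch_all_vcpu_msrs(uint32_t domid, uint32_t vcpu,
                                   uint32_t *count, size_t elem_size)
  {
      void *buf;
      /* Query with a NULL buffer: the hypercall replies with the maximum
       * number of MSRs instead of copying anything out. */
      *count = 0;
      if (get_vcpu_msrs(domid, vcpu, NULL, count) != 0 || *count == 0)
          return NULL;
      /* Second call with a buffer sized for that maximum. */
      buf = calloc(*count, elem_size);
      if (buf != NULL && get_vcpu_msrs(domid, vcpu, buf, count) != 0) {
          free(buf);
          buf = NULL;
      }
      return buf;
  }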
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15370
The VKI_XEN_DOMCTL_[gs]et_ext_vcpucontext hypercalls have had interface
changes, but are largely just extensions of the existing structure.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15369
The change causing the sysctl bump is not in an implemented subop yet, so no
change is required. The change causing the domctl bump is in an implemented
subop, but has also been reverted in favor of a different way of performing
the same actions. Therefore, there is no net difference.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15365
Currently, each SecMap has an array of linesF, referenced by the linesZ
of the secmap that needs a lineF, via an index stored in dict[1].
When the array is full, its size is doubled.
The linesF array of a secmap is freed when the SecMap is GC-ed.
The above strategy has the following consequences:
A. on average, 25% of the linesF are unused.
B. if a SecMap 'temporarily' needs some linesF, but these linesF are
afterwards converted back to the normal lineZ representation, the linesF
will not be reclaimed unless the SecMap is GC-ed (i.e. fully marked
no access).
The patch replaces the per-SecMap private linesF array
by a pool allocator of LineF shared between all SecMaps.
A lineZ that needs a lineF now points directly to its lineF (using a pointer
stored in dict[1]), instead of storing in dict[1] an index into the SecMap
linesF array.
When a lineZ needs a lineF, one is allocated from the pool allocator.
When a lineZ no longer needs a lineF, it is returned to the
pool allocator.
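A rough sketch of the change, using simplified versions of the types in
helgrind/libhb_core.c (field and constant names here are abbreviations and
may not match the real code exactly):

  /* Before: each SecMap owned a private, growable array of LineF, and a
   * LineZ needing a LineF stored an *index* into that array in dict[1]. */
  typedef struct { SVal dict[4]; /* ... */ } LineZ;
  typedef struct { Bool inUse; SVal w64s[N_LINE_ARANGE]; } LineF;
  typedef struct {
     LineZ  linesZ[N_SECMAP_ZLINES];
     LineF* linesF;        /* doubled when full, freed only when GC-ed */
     UInt   linesF_size;
  } SecMap;

  /* After: all LineF come from one pool allocator shared by every SecMap,
   * and dict[1] of a LineZ holds a *pointer* to its LineF. */
  static PoolAlloc* lineF_pool;   /* created once with VG_(newPA) */

  static LineF* alloc_LineF_for_Z ( LineZ* lineZ ) {
     LineF* lineF = VG_(allocEltPA)( lineF_pool );
     lineZ->dict[1] = (SVal)(UWord)lineF;     /* direct pointer, no index */
     return lineF;
  }
  static void clear_LineF_of_Z ( LineZ* lineZ ) {
     VG_(freeEltPA)( lineF_pool, (void*)(UWord)lineZ->dict[1] );
     lineZ->dict[1] = 0;
  }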
On a firefox startup, the above strategy reduces the memory for linesF
by about 42Mb. It seems that the more firefox is used (e.g. to visit
a few websites), the bigger the memory gain.
After opening the home page of valgrind, wikipedia and google, the memory
gain is about 94Mb:
trunk:
linesF: 392,181 allocd ( 203,934,120 bytes occupied) ( 173,279 used)
patch:
linesF: 212,966 allocd ( 109,038,592 bytes occupied) ( 170,252 used)
There are also fewer alloc/free operations in the core arena with the patch:
trunk:
core : 810,680,320/ 802,291,712 max/curr mmap'd, 17/19 unsplit/split sb unmmap'd, 759,441,224/ 703,191,896 max/curr, 40631760/16376828248 totalloc-blocks/bytes, 188015696 searches 8 rzB
patch:
core : 701,628,416/ 690,753,536 max/curr mmap'd, 12/29 unsplit/split sb unmmap'd, 643,041,944/ 577,793,712 max/curr, 32050040/14056017712 totalloc-blocks/bytes, 174097728 searches 8 rzB
In terms of performance, no CPU impact detected on Firefox startup.
Note we have no representative, reproducible (and preferably small)
perf test that uses linesF extensively. Firefox is a good heavy lineF
user but is far from reproducible, and is very far from small.
Theoretically, in terms of CPU performance, the patch might have some
small benefits here and there for read operations, as the lineF pointer
is retrieved directly from the lineZ, rather than via an indirection
through the linesF array.
For write operations, the patch might need a little more CPU,
as an assignment of the lineF inUse boolean to False (and then probably back
to True when the cacheline is written back) is replaced
by a call to the pool allocator VG_(freeEltPA) (and then probably a call to
VG_(allocEltPA) when the cacheline is written back).
These PA functions are small, so the cost should be ok.
We might however still maintain in clear_LineF_of_Z the last cleared lineF
and re-use it in alloc_LineF_for_Z. It is not clear how many calls to the PA
functions would be avoided by this '1 elt cache' (and by the needed
'if elt == NULL' check in both clear_LineF_of_Z and alloc_LineF_for_Z).
This possible optimisation will be looked at later.
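A possible shape for that '1 elt cache', continuing the sketch above (again
only illustrative, not code from this patch):

  static LineF* last_cleared = NULL;   /* at most one parked LineF */

  static void clear_LineF_of_Z ( LineZ* lineZ ) {
     LineF* lineF = (LineF*)(UWord)lineZ->dict[1];
     if (last_cleared == NULL)
        last_cleared = lineF;                       /* park it for reuse */
     else
        VG_(freeEltPA)( lineF_pool, lineF );
     lineZ->dict[1] = 0;
  }

  static LineF* alloc_LineF_for_Z ( LineZ* lineZ ) {
     LineF* lineF;
     if (last_cleared != NULL) {                    /* reuse the parked element */
        lineF = last_cleared;
        last_cleared = NULL;
     } else {
        lineF = VG_(allocEltPA)( lineF_pool );
     }
     lineZ->dict[1] = (SVal)(UWord)lineF;
     return lineF;
  }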
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15253
when the size is known in advance.
Three places were identified where this function can be used trivially.
The result is a reduction of 'realloc' operations in the core
arena, and a small reduction in the ttaux arena
(it is the number of operations that decreases; the memory usage itself
stays the same, ignoring some 'rounding' effects).
E.g. for perf/bigcode 0, we change from
core 1085742/ 216745904 totalloc-blocks/bytes, 1085733 searches
ttaux 5348/ 6732560 totalloc-blocks/bytes, 5326 searches
to
core 712666/ 190998592 totalloc-blocks/bytes, 712657 searches
ttaux 5319/ 6731808 totalloc-blocks/bytes, 5296 searches
For bz2, we switch from
core 50285/ 32383664 totalloc-blocks/bytes, 50256 searches
ttaux 670/ 245160 totalloc-blocks/bytes, 669 searches
to
core 32564/ 29971984 totalloc-blocks/bytes, 32535 searches
ttaux 605/ 243280 totalloc-blocks/bytes, 604 searches
Performance-wise, on amd64, this improves memcheck performance
on the perf tests by 0.0, 0.1 or 0.2 seconds, depending on the test.
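For illustration only (the truncated message above does not show the name of
the new function), the effect of pre-sizing when the final size is known in
advance, in plain C:

  #include <stdlib.h>
  #include <string.h>

  /* Growing one element at a time: up to n realloc calls in the arena. */
  int* grow_incrementally ( const int* src, size_t n ) {
     int* v = NULL;
     for (size_t i = 0; i < n; i++) {
        int* w = realloc(v, (i + 1) * sizeof *v);
        if (w == NULL) { free(v); return NULL; }
        v = w;
        v[i] = src[i];
     }
     return v;
  }

  /* Pre-sized: a single allocation, since n is known up front. */
  int* grow_presized ( const int* src, size_t n ) {
     int* v = malloc(n * sizeof *v);
     if (v != NULL)
        memcpy(v, src, n * sizeof *v);
     return v;
  }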
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15173
Valgrind aspects, to match vex r3124.
See bug 339778 - Linux/TileGx platform support to Valgrind
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15080
named 'now' with the following contents:
#!
/bin/date
the platform selection logic does this:
--11196:1:launcher no tool requested, defaulting to 'memcheck'
--11196:2:launcher selecting platform for './now'
--11196:2:launcher selecting platform for './now'
--11196:2:launcher opened './now'
--11196:2:launcher read 13 bytes from './now'
--11196:2:launcher selecting platform for ''
--11196:2:launcher selecting platform for '/home/florian/bin/'
--11196:2:launcher opened '/home/florian/bin/'
--11196:2:launcher selected platform 'unknown'
--11196:1:launcher no platform detected, defaulting platform to 'amd64-linux'
That is not quite right. Instead the platform should be determined by
examining the default shell.
Additionally, define VKI_BINPRM_BUF_SIZE because on Linux only that many
characters are considered on a #! line. Cf. <linux>/fs/binfmt_script.c
m_ume/* needs to be adapted as well but that is a different patch.
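A rough sketch of the parsing rule involved (simplified; the real logic lives
in the launcher and m_ume, and the constant's value should be taken from the
kernel headers rather than from here):

  #include <stdio.h>
  #include <string.h>

  #define VKI_BINPRM_BUF_SIZE 128   /* assumed value; cf. <linux>/fs/binfmt_script.c */

  /* Returns 1 and copies the interpreter into interp[] if the first line is a
   * usable '#!' line, 0 otherwise (e.g. for the './now' script above, whose
   * '#!' line names no interpreter). */
  static int parse_shebang ( const char* hdr, size_t len,
                             char* interp, size_t interp_size )
  {
     char buf[VKI_BINPRM_BUF_SIZE + 1];
     const char* p;
     if (len < 2 || hdr[0] != '#' || hdr[1] != '!')
        return 0;
     if (len > VKI_BINPRM_BUF_SIZE)
        len = VKI_BINPRM_BUF_SIZE;              /* the kernel ignores the rest */
     memcpy(buf, hdr, len);
     buf[len] = '\0';
     buf[strcspn(buf, "\n")] = '\0';            /* only the first line counts */
     p = buf + 2;
     while (*p == ' ' || *p == '\t') p++;       /* skip blanks after '#!' */
     if (*p == '\0')
        return 0;                               /* '#!' with no interpreter */
     snprintf(interp, interp_size, "%.*s", (int)strcspn(p, " \t"), p);
     return 1;
  }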
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@15068
Some architectures, e.g. s390, don't have dedicated recvmmsg and sendmmsg
system calls, but use the socketcall multiplexing system call with
SYS_RECVMMSG or SYS_SENDMMSG (just like the accept4 system call can also
be called through socketcall). Create separate helpers for recvmmsg and
sendmmsg that can be used by either the direct system calls or the
socketcall path.
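A simplified sketch of the resulting structure (the helper name and some
details are illustrative, not the exact code in coregrind/m_syswrap):

  /* Shared helper: performs the PRE_MEM_READ/PRE_MEM_WRITE checks on the
   * recvmmsg arguments, whichever way the syscall was entered. */
  static void handle_recvmmsg ( ThreadId tid, UWord fd, UWord msgvec,
                                UWord vlen, UWord flags, UWord timeout )
  {
     /* ... check msgvec[0..vlen-1] and the optional timeout struct ... */
  }

  /* Architectures with a dedicated __NR_recvmmsg syscall. */
  PRE(sys_recvmmsg)
  {
     handle_recvmmsg(tid, ARG1, ARG2, ARG3, ARG4, ARG5);
  }

  /* Architectures (e.g. s390) that multiplex through sys_socketcall:
   * the real arguments are packed in memory, pointed to by ARG2. */
  PRE(sys_socketcall)
  {
     switch (ARG1) {
        case VKI_SYS_RECVMMSG: {
           UWord* a = (UWord*)ARG2;
           handle_recvmmsg(tid, a[0], a[1], a[2], a[3], a[4]);
           break;
        }
        /* ... other SYS_* operations ... */
     }
  }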
git-svn-id: svn://svn.valgrind.org/valgrind/trunk@14964