[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]
[[!tag open_issue_documentation]]
IRC, freenode, #hurd, 2011-01-06
<antrik> hm, odd... vmstat tells me that ~500 MiB of RAM are in use; but
the sum of all RSS is <300 MiB... what's the rest?
<braunr> kernel memory ?
<braunr> the zone allocator maybe
<braunr> or the page cache simply
<antrik> braunr: which page cache? AIUI, caches are implemented by the
individual filesystem servers -- in which case any memory used by them
should show up in RSS
<antrik> also, gnumach is listed among other tasks, so I'd assume the
kernel memory also to be accounted for
<braunr> antrik: no, the kernel maintains a page cache, very similar to
what is done in Linux, and almost the same as in FreeBSD
<braunr> the file system servers are just backing stores
<braunr> the RSS for the gnumach tasks only includes kernel memory
<braunr> I don't think the page cache is accounted for
<braunr> because it's not really kernel memory, it's a cache of user space
memory
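
A small experiment can make this visible. A minimal sketch, assuming an
existing readable file at the hypothetical path /tmp/testfile: faulting
file-backed pages in through a mapping puts them in Mach's page cache
(watch "active"/"inactive" grow in vmstat), while the filesystem server
merely provides the backing store and its RSS need not grow by the same
amount.

    /* Sketch: fault file-backed pages into Mach's page cache.
       Assumes an existing, non-empty, readable file at /tmp/testfile
       (a hypothetical test file).  */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int
    main (void)
    {
      long pagesize = sysconf (_SC_PAGESIZE);
      int fd = open ("/tmp/testfile", O_RDONLY);
      struct stat st;
      if (fd < 0 || fstat (fd, &st) < 0 || st.st_size == 0)
        return 1;
      char *p = mmap (NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      if (p == MAP_FAILED)
        return 1;
      long sum = 0;
      /* Each first access to a page faults it into the kernel's cache.  */
      for (off_t i = 0; i < st.st_size; i += pagesize)
        sum += p[i];
      printf ("touched %ld pages (checksum %ld)\n",
              (long) ((st.st_size + pagesize - 1) / pagesize), sum);
      munmap (p, st.st_size);
      close (fd);
      return 0;
    }
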
<antrik> apparently my understanding of Mach paging is still (or again?)
rather incomplete :-(
<antrik> BTW, is there any way to find out how much anonymous memory a
process is using? the "virtual" includes discardable mappings, and is
thus not very helpful...
<antrik> (that applies to Linux as well though)
<braunr> can you provide an example of the output of vmstat please ?
<braunr> I don't have a Hurd VM near me
<antrik> olaf@alien:~$ vmstat
<antrik> pagesize: 4K
<antrik> size: 501M
<antrik> free: 6.39M
<antrik> active: 155M
<antrik> inactive: 310M
<antrik> wired: 29.4M
<antrik> zero filled: 15.3G
<antrik> reactivated: 708M
<antrik> pageins: 3.43G
<antrik> pageouts: 1.55G
<antrik> page faults: 26844574
<antrik> cow faults: 3736174
<antrik> memobj hit ratio: 92%
<antrik> swap size: 733M
<antrik> swap free: 432M
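
For context: the counters above (down to the memobj hit ratio) come from
GNU Mach's vm_statistics RPC; the swap figures are queried separately,
from the default pager. A minimal sketch printing a few of those
counters:

    /* Minimal sketch: read the counters vmstat prints from GNU Mach's
       vm_statistics RPC (the swap size/free figures come from the
       default pager instead and are not shown here).  */
    #include <stdio.h>
    #include <mach.h>

    int
    main (void)
    {
      struct vm_statistics vmstats;
      if (vm_statistics (mach_task_self (), &vmstats) != KERN_SUCCESS)
        return 1;
      printf ("pagesize:    %d\n", vmstats.pagesize);
      printf ("free:        %d pages\n", vmstats.free_count);
      printf ("active:      %d pages\n", vmstats.active_count);
      printf ("inactive:    %d pages\n", vmstats.inactive_count);
      printf ("wired:       %d pages\n", vmstats.wired_count);
      printf ("zero filled: %d pages\n", vmstats.zero_fill_count);
      printf ("pageins:     %d\n", vmstats.pageins);
      printf ("pageouts:    %d\n", vmstats.pageouts);
      return 0;
    }
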
<antrik> interesting... closing a single screen window temporarily raises
the "free" value by almost 10 MB
<antrik> I guess bash is rather hungry nowadays ;-)
<braunr> antrik: I guess the only way is using pmap or looking into
/proc/<pid>/maps
<braunr> but it won't give you the amount of physical memory used by
anonymous mappings
<antrik> nah, I don't even want that... just like to know how much memory
(RAM+swap) a process is really using
<braunr> antrik: then the RSS field is what you want
<antrik> OTOH, anonymous doesn't include program code or other actively
used mappings... so not very useful either
<antrik> nah, RSS doesn't count anything that is in swap
<braunr> well
<braunr> don't you have a SWAP column ?
<braunr> hm
<braunr> i guess not
<braunr> antrik: why do you say it doesn't include other actively used
mappings ?
<braunr> antrik: and the inclusion of program code also depends on the
implementation of the ELF handler
<braunr> I don't know how the hurd does that, but some ELF loaders use
anonymous memory for the execution view
<antrik> well, if a program maps a data file, and regularly accesses parts
of the file, they won't occupy physical RAM all the time (and show up in
RSS), but they are not anonymous mappings. similar to program code
<braunr> then this anonymous memory is shared by all processes using that
code
<antrik> oh, interesting
<antrik> is it really a completely distinct mapping, rather than just COW?
<braunr> the first is
<braunr> others are COW
<antrik> so if a program loads 200 MB of libraries, they are all read in on
startup, and occupy RAM or swap subsequently, even if most of the code is
never actually run?...
<kilobug> library code should be backed by the library file on disk, not be
swap
<braunr> depends on the implementation
<braunr> I guess most use the file system backend
<braunr> but in the Hurd, ext2fs.static and ld.so.1 use anonymous memory
<braunr> (that's the case for another reason, still, I don't think the
report in top/ps clearly indicates that fact)
<kilobug> braunr: yeah for bootstrapping issues, makes sense
<braunr> it may also depend on the pic/pie options used when building
libraries
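
Following up on the pmap suggestion above: per-task mappings can be
enumerated with Mach's vm_region call, which is roughly what tools like
pmap rely on. A minimal sketch for the caller's own task:

    /* Sketch: enumerate the caller's own mappings with vm_region,
       roughly the information pmap and /proc/<pid>/maps report.  */
    #include <stdio.h>
    #include <mach.h>

    int
    main (void)
    {
      vm_address_t addr = 0;
      for (;;)
        {
          vm_size_t size;
          vm_prot_t prot, max_prot;
          vm_inherit_t inherit;
          boolean_t shared;
          memory_object_name_t obj;
          vm_offset_t offset;
          if (vm_region (mach_task_self (), &addr, &size, &prot,
                         &max_prot, &inherit, &shared, &obj, &offset)
              != KERN_SUCCESS)
            break;
          printf ("0x%08lx-0x%08lx %c%c%c %s\n",
                  (unsigned long) addr, (unsigned long) (addr + size),
                  (prot & VM_PROT_READ) ? 'r' : '-',
                  (prot & VM_PROT_WRITE) ? 'w' : '-',
                  (prot & VM_PROT_EXECUTE) ? 'x' : '-',
                  shared ? "shared" : "private");
          if (obj != MACH_PORT_NULL)
            mach_port_deallocate (mach_task_self (), obj);
          addr += size;
        }
      return 0;
    }
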
IRC, freenode, #hurd, 2011-07-24
< braunr> the panic is probably due to memory shortage
< braunr> so as antrik suggested, use more swap
< antrik> gg0: you could run "vmstat 1" in another terminal to watch memory
usage
< antrik> that way we will know for sure whether it's related
< braunr> antrik: it's trickier than that
< braunr> it depends if the zones used are pageable
< antrik> braunr: well, if it's a zone map exhaustion, then the swap size
won't change anything?...
< braunr> antrik: in this case no, but if the zone is pageable and the
pager (backing anonymous memory) refuses to create memory because it
estimates it's full (all swap space is reserved), it will fail too
< braunr> but i don't think there are many pageable zones in the kernel
< antrik> yes, but in that case we can see the exhaustion in vmstat :-)
< braunr> i'm not sure
< braunr> reserved swap space doesn't mean it's used
< braunr> that's one of the major changes in freebsd 4 or 5 i was
mentioning
< antrik> if it's reserved, it wouldn't show up as "free", would it?...
< braunr> (btw, it's also what makes anonymous memory merging so hard)
< braunr> yes it would
< braunr> well, it could, i'm not sure
< braunr> anonymous memory is considered as a file
< braunr> one big file filled with zeroes, which is the swap partition
< braunr> when you allocate pageable anonymous memory, a part of this
"file" is reserved
< braunr> but i don't know if the reported number is the reserved
(allocated) space, or used (actually containing data)
< braunr> i also suspect wired allocations can fail because of a full swap
(because the kernel is unable to make free pages)
< braunr> in this case vmstat will show it
< antrik> what does it matter whether there is data there or not? if it's
reserved, it's not free. if it behaves differently, I'd consider that a
serious bug
< braunr> maybe the original developers intended to monitor its actual
usage
< braunr> antrik: i've just checked how the free count gets updated, and it
looks like it is updated on both seqnos_memory_object_data_initialize
and seqnos_memory_object_data_write
< braunr> antrik: so i guess reserved memory is accounted for
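
To restate the reservation point in code: the toy model below is an
illustration only (not gnumach or default-pager code). It separates
reserving swap at allocation time from using it at pageout time; a
"free" figure computed against reservations drops, and further
allocations can be refused, before much data actually reaches the swap
partition.

    /* Toy model only: swap as one big zero-filled "file".  Space is
       *reserved* when pageable anonymous memory is allocated and *used*
       only when pages are actually written out, so reserved >= used.  */
    #include <stdio.h>

    struct swap
    {
      unsigned long total;     /* size of the swap partition, in pages */
      unsigned long reserved;  /* pages promised to anonymous memory */
      unsigned long used;      /* pages actually containing data */
    };

    /* Reserve space for a new pageable anonymous allocation; fails when
       all swap is promised, even if little of it holds data yet.  */
    static int
    swap_reserve (struct swap *s, unsigned long pages)
    {
      if (s->reserved + pages > s->total)
        return -1;               /* "full": allocation refused */
      s->reserved += pages;
      return 0;
    }

    /* Pageout of one previously reserved page: it now holds data.  */
    static void
    swap_pageout (struct swap *s)
    {
      if (s->used < s->reserved)
        s->used++;
    }

    int
    main (void)
    {
      struct swap s = { .total = 1000, .reserved = 0, .used = 0 };
      swap_reserve (&s, 800);    /* large anonymous allocation */
      swap_pageout (&s);         /* only one page written out so far */
      printf ("free (by reservation): %lu pages\n", s.total - s.reserved);
      printf ("free (by actual use):  %lu pages\n", s.total - s.used);
      return 0;
    }
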