[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]
[[!taglink open_issue_documentation]]
A bunch of this should also be covered in other (introductory) material,
like Bushnell's Hurd paper. All this should be unified and streamlined.
IRC, freenode, #hurd, 2011-03-08:
<foocraft> I've a question on what are the "units" in the hurd project, if
you were to divide them into units if they aren't, and what are the
dependency relations between those units(roughly, nothing too pedantic
for now)
<antrik> there is GNU Mach (the microkernel); there are the server
libraries in the Hurd package; there are the actual servers in the same;
and there is the POSIX implementation layer in glibc
<antrik> relations are a bit tricky
<antrik> Mach is the base layer which implements IPC and memory management
<foocraft> hmm I'll probably allocate time for dependency graph generation,
in the worst case
<antrik> on top of this, the Hurd servers, using the server libraries,
implement various aspects of the system functionality
<antrik> client programs use libc calls to use the servers
<antrik> (servers also use libc to communicate with other servers and/or
Mach though)
<foocraft> so every server depends solely on mach, and no other server?
<foocraft> s/mach/mach and/or libc/
<antrik> I think these things should be pretty clear once you are somewhat
familiar with the Hurd architecture... nothing really tricky there
<antrik> no
<antrik> servers often depend on other servers for certain functionality
---
IRC, freenode, #hurd, 2011-03-12:
<dEhiN> when mach first starts up, does it have some basic i/o or fs
functionality built into it to start up the initial hurd translators?
<antrik> I/O is presently completely in Mach
<antrik> filesystems are in userspace
<antrik> the root filesystem and exec server are loaded by grub
<dEhiN> o I see
<dEhiN> so in order to start hurd, you would have to start mach and
simultaneously start the root filesystem and exec server?
<antrik> not exactly
<antrik> GRUB loads all three, and then starts Mach. Mach in turn starts
the servers according to the multiboot information passed from GRUB
<dEhiN> ok, so does GRUB load them into ram?
<dEhiN> I'm trying to figure out in my mind how hurd is initially started
up from a low-level pov
<antrik> yes, as I said, GRUB loads them
<dEhiN> ok, thanks antrik...I'm new to the idea of microkernels, but a
veteran of monolithic kernels
<dEhiN> although I just learned that windows nt is a hybrid kernel which I
never knew!
<rm> note there's a /hurd/ext2fs.static
<rm> I believe that's what is used initially... right?
<antrik> yes
<antrik> loading the shared libraries in addition to the actual server
would be unwieldy
<antrik> so the root FS server is linked statically instead
<dEhiN> what does the root FS server do?
<antrik> well, it serves the root FS ;-)
<antrik> it also does some bootstrapping work during startup, to bring the
rest of the system up
---
Provide a cross-linked sources documentation, including generated files, like
RPC stubs.
* <http://www.gnu.org/software/global/>
---
[[Hurd_101]].
---
More stuff like [[hurd/IO_path]].
---
IRC, freenode, #hurd, 2011-10-18:
<frhodes> what happens @ boot. and which translators are started in what
order?
<antrik> short version: grub loads mach, ext2, and ld.so/exec; mach starts
ext2; ext2 starts exec; ext2 execs a few other servers; ext2 execs
init. from there on, it's just standard UNIX stuff
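The boot sequence sketched above is driven by the boot loader's multiboot
stanza. A typical GRUB entry for Debian GNU/Hurd looks roughly like the
following (kernel image name, partition, and exact ext2fs options vary by
installation, so treat this as an illustrative sketch, not a canonical
configuration):

```
multiboot /boot/gnumach.gz root=device:hd0s1
module /hurd/ext2fs.static ext2fs \
    --multiboot-command-line=${kernel-command-line} \
    --host-priv-port=${host-port} \
    --device-master-port=${device-port} \
    --exec-server-task=${exec-task} \
    -T typed ${root} $(task-create) $(task-resume)
module /lib/ld.so.1 exec /hurd/exec $(exec-task=task-create)
```

GRUB loads all three images into RAM; Mach then substitutes the `${...}`
placeholders with the ports and task handles it created, resumes the
statically linked root filesystem server, and ext2fs in turn starts exec
and the rest of the system.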
---
IRC, OFTC, #debian-hurd, 2011-11-02:
<sekon_> is __dir_lookup a RPC ??
<sekon_> where can i find the source of __dir_lookup ??
<sekon_> grepping most gives out rvalue assignments
<sekon_> -assignments
<sekon_> but in hurd/fs.h it is used as a function ??
<pinotree> it should be the mig-generated function for that rpc
<sekon_> how do i know how its implemented ??
<sekon_> is there any way to delve deeper into mig-generated functions
<tschwinge> sekon_: The MIG-generated stuff will either be found in the
package's build directory (if it's building it for themselves), or in the
glibc build directory (libhurduser, libmachuser; which are all the
available user RPC stubs).
<tschwinge> sekon_: The implementation can be found in the various Hurd
servers/libraries.
<tschwinge> sekon_: For example, [hurd]/libdiskfs/dir-lookup.c.
<tschwinge> sekon_: What MIG does is provide a function call interface for
these ``functions'', and the Mach microkernel then dispatches the
invocation to the corresponding server, for example a /hurd/ext2fs file
system (via libdiskfs).
<tschwinge> sekon_: This may help a bit:
http://www.gnu.org/software/hurd/hurd/hurd_hacking_guide.html
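To make the above concrete: `__dir_lookup` is the user-side stub that MIG
generates from the RPC definition in `hurd/fs.defs`. The definition looks
approximately like this (paraphrased from memory; the exact argument list
and type names in the real `fs.defs` may differ slightly):

```
routine dir_lookup (
        dir: file_t;
        name: string_t;
        flags: int;
        mode: mode_t;
        out do_retry: retry_type;
        out retry_name: string_t;
        out result: mach_port_send_t);
```

From this, MIG emits a C function that marshals the arguments into a Mach
message and sends it to the server behind the `dir` port (that stub is
what ends up in glibc's libhurduser). On the server side, the dispatched
implementation for a libdiskfs-based filesystem such as /hurd/ext2fs is
`diskfs_S_dir_lookup` in `[hurd]/libdiskfs/dir-lookup.c`.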
---
IRC, freenode, #hurd, 2012-01-08:
<abique> can you tell me how is done in hurd: "ls | grep x" ?
<abique> in bash
<youpi> ls's standard output is a port to the pflocal server, and grep x's
standard input is a port to the pflocal server
<youpi> the connexion between both ports inside the pflocal server being
done by bash when it calls pipe()
<abique> youpi, so STDOUT_FILENO, STDIN_FILENO, STDERR_FILENO still exists
?
<youpi> sure, hurd is compatible with posix
<abique> so bash 1) creates T1 (ls) and T2 (grep), then create a pipe at
the pflocal server, then connects both ends to T1 and T2, then start(T1),
start(T2) ?
<youpi> not exactly
<youpi> it's like on usual unix, bash creates the pipe before creating the
tasks
<youpi> then forks to create both of them, handling them each side of the
pipe
<abique> ok I see
<youpi> s/handling/handing/
<abique> but when you do pipe() on linux, it creates a kernel object, this
time it's 2 port on the pflocal ?
<youpi> yes
<abique> how are spawned tasks ?
<abique> with fork() ?
<youpi> yes
<youpi> which is just task_create() and duplicating the ports into the new
task
<abique> ok
<abique> so it's easy to rewrite fork() with a good control of duplicated
fd
<abique> about threading, mutexes, conditions, etc.. are kernel objects or
just userland objects ?
<youpi> just ports
<youpi> (only threads are kernel objects)
<abique> so, about efficiency, are pipes and mutexes efficient ?
<youpi> depends what you call "efficient"
<youpi> it's less efficient than on linux, for sure
<youpi> but enough for a workable system
<abique> maybe hurd is the right place for a userland thread library like
pth or any fiber library
<abique> ?
<youpi> hurd already uses a userland thread library
<youpi> libcthreads
<abique> is it M:N ?
<youpi> libthreads, actually
<youpi> yes
<abique> nice
<abique> is the task scheduler in the kernel ?
<youpi> the kernel thread scheduler, yes, of course
<youpi> there has to be one
<abique> are the posix open()/readdir()/etc... the direct vfs or wraps an
hurd layer libvfs ?
<youpi> they wrap RPCs to the filesystem servers
<antrik> the Bushnell paper is probably the closest we have to a high-level
documentation of these concepts...
<antrik> the Hurd does not have a central VFS component at all. name
lookups are performed directly on the individual FS servers
<antrik> that's probably the most fundamental design feature of the Hurd
<antrik> (all filesystem operations actually, not only lookups)
IRC, freenode, #hurd, 2012-01-09:
<braunr> youpi: are you sure cthreads are M:N ? i'm almost sure they're 1:1
<braunr> and no modern OS is a right place for any thread userspace
library, as they wouldn't have support to run threads on different
processors (unless processors can be handled by userspace servers, but
still, it requires intimate cooperation between the threading library and
the kernel/userspace server in any case)
<youpi> braunr: in libthreads, they are M:N
<youpi> you can run threads on different processors by using several kernel
threads, there's no problem in there, a lot of projects do this
<braunr> a pure userspace library can't use kernel threads
<braunr> at least pth was explacitely used on systems like bsd at a time
when they didn't have kernel threads exactly for that reason
<braunr> explicitely*
<braunr> and i'm actually quite surprised to learn that we have an M:N
threading model :/
<youpi> why do you say "can't" ?
<braunr> but i wanted to reply to abique and he's not around
<youpi> of course you need kernel threads
<youpi> but all you need is to bind them
<braunr> well, what i call a userspace threading library is a library that
completely implement threads without the support of the kernel
<braunr> or only limited support, like signals
<youpi> errr, you can't implement anything with absolutely no support of
the kernel
<braunr> pth used only SIGALRM iirc
<youpi> asking for more kernel threads to use more processors doesn't seem
much
<braunr> it's not
<braunr> but i'm refering to what abique said
<braunr> 01:32 < abique> maybe hurd is the right place for a userland
thread library like pth or any fiber library
<youpi> well, it's indeed more, because the glibc lets external libraries
provide their mutex
<youpi> while on linux, glibc doesn't
<braunr> i believe he meant removing thread support from the kernel :p
<youpi> ah
<braunr> and replying "nice" to an M:N threading model is also suspicious,
since experience seems to show 1:1 models are better
<youpi> "better" ????
<braunr> yes
<youpi> well
<youpi> I don't have any time to argue about that
<youpi> because that'd be extremely long
<braunr> simpler, so far less bugs, and also less headache concerning posix
conformance
<youpi> but there's no absolute "better" here
<youpi> but less performant
<youpi> less flexible
<braunr> that's why i mention experience :)
<youpi> I mean experience too
<braunr> why less performant ?
<youpi> because you pay kernel transition
<youpi> because you don't know anything about the application threads
<youpi> etc.
<braunr> really ?
<youpi> yes
<braunr> i fail to see where the overhead is
<youpi> I'm not saying m:n is generally better than 1:1 either
<youpi> thread switch, thread creation, etc.
<braunr> creation is slower, i agree, but i'm not sure it's used frequently
enough to really matter
<youpi> it is sometimes used frequently enough
<youpi> and in those cases it would be a headache to avoid it
<braunr> ok
<braunr> i thought thread pools were used in those cases
<youpi> synchronized with kernel mutexes ?
<youpi> that's still slow
<braunr> it reduces to the thread switch overhead
<braunr> which, i agree is slightly slower
<braunr> ok, it's a bit less performant :)
<braunr> well don't futexes exist just for that too ?
<youpi> yes and no
<youpi> in that case they don't help
<youpi> because they do sleep
<youpi> they help only when the threads are living
<braunr> ok
<youpi> now as I said I don't have to talk much more, I have to leave :)