[[!meta copyright="Copyright © 2011, 2012, 2013 Free Software Foundation,
Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]

[[!taglink open_issue_documentation]]

A bunch of this should also be covered in other (introductory) material,
like Bushnell's Hurd paper.  All this should be unified and streamlined.

[[!toc]]


# IRC, freenode, #hurd, 2011-03-08

    <foocraft> I've a question on what are the "units" in the hurd project, if
      you were to divide them into units if they aren't, and what are the
      dependency relations between those units(roughly, nothing too pedantic
      for now)
    <antrik> there is GNU Mach (the microkernel); there are the server
      libraries in the Hurd package; there are the actual servers in the same;
      and there is the POSIX implementation layer in glibc
    <antrik> relations are a bit tricky
    <antrik> Mach is the base layer which implements IPC and memory management
    <foocraft> hmm I'll probably allocate time for dependency graph generation,
      in the worst case
    <antrik> on top of this, the Hurd servers, using the server libraries,
      implement various aspects of the system functionality
    <antrik> client programs use libc calls to use the servers
    <antrik> (servers also use libc to communicate with other servers and/or
      Mach though)
    <foocraft> so every server depends solely on mach, and no other server?
    <foocraft> s/mach/mach and/or libc/
    <antrik> I think these things should be pretty clear once you are somewhat
      familiar with the Hurd architecture... nothing really tricky there
    <antrik> no
    <antrik> servers often depend on other servers for certain functionality


# IRC, freenode, #hurd, 2011-03-12

    <dEhiN> when mach first starts up, does it have some basic i/o or fs
      functionality built into it to start up the initial hurd translators?
    <antrik> I/O is presently completely in Mach
    <antrik> filesystems are in userspace
    <antrik> the root filesystem and exec server are loaded by grub
    <dEhiN> o I see
    <dEhiN> so in order to start hurd, you would have to start mach and
      simultaneously start the root filesystem and exec server?
    <antrik> not exactly
    <antrik> GRUB loads all three, and then starts Mach. Mach in turn starts
      the servers according to the multiboot information passed from GRUB
    <dEhiN> ok, so does GRUB load them into ram?
    <dEhiN> I'm trying to figure out in my mind how hurd is initially started
      up from a low-level pov
    <antrik> yes, as I said, GRUB loads them
    <dEhiN> ok, thanks antrik...I'm new to the idea of microkernels, but a
      veteran of monolithic kernels
    <dEhiN> although I just learned that windows nt is a hybrid kernel which I
      never knew!
    <rm> note there's a /hurd/ext2fs.static
    <rm> I believe that's what is used initially... right?
    <antrik> yes
    <antrik> loading the shared libraries in addition to the actual server
      would be unwieldy
    <antrik> so the root FS server is linked statically instead
    <dEhiN> what does the root FS server do?
    <antrik> well, it serves the root FS ;-)
    <antrik> it also does some bootstrapping work during startup, to bring the
      rest of the system up
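
GRUB configuration sketch (not from the log above): GRUB loads GNU Mach
together with two multiboot modules, the statically linked root filesystem
server and ld.so.1 plus /hurd/exec, and Mach then starts them according to the
multiboot information.  A GRUB (legacy) menu entry looks roughly like this;
the kernel image name, root device and exact options vary between
installations.

    title  GNU/Hurd
    root   (hd0,0)
    kernel /boot/gnumach.gz root=device:hd0s1
    module /hurd/ext2fs.static ext2fs --multiboot-command-line=${kernel-command-line} --host-priv-port=${host-port} --device-master-port=${device-port} --exec-server-task=${exec-task} -T typed ${root} $(task-create) $(task-resume)
    module /lib/ld.so.1 exec /hurd/exec $(exec-task=task-create)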


# Source Code Documentation

Provide a cross-linked sources documentation, including generated files, like
RPC stubs.

  * <http://www.gnu.org/software/global/>


# [[Hurd_101]]


# [[hurd/IO_path]]

Need more stuff like that.


# IRC, freenode, #hurd, 2011-10-18

    <frhodes> what happens @ boot. and which translators are started in what
      order?
    <antrik> short version: grub loads mach, ext2, and ld.so/exec; mach starts
      ext2; ext2 starts exec; ext2 execs a few other servers; ext2 execs
      init. from there on, it's just standard UNIX stuff


# IRC, OFTC, #debian-hurd, 2011-11-02

    <sekon_> is __dir_lookup a RPC ??
    <sekon_> where can i find the source of __dir_lookup ??
    <sekon_> grepping most gives out rvalue assignments 
    <sekon_> -assignments 
    <sekon_> but in hurd/fs.h it is used as a function ??
    <pinotree> it should be the mig-generated function for that rpc
    <sekon_> how do i know how its implemented ??
    <sekon_> is there any way to delve deeper into mig-generated functions
    <tschwinge> sekon_: The MIG-generated stuff will either be found in the
      package's build directory (if it's building it for themselves), or in the
      glibc build directory (libhurduser, libmachuser; which are all the
      available user RPC stubs).
    <tschwinge> sekon_: The implementation can be found in the various Hurd
      servers/libraries.
    <tschwinge> sekon_: For example, [hurd]/libdiskfs/dir-lookup.c.
    <tschwinge> sekon_: What MIG does is provide a function call interface for
      these ``functions'', and the Mach microkernel then dispatches the
      invocation to the corresponding server, for example a /hurd/ext2fs file
      system (via libdiskfs).
    <tschwinge> sekon_: This may help a bit:
      http://www.gnu.org/software/hurd/hurd/hurd_hacking_guide.html


# IRC, freenode, #hurd, 2012-01-08

    <abique> can you tell me how is done in hurd:  "ls | grep x" ?
    <abique> in bash
    <youpi> ls's standard output is a port to the pflocal server, and grep x's
      standard input is a port to the pflocal server
    <youpi> the connexion between both ports inside the pflocal server being
      done by bash when it calls pipe()
    <abique> youpi, so STDOUT_FILENO, STDIN_FILENO, STDERR_FILENO still exists
      ?
    <youpi> sure, hurd is compatible with posix
    <abique> so bash 1) creates T1 (ls) and T2 (grep), then create a pipe at
      the pflocal server, then connects both ends to T1 and T2, then start(T1),
      start(T2) ?
    <youpi> not exactly
    <youpi> it's like on usual unix, bash creates the pipe before creating the
      tasks
    <youpi> then forks to create both of them, handling them each side of the
      pipe
    <abique> ok I see
    <youpi> s/handling/handing/
    <abique> but when you do pipe() on linux, it creates a kernel object, this
      time it's 2 port on the pflocal ?
    <youpi> yes
    <abique> how are spawned tasks ?
    <abique> with fork() ?
    <youpi> yes
    <youpi> which is just task_create() and duplicating the ports into the new
      task
    <abique> ok
    <abique> so it's easy to rewrite fork() with a good control of duplicated
      fd
    <abique> about threading, mutexes, conditions, etc.. are kernel objects or
      just userland objects ?
    <youpi> just ports
    <youpi> (only threads are kernel objects)
    <abique> so, about efficiency, are pipes and mutexes efficient ?
    <youpi> depends what you call "efficient"
    <youpi> it's less efficient than on linux, for sure
    <youpi> but enough for a workable system
    <abique> maybe hurd is the right place for a userland thread library like
      pth or any fiber library
    <abique> ?
    <youpi> hurd already uses a userland thread library
    <youpi> libcthreads
    <abique> is it M:N ?
    <youpi> libthreads, actually
    <youpi> yes

Actually, the Hurd has never used an M:N model. Both libthreads (cthreads) and libpthread use a 1:1 model.

    <abique> nice
    <abique> is the task scheduler in the kernel ?
    <youpi> the kernel thread scheduler, yes, of course
    <youpi> there has to be one
    <abique> are the posix open()/readdir()/etc... the direct vfs or wraps an
      hurd layer libvfs ?
    <youpi> they wrap RPCs to the filesystem servers
    <antrik> the Bushnell paper is probably the closest we have to a high-level
      documentation of these concepts...
    <antrik> the Hurd does not have a central VFS component at all. name
      lookups are performed directly on the individual FS servers
    <antrik> that's probably the most fundamental design feature of the Hurd
    <antrik> (all filesystem operations actually, not only lookups)
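
The pipe()/fork() sequence described above is the usual POSIX one; a rough
sketch of what the shell does for "ls | grep x" follows (error handling kept
minimal).  On the Hurd, pipe() makes glibc ask the pflocal server for the two
endpoints, and fork() boils down to task_create() plus duplicating the
parent's ports into the new task.

    #include <unistd.h>
    #include <sys/wait.h>
    #include <error.h>
    #include <errno.h>

    int
    main (void)
    {
      int fds[2];

      if (pipe (fds) < 0)                /* two ports to the pflocal server */
        error (1, errno, "pipe");

      if (fork () == 0)                  /* first child: ls */
        {
          dup2 (fds[1], STDOUT_FILENO);  /* write end becomes stdout */
          close (fds[0]); close (fds[1]);
          execlp ("ls", "ls", (char *) 0);
          error (127, errno, "execlp ls");
        }

      if (fork () == 0)                  /* second child: grep x */
        {
          dup2 (fds[0], STDIN_FILENO);   /* read end becomes stdin */
          close (fds[0]); close (fds[1]);
          execlp ("grep", "grep", "x", (char *) 0);
          error (127, errno, "execlp grep");
        }

      close (fds[0]); close (fds[1]);    /* the parent keeps neither end */
      while (wait (NULL) > 0)
        ;
      return 0;
    }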


## IRC, freenode, #hurd, 2012-01-09

    <braunr> youpi: are you sure cthreads are M:N ? i'm almost sure they're 1:1
    <braunr> and no modern OS is a right place for any thread userspace
      library, as they wouldn't have support to run threads on different
      processors (unless processors can be handled by userspace servers, but
      still, it requires intimate cooperation between the threading library and
      the kernel/userspace server in any case
    <youpi> braunr: in libthreads, they are M:N
    <youpi> you can run threads on different processors by using several kernel
      threads, there's no problem in there, a lot of projects do this
    <braunr> a pure userspace library can't use kernel threads
    <braunr> at least pth was explacitely used on systems like bsd at a time
      when they didn't have kernel threads exactly for that reason
    <braunr> explicitely*
    <braunr> and i'm actually quite surprised to learn that we have an M:N
      threading model :/
    <youpi> why do you say "can't" ?
    <braunr> but i wanted to reply to abique and he's not around
    <youpi> of course you need kernel threads
    <youpi> but all you need is to bind them
    <braunr> well, what i call a userspace threading library is a library that
      completely implement threads without the support of the kernel
    <braunr> or only limited support, like signals
    <youpi> errr, you can't implement anything with absolutely no support of
      the kernel
    <braunr> pth used only SIGALRM iirc
    <youpi> asking for more kernel threads to use more processors doesn't seem
      much
    <braunr> it's not
    <braunr> but i'm refering to what abique said
    <braunr> 01:32 < abique> maybe hurd is the right place for a userland
      thread library like pth or any fiber library
    <youpi> well, it's indeed more, because the glibc lets external libraries
      provide their mutex
    <youpi> while on linux, glibc doesn't
    <braunr> i believe he meant removing thread support from the kernel :p
    <youpi> ah
    <braunr> and replying "nice" to an M:N threading model is also suspicious,
      since experience seems to show 1:1 models are better
    <youpi> "better" ????
    <braunr> yes
    <youpi> well
    <youpi> I don't have any time to argue  about that
    <youpi> because that'd be extremely long
    <braunr> simpler, so far less bugs, and also less headache concerning posix
      conformance
    <youpi> but there's no absolute "better" here
    <youpi> but less performant
    <youpi> less flexible
    <braunr> that's why i mention experience :)
    <youpi> I mean experience too
    <braunr> why less performant ?
    <youpi> because you pay kernel transition
    <youpi> because you don't know anything about the application threads
    <youpi> etc.
    <braunr> really ?
    <youpi> yes
    <braunr> i fail to see where the overhead is
    <youpi> I'm not saying m:n is generally better than 1:1 either
    <youpi> thread switch, thread creation, etc.
    <braunr> creation is slower, i agree, but i'm not sure it's used frequently
      enough to really matter
    <youpi> it is sometimes used frequently enough
    <youpi> and in those cases it would be a headache to avoid it
    <braunr> ok
    <braunr> i thought thread pools were used in those cases
    <youpi> synchronized with kernel mutexes ?
    <youpi> that's still slow
    <braunr> it reduces to the thread switch overhead
    <braunr> which, i agree is slightly slower
    <braunr> ok, i's a bit less performant :)
    <braunr> well don't futexes exist just for that too ?
    <youpi> yes and no
    <youpi> in that case they don't help
    <youpi> because they do sleep
    <youpi> they help only when the threads are living
    <braunr> ok
    <youpi> now as I said I don't have to talk much more, I have to leave :)


# IRC, freenode, #hurd, 2012-12-06

    <braunr> spiderweb: have you read
      http://www.gnu.org/software/hurd/hurd-paper.html ?
    <spiderweb> I'll have a look.
    <braunr> and also the beginning of
      http://ftp.sceen.net/mach/mach_a_new_kernel_foundation_for_unix_development.pdf
    <braunr> these two should provide a good look at the big picture the hurd
      attempts to achieve
    <Tekk_> I can't help but wonder though, what advantages were really
      achieved with early mach?
    <Tekk_> weren't they just running a monolithic unix server like osx does?
    <braunr> most mach-based systems were
    <braunr> but thanks to that, they could provide advanced features over
      other well established unix systems
    <braunr> while also being compatible
    <Tekk_> so basically it was just an ease of development thing
    <braunr> well that's what mach aimed at being
    <braunr> same for the hurd
    <braunr> making things easy
    <Tekk_> but as a side effect hurd actually delivers on the advantages of
      microkernels aside from that, but the older systems wouldn't, correct?
    <braunr> that's how there could be network file systems in very short time
      and very scarce resources (i.e. developers working on it), while on other
      systems it required a lot more to accomplish that
    <braunr> no, it's not a side effect of the microkernel
    <braunr> the hurd retains and extends the concept of flexibility introduced
      by mach
    <Tekk_> the improved stability, etc. isn't a side effect of being able to
      restart generally thought of as system-critical processes?
    <braunr> no
    <braunr> you can't restart system critical processes on the hurd either
    <braunr> that's one feature of minix, and they worked hard on it
    <Tekk_> ah, okay. so that's currently just the domain of minix
    <Tekk_> okay
    <Tekk_> spiderweb: well, there's 1 advantage of minix for you :P
    <braunr> the main idea of mach is to make it easy to extend unix
    <braunr> without having hundreds of system calls
    <braunr> the hurd keeps that and extends it by making many operations
      unprivileged
    <braunr> you don't need special code for kernel modules any more
    <braunr> it's easy
    <braunr> you don't need special code to handle suid bits and other ugly
      similar hacks,
    <braunr> it's easy
    <braunr> you don't need fuse
    <braunr> easy
    <braunr> etc..


# Service Directory

## IRC, freenode, #hurd, 2012-12-06

    <spiderweb> what is the #1 feature that distinguished hurd from other
      operating systems. the concept of translators. (will read more when I get
      more time).
    <braunr> yes, translators
    <braunr> using the VFS as a service directory
    <braunr> and the VFS permissions to control access to those services


## IRC, freenode, #hurd, 2013-05-23

    <gnu_srs> Hi, is there any efficient way to control which backend
      translators are called via RPC with a user space program?
    <gnu_srs> Take for example io_stat: S_io_stat is defined in boot/boot.c,
      pfinet/io-ops.c and pflocal/io.c
    <gnu_srs> And then we have libdiskfs/io-stat.c:diskfs_S_io_stat,
      libnetfs/io-stat.c:netfs_S_io_stat, libtreefs/s-io.c:treefs_S_io_stat,
      libtrivfs/io-stat.c:trivfs_S_io_stat
    <gnu_srs> How are they related?
    <braunr> gnu_srs: it depends on the server (translator) managing the files
      (nodes) you're accessing
    <braunr> so use fsysopts to know the server, and see what this server uses
    <gnu_srs> fsysopts /hurd/pfinet and fsysopts /hurd/pflocal gives the same
      answer: ext2fs --writable --no-inherit-dir-group --store-type=typed
      device:hd0s1
    <braunr> of course
    <braunr> the binaries are regular files
    <braunr> see /servers/socket/1 and /servers/socket/2 instead
    <braunr> which are the nodes representing the *service*
    <braunr> again, the hurd uses the file system as a service directory
    <braunr> this usage of the file system is at the core of the hurd design
    <braunr> files are not mere files, they're service names
    <braunr> it happens that, for most files, the service behind them is the
      same as for regular files
    <braunr> gnu_srs: this *must* be obvious for you to do any tricky work on
      the hurd

    <gnu_srs> Anyway, if I create a test program calling io_stat I assume
      S_io_stat in pflocal is called.
    <gnu_srs> How to make the program call S_io_stat in pfinet instead? 
    <braunr> create a socket managed by pfinet
    <braunr> i.e. an inet or inet6 socket
    <braunr> you can't assume io_stat is serviced by pflocal
    <braunr> only stats on unix sockets of pipes will be
    <braunr> or*
    <gnu_srs> thanks, what about the *_S_io_stat functions?
    <braunr> what about them ?
    <gnu_srs> How they fit into the picture, e.g. diskfs_io_stat?
    <gnu_srs> *diskfs_S_io_stat
    <braunr> gnu_srs: if you open a file managed by a server using libdiskfs,
      e.g. ext2fs, that one will be called
    <gnu_srs> Using the same user space call: io_stat, right?
    <braunr> it's all userspace
    <braunr> say rather, client-side
    <braunr> the client calls the posix stat() function, which is implemented
      by glibc, which converts it into a call to io_stat, and sends it to the
      server managing the open file
    <braunr> the io_stat can change depending on the server
    <braunr> the remote io_stat implementation, i mean
    <braunr> identify the server, and you will identify the actual
      implementation
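
To illustrate the point (a hedged sketch, not from the log): the very same
client-side stat()/fstat() call becomes an io_stat RPC to whichever server
manages the node, so different servers answer it for a disk file, an inet
socket and a pipe.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <sys/socket.h>

    int
    main (void)
    {
      struct stat st;

      /* A disk file: io_stat is answered by the filesystem server,
         e.g. /hurd/ext2fs through libdiskfs (diskfs_S_io_stat).  */
      if (stat ("/etc/passwd", &st) == 0)
        printf ("/etc/passwd: %lld bytes\n", (long long) st.st_size);

      /* An inet socket: the node is managed by /hurd/pfinet, so pfinet's
         S_io_stat answers.  */
      int sock = socket (AF_INET, SOCK_STREAM, 0);
      if (sock >= 0 && fstat (sock, &st) == 0)
        printf ("inet socket: mode %o\n", (unsigned) st.st_mode);

      /* A pipe: both ends live in /hurd/pflocal, so pflocal's S_io_stat
         answers.  */
      int fds[2];
      if (pipe (fds) == 0 && fstat (fds[0], &st) == 0)
        printf ("pipe read end: mode %o\n", (unsigned) st.st_mode);

      return 0;
    }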


## IRC, freenode, #hurd, 2013-06-30

    <hacklu> hi, what is the replacer of netname_check_in? 

    <hacklu> I want to ask another question. in my opinion, the rpc is the
      mach's way, and the translator is the hurd's way. so somebody want to
      lookup a service, it should not need to ask the mach kernel know about
      this query. the hurd will take the control. 
    <hacklu> am I right?
    <braunr> no
    <braunr> that's nonsense
    <braunr> service lookups has never been in mach
    <braunr> first mach based systems used a service directory, whereas the
      hurd uses the file system for that
    <braunr> you still need mach to communicate with either of those
    <hacklu> how to understand the term of service directory here?
    <braunr> a server everyone knows
    <braunr> which gives references to other servers
    <braunr> usually, depending on the name
    <braunr> e.g. name_lookup("net") -> port right to network server
    <hacklu> is that people use netname_check_in to register service in the
      past? now used libtrivfs?
    <braunr> i don't know about netname_check_in
    <braunr> old mach (not gnumach) documentation might mention this service
      directory
    <braunr> libtrivfs doesn't have much to do with that
    <braunr> on the hurd, the equivalent is the file system
    <hacklu> maybe that is outdate, I just found that exist old doc, and old
      code which can't be build.
    <braunr> every process knows /
    <braunr> the file system is the service directory
    <braunr> nodes refer to services
    <hacklu> so the file system is the nameserver, any new service should
      register in it before other can use
    <braunr> and the file system is distributed, so looking up a service may
      require several queries
    <braunr> setting a translator is exactly that, registering a program to
      service requests on a node
    <braunr> the file system isn't one server though
    <braunr> programs all know about /, but then, lookups are recursive
    <braunr> e.g. if you have / and /home, and are looking for
      /home/hacklu/.profile, you ask / which tells you about /home, and /home
      will give you a right to /home/hacklu/.profile
    <hacklu> even in the past, the mach don't provide name register service,
      there must be an other server to provide this service?
    <braunr> yes
    <braunr> what's nonsense in your sentence is comparing RPCs and translators
    <braunr> translators are merely servers attached to the file system, using
      RPCs to communicate with the rest of the system
    <hacklu> I know yet, the two just one thing.
    <braunr> no
    <braunr> two things :p
    <braunr> completely different and unrelated except for one using the other
    <hacklu> ah, just one used aonther one.
    <hacklu> is exist anyway to announce service except settrans with file node?
    <braunr> more or less
    <braunr> tasks can have special ports
    <braunr> that's how one task knows about / for example
    <braunr> at task creation, a right to / is inserted in the new task
    <hacklu> I think this is also a file node way.
    <braunr> no
    <braunr> if i'm right, auth is referenced the same way
    <braunr> and there is no node for auth
    <hacklu> how the user get the port of auth with node?
    <braunr> it's given when a task is created
    <hacklu> pre-set in the creation of one task?
    <braunr> i'm unconfortable with "pre-set"
    <braunr> inserted at creation time
    <braunr> auth is started very early
    <braunr> then tasks are given a reference to it
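
A hedged sketch of using the file system as the service directory: every task
is given a port to / at creation, and glibc's file_name_lookup() performs the
(possibly recursive) dir_lookup RPCs from there; the result is a port to
whatever server is attached to the node, here /servers/socket/2 (pfinet).

    #include <hurd.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <error.h>
    #include <errno.h>

    int
    main (void)
    {
      mach_port_t server = file_name_lookup ("/servers/socket/2", O_RDONLY, 0);
      if (server == MACH_PORT_NULL)
        error (1, errno, "file_name_lookup");

      printf ("got a send right (%u) to the server behind /servers/socket/2\n",
              (unsigned) server);
      return 0;
    }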


# IRC, freenode, #hurd, 2012-12-10

    <spiderweb> I want to work on hurd, but I think I'm going to start with
      minix, I own the minix book 3rd ed. it seems like a good intro to
      operating systems in general. like I don't even know what a semaphore is
      yet.
    <braunr> well, enjoy learning :)
    <spiderweb> once I finish that book, what reading do you guys recommend?
    <spiderweb> other than the wiki
    <braunr> i wouldn't recommend starting with a book that focuses on one
      operating system anyway
    <braunr> you tend to think in terms of what is done in that specific
      implementation and compare everything else to that
    <braunr> tannenbaum is not only the main author or minix, but also the one
      of the book http://en.wikipedia.org/wiki/Modern_Operating_Systems
    <braunr>
      http://en.wikipedia.org/wiki/List_of_important_publications_in_computer_science#Operating_systems
      should be a pretty good list :)


# IRC, freenode, #hurd, 2013-03-12

    <mjjc> i have a question regarding ipc in hurd. if a task is created, does
      it contain any default port rights in its space? i am trying to deduce
      how one calls dir_lookup() on the root translator in glibc's open().
    <kilobug> mjjc: yes, there are some default port rights, but I don't
      remember the details :/
    <mjjc> kilobug: do you know where i should search for details?
    <kilobug> mjjc: hum either in the Hurd's hacking guide
      https://www.gnu.org/software/hurd/hacking-guide/ or directly in the
      source code of exec server/libc I would say, or just ask again the
      question here later on to see if someone else has more information
    <mjjc> ok, thanks
    <pinotree> there's also rpctrace to, as the name says, trace all the rpc's
      executed
    <braunr> some ports are introduced in new tasks, yes
    <braunr> see
      http://www.gnu.org/software/hurd/hacking-guide/hhg.html#The-main-function
    <braunr> and
    <braunr>
      http://www.gnu.org/software/hurd/gnumach-doc/Task-Special-Ports.html#Task-Special-Ports
    <mjjc> yes, the second link was just what i was looking for, thanks
    <braunr> the second is very general
    <braunr> also, the first applies to translators only
    <braunr> if you're looking for how to do it for a non-translator
      application, the answer is probably somewhere in glibc
    <braunr> _hurd_startup i'd guess
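
A small, hedged sketch of the ports a process is handed at startup: glibc's
_hurd_startup collects them (they end up in _hurd_ports), and a few
convenience functions expose some of them directly.

    #include <hurd.h>
    #include <stdio.h>

    int
    main (void)
    {
      file_t root = getcrdir ();   /* port to this process's root directory */
      file_t cwd  = getcwdir ();   /* port to the current working directory */
      auth_t auth = getauth ();    /* port to the auth server */

      printf ("root: %u, cwd: %u, auth: %u\n",
              (unsigned) root, (unsigned) cwd, (unsigned) auth);
      return 0;
    }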


# IRC, freenode, #hurd, 2013-06-15

    <damo22> ive been reading a little about exokernels or unikernels, and i
      was wondering if it might be relevant to the GNU/hurd design.  I'm not
      too familiar with hurd terminology so forgive me.  what if every
      privileged service was compiled as its own mini "kernel" that handled (a)
      any hardware related to that service (b) any device nodes exposed by that
      service etc...
    <braunr> yes but not really that way
    <damo22> under the current hurd model of the operating system, how would
      you talk to hardware that required specific timings like sound hardware?
    <braunr> through mapped memory
    <damo22> is there such a thing as an interrupt request in hurd?
    <braunr> obviously
    <damo22> ok
    <damo22> is there any documentation i can read that involves a driver that
      uses irqs for hurd?
    <braunr> you can read the netdde code
    <braunr> dde being another project, there may be documentation about it
    <braunr> somewhere else
    <braunr> i don't know where
    <damo22> thanks
    <damo22> i read a little about dde, apparently it reuses existing code from
      linux or bsd by reimplementing parts of the old kernel like an api or
      something
    <braunr> yes
    <damo22> it must translate these system calls into ipc or something
    <damo22> then mach handles it?
    <braunr> exactly
    <braunr> that's why i say it's not the exokernel way of doing things
    <damo22> ok
    <damo22> so does every low level hardware access go through mach?'
    <braunr> yes
    <braunr> well no
    <braunr> interrupts do
    <braunr> ports (on x86)
    <braunr> everything else should be doable through mapped memory
    <damo22> seems surprising that the code for it is so small
    <braunr> 1/ why surprising ? and 2/ "so small" ?
    <damo22> its like the core of the OS, and yet its tiny compared to say the
      linux kernel
    <braunr> it's a microkernel
    <braunr> well, rather a hybrid
    <braunr> the size of the equivalent code in linux is about the same
    <damo22> ok
    <damo22> with the model that privileged instructions get moved to
      userspace, how does one draw the line between what is OS and what is user
      code
    <braunr> privileged instructions remain in the kernel
    <braunr> that's one of the few responsibilities of the kernel
    <damo22> i see, so it is an illusion that the user has privilege in a sense
    <braunr> hum no
    <braunr> or, define "illusion"
    <damo22> well the user can suddenly do things never imaginable in linux
    <damo22> that would have required sudo
    <braunr> yes
    <braunr> well, they're not unimaginable on linux
    <braunr> it's just not how it's meant to work
    <damo22> :)
    <braunr> and why things like fuse are so slow
    <braunr> i still don't get "i see, so it is an illusion that the user has
      privilege in a sense"
    <damo22> because the user doesnt actually have the elevated privilege its
      the server thing (translator)?
    <braunr> it does
    <braunr> not at the hardware level, but at the system level
    <braunr> not being able to do it directly doesn't mean you can't do it
    <damo22> right
    <braunr> it means you need indirections
    <braunr> that's what the kernel provides
    <damo22> so the user cant do stuff like outb 0x13, 0x1
    <braunr> he can
    <braunr> he also can on linux
    <damo22> oh
    <braunr> that's an x86 specifity though
    <damo22> but the user would need hardware privilege to do that
    <braunr> no
    <damo22> or some kind of privilege
    <braunr> there is a permission bitmap in the TSS that allows userspace to
      directly access some ports
    <braunr> but that's really x86 specific, again
    <damo22> i was using it as an example
    <damo22> i mean you wouldnt want userspace to directly access everything
    <braunr> yes
    <braunr> the only problem with that is dma reall
    <braunr> y
    <braunr> because dma usually access physical memory directly
    <damo22> are you saying its good to let userspace access everything minus
      dma?
    <braunr> otherwise you can just centralize permissions in one place (the
      kernel or an I/O server for example)
    <braunr> no
    <braunr> you don't let userspace access everything
    <damo22> ah
    <damo22> yes
    <braunr> userspace asks for permission to access one specific part (a
      memory range through mapping)
    <braunr> and can't access the rest (except through dma)
    <damo22> except through dma??  doesnt that pose a large security threat?
    <braunr> no
    <braunr> you don't give away dma access to anyone
    <braunr> only drivers
    <damo22> ahh
    <braunr> and drivers are normally privileged applications anyway
    <damo22> so a driver runs in userspace?
    <braunr> so the only effect is that bugs can affect other address spaces
      indirectly
    <braunr> netdde does
    <damo22> interesting
    <braunr> and they all should but that's not the case for historical reasons
    <damo22> i want to port ALSA to hurd userspace :D
    <braunr> that's not so simple unfortunately
    <braunr> one of the reasons it's hard is that pci access needs arbitration
    <braunr> and we don't have that yet
    <damo22> i imagine that would be difficult
    <braunr> yes
    <braunr> also we're not sure we want alsa
    <braunr> alsa drivers, maybe, but probably not the interface itself
    <damo22> its tangled spaghetti
    <damo22> but the guy who wrote JACK for audio hates OSS, and believes it is
      rubbish due to the fact it tries to read and write to a pcm device node
      like a filesystem with no care for timing
    <braunr> i don't know audio well enough to tell you anything about that
    <braunr> was that about oss3 or oss4 ?
    <braunr> also, the hurd isn't a real time system
    <braunr> so we don't really care about timings
    <braunr> but with "good enough" latencies, it shouldn't be a problem
    <damo22> but if the audio doesnt reach the sound card in time, you will get
      a crackle or a pop or a pause in the signal
    <braunr> yep
    <braunr> it happens on linux too when the system gets some load
    <damo22> some users find this unacceptable
    <braunr> some users want real time systems
    <braunr> using soft real time is usually plenty enough to "solve" this kind
      of problems
    <damo22> will hurd ever be a real time system?
    <braunr> no idea
    <youpi> if somebody works on it why not
    <youpi> it's the same as linux
    <braunr> it should certainly be simpler than on linux though
    <damo22> hmm
    <braunr> microkernels are well suited for real time because of the well
      defined interfaces they provide and the small amount of code running in
      kernel
    <damo22> that sounds promising
    <braunr> you usually need to add priority inheritance and take care of just
      a few corner cases and that's all
    <braunr> but as youpi said, it still requires work
    <braunr> and nobody's working on it
    <braunr> you may want to check l4 fiasco.oc though


# System Personality

## IRC, freenode, #hurd, 2013-07-29

    <teythoon> over the past few days I gained a new understanding of the Hurd
    <braunr> teythoon: really ? :)
    <tschwinge> teythoon: That it's a complex and distributed system?  ;-)
    <tschwinge> And at the same time a really simple one?
    <tschwinge> ;-D
    <teythoon> it's just a bunch of mach programs and some do communicate and
      behave in a way a posix system would, but that is more a convention than
      anything else
    <teythoon> tschwinge: yes, kind of simple and complex :)
    <braunr> the right terminology is "system personality"
    <braunr> 11:03 < teythoon> over the past few days I gained a new
      understanding of the Hurd
    <braunr> teythoon: still no answer on that :)
    <teythoon> braunr: ah, I spent lot's of time with the core servers and
      early bootstrapping and now I gained the feeling that I've seen the Hurd
      for what it really is for the first time


# RPC Interfaces

## IRC, freenode, #hurd, 2013-09-03

    <rekado> I'm a little confused by the hurd and incubator git repos.
    <rekado> DDE is only found in the dde branch in incubator, but not in the
      hurd repo.
    <rekado> Does this mean that DDE is not ready for master yet?
    <braunr> yes
    <rekado> If DDE is not yet used in the hurd (except in the dde branch in
      the incubator repo), does pfinet use some custom glue code to use the
      Linux drivers?
    <braunr> this has nothing to do with pfinet
    <braunr> pfinet is the networking stack, netdde are the networking drivers
    <braunr> the interface between them doesn't change, whether drivers are in
      kernel or not
    <rekado> I see


# IRC, freenode, #hurd, 2013-09-20

    <giuscri> HI there, I have no previous knowledge about OS's. I'm trying to
      undestand the structure of the Hurd and the comparison between, say,
      Linux way of managing stuff ...
    <giuscri> for instance, I read: "Unlike other popular kernel software, the
      Hurd has an object-oriented structure that allows it to evolve without
      compromising its design."
    <giuscri> that means that while for adding feature to the Linux-kernel you
      have to add some stuff `inside` a procedure, whilst in the Hurd kernel
      you can just, in principle at least, add an object and making the kernel
      using it?...
    <giuscri> Am I making stuff too simple?
    <giuscri> Thanks
    <braunr> not exactly
    <braunr> unix historically has a "file-oriented" structure
    <braunr> the hurd allows servers to implement whatever type they want,
      through the ability to create custom interfaces
    <braunr> custom interfaces means custom calls, custom semantics, custom
      methods on objects
    <braunr> you're not restricted to the set of file interfaces (open, seek,
      read, write, select, close, etc..) that unix normally provides
    <giuscri> braunr: uhm ...some example?
    <braunr> see processes for example
    <braunr> see
      http://darnassus.sceen.net/gitweb/savannah_mirror/hurd.git/tree/HEAD:/hurd
    <braunr> this is the collection of interfaces the hurd provides
    <braunr> most of them map to unix calls, because gnu aims at posix
      compatibility too
    <braunr> some are internal, like processes
    <braunr> or authentication
    <braunr> but most importantly, you're not restricted to that, you can add
      your own interfaces
    <braunr> on a unix, you'd need new system calls
    <braunr> or worse, extending through the catch-all ioctl call
    <giuscri> braunr: mhn ...sorry, not getting that.
    <braunr> what part ?
    <kilobug> ioctl has become such a mess :s
    <giuscri> braunr: when you say that Unix is `file-oriented` you're
      referring to the fact that sending/receiving data to/from the kernel is
      designed like sending/receiving data to/from a file ...?
    <braunr> not merely sending/receiving
    <braunr> note how formatted your way of thinking is
    <braunr> you directly think in terms of sending/receiving (i.e. read and
      write)
    <giuscri> braunr: (yes)
    <braunr> that's why unix is file oriented, access to objects is done that
      way
    <braunr> on the hurd, the file interface is one interface
    <braunr> there is nothing preventing you from implementing services with a
      different interface
    <braunr> as a real world example, people interested in low latency
      professional audio usually dislike send/recv
    <braunr> see
      http://lac.linuxaudio.org/2003/zkm/slides/paul_davis-jack/unix.html for
      example
    <kilobug> braunr: how big and messy ioctl has become is a good proof that
      the Unix way, while powerful, does have its limits
    <braunr> giuscri: keep in mind the main goal of the hurd is extensibility
      without special privileges
    <giuscri> braunr: privileges?
    <braunr> root
    <giuscri> braunr: what's wrong with privileges?
    <braunr> they allow malicious/buggy stuff to happen
    <braunr> and have dramatic effects
    <giuscri> braunr: you're obviously *not* referring to the fact that once
      one have the root privileges could change some critical-data
    <giuscri> ?
    <braunr> i'm referring to why privilege separation exists in the first
      place
    <braunr> if you have unprivileged users, that's because you don't want them
      to mess things up
    <braunr> on unix, extending the system requires privileges, giving those
      who do it the ability to destroy everything
    <giuscri> braunr: yes, I think the same
    <braunr> the hurd is designed to allow unprivileged users to extend their
      part of the system, and to some extent share that with other users
    <braunr> although work still remains to completely achieve that
    <giuscri> braunr: mhn ...that's the `server`-layer between the
      single-application and kernel ?
    <braunr> the multi-server based approach not only allows that, but
      mitigates damage even when privileged servers misbehave
    <braunr> one aspect of it yes
    <braunr> but as i was just saying, even root servers can't mess things too
      much
    <braunr> for example, our old (sometimes buggy) networking stack can be
      restarted when it behaves wrong
    <braunr> the only side effect being some applications (ssh and exim come to
      mind) which need to be restarted too because they don't expect the
      network stack to be restarted
    <giuscri> braunr: ...instead?
    <braunr> ?
    <kilobug> giuscri: on Linux, if the network stack crash/freezes, you don't
      have any other option than rebooting the system - usually with a nice
      "kernel panic"
    <kilobug> giuscri: and you may even get filesystem corruption "for free" in
      the bundle
    <braunr> and hoping it didn't corrupt something important like file system
      caches before being flushed
    <giuscri> kilobug, braunr : mhn, ook
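
Purely as a hypothetical illustration of the "custom interfaces" point above:
a server can ship its own MIG .defs file declaring new RPCs next to the
standard io/fs interfaces.  The subsystem name, number and routines below are
made up for illustration.

    /* mixer.defs -- a made-up custom interface.  */
    subsystem mixer 999000;

    #include <hurd/hurd_types.defs>

    routine mixer_get_volume (
            mixer: mach_port_t;
            out volume: int);

    routine mixer_set_volume (
            mixer: mach_port_t;
            volume: int);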