authorThomas Schwinge <thomas@codesourcery.com>2013-09-26 15:58:55 +0200
committerThomas Schwinge <thomas@codesourcery.com>2013-09-26 15:58:55 +0200
commit583535d65ef20474d970c71f90d39ff5f5bf7180 (patch)
tree9867c64dd41e510613fc1e95a357c832dc71c3d5
parent295f1c447d706315e5bce15365f89c9e6983c721 (diff)
parent99f7a42c80813fcbec01277ceb13a82f7f4421c7 (diff)
Merge commit '99f7a42c80813fcbec01277ceb13a82f7f4421c7'
Conflicts: open_issues/dde.mdwn
-rw-r--r--contributing.mdwn2
-rw-r--r--faq/sata_disk_drives/discussion.mdwn6
-rw-r--r--hurd/running.mdwn2
-rw-r--r--hurd/running/cloud.mdwn18
-rw-r--r--hurd/running/openstack.mdwn14
-rw-r--r--open_issues/cloud.mdwn49
-rw-r--r--open_issues/dde.mdwn185
-rw-r--r--open_issues/user-space_device_drivers.mdwn2
-rw-r--r--open_issues/virtio.mdwn208
9 files changed, 237 insertions, 249 deletions
diff --git a/contributing.mdwn b/contributing.mdwn
index 75b99bbd..68dcca0c 100644
--- a/contributing.mdwn
+++ b/contributing.mdwn
@@ -103,7 +103,7 @@ access to it from userland. exec would probably call it from `hurd/exec/exec.c`,
which exposes the partitions of the disk image, using parted, and
the parted-based storeio (`settrans -c foos1 /hurd/storeio -T typed
part:1:file:/home/samy/tmp/foo`). This would be libnetfs-based.
-* Write virtio drivers for KVM. Ideally they would be userland. That means getting documented about how virtio works, and implement it. The hurdish part is mostly about exposing the driver interface. The devnode translator can be used as a skeleton.
+* Write [[virtio drivers for KVM|virtio#KVM]].
* Port valgrind. There is a whole
[[GSoC proposal|community/gsoc/project_ideas/valgrind ]] about this, but the
basic port could be small.
diff --git a/faq/sata_disk_drives/discussion.mdwn b/faq/sata_disk_drives/discussion.mdwn
index 3f063b77..e9da8560 100644
--- a/faq/sata_disk_drives/discussion.mdwn
+++ b/faq/sata_disk_drives/discussion.mdwn
@@ -34,6 +34,9 @@ License|/fdl]]."]]"""]]
<youpi> but not so much actually
<anatoly> What about virtio? will it speed up?
+
+[[open_issues/virtio]].
+
<youpi> probably not so much
<youpi> because in the end it works the same
<youpi> the guest writes the physical addresse in mapped memory
@@ -97,6 +100,9 @@ License|/fdl]]."]]"""]]
http://git.qemu.org/?p=qemu.git;a=blob;f=hw/ide/ahci.c;h=eab60961bd818c22cf819d85d0bd5485d3a17754;hb=HEAD
<braunr> looks ok in recent versions
<braunr> looks useful to have virtio drivers though
+
+[[open_issues/virtio]].
+
<anatoly> virtio is shown as fastest way for IO in the presentation
<anatoly> Hm, failed to run qemu with AHCI enabled
<anatoly> qemu 1.1 from debian testing
diff --git a/hurd/running.mdwn b/hurd/running.mdwn
index 15ee25d9..b3caf21a 100644
--- a/hurd/running.mdwn
+++ b/hurd/running.mdwn
@@ -17,7 +17,7 @@ There are several different ways to run a GNU/Hurd system:
* [[microkernel/mach/gnumach/ports/Xen]] - In Xen
* [[Live_CD]]
* [[QEMU]] - In QEMU
-* [[openstack]] - In openstack
+* [[cloud]] - In the "cloud": OpenStack
* [[chroots|chroot]] need a couple of tricks to work properly.
* [[VirtualBox]] - In VirtualBox
* [[vmware]] (**non-free!**)
diff --git a/hurd/running/cloud.mdwn b/hurd/running/cloud.mdwn
new file mode 100644
index 00000000..b063fd7b
--- /dev/null
+++ b/hurd/running/cloud.mdwn
@@ -0,0 +1,18 @@
+[[!meta copyright="Copyright © 2013 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+# [[!wikipedia OpenStack]]
+
+It is possible to run the Hurd as a KVM-based OpenStack cloud instance.
+
+[[For the time being|open_issues/virtio]], you'll have to avoid using virtio
+drivers, and use emulated hardware instead:
+
+ $ glance image-create --property hw_disk_bus=ide --property hw_cdrom_bus=ide --property hw_vif_model=rtl8139 --disk-format raw --container-format bare --name gnu-hurd --copy-from http://people.debian.org/~sthibault/hurd-i386/debian-hurd.img
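The same emulated-hardware choice can be tried locally before uploading the image to OpenStack. A minimal sketch, assuming `qemu-system-i386` is installed; the qemu flags below select the same devices as the glance properties (an IDE disk bus instead of virtio-blk, an rtl8139 NIC instead of virtio-net):

```shell
# Fetch Samuel Thibault's pre-built Debian GNU/Hurd image.
wget http://people.debian.org/~sthibault/hurd-i386/debian-hurd.img

# Boot it with emulated hardware only, i.e. no virtio devices:
# hw_disk_bus=ide      ->  -drive ...,if=ide
# hw_vif_model=rtl8139 ->  -net nic,model=rtl8139
qemu-system-i386 -m 1G \
    -drive file=debian-hurd.img,format=raw,if=ide \
    -net nic,model=rtl8139 -net user
```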
diff --git a/hurd/running/openstack.mdwn b/hurd/running/openstack.mdwn
deleted file mode 100644
index af03583b..00000000
--- a/hurd/running/openstack.mdwn
+++ /dev/null
@@ -1,14 +0,0 @@
-[[!meta copyright="Copyright © 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012,
-2013 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-One can tell openstack to avoid using virtio drivers, and use emulated hardware instead:
-
- glance image-create --property hw_disk_bus=ide --property hw_cdrom_bus=ide --property hw_vif_model=rtl8139 --disk-format raw --container-format bare --name gnu-hurd --copy-from http://people.debian.org/~sthibault/hurd-i386/debian-hurd.img
diff --git a/open_issues/cloud.mdwn b/open_issues/cloud.mdwn
deleted file mode 100644
index 58ed2f5b..00000000
--- a/open_issues/cloud.mdwn
+++ /dev/null
@@ -1,49 +0,0 @@
-[[!meta copyright="Copyright © 2013 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-Some *cloud*y things.
-
-[[!toc]]
-
-
-# [[!wikipedia OpenStack]]
-
-## IRC, freenode, #hurd, 2013-09-21
-
- <jproulx> Hmmm, was hoping to run hurd on my kvm based openstack cloud, but
- no virtio.
- <jproulx> I see "Write virtio drivers for KVM. Ideally they would be
- userland" is listed as a "small hack", as a sysadmin rather than an OS
- hacker it doesn't sound small to me, but if there's some standard
- documentation on porting drivers I could take a run at it.
- <youpi> well, perhaps "small" is not the proper word
- <youpi> compared to e.g. revamping disk i/o :)
- <youpi> it's not something one can achieve in e.g. 1h, for instance
- <youpi> it's not something straightforward either, one has to get
- documentation about virtio (I don't know what exists), and get
- documentation about the mach device interface (that's in the gnumach
- manual, the devnode translator can be used as a skeleton)
- <youpi> jproulx: openstack imposes the use of virtio drivers? that's odd
- <jproulx> that's more like I'd expect. I there's enough search terms in
- your response for me to see what's really involved
- <jproulx> youpi it doesn't impose that but it is how mine is configured the
- other thousand VMs are happier that way.
- <jproulx> I can look at that side too and see if I need to have everything
- use the same device settings or if I can control it per instance
- <jproulx> A bit of a non-sequitur at this point but just in case someone
- searches the transcripts and sees my questions about hurd on openstack,
- yes it is possible to specify non-virtio devices per image, here's the
- commandline to load sthibault's qemu image into openstack with devices
- that work:
- <jproulx> glance image-create --property hw_disk_bus=ide --property
- hw_cdrom_bus=ide --property hw_vif_model=rtl8139 --disk-format raw
- --container-format bare --name gnu-hurd --copy-from
- http://people.debian.org/~sthibault/hurd-i386/debian-hurd.img
- <youpi> jproulx: thanks, I've pushed it on the wiki
diff --git a/open_issues/dde.mdwn b/open_issues/dde.mdwn
index 9cb31d1c..fe9fd8aa 100644
--- a/open_issues/dde.mdwn
+++ b/open_issues/dde.mdwn
@@ -602,187 +602,4 @@ In context of [[libpthread]].
partitions/media...
-# virtio
-
-
-## IRC, freenode, #hurd, 2012-07-01
-
- <braunr> hm, i haven't looked but, does someone know if virtio is included
- in netdde ?
- <youpi> braunr: nope, there's an underlying virtio layer needed before
-
-
-## IRC, freenode, #hurd, 2013-07-24
-
- <teythoon> btw, I'd love to see libvirt support in hurd
- <teythoon> I tried to hack up a dde based net translator
- <teythoon> afaics they are very much like any other pci device, so the
- infrastructure should be there
- <teythoon> if anything I expect the libvirt stuff to be more easily
- portable
- <youpi> what do you mean by "a dde based net translator" ?
- <youpi> ah, you mean virtio support in netdde ?
- <teythoon> yes
- <teythoon> virtio net is present in the kernel version we use for the dde
- drivers
- <teythoon> so I just copied the dde driver over, but I had no luck
- compiling it
- <youpi> ok, but what would be the benefice over e1000 & co?
- <teythoon> any of the dde drivers btw
- <teythoon> youpi: less overhead
- <youpi> e1000 is already low overhead actually
- <youpi> there are less and less differences in strategies for driving a
- real board, and a virtual one
- <youpi> we are seeing shared memory request buffer, dma, etc. in real
- boards
- <youpi> which ends up being almost exactly what virtio does :)
- <youpi> ahci, for instance, really looks extremely like a virtio interface
- <youpi> (I know, it's a disk, but that's the same idea, and I do know what
- I'm talking about here :) )
- <teythoon> that would actually be my next wish, a virtio disk driver, and
- virt9p ;)
- <braunr> on the other hand, i wouldn't spend much time on a virtio disk
- driver for now
- <braunr> the hurd as it is can't boot on a device that isn't managed by the
- kernel
- <braunr> we'd need to change the boot protocol
- <teythoon> ok, I wasn't planning to, just wanted to see if I can easily
- hack up the virtio-net translator
- <braunr> well, as youpi pointed, there is little benefit to that as well
- <braunr> but if that's what you find fun, help yourself :)
- <teythoon> I didn't know that, I assumed there was some value to the virtio
- stuff
- <braunr> there is
- <braunr> but relatively to other improvements, it's low
-
-
-## IRC, freenode, #hurd, 2013-09-14
-
- <rekado> I'm slowly beginning to understand the virtio driver framework
- after reading Rusty's virtio paper and the Linux sources of a few virtio
- drivers.
- <rekado> Has anyone started working on virtio drivers yet?
- <youpi> rekado: nobody has worked on virtio drivers, as I know of
- <rekado> youpi: I'm still having a hard time figuring out where virtio
- would fit in in the hurd.
- <rekado> I'm afraid I don't understand how drivers in the hurd work at all.
- Will part of this have to be implemented in Mach?
- <youpi> rekado: it could be implemented either as a Mach driver, or as a
- userland driver
- <youpi> better try the second alternative
- <youpi> i.e. as a translator
- <youpi> sitting on e.g. /dev/eth0 or /dev/hd0
-
-
-## IRC, freenode, #hurd, 2013-09-18
-
- <rekado> To get started with virtio I'd like to write a simple driver for
- the entropy device which appears as a PCI device when running qemu with
- -device virtio-rng-pci .
- <braunr> why entropy ?
- <rekado> because it's the easiest.
- <braunr> is it ?
- <braunr> the driver itself may be, but integrating it within the system
- probably isn't
- <rekado> It uses the virtio framework but only really consists of a
- read-only buffer virtqueue
- <braunr> you're likely to want something that can be part of an already
- existing subsystem like networking
- <rekado> All the driver has to do is push empty buffers onto the queue and
- pass the data it receives back from the host device to the client
- <rekado> The thing about existing subsystems is: I don't really understand
- them enough.
- <rekado> I understand virtio, though.
- <braunr> but isn't your goal understanding at least one ?
- <rekado> yes.
- <braunr> then i suggest working on virtio-net
- <braunr> and making it work in netdde
- <rekado> But to write a virtio driver for network I must first understand
- how to actually talk to the host virtio driver/device.
- <braunr> rekado: why ?
- <rekado> There is still a knowledge gap between what I know about virtio
- and what I have learned about the Hurd/Mach.
- <braunr> are you trying to learn about virtio or the hurd ?
- <rekado> both, because I'd like to write virtio drivers for the hurd.
- <braunr> hm no
- <rekado> with virtio drivers pass buffers to queues and then notify the
- host.
- <braunr> you may want it, but it's not what's best for the project
- <rekado> oh.
- <braunr> what's best is reusing existing drivers
- <braunr> we're much too far from having enough manpower to maintain our own
- <rekado> you mean porting the linux virtio drivers?
- <braunr> there already is a virtio-net driver in linux 2.6
- <braunr> so yes, reuse it
- <braunr> the only thing which might be worth it is a gnumach in-kernel
- driver for virtio block devices
- <braunr> because currently, we need our boot devices to be supported by the
- kernel itself ...
- <rekado> when I boot the hurd with qemu and the entropy device I see it as
- an unknown PCI device in the output of lspci.
- <braunr> that's just the lspci database which doesn't know it
- <rekado> Well, does this mean that I could actually talk to the device
- already? E.g., through libpciaccess?
- <rekado> I'm asking because I don't understand how exactly devices "appear"
- on the Hurd.
- <braunr> it's one of the most difficult topic currently
- <braunr> you probably can talk to the device, yes
- <braunr> but there are issues with pci arbitration
- * rekado takes notes: "pci arbitration"
- <rekado> so, this is about coordinating bus access, right?
- <braunr> yes
- <braunr> i'm not a pci expert so i can't tell you much more
- <rekado> heh, okay.
- <rekado> what kind of "issues with pci arbitration" are you referring to,
- though?
- <rekado> Is this due to something that Mach isn't doing?
- <braunr> ideally, mach doesn't know about pci
- <braunr> the fact we still need in-kernel drivers for pci devices is a big
- problem
- <braunr> we may need something like a pci server in userspace
- <braunr> on l4 system it's called an io server
- <rekado> How do in-kernel drivers avoid these issues?
- <braunr> they don't
- <rekado> Or rather: why is it they don't have these issues?
- <braunr> they do
- <rekado> oh.
- <braunr> we had it when youpi added the sata driver
- <braunr> so currently, all drivers need to avoid sharing common interrupts
- for example
- <braunr> again, since i'm not an expert about pci, i don't know more about
- the details
- <Hooligan0> pci arbitrations are made by hardware ... no ?
- <braunr> Hooligan0: i don't know
- <braunr> i'm not merely talking about bus mastering here
- <braunr> simply preventing drivers from mapping the same physical memory
- should be enforced somewhere
- <braunr> i'm not sure it is
- <braunr> same for irq sharing
- <Hooligan0> braunr : is the support for boot devices into the kernel is
- really needed if a loader put servers into the memory before starting
- mach ?
- <braunr> Hooligan0: there is a chicken-and-egg problem during boot,
- whatever the solution
- <braunr> obviously, we can preload from memory, but then you really want
- your root file system to use a disk
- <braunr> Hooligan0: the problem with preloading from memory is that you
- want the root file system to use a real device
- <braunr> the same way / refers to one on unix
- <braunr> so you have an actual, persistent hierarchy from which the system
- can be initialized and translators started
- <braunr> you also want to share as much as possible between the early
- programs and the others
- <braunr> so for example, both the disk driver and the root file system
- should be able to use the same libc instance
- <braunr> this requires a "switch root" mechanism that needs to be well
- defined and robust
- <braunr> otherwise we'd just build our drivers and root fs statically
- <braunr> (which is currently done with rootfs actually)
- <braunr> and this isn't something we're comfortable with
- <braunr> so for now, in-kernel drivers
- <Hooligan0> humm ... disk driver and libc ... i see
- <Hooligan0> in other way ... disk drivers can use only a little number of
- lib* functions ; so with a static version, a bit of memory is lots
- <Hooligan0> s/lots/lost
- <Hooligan0> and maybe the driver can be hot-replaced after boot (ok ok,
- it's more simple to say than to write)
+# [[virtio]]
diff --git a/open_issues/user-space_device_drivers.mdwn b/open_issues/user-space_device_drivers.mdwn
index be77f8e1..d6c33d30 100644
--- a/open_issues/user-space_device_drivers.mdwn
+++ b/open_issues/user-space_device_drivers.mdwn
@@ -205,6 +205,8 @@ A similar problem is described in
kernel
<braunr> we'd need to change the boot protocol
+[[virtio]].
+
#### IRC, freenode, #hurd, 2013-06-28
diff --git a/open_issues/virtio.mdwn b/open_issues/virtio.mdwn
new file mode 100644
index 00000000..8298cbfe
--- /dev/null
+++ b/open_issues/virtio.mdwn
@@ -0,0 +1,208 @@
+[[!meta copyright="Copyright © 2010, 2011, 2012, 2013 Free Software Foundation,
+Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_hurd open_issue_gnumach]]
+
+
+# IRC, freenode, #hurd, 2012-07-01
+
+In context of [[DDE]].
+
+ <braunr> hm, i haven't looked but, does someone know if virtio is included
+ in netdde ?
+ <youpi> braunr: nope, there's an underlying virtio layer needed before
+
+
+# IRC, freenode, #hurd, 2013-07-24
+
+ <teythoon> btw, I'd love to see libvirt support in hurd
+ <teythoon> I tried to hack up a dde based net translator
+ <teythoon> afaics they are very much like any other pci device, so the
+ infrastructure should be there
+ <teythoon> if anything I expect the libvirt stuff to be more easily
+ portable
+ <youpi> what do you mean by "a dde based net translator" ?
+ <youpi> ah, you mean virtio support in netdde ?
+ <teythoon> yes
+ <teythoon> virtio net is present in the kernel version we use for the dde
+ drivers
+ <teythoon> so I just copied the dde driver over, but I had no luck
+ compiling it
+ <youpi> ok, but what would be the benefice over e1000 & co?
+ <teythoon> any of the dde drivers btw
+ <teythoon> youpi: less overhead
+ <youpi> e1000 is already low overhead actually
+ <youpi> there are less and less differences in strategies for driving a
+ real board, and a virtual one
+ <youpi> we are seeing shared memory request buffer, dma, etc. in real
+ boards
+ <youpi> which ends up being almost exactly what virtio does :)
+ <youpi> ahci, for instance, really looks extremely like a virtio interface
+ <youpi> (I know, it's a disk, but that's the same idea, and I do know what
+ I'm talking about here :) )
+ <teythoon> that would actually be my next wish, a virtio disk driver, and
+ virt9p ;)
+ <braunr> on the other hand, i wouldn't spend much time on a virtio disk
+ driver for now
+ <braunr> the hurd as it is can't boot on a device that isn't managed by the
+ kernel
+ <braunr> we'd need to change the boot protocol
+ <teythoon> ok, I wasn't planning to, just wanted to see if I can easily
+ hack up the virtio-net translator
+ <braunr> well, as youpi pointed, there is little benefit to that as well
+ <braunr> but if that's what you find fun, help yourself :)
+ <teythoon> I didn't know that, I assumed there was some value to the virtio
+ stuff
+ <braunr> there is
+ <braunr> but relatively to other improvements, it's low
+
+
+# IRC, freenode, #hurd, 2013-09-14
+
+ <rekado> I'm slowly beginning to understand the virtio driver framework
+ after reading Rusty's virtio paper and the Linux sources of a few virtio
+ drivers.
+ <rekado> Has anyone started working on virtio drivers yet?
+ <youpi> rekado: nobody has worked on virtio drivers, as I know of
+ <rekado> youpi: I'm still having a hard time figuring out where virtio
+ would fit in in the hurd.
+ <rekado> I'm afraid I don't understand how drivers in the hurd work at all.
+ Will part of this have to be implemented in Mach?
+ <youpi> rekado: it could be implemented either as a Mach driver, or as a
+ userland driver
+ <youpi> better try the second alternative
+ <youpi> i.e. as a translator
+ <youpi> sitting on e.g. /dev/eth0 or /dev/hd0
+
+
+## IRC, freenode, #hurd, 2013-09-18
+
+ <rekado> To get started with virtio I'd like to write a simple driver for
+ the entropy device which appears as a PCI device when running qemu with
+ -device virtio-rng-pci .
+ <braunr> why entropy ?
+ <rekado> because it's the easiest.
+ <braunr> is it ?
+ <braunr> the driver itself may be, but integrating it within the system
+ probably isn't
+ <rekado> It uses the virtio framework but only really consists of a
+ read-only buffer virtqueue
+ <braunr> you're likely to want something that can be part of an already
+ existing subsystem like networking
+ <rekado> All the driver has to do is push empty buffers onto the queue and
+ pass the data it receives back from the host device to the client
+ <rekado> The thing about existing subsystems is: I don't really understand
+ them enough.
+ <rekado> I understand virtio, though.
+ <braunr> but isn't your goal understanding at least one ?
+ <rekado> yes.
+ <braunr> then i suggest working on virtio-net
+ <braunr> and making it work in netdde
+ <rekado> But to write a virtio driver for network I must first understand
+ how to actually talk to the host virtio driver/device.
+ <braunr> rekado: why ?
+ <rekado> There is still a knowledge gap between what I know about virtio
+ and what I have learned about the Hurd/Mach.
+ <braunr> are you trying to learn about virtio or the hurd ?
+ <rekado> both, because I'd like to write virtio drivers for the hurd.
+ <braunr> hm no
+ <rekado> with virtio drivers pass buffers to queues and then notify the
+ host.
+ <braunr> you may want it, but it's not what's best for the project
+ <rekado> oh.
+ <braunr> what's best is reusing existing drivers
+ <braunr> we're much too far from having enough manpower to maintain our own
+ <rekado> you mean porting the linux virtio drivers?
+ <braunr> there already is a virtio-net driver in linux 2.6
+ <braunr> so yes, reuse it
+ <braunr> the only thing which might be worth it is a gnumach in-kernel
+ driver for virtio block devices
+ <braunr> because currently, we need our boot devices to be supported by the
+ kernel itself ...
+ <rekado> when I boot the hurd with qemu and the entropy device I see it as
+ an unknown PCI device in the output of lspci.
+ <braunr> that's just the lspci database which doesn't know it
+ <rekado> Well, does this mean that I could actually talk to the device
+ already? E.g., through libpciaccess?
+ <rekado> I'm asking because I don't understand how exactly devices "appear"
+ on the Hurd.
+ <braunr> it's one of the most difficult topic currently
+ <braunr> you probably can talk to the device, yes
+ <braunr> but there are issues with pci arbitration
+ * rekado takes notes: "pci arbitration"
+ <rekado> so, this is about coordinating bus access, right?
+ <braunr> yes
+ <braunr> i'm not a pci expert so i can't tell you much more
+ <rekado> heh, okay.
+ <rekado> what kind of "issues with pci arbitration" are you referring to,
+ though?
+ <rekado> Is this due to something that Mach isn't doing?
+ <braunr> ideally, mach doesn't know about pci
+ <braunr> the fact we still need in-kernel drivers for pci devices is a big
+ problem
+ <braunr> we may need something like a pci server in userspace
+ <braunr> on l4 system it's called an io server
+ <rekado> How do in-kernel drivers avoid these issues?
+ <braunr> they don't
+ <rekado> Or rather: why is it they don't have these issues?
+ <braunr> they do
+ <rekado> oh.
+ <braunr> we had it when youpi added the sata driver
+ <braunr> so currently, all drivers need to avoid sharing common interrupts
+ for example
+ <braunr> again, since i'm not an expert about pci, i don't know more about
+ the details
+ <Hooligan0> pci arbitrations are made by hardware ... no ?
+ <braunr> Hooligan0: i don't know
+ <braunr> i'm not merely talking about bus mastering here
+ <braunr> simply preventing drivers from mapping the same physical memory
+ should be enforced somewhere
+ <braunr> i'm not sure it is
+ <braunr> same for irq sharing
+ <Hooligan0> braunr : is the support for boot devices into the kernel is
+ really needed if a loader put servers into the memory before starting
+ mach ?
+ <braunr> Hooligan0: there is a chicken-and-egg problem during boot,
+ whatever the solution
+ <braunr> obviously, we can preload from memory, but then you really want
+ your root file system to use a disk
+ <braunr> Hooligan0: the problem with preloading from memory is that you
+ want the root file system to use a real device
+ <braunr> the same way / refers to one on unix
+ <braunr> so you have an actual, persistent hierarchy from which the system
+ can be initialized and translators started
+ <braunr> you also want to share as much as possible between the early
+ programs and the others
+ <braunr> so for example, both the disk driver and the root file system
+ should be able to use the same libc instance
+ <braunr> this requires a "switch root" mechanism that needs to be well
+ defined and robust
+ <braunr> otherwise we'd just build our drivers and root fs statically
+ <braunr> (which is currently done with rootfs actually)
+ <braunr> and this isn't something we're comfortable with
+ <braunr> so for now, in-kernel drivers
+ <Hooligan0> humm ... disk driver and libc ... i see
+ <Hooligan0> in other way ... disk drivers can use only a little number of
+ lib* functions ; so with a static version, a bit of memory is lots
+ <Hooligan0> s/lots/lost
+ <Hooligan0> and maybe the driver can be hot-replaced after boot (ok ok,
+ it's more simple to say than to write)
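The buffer-passing described above ("push empty buffers onto the queue", "pass buffers to queues and then notify the host") happens through three shared rings. A minimal sketch of the legacy virtqueue memory layout from the virtio specification — this is the on-the-wire ABI any eventual Hurd driver would have to speak, not existing Hurd code:

```c
#include <assert.h>
#include <stdint.h>

/* Descriptor flags (virtio specification): NEXT chains descriptors,
 * WRITE marks a buffer the device (host) writes into -- e.g. the
 * "empty buffers" a virtio-rng driver posts for entropy. */
#define VRING_DESC_F_NEXT  1
#define VRING_DESC_F_WRITE 2

/* One descriptor: a guest-physical buffer address plus length. */
struct vring_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* buffer length in bytes */
    uint16_t flags;  /* VRING_DESC_F_* */
    uint16_t next;   /* index of the next descriptor in a chain */
};

/* Driver -> device ring: indices of descriptor chains the guest has
 * made available. */
struct vring_avail {
    uint16_t flags;
    uint16_t idx;     /* free-running counter, written by the driver */
    uint16_t ring[];  /* head descriptor indices */
};

/* Device -> driver ring: chains the host has consumed, with the
 * number of bytes it wrote back. */
struct vring_used_elem {
    uint32_t id;   /* head index of the consumed descriptor chain */
    uint32_t len;  /* bytes written by the device */
};

struct vring_used {
    uint16_t flags;
    uint16_t idx;  /* free-running counter, written by the device */
    struct vring_used_elem ring[];
};
```

The specification fixes these sizes exactly (a descriptor is 16 bytes, a used-ring element 8), so a driver on any system, Hurd included, must lay them out identically in the memory it shares with the host.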
+
+
+<a name="KVM"></a>
+# Virtio Drivers for KVM
+
+In context of [[hurd/running/cloud]], *OpenStack*.
+
+Ideally, these would be userland drivers. That means finding documentation on
+how virtio works, and implementing it. The hurdish part is mostly about
+exposing the driver interface. The [[hurd/translator/devnode]] translator can
+be used as a skeleton.
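Beyond exposing the driver interface, such a translator would have to perform the device-status handshake the virtio specification defines. A sketch of just that part: the status bits are from the specification (legacy interface); the constant names and helper functions are illustrative, not an existing API, and the real register writes would go through the device's PCI I/O space, e.g. via libpciaccess:

```c
#include <stdint.h>

/* Device status bits, accumulated in the device's status register
 * during initialization (virtio specification, legacy interface). */
#define VIRTIO_STATUS_ACKNOWLEDGE 1    /* guest has noticed the device */
#define VIRTIO_STATUS_DRIVER      2    /* guest knows how to drive it */
#define VIRTIO_STATUS_DRIVER_OK   4    /* driver is ready */
#define VIRTIO_STATUS_FAILED      0x80 /* driver gives up */

/* Hypothetical helpers: the successive values a driver writes to the
 * status register as initialization proceeds.  Between the DRIVER and
 * DRIVER_OK writes the driver reads the device's feature bits and sets
 * up its virtqueues. */
static inline uint8_t virtio_status_after_ack(void)
{
    return VIRTIO_STATUS_ACKNOWLEDGE;
}

static inline uint8_t virtio_status_after_driver(void)
{
    return VIRTIO_STATUS_ACKNOWLEDGE | VIRTIO_STATUS_DRIVER;
}

static inline uint8_t virtio_status_ready(void)
{
    return VIRTIO_STATUS_ACKNOWLEDGE | VIRTIO_STATUS_DRIVER
         | VIRTIO_STATUS_DRIVER_OK;
}
```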