From 49a086299e047b18280457b654790ef4a2e5abfa Mon Sep 17 00:00:00 2001
From: Samuel Thibault
Date: Wed, 18 Feb 2015 00:58:35 +0100
Subject: Revert "rename open_issues.mdwn to service_solahart_jakarta_selatan__082122541663.mdwn"

This reverts commit 95878586ec7611791f4001a4ee17abf943fae3c1.
---
 open_issues/virtio.mdwn | 208 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 208 insertions(+)
 create mode 100644 open_issues/virtio.mdwn

(limited to 'open_issues/virtio.mdwn')

diff --git a/open_issues/virtio.mdwn b/open_issues/virtio.mdwn
new file mode 100644
index 00000000..8298cbfe
--- /dev/null
+++ b/open_issues/virtio.mdwn
@@ -0,0 +1,208 @@
[[!meta copyright="Copyright © 2010, 2011, 2012, 2013 Free Software Foundation,
Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]

[[!tag open_issue_hurd open_issue_gnumach]]


# IRC, freenode, #hurd, 2012-07-01

In context of [[DDE]].

    hm, i haven't looked but, does someone know if virtio is included
      in netdde ?
    braunr: nope, there's an underlying virtio layer needed before


# IRC, freenode, #hurd, 2013-07-24

    btw, I'd love to see libvirt support in hurd
    I tried to hack up a dde based net translator
    afaics they are very much like any other pci device, so the
      infrastructure should be there
    if anything I expect the libvirt stuff to be more easily
      portable
    what do you mean by "a dde based net translator" ?
    ah, you mean virtio support in netdde ?
    yes
    virtio net is present in the kernel version we use for the dde
      drivers
    so I just copied the dde driver over, but I had no luck
      compiling it
    ok, but what would be the benefit over e1000 & co?
    any of the dde drivers btw
    youpi: less overhead
    e1000 is already low overhead actually
    there are fewer and fewer differences in strategies for driving a
      real board and a virtual one
    we are seeing shared memory request buffers, dma, etc. in real
      boards
    which ends up being almost exactly what virtio does :)
    ahci, for instance, really looks extremely like a virtio interface
    (I know, it's a disk, but that's the same idea, and I do know what
      I'm talking about here :) )
    that would actually be my next wish, a virtio disk driver, and
      virt9p ;)
    on the other hand, i wouldn't spend much time on a virtio disk
      driver for now
    the hurd as it is can't boot on a device that isn't managed by the
      kernel
    we'd need to change the boot protocol
    ok, I wasn't planning to, just wanted to see if I can easily
      hack up the virtio-net translator
    well, as youpi pointed out, there is little benefit to that as well
    but if that's what you find fun, help yourself :)
    I didn't know that, I assumed there was some value to the virtio
      stuff
    there is
    but relative to other improvements, it's low


# IRC, freenode, #hurd, 2013-09-14

    I'm slowly beginning to understand the virtio driver framework
      after reading Rusty's virtio paper and the Linux sources of a few virtio
      drivers.
    Has anyone started working on virtio drivers yet?
    rekado: nobody has worked on virtio drivers, as far as I know
    youpi: I'm still having a hard time figuring out where virtio
      would fit in in the hurd.
    I'm afraid I don't understand how drivers in the hurd work at all.
    Will part of this have to be implemented in Mach?
    rekado: it could be implemented either as a Mach driver, or as a
      userland driver
    better try the second alternative
    i.e.
    as a translator
      sitting on e.g. /dev/eth0 or /dev/hd0


## IRC, freenode, #hurd, 2013-09-18

    To get started with virtio I'd like to write a simple driver for
      the entropy device which appears as a PCI device when running qemu with
      -device virtio-rng-pci .
    why entropy ?
    because it's the easiest.
    is it ?
    the driver itself may be, but integrating it within the system
      probably isn't
    It uses the virtio framework but only really consists of a
      read-only buffer virtqueue
    you're likely to want something that can be part of an already
      existing subsystem like networking
    All the driver has to do is push empty buffers onto the queue and
      pass the data it receives back from the host device to the client
    The thing about existing subsystems is: I don't really understand
      them enough.
    I understand virtio, though.
    but isn't your goal understanding at least one ?
    yes.
    then i suggest working on virtio-net
    and making it work in netdde
    But to write a virtio driver for network I must first understand
      how to actually talk to the host virtio driver/device.
    rekado: why ?
    There is still a knowledge gap between what I know about virtio
      and what I have learned about the Hurd/Mach.
    are you trying to learn about virtio or the hurd ?
    both, because I'd like to write virtio drivers for the hurd.
    hm no
    with virtio, drivers pass buffers to queues and then notify the
      host.
    you may want it, but it's not what's best for the project
    oh.
    what's best is reusing existing drivers
    we're much too far from having enough manpower to maintain our own
    you mean porting the linux virtio drivers?
    there already is a virtio-net driver in linux 2.6
    so yes, reuse it
    the only thing which might be worth it is a gnumach in-kernel
      driver for virtio block devices
    because currently, we need our boot devices to be supported by the
      kernel itself ...
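The exchange above describes the essence of a virtio driver: post empty, device-writable buffers on a queue and collect them once the device has filled them. As an illustration only, here is a minimal sketch of the split-virtqueue structures from the virtio specification, with a hypothetical helper `post_empty_buffer` of our own invention; this is not the actual netdde or gnumach code, and a real driver would additionally need PCI configuration access and device notification.

```c
#include <stdint.h>

#define VRING_DESC_F_WRITE 2   /* device writes into this buffer */

struct vring_desc {            /* one buffer descriptor */
    uint64_t addr;             /* guest-physical address of the buffer */
    uint32_t len;
    uint16_t flags;
    uint16_t next;             /* descriptor chaining; unused here */
};

struct vring_avail {           /* driver -> device ring */
    uint16_t flags;
    uint16_t idx;              /* next free slot in ring[] */
    uint16_t ring[];
};

/* Post one empty, device-writable buffer, e.g. for virtio-rng: the
 * device fills it with random bytes and hands it back on the used
 * ring (not shown). */
void
post_empty_buffer(struct vring_desc *desc, struct vring_avail *avail,
                  uint16_t queue_size, uint64_t buf_addr, uint32_t buf_len)
{
    uint16_t head = avail->idx % queue_size;

    desc[head].addr  = buf_addr;
    desc[head].len   = buf_len;
    desc[head].flags = VRING_DESC_F_WRITE;
    desc[head].next  = 0;

    avail->ring[head] = head;
    __sync_synchronize();      /* descriptor must be visible first */
    avail->idx++;              /* then publish it to the device */
    /* A real driver would now notify the device (e.g. an I/O port
     * write for legacy virtio-pci). */
}
```

The memory barrier before incrementing `idx` mirrors the ordering requirement in the specification: the device may start processing as soon as it observes the new index.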
    when I boot the hurd with qemu and the entropy device I see it as
      an unknown PCI device in the output of lspci.
    that's just the lspci database which doesn't know it
    Well, does this mean that I could actually talk to the device
      already? E.g., through libpciaccess?
    I'm asking because I don't understand how exactly devices "appear"
      on the Hurd.
    it's one of the most difficult topics currently
    you probably can talk to the device, yes
    but there are issues with pci arbitration
    * rekado takes notes: "pci arbitration"
    so, this is about coordinating bus access, right?
    yes
    i'm not a pci expert so i can't tell you much more
    heh, okay.
    what kind of "issues with pci arbitration" are you referring to,
      though?
    Is this due to something that Mach isn't doing?
    ideally, mach doesn't know about pci
    the fact we still need in-kernel drivers for pci devices is a big
      problem
    we may need something like a pci server in userspace
    on l4 systems it's called an io server
    How do in-kernel drivers avoid these issues?
    they don't
    Or rather: why is it they don't have these issues?
    they do
    oh.
    we had it when youpi added the sata driver
    so currently, all drivers need to avoid sharing common interrupts,
      for example
    again, since i'm not an expert about pci, i don't know more about
      the details
    isn't pci arbitration done by hardware ... no ?
    Hooligan0: i don't know
    i'm not merely talking about bus mastering here
    simply preventing drivers from mapping the same physical memory
      should be enforced somewhere
    i'm not sure it is
    same for irq sharing
    braunr : is support for boot devices in the kernel really needed
      if a loader puts servers into memory before starting mach ?
    Hooligan0: there is a chicken-and-egg problem during boot,
      whatever the solution
    obviously, we can preload from memory, but then you really want
      your root file system to use a disk
    Hooligan0: the problem with preloading from memory is that you
      want the root file system to use a real device
    the same way / refers to one on unix
    so you have an actual, persistent hierarchy from which the system
      can be initialized and translators started
    you also want to share as much as possible between the early
      programs and the others
    so for example, both the disk driver and the root file system
      should be able to use the same libc instance
    this requires a "switch root" mechanism that needs to be well
      defined and robust
    otherwise we'd just build our drivers and root fs statically
      (which is currently done with rootfs actually)
    and this isn't something we're comfortable with
    so for now, in-kernel drivers
    humm ... disk driver and libc ... i see
    put another way ... disk drivers use only a small number of
      lib* functions ; so with a static version, only a bit of memory is lots
    s/lots/lost
    and maybe the driver can be hot-replaced after boot (ok ok,
      it's simpler to say than to do)



# Virtio Drivers for KVM

In context of [[hurd/running/cloud]], *OpenStack*.

Ideally they would be userland. That means getting documentation about how
virtio works, and implementing it. The hurdish part is mostly about exposing
the driver interface. The [[hurd/translator/devnode]] translator can be used
as a skeleton.
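A first step for such a userland driver is recognizing virtio functions on the PCI bus: the "unknown PCI device" seen under lspci in the transcript above. The following sketch shows only the ID decoding; the constants come from the virtio specification (vendor 0x1af4, transitional device IDs 0x1000-0x103f with the type in the subsystem device ID, virtio 1.0 device IDs 0x1040 + type), while the helper names are our own and no bus-scanning library is assumed.

```c
#include <stdint.h>

#define VIRTIO_PCI_VENDOR 0x1af4   /* Red Hat / Qumranet */

/* Map a PCI (vendor, device, subsystem-device) triple to a virtio
 * device type number, or return -1 if it is not a virtio function. */
int
virtio_device_type(uint16_t vendor, uint16_t device, uint16_t subsys_device)
{
    if (vendor != VIRTIO_PCI_VENDOR)
        return -1;
    if (device >= 0x1040 && device <= 0x107f)   /* virtio 1.0 ("modern") */
        return device - 0x1040;
    if (device >= 0x1000 && device <= 0x103f)   /* transitional/legacy */
        return subsys_device;
    return -1;
}

/* A few virtio device type numbers from the specification. */
const char *
virtio_type_name(int type)
{
    switch (type) {
    case 1:  return "network";
    case 2:  return "block";
    case 4:  return "entropy (rng)";
    default: return "unknown";
    }
}
```

On the Hurd side, a translator would obtain these IDs from PCI configuration space (for instance via libpciaccess, as suggested in the transcript) and then expose the device under a node such as /dev/eth0, using devnode as the skeleton.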