From 5bd36fdff16871eb7d06fc26cac07e7f2703432b Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Thu, 29 Nov 2012 01:33:22 +0100
Subject: IRC.

---
 open_issues/user-space_device_drivers.mdwn | 428 +++++++++++++++++++++++++++++
 1 file changed, 428 insertions(+)

diff --git a/open_issues/user-space_device_drivers.mdwn b/open_issues/user-space_device_drivers.mdwn
index 25168fce..8cde8281 100644
--- a/open_issues/user-space_device_drivers.mdwn
+++ b/open_issues/user-space_device_drivers.mdwn
@@ -50,6 +50,65 @@ Also see [[device drivers and IO systems]].

* I/O MMU.


### IRC, freenode, #hurd, 2012-08-15

    hi. does hurd support mesa?
    carli2: software only, but yes
    :(
    so you did not solve the problem with the CS checkers and GPU DMA
      for microkernels yet, right?
    cs = ?
    control stream
    the data sent to the gpu
    no
    and to be honest we're not currently trying to
    well, a microkernel containing cs checkers for each hardware is
      not a microkernel any more
    the problem is having the ability to check
    or rather, giving only what's necessary to delegate checking to
      mmus
    but maybe the kernel could have a smaller interface like a
      function to check if a memory block is owned by a process
    i'm not sure what you refer to
    about DMA-capable devices you can send messages to
    carli2: dma must be delegated to a trusted server
    linux checks the data sent to these devices, parses them and
      checks all pointers if they are in a memory range that the client is
      allowed to read/write from
    the client ?
    in linux, 3d drivers are in user space, so the kernel side checks
      the pointer sent to the GPU
    carli2: mach could do that as well
    well, there is a rather large part in kernel space too
    so in hurd I trust some drivers to not do evil things?
    those in the kernel yes
    what does "in the kernel" mean? afaik a microkernel only has
      memory manager and some basic memory sharing and messaging functionality
    did you read about the hurd ?
    mach is considered a hybrid kernel, not a true microkernel
    even with all drivers outside, it's still a hybrid
    although we're to move some parts into userland :)
    braunr: ah, why?
    youpi: the vm part is too large
    ok
    the microkernel dogma is no policy inside the kernel
    "except scheduling because it's very complicated"
    but all modern systems have moved memory management outside the
      kernel, leaving just the kernel abstraction inside
    the address space kernel abstraction
    and the two components required to make it work are what l4re
      calls region mappers (the rough equivalent of our vm_map), which decides
      how to allocate regions in an address space
    and the pager, like ours, which are already external
    i'm not an OS developer, i mostly develop games, web services and
      sometimes I fix gpu drivers
    that was just FYI
    but yes, dma must be considered something privileged
    and the hurd doesn't have the infrastructure you seem to be
      looking for
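To make the checking discussed above concrete, here is a minimal C sketch of the kind of validation a trusted DMA server has to perform before submitting a client's command stream to the GPU. The four-word command layout and all identifiers are invented for illustration; a real checker (such as those in Linux's DRM drivers) has to parse the actual hardware command set.

    /* Hypothetical sketch: validate a client's GPU command stream so
       that every pointer it submits for DMA stays inside buffers the
       client legitimately owns.  */

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    struct gpu_buffer
    {
      uint64_t gpu_addr;                /* address as seen by the GPU */
      uint64_t size;
    };

    struct client
    {
      const struct gpu_buffer *buffers; /* buffers this client owns */
      size_t nbuffers;
    };

    /* Does [addr, addr + len) lie entirely inside one buffer owned by
       the client?  Written so the arithmetic cannot overflow.  */
    static bool
    client_owns_range (const struct client *c, uint64_t addr, uint64_t len)
    {
      for (size_t i = 0; i < c->nbuffers; i++)
        {
          const struct gpu_buffer *b = &c->buffers[i];
          if (addr >= b->gpu_addr
              && len <= b->size
              && addr - b->gpu_addr <= b->size - len)
            return true;
        }
      return false;
    }

    /* Each (made-up) command is four 32-bit words: opcode, address
       low, address high, length.  Reject the whole stream if any
       pointer escapes the client's buffers.  */
    static bool
    validate_stream (const struct client *c, const uint32_t *cs,
                     size_t nwords)
    {
      for (size_t i = 0; i + 4 <= nwords; i += 4)
        {
          uint64_t addr = ((uint64_t) cs[i + 2] << 32) | cs[i + 1];
          uint64_t len = cs[i + 3];
          if (!client_owns_range (c, addr, len))
            return false;       /* would DMA outside the client's memory */
        }
      return true;              /* safe to hand to the GPU */
    }

An I/O MMU removes the need for most of this, since the device itself can then be confined to the client's buffers; that is why it appears in the list above.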
## I/O Ports

* Security considerations.

@@ -63,8 +122,13 @@ Also see [[device drivers and IO systems]].

* [[GNU Mach|microkernel/mach/gnumach]] is said to have a high overhead when
  doing RPC calls.


## System Boot

A similar problem is described in
[[community/gsoc/project_ideas/unionfs_boot]]; a solution still needs to be
implemented.


### IRC, freenode, #hurd, 2011-07-27

    < braunr> btw, was there any formulation of the modifications required to

@@ -89,12 +153,270 @@ Also see [[device drivers and IO systems]].

    < Tekk_> mhm
    < braunr> s/disk/storage/


### IRC, freenode, #hurd, 2012-04-25

    btw, remember the initrd thing?
    I just came across task.c in libstore/ :)


### IRC, freenode, #hurd, 2012-07-17

    OK, here is a stupid question I have always had. If you move
      PCI and disk drivers into userspace, how do you do initial bootstrap to
      get the system booting?
    that's hard
    basically you make the boot loader load all the components you
      need in ram
    then you make it give each component something (ports) so they can
      communicate
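The "give each component something (ports)" step already has a concrete shape on Mach: every task carries a bootstrap port, set by whoever created it, which for the earliest servers is the kernel's boot-script machinery. As a minimal sketch, assuming the boot protocol is spoken over that port, a freshly started boot-time server would pick it up like this; `task_get_special_port` is the real Mach call, the rest is illustrative:

    /* Sketch: a server loaded as a boot module retrieves the port its
       creator handed it.  What is spoken over the port afterwards is
       defined by the boot protocol, not by Mach itself.  */

    #include <mach.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      mach_port_t bootstrap = MACH_PORT_NULL;
      kern_return_t kr;

      /* Every Mach task has a bootstrap special port; for the first
         servers, the kernel/boot loader decides what it points to.  */
      kr = task_get_special_port (mach_task_self (), TASK_BOOTSTRAP_PORT,
                                  &bootstrap);
      if (kr != KERN_SUCCESS || bootstrap == MACH_PORT_NULL)
        {
          fprintf (stderr, "no bootstrap port (Mach error %d)\n", kr);
          exit (1);
        }

      /* From here on, talk the boot protocol over 'bootstrap' (for
         ordinary Hurd programs this is where exec_startup comes in).  */
      printf ("got bootstrap port %lu\n", (unsigned long) bootstrap);
      return 0;
    }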
### IRC, freenode, #hurd, 2012-08-12

    braunr: so, about booting with userspace disk drivers
    after rereading the chapter in my thesis, I see that there aren't
      really all that many interesting options...
    I pondered some variants involving a temporary boot filesystem
      with handoff to the real root FS; but ultimately concluded with another
      option that is slightly less elegant but probably gets a much better
      usefulness/complexity ratio:
    just start the root filesystem as the first process as we used to;
      only hack it so that initially it doesn't try to access the disk, but
      instead gets the files from GRUB
    once the disk driver is operational, we flip a switch, and the
      root filesystem starts reading stuff from disk normally
    transparently for all other processes
    How does grub access the disk without drivers?
    bddebian: GRUB obviously has its own drivers... that's how it
      loads the kernel and modules
    bddebian: basically, it would have to load additional modules for
      all the components necessary to get the Hurd disk driver going
    Right, why wouldn't that be possible?
    (I have some more crazy ideas too -- but these are mostly
      orthogonal :-) )
    ?
    I'm describing this because I'm pretty sure it *is* possible :-)
    That grub loads the kernel and whatever server/module gets
      access to the disk
    not sure what you mean
    Well as usual I probably don't know the proper terminology but
      why couldn't grub load gnumach and the hurd "disk server" that contains
      the userspace drivers?
    disk server?
    Oh FFS whatever contains the disk drivers :)
    diskdde, whatever :)
    actually, I never liked the idea of having a big driver blob very
      much... ideally each driver should have its own file
    but that's admittedly beside the point :-)
    so to restate: in addition to gnumach, ext2fs.static, and ld.so,
      in the new scenario GRUB will also load exec, the disk driver, any
      libraries these two depend upon, and any additional infrastructure
      involved in getting the disk driver running (for automatic probing or
      whatever)
    probably some other Hurd core servers too, so we can have a more
      complete POSIX environment for the disk driver to run in
    There ya go :)
    the interesting part is modifying ext2fs so it will access only
      the GRUB-provided files, until it is told that it's OK now to access the
      real disk
    (and the mechanism how ext2 actually gets at the GRUB-provided
      files)
    Or write some new really small ext2fs? :)
    ?
    I'm just talking out my butt. Something temporary that gets
      disposed of when the real disk is available :)
    well, I mentioned above that I considered some handoff
      schemes... but they would probably be more complex to implement than
      doing the switchover internally in ext2
    Ah
    boot up in a ramdisk? :)
    (and the temporary FS would *not* be an ext2 obviously, but rather
      some special ramdisk-like filesystem operating from GRUB-loaded files...)
    again, that would require a complicated handoff-scheme
    Bah, what do I know? :)
    (well, you could of course go with a trivial chroot()... but that
      would be ugly and inefficient, as the initial processes would still run
      from the ramdisk)
    Aren't most things running in memory initially anyway? At what
      point must it have access to the real disk?
    antrik: but doesn't that require that disk drivers be statically
      linked ?
    and having all disk drivers in separate tasks (which is what we
      prefer to blobs as you put it) seems to pretty much forbid using static
      linking
    hm actually, i don't see how any solution could work without
      static linking, as it would create a recursion
    and the only one required is the one used by the root file system
    others can be run from the dynamically linked version
    antrik: i agree, it's a good approach, requiring only a slightly
      more complicated boot script/sequence
    bddebian: at some point we have to access the real disk so we
      don't have to work exclusively with stuff loaded by grub... but there is
      no specific point where it *has* to happen. generally speaking, the
      sooner the better
    braunr: why wouldn't that work with a dynamically linked disk
      driver? we only need to make sure all required libraries are loaded by
      grub too
    antrik: i have a problem with that approach :p
    antrik: it would probably require a reboot when those libraries
      are upgraded, wouldn't it ?
    I'd actually wish we could run with a dynamically linked ext2fs as
      well... but that would require a separated boot filesystem and some kind
      of handoff approach, which would be much more complicated I fear...
    and if a driver is restarted, would it use those libraries too ?
    and if so, how to find them ?
    but how can you run a dynamically linked root file system ?
    unless the libraries it uses are provided by something else, as
      you said
    braunr: well, if you upgrade the libraries, *and* want the disk
      driver to use the upgraded libraries, you are obviously in a tricky
      situation ;-)
    yes
    perhaps you could tell ext2 to preload the new libraries before
      restarting the disk driver...
    but that's a minor quibble anyways IMHO
    but that case isn't that important actually, since upgrading these
      libraries usually means we're upgrading the system, which can imply a
      reboot
    i don't think it is
    it looks very complicated to me
    think of restart as after a crash :p
    you can't preload stuff in that case
    uh? I don't see anything particularly complicated. but my point
      was more that it's not a big thing if that's not implemented IMHO
    right
    it's not that important
    but i still think statically linking is better
    although i'm not sure about some details
    oh, you mean how to make the root filesystem use new libraries
      without a reboot? that would be tricky indeed... but this is not possible
      right now either, so that's not a regression
    i assume that, when statically linking, only the .o providing the
      required symbols are included, right ?
    making the root filesystem restartable is a whole different epic
      story ;-)
    antrik: not the root file system, but the disk driver
    but i guess it's the same
    no, it's not
    ah
    for the disk driver it's really not that hard I believe
    still some extra effort, but definitely doable
    with the preload you mentioned
    yes
    i see
    i don't think it's worth the trouble actually
    statically linking looks way simpler and should make for smaller
      binaries than if libraries were loaded by grub
    no, I really don't want statically linked disk drivers
    why ?
    again, I'd prefer even ext2fs to be dynamic -- only that would be
      much more complicated
    the point of dynamically linking is sharing
    while dynamic disk drivers do not require any extra effort beyond
      loading the libraries with grub
    but if it means sharing big files that are seldom used (i assume
      there is a lot of code that simply isn't used by hurd servers), i don't
      see the point
    right. and with the approach I proposed that will work just as it
      should
    err... what big files?
    glibc ?
    I don't get your point
    you prefer statically linking everything needed before the disk
      driver runs (which BTW is much more than only the disk driver itself) to
      using normal shared libraries like the rest of the system?...
    it's not "like the rest of the system"
    the libraries loaded by grub wouldn't be backed by the ext2fs
      server
    they would be wired in memory
    you'd have two copies of them, the one loaded by grub, and the one
      shared by normal executables
    no
    i prefer static linking because, if done correctly, the combined
      size of the root file system and the disk driver should be smaller than
      that of the rootfs+disk driver and libraries loaded by grub
    apparently I was not quite clear how my approach would work :-(
    probably not
    (preventing that is actually the reason why I do *not* want a
      simple boot filesystem+chroot approach)
    and an initramfs can be easily freed after init
    it wouldn't be a chroot but something a bit more involved like
      switch_root in linux
    not if various servers use files provided by that init filesystem
    yes, that's the complex handoff I'm talking about
    yes
    that's one approach
    as I said, that would be a quite elegant approach (allowing a
      dynamically linked ext2); but it would be much more complicated to
      implement I believe
    how would it allow a dynamically linked ext2 ?
    how can the root file system be linked with code backed by itself
      ?
    unless it requires wiring all its memory ?
    it would be loaded from the init filesystem before the handoff
    init isn't the problem here
    i understand how it would boot
    but then, you need to make sure the root fs is never used to
      service page faults on its own address space
    or any address space it depends on, like the disk driver
    so this basically requires wiring all the system libraries, glibc
      included
    why not
    ah. yes, that's something I covered in a separate section in my
      thesis ;-)
    eh :)
    we have to do that anyways, if we want *any* dynamically linked
      components (such as the disk driver) in the paging path
    yes
    and it should make swapping more reliable too
    so that adds a couple MiB of wired memory... I guess we will just
      have to live with that
    yes it seems acceptable
    thanks
    (it is actually one reason why I want to avoid static linking as
      much as possible... so at least we have to wire these libraries only
      *once*)
    anyways, back to my "simpler" approach
    the idea is that a (static) ext2fs would still be the first task
      running, and immediately able to serve filesystem access requests -- only
      it would serve these requests from files preloaded by GRUB rather than
      the actual disk driver
    i understand now
    until a switch is flipped telling it that now the disk driver (and
      anything it depends upon) is operational
    you still need to make sure all this is wired
    yes
    that's orthogonal
    which is why I have a separate section about it :-)
    what was the relation with ggi ?
    none strictly speaking
    i'll rephrase it: how did it end up in your thesis ?
    I just covered all aspects of userspace drivers in one of the
      "introduction" sections of my thesis
    ok
    before going into specifics of KGI
    (and throwing in along the way that most of the issues described
      do not matter for KGI ;-) )
    hehe
    i'm wondering, do we have mlockall on the hurd ? it seems not
    that's something deeply missing in mach
    well, bootstrap in general *is* actually relevant for KGI as well,
      because of console messages during boot... but the filesystem bootstrap
      is mostly irrelevant there ;-)
    braunr: oh? that's a problem then... I just assumed we have it
    well, it's possible to implement MCL_CURRENT, but not MCL_FUTURE
    or at least, it would be a bit difficult
    every allocation would need to be aware of that property
    it's better to have it managed by the vm system
    mach-defpager has its own version of vm_allocate for that
    braunr: I don't think we care about MCL_FUTURE here
    hm, wait... MCL_CURRENT is fine for code, but it might indeed be a
      problem for dynamically allocated memory :-(
    yes
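A minimal sketch of the switchover idea discussed above: a read path that satisfies requests from the GRUB-preloaded image until the switch is flipped, and forwards them to the real disk driver afterwards. All names are invented; in a real implementation this would presumably be a libstore backend inside ext2fs, and writes and cache consistency across the switch would need real thought.

    /* Sketch of a two-phase block store: phase 1 serves reads from a
       filesystem image preloaded by GRUB; phase 2, entered once the
       user-space disk driver announces itself, forwards to the driver.
       All identifiers are invented for illustration.  */

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <errno.h>
    #include <stdbool.h>

    struct boot_store
    {
      const uint8_t *image;     /* filesystem image loaded by GRUB */
      size_t image_size;

      /* Callback into the real disk driver; NULL until it is up.  */
      int (*disk_read) (uint64_t offset, void *buf, size_t len);
      volatile bool disk_ready;
    };

    static int
    boot_store_read (struct boot_store *s, uint64_t offset,
                     void *buf, size_t len)
    {
      if (s->disk_ready)
        return s->disk_read (offset, buf, len);  /* normal operation */

      /* Early boot: serve from the preloaded image.  */
      if (offset > s->image_size || len > s->image_size - offset)
        return EIO;
      memcpy (buf, s->image + offset, len);
      return 0;
    }

    /* Flip the switch: called once the disk driver (and everything it
       depends upon) is operational.  */
    static void
    boot_store_switch (struct boot_store *s,
                       int (*disk_read) (uint64_t, void *, size_t))
    {
      s->disk_read = disk_read;
      s->disk_ready = true;
    }

The wiring question raised at the end of the log is orthogonal, as noted there: whatever pages back this code and the disk driver must never be paged out through the very filesystem they serve, which is why MCL_CURRENT-style wiring (and the lack of mlockall on the Hurd) matters here.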
# Plan

* Examine what other systems are doing.

@@ -116,6 +438,112 @@ Also see [[device drivers and IO systems]].

  and parallel port drivers, using `libtrivfs`.


## I/O Server

### IRC, freenode, #hurd, 2012-08-10

    usually you'd have an I/O server, and several device drivers
      using it
    Well maybe that's my question. Should there be unique servers
      for say ISA, PCI, etc or could all of that be served by one "server"?
    forget about ISA
    How? Oh because the ISA bus is now served via a PCI bridge?
    the I/O server would merely be there to help device drivers map
      only what they require, and avoid conflicts
    because it's a relic of the past :p
    and because it requires too high privileges
    But still exists in several PCs :)
    so usually, you'd directly ask the kernel for the I/O ports you
      need
    so do floppy drives
    :)
    if i'm right, even the l4 guys do it that way
    he's right, some devices are still considered ISA
    But that is where my confusion lies. Something has to figure
      out what/where those I/O ports are
    and that's why i tell you to forget about it
    ISA has both statically allocated ports (the historical ones) and
      others usually detected through PnP, when it works
    PCI is much cleaner, and memory mapped I/O is both better and much
      more popular currently
    So let's say I have a PCI SCSI card. I need some device driver
      to know how to talk to that, right?
    something is going to enumerate all the PCI devices and map them
      to an address space
    bddebian: that would be the I/O server
    we'll call it the PCI server
    OK, that is where I am headed. What if everything isn't PCI?
      Is the "I/O server" generic enough?
    nowadays everything is PCI
    So we are completely ignoring legacy hardware?
    we could have separate servers using a shared library that would
      provide allocation routines like resource maps
    yes
    for what is not, the translator just needs to be run as root
    to get i/o perm from the kernel
    the idea for projects like ours, where the user base is very small
      is: don't implement what you can't test
    bddebian: legacy cannot be supported in a nice way, so for them we
      can just afford a bad solution
    i.e. leave the driver in kernel
    right
    e.g. the keyboard
    Well what if I have a USB keyboard? :-P
    that's a different matter
    USB keyboard is not legacy hardware
    it's usb
    which can be enumerated like pci
    and USB uses PCI
    and pci could be on usb :)
    so it's just a separate stack on top of the PCI server
    Sure so would SCSI in my example above but is still a separate
      bus
    netbsd has a very nice way of attaching drivers to buses
    bddebian: also, yes, and it can be enumerated
    Which was my original question. This magic I/O server handles
      all of the buses?
    no, just PCI, and then you'd have other servers for other busses
    i didn't mean that there would be *one* I/O server instance
    So then it isn't a generic I/O server is it?
    Ahhhh
    that way you can even put scsi over ppp or other crazy things
    it's more of an idea
    there would probably be a generic interface for basic stuff
    and i assume it could be augmented with specific (e.g. USB)
      interfaces for servers that need more detailed communication
    (well, i'm pretty sure of it)
    So the I/O server generalizes all functions, say read and write,
      and then the PCI, USB, SCSI, whatever servers are contacted by it?
    no, not read and write
    resource allocation rather
    and enumeration
    probing perhaps
    bddebian: the goal of the I/O server is to make it possible for
      device drivers to access the resources they need without a chance to
      interfere with other device drivers
    (at least, that's one of the goals)
    so a driver would request the bus space matching the device(s) and
      obtain that through memory mapping
    Shouldn't that be in the "global address space"? Sorry if I am
      using the wrong terminology
    well, the i/o server should also trigger the start of that driver
    bddebian: address space is not a matter for drivers
    bddebian: i'm not sure what you think of with "global address
      space"
    bddebian: it's just a matter for the pci enumerator when (and if)
      it places the BARs in physical address space
    drivers merely request mapping that, they don't need to know about
      actual physical addresses
    i'm almost sure you lost him at BARs
    :(
    youpi: that's what i meant with probing actually
    Actually I know BARs I have been reading on PCI :)
    I suppose physical address space is more what I meant when I
      used "global address space"
    i see
    bddebian: probably, yes
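"Directly ask the kernel for the I/O ports you need" is something GNU Mach already supports. A minimal sketch of what that looks like from a user-space driver running as root: `get_privileged_ports`, `i386_io_perm_create` and `i386_io_perm_modify` are existing interfaces (glibc's `ioperm` on the Hurd is built on the latter two); the 0x60..0x64 range is the PC keyboard controller's and serves only as an example.

    /* Sketch: claim legacy I/O ports from GNU Mach.  The RPCs are
       real; the port range (PC keyboard controller) is just an
       example.  */

    #include <hurd.h>
    #include <mach.h>
    #include <mach/i386/mach_i386.h>
    #include <error.h>

    static void
    enable_kbd_ports (void)
    {
      mach_port_t device_master, io_perm;
      error_t err;

      /* The device master port is the privileged capability all of
         this hangs off; only root can get it, which is why such
         drivers must run as root for now.  */
      err = get_privileged_ports (NULL, &device_master);
      if (err)
        error (1, err, "get_privileged_ports");

      /* Ask the kernel for an I/O permission object covering ports
         0x60 through 0x64...  */
      err = i386_io_perm_create (device_master, 0x60, 0x64, &io_perm);
      if (err)
        error (1, err, "i386_io_perm_create");

      /* ...and apply it to this task, so that inb/outb on these
         ports no longer fault.  */
      err = i386_io_perm_modify (mach_task_self (), io_perm, 1);
      if (err)
        error (1, err, "i386_io_perm_modify");
    }

The I/O server sketched in the log would essentially own such capabilities (and the PCI BARs) centrally, handing each driver only the ranges and mappings for its own device, instead of every driver running as root and being able to claim anything.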
# Documentation

* [An Architecture for Device Drivers Executing as User-Level