RumpDisk
The Hurd supports modern SATA devices such as SSDs through rumpdisk. If you installed the Hurd on real hardware by toggling the "compatibility" mode in your BIOS, then the Hurd is probably using the old Linux drivers to access your hard drive or SSD. Even more problematic, those drivers are baked into the GNU Mach kernel! With rumpdisk, you can use SSDs on the Hurd and enjoy a maximum partition size of 2 TiB!
If you want to test whether the Hurd can boot with your SSD, change every occurrence of hdN in /boot/grub/grub.cfg to wdN, where N is a number, and add the noide option to the multiboot line (which disables the old Linux disk drivers). Also change every occurrence of hdN in your /etc/fstab to wdN.
/boot/grub/grub.cfg
# multiboot /boot/gnumach-1.8-486.gz root=part:2:device:hd0 console=com0
multiboot /boot/gnumach-1.8-486.gz root=part:2:device:wd0 console=com0 noide
/etc/fstab
#/dev/hd0s2 / ext2 defaults 0 1
/dev/wd0s2 / ext2 defaults 0 1
#/dev/hd0s1 none swap sw 0 0
/dev/wd0s1 none swap sw 0 0
#/dev/hd2 /media/cdrom0 iso9660 noauto 0 0
/dev/wd2 /media/cdrom0 iso9660 noauto 0 0
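The edits above can be scripted. Here is a hedged sketch: the helper names (hd_to_wd, add_noide) are made up for illustration, and it demonstrates on throwaway copies rather than touching /boot/grub/grub.cfg or /etc/fstab directly, which you should back up before editing for real.

```shell
#!/bin/sh
# Sketch only: rename hdN device names to wdN and append "noide" to
# multiboot lines. Helper names are hypothetical; on a real system you
# would pass /boot/grub/grub.cfg and /etc/fstab (after backing them up).
hd_to_wd() {
  # hd0 -> wd0, hd0s2 -> wd0s2, etc., in every file given
  sed -i 's/\bhd\([0-9]\)/wd\1/g' "$@"
}
add_noide() {
  # append " noide" to multiboot lines that do not already have it
  sed -i '/multiboot/{/noide/!s/$/ noide/}' "$1"
}

# demonstration on throwaway copies
printf 'multiboot /boot/gnumach-1.8-486.gz root=part:2:device:hd0 console=com0\n' > grub.cfg.test
printf '/dev/hd0s2 / ext2 defaults 0 1\n' > fstab.test
hd_to_wd grub.cfg.test fstab.test
add_noide grub.cfg.test
cat grub.cfg.test fstab.test
```

Note the hd-to-wd substitution keeps the partition suffix intact: /dev/hd0s2 becomes /dev/wd0s2, matching the fstab example above.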
Then reboot your machine. Before GRUB appears, change "compatibility" in your BIOS to "AHCI" (not "RAID"). If you successfully boot, congrats! You are now using rumpdisk! You can permanently add the "noide" option to GRUB:
/etc/default/grub
# make sure you add this next line somewhere in the file
GRUB_CMDLINE_GNUMACH="noide"
Now run update-grub. That way, when you update the kernel, you can be sure to keep using rumpdisk.
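Adding that line can be done idempotently. A minimal sketch, with a hypothetical helper name (ensure_noide) and demonstrated on a throwaway copy rather than the real /etc/default/grub:

```shell
#!/bin/sh
# Sketch only: add GRUB_CMDLINE_GNUMACH="noide" to a grub defaults file
# unless some GRUB_CMDLINE_GNUMACH line is already present. On a real
# system the file is /etc/default/grub, and you run update-grub after.
ensure_noide() {
  grep -q '^GRUB_CMDLINE_GNUMACH=' "$1" || \
    printf 'GRUB_CMDLINE_GNUMACH="noide"\n' >> "$1"
}

# demonstration on a throwaway copy
printf 'GRUB_TIMEOUT=5\n' > grub.default.test
ensure_noide grub.default.test
ensure_noide grub.default.test   # second call is a no-op
cat grub.default.test
```

The grep guard makes the function safe to run repeatedly, so it will not stack duplicate lines if you rerun it after a kernel update.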
The rumpdisk translator is normally already set up on /dev/rumpdisk:
$ showtrans /dev/rumpdisk
/hurd/rumpdisk
Samuel's email
I have been thinking about how to get rump running for the / filesystem.
Looking at how things go between ext2fs and exec: in grub.cfg we have roughly:
module ext2fs --exec-server-task='${exec-task}' '$(task-create)' '$(task-resume)'
module exec '$(exec-task=task-create)'
i.e. the kernel is told to create two tasks, to pass a reference to the exec task to the ext2fs task, and to let only the ext2fs task run. What happens then is in diskfs_start_bootstrap, which calls start_execserver, which uses task_set_special_port to set the TASK_BOOTSTRAP_PORT special port to a send right to ext2fs, and resumes the exec task. I.e. basically ext2fs tells exec where it is so that exec can start the userland with / available.
I'm thinking that the same can be used for the rump translator, something like:
module rump --fs-server-task='${fs-task}' '$(task-create)' '$(task-resume)'
module ext2fs --exec-server-task='${exec-task}' '$(fs-task=task-create)'
module exec '$(exec-task=task-create)'
and we'd make rump's initialization use task_set_special_port to set the TASK_BOOTSTRAP_PORT special port of ext2fs to a send right to rump, and resume it. When ext2fs sees that this port is set, it would use it instead of the gnumach-provided _hurd_device_master port to open devices.
And we can nest this yet more for the pci-arbiter:
module pci-arbiter --disk-server-task='${disk-task}' '$(task-create)' '$(task-resume)'
module rump --fs-server-task='${fs-task}' '$(disk-task=task-create)'
module ext2fs --exec-server-task='${exec-task}' '$(fs-task=task-create)'
module exec '$(exec-task=task-create)'
and we'd make pci-arbiter's initialization use task_set_special_port to set the TASK_BOOTSTRAP_PORT special port of rump to a send right to pci-arbiter, and resume it. When libpciaccess sees that this port is set, it would use it instead of looking up /servers/bus/pci.
Damien's follow-up email
In my own words, the changes that are needed:

- libpciaccess needs to check whether pci-arbiter's task_bootstrap_port is set and, if it is, use it instead of /servers/bus/pci.
- pci-arbiter needs to call task_set_special_port on rump's TASK_BOOTSTRAP_PORT if the --disk-server-task flag is detected.
- rumpdisk needs to call task_set_special_port on ext2fs's TASK_BOOTSTRAP_PORT if the --fs-server-task flag is detected.
- ext2fs will just work in this configuration.