Weblogs from Hurd programmers and enthusiasts.
I just read on the Hurd IRC channel (chat: #hurd at irc.freenode.net) that people consider my work valuable (I knew that, and I think so myself, but it is still nice to hear), so I want to dispel any possible myth about it.
What I do is not hard - at least not anymore, since I created a simple structure for it (but it still takes time).
First I open up the relevant mailing lists for the quarter. I get them from writing the qoth. Normally I just use the following:
- http://lists.gnu.org/archive/html/bug-hurd/YYYY-MM/threads.html
- http://lists.debian.org/debian-hurd/YYYY/MM/
Then I copy them 3 times and use M-x replace-string (in emacs) to adjust them to the correct months.
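The copy-and-adjust step can also be scripted; here is a rough sketch (the year and the month list are placeholders - substitute the quarter you are actually writing about):

```shell
# Sketch: build the mailing-list archive URLs for one quarter.
# YEAR and the month list are placeholders -- adjust them per quarter.
YEAR=2011
URLS=""
for MM in 01 02 03; do
    URLS="$URLS
http://lists.gnu.org/archive/html/bug-hurd/$YEAR-$MM/threads.html
http://lists.debian.org/debian-hurd/$YEAR/$MM/"
done
echo "$URLS"
```

The same loop then only needs a new YEAR and month list each quarter, instead of three rounds of M-x replace-string.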
Additionally I open the Arch Hurd news:
Having all those news sources at hand, I read every thread starter and every news item. For each of them I first check whether I understand it (no use trying to explain something I don’t get myself) and whether it provides a way for people to test the improvement (however complex that might be). Then I
- note the name of the main contributor(-s),
- write a line of text describing what it does (often partly copied from the news item),
- add a link to the news-item, a code-repo or a patch and
- a note on how that new development helps achieve the goals of the Hurd (see writing the qoth for details).
With that list of short news I go into qoth next.
Now I identify 2 to 4 main news items by some kind of “helps the Hurd most when more people know it”, “biggest change” and similar fudgery.
Finally I sort all the news items by intuition, crude logic I develop on-the-fly writing and the goal of making the qoth read somewhat like nice prose.
On the way to that I commit every little to medium step. I never know when I have to abort due to an interruption (I’m sure tschwinge loves my super-non-atomic horrible-to-review commits - but better that than losing work == time, and I try to prefix the commit-messages with “news:” so he knows that it’s useless to review them as in-flight-patches…).
Having finished the text (usually after 3 to 6 hours of overall work), I send it by mail to bug-hurd: http://lists.gnu.org/archive/html/bug-hurd/
After about a week I incorporate the comments from there and publish the qoth as described in writing the qoth.
Then tschwinge reviews it, does some last-minute changes and pushes it from the staging wiki to the website.
And that’s it.
I hope this small insight was interesting to you. Happy hacking and have fun with the Hurd!
-- Arne Babenhauserheide
PS: Writing this blog entry took about 20 minutes. The raw text is longer than a qoth, but it is much faster to write, because it avoids the main time-eater: Gathering the info with the necessary references to make sure that people can test what’s in here.
Just ideas for more elegant implementations of dbus and akonadi/nepomuk using Hurd interfaces
tagging:
settrans ~/ /hurd/nsmux
ls ~/file,,metadata
store in ~/.metadata
network store: search for .metadata
All metadata:
settrans meta /hurd/metadata --show-store
dbus:
settrans -a /dbus /hurd/dbus
Programs just add an active translator in /dbus: /dbus/org.… → receives dbus calls in-process.
Some technical advantages of the Hurd
→ An answer to just accept it, truth hurds, where Flameeyes told his reasons for not liking the Hurd and asked for technical advantages (and claimed that the Hurd has not contributed a concept which got incorporated into other free software projects). Note: These are the points I see. Very likely there are more technical advantages which I don’t see well enough to explain. Please feel free to point them out.
Information for potential testers: The Hurd is already usable, but it is not yet in production state. It progressed a lot during the recent years, though. Have a look at the status report if you want to see if it’s already interesting for you.
Thanks for explaining your reasons. As answer:
First off: FUSE is essentially an implementation of parts of the translator system (which is the main building block of the Hurd) for Linux, and NetBSD recently got a port of the translator system of the Hurd. That’s the main contribution to other projects that I see.
On the bare technical side, the translator-based filesystem stands out: the filesystem allows making arbitrary programs responsible for presenting a given node (which can also be a directory tree) and starting these programs on demand. To make them persistent over reboots, you only need to attach them to the filesystem node (for which you need the right to change that node). You can also start translators on any node without changing the node itself, but then they are not persistent and only affect your view of the filesystem, not that of other users. These translators are called active, and you don’t need write permissions on a node to add them. The filesystem implements stuff like Gnome VFS (gvfs) and KDE network transparency at the filesystem level, so they are available to all programs. And you can add a new filesystem as a simple user, just as if you wrote into a file “instead of this node, show the filesystem you get by interpreting file X with filesystem Y” (this is what you actually do when setting a translator without starting it yet - a passive translator).
One practical advantage of this is that the following works:
settrans -a ftp\: /hurd/hostmux /hurd/ftpfs /
dpkg -i ftp://ftp.gnu.org/path/to/*.deb
This installs all deb packages in the folder path/to on the FTP server. The shell sees normal directories (beginning with the directory “ftp:”), so shell expressions just work.
You could even define a Gentoo mirror translator (settrans mirror\: /hurd/gentoo-mirror), so every program could just access mirror://gentoo/portage-2.2.0_alpha31.tar.bz2 and get the data from a mirror automatically: wget mirror://gentoo/portage-2.2.0_alpha31.tar.bz2
Or you could add a unionmount translator to root which redirects writes to another place. Every user is able to make a read-only system writable just by specifying where the writes should go. But the writes only affect that user’s view of the filesystem.
Starting a network process is done by a translator, too: The first time something accesses the network card, the network translator starts up and actually provides the device. This replaces most initscripts in the Hurd: Just add a translator to a node, and the service will persist over restarts.
It’s a surprisingly simple concept, which reduces the complexity of many basic tasks needed for desktop systems.
And at its most basic level, Hurd is a set of protocols for messages which allow using the filesystem to coordinate and connect processes (along with helper libraries to make that easy).
Also it adds POSIX compatibility to Mach (while still providing access to the capabilities-based access rights underneath, if you need them). You can give a process permissions at runtime and take them away at will. For example you can start all programs without permission to use the network (or write to any file) and add the permissions when you need them.
groups # → root
addauth -p $(ps -L) -g mail
groups # → root mail
And then there are subhurds (essentially lightweight virtualization which allows cutting off processes from other processes without the overhead of creating a virtual machine for each process). But that’s an entire post of its own…
And the fact that a translator is just a simple standalone program means that translators can be shared and tested much more easily, opening up completely new options for low-level hacking, because it massively lowers the barrier to entry.
And then there is the possibility of subdividing memory management and using different microkernels (by porting the Hurd layer, as partly done in the NetBSD port), but that is purely academic right now (search for Viengoos to see what it’s about).
So in short: The translator system in the Hurd is a simple concept which makes many tasks easy that are complex on Linux (like init, network transparency, new filesystems, …). Additionally there are capabilities, subhurds and (academic) memory management.
Best wishes,
Arne
PS: I decided to read flameeyes’ post as “please give me technical reasons to dispel my emotional impression”.
PPS: If you liked this post, it would be cool if you’d flattr it:
PPPS: Additional information can be found in Gaël Le Mignot’s talk notes, in niches for the Hurd and the GNU Hurd documentation pages.
Quick porting guide for simple packages
If you want to help port a package with simple issues to the Hurd, please read on.
Just imagine Joe C-doodler stumbling over some GNU philosophy and thinking “hey, I’ve got 2 free hours, why not help the Hurd?” For him I’d like to have a guide (and for me, since my faulty memory does too many things):
a short guide “how to do simple ports”, broken down to command-line level: how to get the list of simple packages (youpi told me that here), how to get the source, how to test the fix, and how to submit the fix.
Set up an instant Hurd development environment
See Instant Development Environment - just follow the command to get a Hurd running in Qemu.
Getting the list of failed packages
wget http://people.debian.org/~sthibault/hurd-i386/failed_packages.txt.gz
gunzip failed_packages.txt.gz
Finding a simple task
grep PATH_MAX failed_packages.txt -B 2
Each of these packages is likely to be simple to fix. The output looks like this:
…
--
tex/lilypond_2.12.3-7 by buildd_hurd-i386-mozart [optional:uncompiled:bp{-100}:calprio{-63}:days{258}]
Reasons for failing:
> file-name.cc:88: error: 'PATH_MAX' was not declared in this scope
--
…
In this case, lilypond is the package.
Other simple tasks can be found on guidelines.
Downloading the package source and installing dependencies
apt source PACKAGE
apt build-dep PACKAGE
For example
apt source lilypond
apt build-dep lilypond
Fix the package
See guidelines for help on fixing the package.
Notes:
- char path[4096] is evil. Use dynamic allocation (if you can).
- use stuff like if (x < pathconf(path, _PC_PATH_MAX)) {} (note: there is no sysconf constant for PATH_MAX; pathconf queries the limit for a specific path)
- if need be, make it conditional
#ifdef PATH_MAX
  /* old, POSIX-violating code */
#else
  /* GNU, better code */
#endif
Test the fix (compile, run tests)
cd PACKAGE
dpkg-buildpackage -B
Also check the packages README for instructions.
Submit the fix
See patch submission.
Cancelled. See 2011-04-06-application-pyhurd instead.
Python Bindings for the Hurd (PyHurd)
Contact information
- Name: Arne Babenhauserheide
- E-Mail Address: arne_bab@web.de
- IRC-nick: ArneBab @ freenode
- Jabber-ID: arne@jabber.fsfe.org
- Phone-number: XXXXXXXXX
- GnuPG key: http://draketo.de/inhalt/ich/pubkey.txt
Who I am
I am a physics student from Heidelberg, Germany, a passionate free software user and roleplayer, and I started contributing to the Hurd in minor ways about 5 years ago. Now my coding skills are good enough (and I have enough time) that I feel ready to tackle a GSoC project - and I want to take the chance GSoC offers and make a focussed effort to contribute to free software before I am no longer a student. I married 4 years ago and now have a 5½ month old son whose happy laughing can make you forget everything around you - or at least it does that to me, but what else could you expect to hear from his father about him?
Project
For this year’s GSoC I want to turn the currently rudimentary Python bindings of the Hurd into a complete Python library for low-level Hurd and Mach hacking, with high-level functionality to allow for easy creation of complex applications. In particular, it should make it possible to utilize the whole Python standard library in translators.
Preliminary Schedule
- Community bonding period. Read up on the current C-interface to the Hurd and Cython. Especially grok the Hurd hacking guide. Add docstrings to all existing source files (where they are missing) explaining what they do. Add auto-generated API-docs. Deliverable: Easy to understand API-docs of the current PyHurd, a simple way to generate them from the sources automatically.
- May 23. Coding starts.
- May 30.
Finished a basic Hello World translator, naively implementing the necessary Mach parts directly in the translator.
1. A simple program which gets a Mach port and can receive messages on that port. It has to get and hold its port at startup and send a reply port, needs to use mach_msg to get the messages, should be able to deallocate the port and must have a kill condition (for example 10 received messages).
2. stdout functionality, to print all Mach messages (for debugging and to make sure that they really get received entirely).
3. a parser for the Mach read file message similar to trivfs_S_io_read
- June 6. Moved the functionality for reading into a simple API using decorators to define actions and ported Hello World to use it:
"""Show Hello World."""
from translator import repres

@repres.text.event
def ontextread(size):
    return "Hello, World!"[:size]
- June 13. Implemented single-file read-write in the API. Added a simple writethrough translator. The API code is nicely commented.
- June 20. Access Control and file attributes. Added lock_file translator which just adjusts the effective file access codes and can be used to lock a file as long as the translator is used. Might be useful for quick testing.
- June 27. Translator commandline arguments and testing.
- July 4. Translator: Overlay with backend store: write changes to a different file. Makes any file writeable, but keeps the changes only visible for the user who set up the translator. Effectively single-file unionmount.
- July 11. Mid-term: trivfs in python works: It is possible to write translators in Python with relative ease.
- July 18. More complex, specialized and helper translator libraries, along with example translators. This should recreate some of the Hurd libraries for Python and add convenience options.
- July 25. Full-featured settrans in Python.
- August 1. Redesigned and realized an updated controlling API with the existing direct Cython bindings.
- August 8. More translators and integrating into the build system.
- August 15. Suggested Pencils down. The translator API is easy to use, there are many example translators and there is a full featured settrans command in Python using the easier controlling API which shows how to control the Hurd directly from Python. The code is pushed to https://github.com/ArneBab/PyHurd and a git repo at https://git.savannah.gnu.org/cgit/hurd and integrated into the build system with a switch to enable building PyHurd.
- August 22. Firm pencils down.
Initial Fix
Initial Fix: Making PyHurd build again under Cython 0.14.1. Sent as patch series to bug-hurd@gnu.org
Detailed answers
What I have to learn, and what I already know
I need to dive into the detailed interfaces of the Hurd to get a better understanding of the exact requirements for a well usable Python interface, especially for higher level functionality, and read up more on working with Cython.
I already know Python and I did design my share of interfaces for my own hobby projects (TextRPG, Fungus, evolve-keyboard-layout and others).
Also I know the functionality and design of the Hurd from a user perspective and can code in C and C++.
Why did you choose this project idea? What do you consider most appealing about it?
First off: It is about making it possible for me to hack on the Hurd using my favorite programming language.
Also I can learn more about accessing low-level interfaces directly (as opposed to just using higher level abstractions) and grok the ins and outs of creating Python extensions - into which I wanted to dive for a long time now.
And I helped getting the project running and am intrigued by how far it can be pushed.
Have you been involved in any free software ("Open Source") projects yet? Which projects, how long, and in what way have you been involved? Have you been active in the Hurd project/Hurd community before?
I worked on documentation and news for the Hurd, wrote two plugins and the usage guide for Mercurial and created a bunch of personal Python projects. Also I generally try to nudge other Hurd developers into the direction of actually getting the system useful for people (and communicating its strengths) - and do the same for the freenet project.
In my opinion, my major contribution to the Hurd is the Month of the Hurd, an attempt to fix the Hurd’s reputation for never being finished. To achieve that goal, the Month of the Hurd only lists actually testable successes for which I can easily describe how they get the Hurd closer to its vision, ideally those which are already committed.
Please briefly describe the Hurd, including the goals, architecture etc. Also, what makes you interested in the Hurd? Why do you want to work on it? What is your vision of its future development?
The Hurd offers much greater freedom for users compared to Linux, because every user can change his/her environment to a much greater extent.
Also it allows for easier low-level tinkering, making it possible for hobby hackers to work on stuff which in Linux requires dabbling with kernel sources. It also makes it much easier to test this low-level work, so a community can form which informally shares low-level hacks, giving much bigger momentum to low-level work.
And it allows for containment of potentially dangerous applications using subhurds. As a very simple example, I can open a web browser without giving it access to the internet and just add that capability later, when I really want to go online (as opposed to just viewing local files).
But mainly:
settrans -a ftp\: /hurd/hostmux /hurd/ftpfs /
dpkg -i ftp://ftp.gnu.org/…/*.deb
And that’s only the beginning.
Are you subscribed to the bug-hurd@gnu.org mailing list? (See http://lists.gnu.org/mailman/listinfo/bug-hurd )
Yes
Do you have a permanent internet connection, especially during the time of the summer session? Are you able and willing to hang out on the Hurd IRC channel regularly? (As in: Running the IRC client more or less permanently and checking for activity now and then.) If it turns out that your mentor lives in a different time zone, could you shift your day/night rhythm to better match that of your mentor and other Hurd developers?
Yes, a permanent internet connection as well as a permanently running computer. Since I’m used to also work later in the evening (on hobby projects), the time zone should not be a major issue.
When does your university term end, when are your exams, and when does the next term begin?
I have a clean timetable for the summer: No exams anymore.
How much time do you intend to spend on your GSoC project per day/week during the summer months?
I plan to spend at least 40 hours per week on the PyHurd.
What other major activities will you engage in during the summer? (Moving apartments, longer vacations, other obligations, etc.) If any, how do you intend to make sure you will be able to dedicate sufficient time to your project nevertheless?
Finding a job for after the GSoC. This should not take too much time, all in all, but rather mean short out-times now and then.
How do you intend to make sure that your code will keep on being maintained and supported properly after the end of the GSoC program?
My main plan to keep it maintained is to comment it cleanly, and naturally to keep using the Hurd and PyHurd itself, so any breakage will bother me personally.
Also I want to get it merged into the main git repositories, so it is directly accessible for later developers.
Anything else you want to add to your application?
I’d love to work on PyHurd, because it grips me more and more. For example a high level API might get as simple as
from translator.source.text import *
from translator.repres.tree import *
def source_text_changed(text): … (adapt tree object)
def repres_tree_changed(tree): … (adapt text object)
→ 2-way connecting
Write-only is then done by simply leaving out the definition of source_<whatever>_changed.
source is the node below and repres is the translated node.
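The decorator-based API proposed above can be mocked in plain Python to get a feel for its shape. This sketch involves no Hurd machinery at all; the names (repres, event) follow the proposal, and everything else is hypothetical scaffolding:

```python
# Plain-Python mock of the proposed PyHurd decorator API.
# No Hurd involved -- only the decorator registration shape is shown.

class _Event:
    """Registry for one event kind, e.g. repres.text."""
    def __init__(self):
        self._handler = None

    def event(self, func):
        """Decorator: register func as the handler for this event."""
        self._handler = func
        return func

    def fire(self, *args):
        """Invoke the registered handler (a translator library would
        call this when the corresponding Mach message arrives)."""
        if self._handler is None:
            return None
        return self._handler(*args)


class _Repres:
    """Stand-in for translator.repres, offering a text event."""
    def __init__(self):
        self.text = _Event()

repres = _Repres()

@repres.text.event
def ontextread(size):
    return "Hello, World!"[:size]

print(repres.text.fire(5))  # → Hello
```

In the real library, fire() would be driven by incoming read messages instead of being called by hand; the user-visible part stays as small as the decorated function.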
Tasks for the Hurd
These tasks are compiled from niches of the Hurd and what we need. The first asked “where can the Hurd find niches where it is the biggest fish in the pond, and how?”, while the second asked “what do we still need to make the Hurd usable for most of its developers as the system for their day-to-day tasks?”.
This might be useful for the next GSoC. Please feel free to edit and/or migrate it mercilessly.
Easy
- Port Debian packages to the Hurd -> currently mainly tinkerers, but also any other niche. In the long run this is necessary for every user. Easy start for devs.
- Document easier access to low-level functions via translators, one function at a time. -> tinkerers.
- Get nsmux ready for regular users by setting it up in the LiveCDs by default. -> show tinkerers what it can do.
- Test on modern machines. If it doesn’t work, file a bug: info.
Complex
- A filesystem-based package manager: unionmounting packages. With filterfs from nsmux, any user should be able to selectively disable any package without affecting the system of others. Simple active translators can add packages. -> clean design and more freedom for tinkerers to quickly set up test environments: “Does this also work with XY disabled?” ⇒ rapid testing for different base systems.
- Enable subhurds for regular users via a subdo command: A framework for confining individual applications. -> tinkerers for testing their work.
- Define your personal environment via translators, so you can easily take it with you ⇒ system on a USB stick. Would work great with a filesystem-based package manager -> use the capabilities of a system and all its installed packages without having to give up your own custom environment.
- Implement USB support, maybe using DDE or DDEkit -> prerequisite to system on USB.
- Add Wireless support, maybe via DDE.
- Add sound support via a sound translator.
- Stabilize Xorg, so it can run fast for days.
- Add PPPoE capabilities.
- Debug NFS for climm, w3m and git.
- Port a full-featured browser (i.e. Firefox).
- (Graphical Desktop and switching between console and X) or a full-featured high-resolution console which doesn’t need X (and emacs).
Huge
- Get Hurd/GNU Mach ready for efficient multicore usage. -> multicore
- Running parts of the Hurd on different computers, maybe even with shared servers on dedicated hardware (Cloud Computing when the servers can migrate between computers). -> multicore on steroids
You might wonder why this post is not titled "First Post" or anything similar showing off my arrival in the Hurd community.
Well, that's both easy and hard to explain — easy, because it doesn't take many words, and hard because of their impact.
The thing is it may well be my first and last post here.
I am not making this decision lightly, because I care a lot for FOSS, and although I'm new and not much of a coder (GOTO 10), I can see how important GNU Hurd is and how much it needs more advocates and contributors.
Sadly, as I stated on my normal blog, to be of more help to the FOSS community, I actually have to help less. I have to make the painful choice of selecting, from the many FOSS-related things I care about deeply, only a few I'm really good at and discarding the rest. And since law is my forte, that's where I'll help and leave coding to those who are better at it.
That too is freedom and probably the biggest burden of it.
I'm pretty sure most people here haven't had the time to get to know me yet, but I'll still miss you. And thank you guys for your outstanding work in making the system that gives the user the most freedom possible! Please, keep up the work!
hook out → just out (hopefully not forever)
P.S. 10 IANAC IAAL
I’m currently preparing a qemu image for the Hurd which allows testing the capabilities of the Hurd with as little effort as possible.
Work in progress. These are my in-development notes.
For that I want to use:
- An up to date debian image (no longer online, but I have a copy here).
- My Hurd Intro,
- Translators from hurd-extras and the incubator, and naturally
- a lot of apt update; apt upgrade and apt dist-upgrade (all worked flawlessly).
Working
Generally
# ssh with public key
ssh-keygen
# build tools
apt install build-essential
StoreIO
# mount an iso image
mount foo.iso bar -t iso9660fs
# see myfile as device
settrans foo /hurd/storeio myfile
# so that means I can pack a complete chroot (300MB) into a file with storeio and ext2fs — giselher
# nfs mount anywhere (TODO: check this with antrik)
mount server:/home /home -t nfs
settrans /home /hurd/nfs server:/home
In Progress
Hurdextras
hg clone <hurdextras repo>
httpfs
# pkg-config is needed to avoid “PKG_CHECK_MODULES syntax error near unexpected token `HTTPFS,'”
# pkg-config must be installed before you run autoreconf.
apt install autoconf autoconf-archive libxml2-dev pkg-config
autoreconf -i -f
./configure
make
make install
settrans -ac gnu /usr/local/httpfs www.gnu.org/
# (breaks, because libxml2 needs pthreads → work to do.)
# (what we need: pthreads in translators. → see the [work of Barry](https://savannah.gnu.org/task/?func=detailitem&item_id=5487))
# check: for i in `objdump -x /usr/local/bin/httpfs |grep NEEDED| sed s/.*NEEDED//`; do echo $i; objdump -x /usr/lib/$i | grep pthread; objdump -x /lib/$i | grep pthread; done
Tarfs
apt install zip libz-dev libbz2-dev
git clone git://git.sv.gnu.org/hurd/incubator.git tarfs
cd tarfs/
git checkout tarfs/master
cd tarfs
make
make install
# works, though with warnings.
settrans -ca new /hurd/tarfs -cz test/intro.tar.gz
cp repos/intro/README new/
settrans -g new
tar -tf test/intro.tar.gz
# works
tar -cf test/intro.tar repos/intro
settrans -ac t /hurd/tarfs test/intro.tar
# (settrans: /hurd/tarfs: Translator died :( ⇒ more work to do )
nsmux
git clone git://git.sv.gnu.org/hurd/incubator.git nsmux
cd nsmux/
git checkout -b nsmux origin/nsmux
apt install autoconf autoconf-archive
autoreconf -i -f
./configure
make
make install
cd ../..
mkdir test
touch test/hello
settrans -ca test2 /usr/local/bin/nsmux test
# tar -cvf test/intro.tar repos/hurd_intro
cat test2/hello
cat test2/hello,,hello
# Hello, World!
clisp
git clone git://git.sv.gnu.org/hurd/incubator.git clisp
cd clisp/
git checkout -b clisp origin/clisp
apt install texi2html
make
make install
debugging Translators
rpctrace
We created a list of the things we still need for using the Hurd for in our day-to-day activities (work or hobby).
As soon as these issues are taken care of, the Hurd offers everything we need for fulfilling most of our computing needs on at least one of our devices:
- USB (5): Arne, ms, Michael, Emilio, antrik²³
- Wireless (5): Arne, ms, Carl Fredrik, Michael (netbook), antrik (notebook). working version with DDE in 2010.
- Sound (4): ms, Carl Fredrik, Michael, antrik²
- SATA (2): Michael, (Emilio). Done, see sata disk drives.
- Tested for modern machines°¹ (2): Emilio, antrik (notebook)
- Stable Xorg° (2): Emilio, antrik
- PPPoE (2): Carl Fredrik, antrik²
- Graphical Desktop (1): Emilio
- Full featured high-resolution console which doesn’t need X (1): antrik
- Switching between console and X° (1): antrik
- full-featured browser (i.e. Firefox)°⁵ (1): antrik
- NFS working for climm, w3m and git (1): antrik⁴
- mplayer with win32codecs (1): antrik³
- gnash or alternatives (1): antrik³
°: Very likely needed by more people, but not named as most pressing issue.
¹: It’s unclear on which processors the Hurd would have problems. Please report it if you have one!
→ info
²: Would be OK to use a router box instead.
³: Not critical but would be convenient.
⁴: Only while not using Hurd as the only machine.
⁵: We’re close to that.
So, if one of these issues seems interesting to you, or you think “I can do that easily”, why not become a Hurd hacker and add your touch?
You can reach us in the mailing lists and in irc.
The sourcecode is in our source repositories (git). When you want to check sources relevant for you, DDE might be a good place to start for wireless and sound. USB on the other hand might need work in gnumach (hacking info).
Besides: “The great next stuff” is in the incubator git repo, including (among others) clisp (translators in Lisp) and nsmux (dynamically setting translators on files for one command by accessing file,,translator).
Happy hacking!
I thought a bit about what I’d need from Hurd to use it for some of my real life tasks.
My desktop has to be able to do everything it does now, and that under high load, so it currently is no useful target for the Hurd.
But then I have an OLPC XO sitting here, and I use it mostly for work and for clearly defined tasks. As such it seems natural to check what the Hurd would have to be able to do to support my workflow on the OLPC.
What I need
- Writing text and programming Python with emacs. - works.
- Use Mercurial for my versiontracked stuff. - works.
- Reading websites with emacs and w3m or with lynx. - works.
- Use SSH to go on my desktop and on the university machine. - should work.
- Run X11 with dwm and emacs. - should work.
- Boot Hurd on the OLPC from a USB stick. - not yet?
- Support networking over wlan and wpa_supplicant. - not yet? Might DDE kit help?
- Listen to music with Quod Libet in X11. - not yet. Needs audio support.
What would be nice
- Run a Gentoo system. - not really needed, but nice to update my system with the same tools.
- Watch videos with mplayer. - unlikely. Even with Linux as kernel watching videos pushes my XO to the limit. But this is not essential.
So, as soon as Debian GNU/Hurd (or Arch Hurd) supports the things I need, I’ll put it on a USB-stick and use it for coding and writing.
To be frank: I’d likely even use it without audio-support. I have an mp3 player and can feed it via USB. So the essential features for me are:
Essential features
- Writing text and programming Python with emacs. - works.
- Use Mercurial for my versiontracked stuff. - works.
- Use SSH to go on my desktop and on the university machine. - should work.
- Boot Hurd on the OLPC from a USB stick. - not yet?
- Support networking over wlan and wpa_supplicant. - not yet? Might DDE kit help?
Conclusion
The Hurd doesn’t yet do everything I need for my OLPC, but it isn’t that far away either. Grub is already being ported to the OLPC, so what’s missing to make the Hurd a work system for me is just booting on the OLPC from a USB stick and wlan support on the OLPC.
All the rest I need for work is already in place.
There are some similarities between the Hurd and Plan 9 regarding the file system handling -- but there are also very fundamental differences which go far beyond monolithic vs. microkernel design:
The Hurd is UNIX (POSIX) compatible
While (almost) all services are attached to the file system tree, not all services actually export a file system interface!
Personally, I advocate using FS-based interfaces as much as possible. Yet, there are some cases where they get very awkward and/or inefficient, and domain-specific interfaces simply make a lot more sense.
Also, some Hurd services are indeed used to implement the file systems in the first place -- they work below the FS level, and obviously can't use an FS interface!
File systems are completely decentralized -- clients always talk to the FS servers directly, without any central VFS layer. (I don't think that's the case in Plan 9?)
This offers much more flexibility -- the way the FS interfaces themselves work can be modified. Many things can be implemented as normal translators, that would require special VFS support on other systems. (Extended attributes, VFS-based union mounts, local namespaces, firmlink, magic file name suffixes etc.)
The system design allows users and applications to change almost all aspects of the system functionality in the local environment easily and without affecting other parts of the system.
(This is possible with Plan 9 to some extent; but the Hurd allows it at a much lower level -- including stuff like the filesystem interfaces, access control mechanisms, program execution and process management, and so on.)
I hope I didn't forget any major differences...
I wanted to import an old GNU arch repository into Git, but only had HTTP access via ArchZoom. I spent quite some time trying to teach git archimport to use HTTP access to that repository, but this didn't work out. Too bad -- but at least, using ArchZoom, I was able to get the individual revisions' tarballs:
$ ls -1 *.tar.gz
bpf--devel--0.0--base-0.tar.gz
bpf--devel--0.0--patch-1.tar.gz
bpf--devel--0.0--patch-10.tar.gz
bpf--devel--0.0--patch-11.tar.gz
bpf--devel--0.0--patch-12.tar.gz
bpf--devel--0.0--patch-2.tar.gz
bpf--devel--0.0--patch-3.tar.gz
[...]
bpf--devel--0.0--patch-9.tar.gz
bpf--release--0.1--base-0.tar.gz
bpf--release--0.1--patch-1.tar.gz
bpf--release--0.1--patch-2.tar.gz
[...]
bpf--release--0.1--patch-8.tar.gz
I unpacked these:
$ for f in *.tar.gz; do tar -xz < "$f" || echo >&2 "$f" failed; done
The last revision's tree apparently contains all previous revisions' commit information (author, date, message), so use that:
$ cp -a \
    bpf--release--0.1--patch-8/{arch}/bpf/bpf--devel/bpf--devel--0.0/info@hurdfr.org--hurdfr/patch-log \
    d-patch-log
$ cp -a \
    bpf--release--0.1--patch-8/{arch}/bpf/bpf--release/bpf--release--0.1/info@hurdfr.org--hurdfr/patch-log \
    r-patch-log
... and extract the information that we need:
$ base=bpf--devel--0.0-- && \
  for f in d-patch-log/*; do \
    grep < "$f" ^Creator: | head -n 1 \
      | { read j c && \
          echo "$c" | sed s%' <.*'%% \
            > "$base""$(basename "$f")".author_name && \
          echo "$c" | sed -e 's%.*<%%' -e 's%>.*%%' \
            > "$base""$(basename "$f")".author_email; } && \
    grep < "$f" ^Standard-date: | head -n 1 | { read j d && echo "$d" \
      > "$base""$(basename "$f")".author_date; } && \
    { grep < "$f" ^Summary: | head -n 1 | { read j m && echo "$m"; } && \
      echo && sed < "$f" '1,/^$/d'; } \
      > "$base""$(basename "$f")".log \
      || echo >&2 "$f" failed; \
  done
$ base=bpf--release--0.1-- && \
  for f in r-patch-log/*; do [...]
(Of course, I could have used something more elaborate than shell scripting...)
Remove the GNU arch stuff that we don't need anymore:
$ find bpf--*/ -type d \( -name {arch} -o -name .arch-ids \) -print0 \
  | xargs -0 rm -r
The base-0 revisions are actually either empty (the devel one) or equivalent to the previous revision (the release one), so remove these:
$ rm -rf bpf--devel--0.0--base-0 bpf--release--0.1--base-0
Finally, import all the other ones:
$ mkdir g && ( cd g/ && git init )
$ for d in bpf--d*-? bpf--d*-?? bpf--r*; do \
    test -d "$d"/ || continue && \
    ( cd g/ && \
      rsync -a --delete --exclude=/.git ../"$d"/ ./ && \
      git add . && \
      GIT_AUTHOR_NAME="$(cat ../"$d".author_name)" \
      GIT_AUTHOR_EMAIL="$(cat ../"$d".author_email)" \
      GIT_AUTHOR_DATE="$(cat ../"$d".author_date)" \
      git commit -F ../"$d".log -a ); \
  done
Voilà!
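The shape of that import loop can be replayed in a tiny sandbox. Everything below (directory names, authors, dates, messages) is invented for the demonstration, and a portable `find` + `cp -a` combination stands in for the `rsync --delete` call:

```shell
# Fabricate two "revisions" plus the per-revision metadata files
# that the extraction step above produces.
mkdir demo && cd demo
for d in rev-1 rev-2; do
  mkdir "$d"
  echo "content of $d" > "$d"/file
  echo "A. Hacker" > "$d".author_name
  echo "hacker@example.org" > "$d".author_email
  echo "2009-06-24 12:00:00 +0000" > "$d".author_date
  printf 'Import %s.\n' "$d" > "$d".log
done

mkdir g && ( cd g/ && git init -q )
for d in rev-1 rev-2; do
  ( cd g/ &&
    # Make the work tree an exact copy of the revision, sans .git
    # (a stand-in for rsync -a --delete --exclude=/.git).
    find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} + &&
    cp -a ../"$d"/. . &&
    git add . &&
    GIT_AUTHOR_NAME="$(cat ../"$d".author_name)" \
    GIT_AUTHOR_EMAIL="$(cat ../"$d".author_email)" \
    GIT_AUTHOR_DATE="$(cat ../"$d".author_date)" \
    git -c user.name=importer -c user.email=importer@example.org \
      commit -q -F ../"$d".log -a )
done
git -C g log --format='%an: %s'
```

Each snapshot becomes one commit, with authorship and date taken from the side files rather than from whoever runs the import.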
Update 2009-06-25:
Half a day later, HurdFr published a git archimport-converted repository -- which was identical to my hand-crafted one (apart from having git-archimport-id: tags in the commit messages, and the first (empty) commit not being stripped off).
I was revisiting the issue of getting the Hurd's code base compiled with recent versions of GCC. Specifically, there were a lot of duplicate symbols shown at linking time, and all of these were related to inline functions. Originally, in 2007, we had solved this problem already (or rather, shifted it) by using GCC's -fgnu89-inline option, but as we saw now, that one obviously doesn't help anymore if third-party code is using the Hurd's unfixed header files.
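The duplicate-symbol problem is easy to reproduce with a few lines of shell -- a sketch assuming a reasonably recent GCC is at hand; the file and function names are made up. With -std=gnu99, GCC follows C99 inline semantics, where an extern inline definition in a header becomes an external definition in every file that includes it:

```shell
cat > twice.h <<'EOF'
/* "extern inline" in a shared header, in the style of the old Hurd
   sources.  */
extern inline int
twice (int x)
{
  return 2 * x;
}
EOF

cat > a.c <<'EOF'
#include "twice.h"
int main (void) { return twice (21) == 42 ? 0 : 1; }
EOF

cat > b.c <<'EOF'
#include "twice.h"
/* A second user of the same header.  */
int use (int x) { return twice (x); }
EOF

# gnu89 semantics: "extern inline" never emits a symbol on its own, so
# this links (at -O2, where every call really is inlined).
gcc -std=gnu99 -fgnu89-inline -O2 a.c b.c -o demo-gnu89

# C99 semantics: both a.o and b.o emit an external definition of
# twice, and linking fails with a multiple-definition error.
gcc -std=gnu99 -O2 a.c b.c -o demo-c99 2>/dev/null \
  || echo 'multiple definition, as expected'
```

This is exactly the situation of third-party code including the Hurd's headers: the -fgnu89-inline flag in the Hurd's own Makeconf can't help such code.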
I was already prepared that this would take some hours, with lots of editing and compiling cycles, plus some analyzing of the binaries. So I made up a fresh repository for this work.
$ mkdir hurd-ei
$ cd hurd-ei/
$ git init
[...]
$ git remote add savannah git://git.savannah.gnu.org/hurd/hurd.git
$ git fetch
[...]
Switch to a new topic-branch.
$ git checkout -b master-ei savannah/master
Branch master-ei set up to track remote branch master from savannah.
Switched to a new branch 'master-ei'
(ei is short for extern inline.)
The first thing to do was to disable that -fgnu89-inline option, so I edited Makeconf, where it was added to CFLAGS.
I started editing, compiling, editing, compiling, and so on.
Finally, the tree was in a shape where everything was building fine and the resulting libraries contained the symbols they should, etc.
I committed the whole junk as one big blob commit, to store it in a safe place (you never know with these Hurd machines...), and to continue working on it the next day.
$ git commit -a
For the commit message, I already mostly assembled a ChangeLog-style log.
Then:
$ git format-patch savannah/master..
0001-Bla.patch
... and here is 0001-Bla.patch.bz2 (compressed).
The next day, a.k.a. today, in a different Git repository.
$ git checkout -b master-fix_inline savannah/master
Branch master-fix_inline set up to track remote branch master from savannah.
Switched to a new branch 'master-fix_inline'
$ bunzip2 < ../some/where/0001-Bla.patch.bz2 | git am
Applying: Bla.
The big blob is now on top of savannah/master (which was 2772f5c6a6a51cf946fd95bf6ffe254273157a21, by the way -- in case you want to reproduce this tutorial later, simply substitute savannah/master with 2772...).
By then, I had come to the conclusion that the commit essentially was fine, but should be split into two, and the configure hunk shouldn't be in there. So be it.
So, the HEAD of the active branch is our big blob commit that we want to work on. Check with git show HEAD:
$ git show HEAD
commit 93e97f3351337c349e2926f4041e61bc487ef9e6
Author: Thomas Schwinge <tschwinge@gnu.org>
Date: Tue Jun 23 00:27:28 2009 +0200
Bla.
* console-client/timer.h (fetch_jiffies): Use static inline instead of extern
inline.
* ext2fs/ext2fs.h (test_bit, set_bit, clear_bit, dino, global_block_modified)
(record_global_poke, sync_global_ptr, record_indir_poke, sync_global)
(alloc_sync): Likewise.
* libftpconn/priv.h (unexpected_reply): Likewise.
* term/term.h (qsize, qavail, clear_queue, dequeue_quote, dequeue)
(enqueue_internal, enqueue, enqueue_quote, unquote_char, char_quoted_p)
(queue_erase): Likewise.
* ufs/ufs.h (dino, indir_block, cg_locate, sync_disk_blocks, sync_dinode)
(swab_short, swab_long, swab_long_long): Likewise.
* term/munge.c (poutput): Use static inline instead of inline.
* libdiskfs/diskfs.h: Apply inline optimization only ifdef
[__USE_EXTERN_INLINES]. Use __extern_inline instead of extern inline.
* libftpconn/ftpconn.h: Likewise.
* libpipe/pipe.h: Likewise.
* libpipe/pq.h: Likewise.
* libshouldbeinlibc/idvec.h: Likewise.
* libshouldbeinlibc/maptime.h: Likewise.
* libshouldbeinlibc/ugids.h: Likewise.
* libstore/store.h: Likewise.
* libthreads/rwlock.h: Likewise.
* libdiskfs/extern-inline.c: Adapt to these changes.
* libftpconn/xinl.c: Likewise. And don't #include "priv.h".
* libpipe/pipe-funcs.c: Likewise.
* libpipe/pq-funcs.c: Likewise.
* libshouldbeinlibc/maptime-funcs.c: Likewise. And remove superfluous
includes.
* libstore/xinl.c: Likewise.
* libthreads/rwlock.c: Likewise.
* Makeconf (CFLAGS): Don't append $(gnu89-inline-CFLAGS).
* pfinet/Makefile (CFLAGS): Append $(gnu89-inline-CFLAGS).
diff --git a/Makeconf b/Makeconf
index e9b2045..236f1ec 100644
--- a/Makeconf
+++ b/Makeconf
@@ -65,7 +65,7 @@ INCLUDES += -I$(..)include -I$(top_srcdir)/include
CPPFLAGS += $(INCLUDES) \
-D_GNU_SOURCE -D_IO_MTSAFE_IO -D_FILE_OFFSET_BITS=64 \
$($*-CPPFLAGS)
-CFLAGS += -std=gnu99 $(gnu89-inline-CFLAGS) -Wall -g -O3 \
+CFLAGS += -std=gnu99 -Wall -g -O3 \
[...]
We want to undo this one commit, but preserve its changes in the working directory.
$ git reset HEAD^
Makeconf: locally modified
configure: locally modified
console-client/timer.h: locally modified
ext2fs/ext2fs.h: locally modified
libdiskfs/diskfs.h: locally modified
libdiskfs/extern-inline.c: locally modified
libftpconn/ftpconn.h: locally modified
libftpconn/priv.h: locally modified
libftpconn/xinl.c: locally modified
libpipe/pipe-funcs.c: locally modified
libpipe/pipe.h: locally modified
libpipe/pq-funcs.c: locally modified
libpipe/pq.h: locally modified
libshouldbeinlibc/idvec.h: locally modified
libshouldbeinlibc/maptime-funcs.c: locally modified
libshouldbeinlibc/maptime.h: locally modified
libshouldbeinlibc/ugids.h: locally modified
libstore/store.h: locally modified
libstore/xinl.c: locally modified
libthreads/rwlock.c: locally modified
libthreads/rwlock.h: locally modified
pfinet/Makefile: locally modified
term/munge.c: locally modified
term/term.h: locally modified
ufs/ufs.h: locally modified
Now, HEAD points to the commit before the previous HEAD, i.e. HEAD^. Again, check with git show HEAD:
$ git show HEAD
commit 2772f5c6a6a51cf946fd95bf6ffe254273157a21
Author: Samuel Thibault <samuel.thibault@ens-lyon.org>
Date: Thu Apr 2 23:06:37 2009 +0000
2009-04-03 Samuel Thibault <samuel.thibault@ens-lyon.org>
* exec.c (prepare): Call PREPARE_STREAM earlier to permit calling
finish_mapping on E even after errors, as is already done in do_exec.
diff --git a/exec/ChangeLog b/exec/ChangeLog
index 5a0ad1d..a9300bf 100644
--- a/exec/ChangeLog
+++ b/exec/ChangeLog
@@ -1,3 +1,8 @@
+2009-04-03 Samuel Thibault <samuel.thibault@ens-lyon.org>
+
+ * exec.c (prepare): Call PREPARE_STREAM earlier to permit calling
+ finish_mapping on E even after errors, as is already done in do_exec.
+
2008-06-10 Samuel Thibault <samuel.thibault@ens-lyon.org>
* elfcore.c (TIME_VALUE_TO_TIMESPEC): Completely implement instead of
diff --git a/exec/exec.c b/exec/exec.c
index 05dc883..cb3d741 100644
--- a/exec/exec.c
+++ b/exec/exec.c
@@ -726,6 +726,9 @@ prepare (file_t file, struct execdata *e)
e->interp.section = NULL;
+ /* Initialize E's stdio stream. */
+ prepare_stream (e);
[...]
Luckily, Git saves the previous (i.e. before the git reset) HEAD reference as ORIG_HEAD. Have a look at it with git show ORIG_HEAD -- it contains the big blob commit, including the preliminary commit message -- just what HEAD was before:
$ git show ORIG_HEAD
commit 93e97f3351337c349e2926f4041e61bc487ef9e6
Author: Thomas Schwinge <tschwinge@gnu.org>
Date: Tue Jun 23 00:27:28 2009 +0200
Bla.
* console-client/timer.h (fetch_jiffies): Use static inline instead of extern
inline.
[...]
diff --git a/Makeconf b/Makeconf
index e9b2045..236f1ec 100644
--- a/Makeconf
+++ b/Makeconf
@@ -65,7 +65,7 @@ INCLUDES += -I$(..)include -I$(top_srcdir)/include
CPPFLAGS += $(INCLUDES) \
-D_GNU_SOURCE -D_IO_MTSAFE_IO -D_FILE_OFFSET_BITS=64 \
$($*-CPPFLAGS)
-CFLAGS += -std=gnu99 $(gnu89-inline-CFLAGS) -Wall -g -O3 \
+CFLAGS += -std=gnu99 -Wall -g -O3 \
[...]
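This reset-and-recover behaviour is easy to replay in a throwaway repository; all file names and commit messages below are invented:

```shell
mkdir reset-demo && cd reset-demo
git init -q
git config user.name "Demo User"
git config user.email demo@example.org

echo first > file
git add file && git commit -q -m 'First commit.'
echo second > file
git commit -q -a -m 'Big blob commit.'

# Undo the last commit, but keep its changes in the working directory.
git reset HEAD^

cat file                            # prints "second": the change survived
git show -s --format=%s ORIG_HEAD   # prints "Big blob commit."
```

The branch is back to one commit, the working directory still holds the big blob's changes, and ORIG_HEAD keeps the undone commit (and its message) reachable.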
OK, now let's pick the files that we want to have in the first of the envisioned two commits: these are the static inline instead of extern inline and apply inline optimization only... sections.
$ git add console-client/timer.h ext2fs/ext2fs.h [...] libthreads/rwlock.c
Oh, we forgot something: now that we're preparing this stuff to go into the master repository, update the copyright years. Edit, edit, edit, and then, again:
$ git add console-client/timer.h ext2fs/ext2fs.h [...] libthreads/rwlock.c
Now Git's staging area contains the changes that we want to commit (and the working directory contains the rest of the big blob). Commit these added files, and use the big blob's commit message as a template for the new one, as it already contains most of what we want (don't forget to chop off the unneeded parts).
$ git commit -c ORIG_HEAD
Waiting for Emacs...
[master-fix_inline 51c15bc] Use static inline where appropriate.
6 files changed, 50 insertions(+), 51 deletions(-)
$ git show HEAD
commit c6c9d7a69dea26e04bba7010582e7bcd612e710c
Author: Thomas Schwinge <tschwinge@gnu.org>
Date: Tue Jun 23 00:27:28 2009 +0200
Use static inline where appropriate and use glibc's __extern_inline machinery.
* console-client/timer.h (fetch_jiffies): Use static inline instead of extern
inline.
* ext2fs/ext2fs.h (test_bit, set_bit, clear_bit, dino, global_block_modified)
(record_global_poke, sync_global_ptr, record_indir_poke, sync_global)
(alloc_sync): Likewise.
* libftpconn/priv.h (unexpected_reply): Likewise.
* term/term.h (qsize, qavail, clear_queue, dequeue_quote, dequeue)
(enqueue_internal, enqueue, enqueue_quote, unquote_char, char_quoted_p)
(queue_erase): Likewise.
* ufs/ufs.h (dino, indir_block, cg_locate, sync_disk_blocks, sync_dinode)
(swab_short, swab_long, swab_long_long): Likewise.
* term/munge.c (poutput): Use static inline instead of inline.
* libdiskfs/diskfs.h: Apply inline optimization only ifdef
[__USE_EXTERN_INLINES]. Use __extern_inline instead of extern inline.
* libftpconn/ftpconn.h: Likewise.
* libpipe/pipe.h: Likewise.
* libpipe/pq.h: Likewise.
* libshouldbeinlibc/idvec.h: Likewise.
* libshouldbeinlibc/maptime.h: Likewise.
* libshouldbeinlibc/ugids.h: Likewise.
* libstore/store.h: Likewise.
* libthreads/rwlock.h: Likewise.
* libdiskfs/extern-inline.c: Adapt to these changes.
* libftpconn/xinl.c: Likewise. And don't #include "priv.h".
* libpipe/pipe-funcs.c: Likewise.
* libpipe/pq-funcs.c: Likewise.
* libshouldbeinlibc/maptime-funcs.c: Likewise. And remove superfluous
includes.
* libstore/xinl.c: Likewise.
* libthreads/rwlock.c: Likewise.
diff --git a/console-client/timer.h b/console-client/timer.h
index 4204192..5e64e97 100644
--- a/console-client/timer.h
+++ b/console-client/timer.h
@@ -1,5 +1,7 @@
/* timer.h - Interface to a timer module for Mach.
- Copyright (C) 1995,96,2000,02 Free Software Foundation, Inc.
+
+ Copyright (C) 1995, 1996, 2000, 2002, 2009 Free Software Foundation, Inc.
+
Written by Michael I. Bushnell, p/BSG and Marcus Brinkmann.
This file is part of the GNU Hurd.
@@ -54,7 +56,7 @@ int timer_remove (struct timer_list *timer);
/* Change the expiration time of the timer TIMER to EXPIRES. */
void timer_change (struct timer_list *timer, long long expires);
-extern inline long long
+static inline long long
[...]
As you can see, HEAD now points to the new commit on top of the current branch. (ORIG_HEAD doesn't change.)
On to the next, and last, commit; only two changes should be left: the Makeconf and pfinet/Makefile ones.
$ git status
# On branch master-fix_inline
# Your branch is ahead of 'savannah/master' by 1 commit.
#
# Changed but not updated:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: Makeconf
# modified: configure
# modified: pfinet/Makefile
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# 0001-Bla.patch
# autom4te.cache/
# hurd_extern_inline_fix.patch?file_id=18191
no changes added to commit (use "git add" and/or "git commit -a")
Alright, there is also still the configure hunk that we want to get rid of. But first, the real second commit, after again editing to add the copyright year update:
$ git add Makeconf pfinet/Makefile
$ git commit -c ORIG_HEAD
Waiting for Emacs...
[master-fix_inline 6a967d1] We're now C99 inline safe -- apart from the Linux code in pfinet.
2 files changed, 6 insertions(+), 3 deletions(-)
Check that we're in a clean state now:
$ git status
# On branch master-fix_inline
# Your branch is ahead of 'savannah/master' by 2 commits.
#
# Changed but not updated:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: configure
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# 0001-Bla.patch
# autom4te.cache/
# hurd_extern_inline_fix.patch?file_id=18191
no changes added to commit (use "git add" and/or "git commit -a")
Oops, we forgot something...
$ git checkout -- configure
Now, our tree is clean again. (Check with git status.)
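git checkout -- <file> in a nutshell, replayed in a scratch repository (names invented): it restores the committed version of a file, throwing away local modifications:

```shell
mkdir checkout-demo && cd checkout-demo
git init -q
git config user.name "Demo User"
git config user.email demo@example.org

echo ok > configure
git add configure && git commit -q -m 'Add configure.'

echo scribble >> configure   # an unwanted local change
git checkout -- configure    # discard it, restoring the committed file
cat configure                # prints "ok"
```

The `--` separates paths from revision names, so this works even for awkwardly named files.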
By now, we came to the conclusion that the first of the two commits should have been further split into two separate ones. Of course, essentially we would do the same splitting again that we've done just now -- but how to easily modify the first commit, now that we have another one on top of it?
Alright, git rebase --interactive to the rescue -- let's interactively rebase the last two commits, to modify them as wanted.
$ git rebase --interactive HEAD~2
Waiting for Emacs...
Emacs wants us to tell it which commits we want to keep as they are (pick), which should be merged into others (squash), and which we want to edit. In our scenario, we want to edit the first one and pick the second one. Change the file thusly and close it.
Stopped at 5becbb5... Use static inline where appropriate and use...
You can amend the commit now, with
git commit --amend
Once you are satisfied with your changes, run
git rebase --continue
We want to undo this first commit to split it into two. Again, use git reset
for that, while preserving the commit's changes in the working directory.
$ git reset HEAD^
console-client/timer.h: locally modified
[...]
Pick the set of files that we want to have in the first of the envisioned two commits: the static inline instead of extern inline section, and commit them, again using the previous commit message as a template for the new one:
$ git add console-client/timer.h ext2fs/ext2fs.h [...] term/munge.c
$ git commit -c ORIG_HEAD
Waiting for Emacs...
[detached HEAD 51c15bc] Use static inline where appropriate.
6 files changed, 50 insertions(+), 51 deletions(-)
Next part: apply inline optimization only.... Again, git add those files that shall be part of the next commit, i.e. all remaining ones. As before, use the previous commit message as a template.
$ git add libdiskfs/diskfs.h [...] libthreads/rwlock.c
$ git commit -c ORIG_HEAD
Waiting for Emacs...
[detached HEAD 8ac30ea] [__USE_EXTERN_INLINES]: Use glibc's __extern_inline machinery.
16 files changed, 508 insertions(+), 356 deletions(-)
Now we're done with splitting that commit into two. (Check with git status that we didn't forget anything.) What's missing is getting back the other commit on top of the two now-split ones:
$ git rebase --continue
Successfully rebased and updated refs/heads/master-fix_inline.
Here we go. The other commit has been applied on top of the two new ones.
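The whole stop-reset-split-continue dance also scripts nicely in a sandbox. GIT_SEQUENCE_EDITOR -- a detail not needed in the post, where Emacs pops up instead -- lets us mark the first todo line as edit without an interactive editor; every file name and message below is invented:

```shell
mkdir rebase-demo && cd rebase-demo
git init -q
git config user.name "Demo User"
git config user.email demo@example.org

echo base > base && git add base && git commit -q -m 'Base.'
echo one > a && echo two > b
git add a b && git commit -q -m 'Big commit touching a and b.'
echo top > c && git add c && git commit -q -m 'Commit on top.'

# Mark the older of the two commits as "edit" instead of "pick".
GIT_SEQUENCE_EDITOR='sed -i 1s/^pick/edit/' git rebase -i HEAD~2

# Stopped at the big commit: undo it, keep its changes, commit twice.
git reset HEAD^
git add a && git commit -q -m 'First half: a.'
git add b && git commit -q -m 'Second half: b.'
git rebase --continue

git log --oneline --reverse   # four commits, the split pair in the middle
```

After the final continue, the top commit has been replayed onto the two new halves, just as in the session above.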
Due to time-honored tradition, I always double-check what I have just committed, before distributing it to the world:
$ git log --reverse -p -C --cc savannah/master..
... and promptly, I recognize some changes that shouldn't be in there: when using it on some files, Emacs' copyright-fix-years, aside from indeed fixing the list of copyright years and adding the current year, also changed GPL ... version 2 into version 3, which would be nice, but which we can't do for the moment. The error is present only in the first and second commits. If it were only in the third (the last) one, simply editing the files and then using git commit --amend would be the solution. But again there is the problem of how to modify the first (HEAD~2) and second (HEAD~1, or HEAD^) commits now that there is another one on top of them. By now, we know the solution:
$ git rebase --interactive HEAD~3
Waiting for Emacs...
This time, we need to edit the first and second commits, and pick the third one.
Stopped at ffd215b... Use static inline where appropriate.
You can amend the commit now, with
git commit --amend
Once you are satisfied with your changes, run
git rebase --continue
git show (which defaults to showing HEAD, by the way) can again be used to have a look at the current HEAD (which is the first of the three commits), and then we revert the unwanted changes in the editor, resulting in the following changed files:
$ git status
# Not currently on any branch.
# Changed but not updated:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: ext2fs/ext2fs.h
# modified: libftpconn/priv.h
# modified: term/munge.c
# modified: term/term.h
# modified: ufs/ufs.h
#
# Untracked files:
# (use "git add <file>..." to include in what will be committed)
#
# 0001-Bla.patch
# autom4te.cache/
# hurd_extern_inline_fix.patch?file_id=18191
no changes added to commit (use "git add" and/or "git commit -a")
Then, we can -- as git rebase suggested above -- amend the existing HEAD commit with these changes (--amend and --all), reusing HEAD's commit message without spawning an editor (-C HEAD):
$ git commit --amend -C HEAD --all
[detached HEAD c6c9d7a] Use static inline where appropriate.
6 files changed, 45 insertions(+), 46 deletions(-)
Continue with the next commit:
$ git rebase --continue
Stopped at 8ac30ea... [__USE_EXTERN_INLINES]: Use glibc's __extern_inline machinery.
You can amend the commit now, with
git commit --amend
Once you are satisfied with your changes, run
git rebase --continue
Again, have a look at the commit (git show), revert the unwanted changes, amend HEAD, and continue to the next commit:
$ git commit --amend -C HEAD --all
[detached HEAD 9990dc6] [__USE_EXTERN_INLINES]: Use glibc's __extern_inline machinery.
16 files changed, 500 insertions(+), 348 deletions(-)
$ git rebase --continue
Stopped at 6a967d1... We're now C99 inline safe -- apart from the Linux code in pfinet.
You can amend the commit now, with
git commit --amend
Once you are satisfied with your changes, run
git rebase --continue
Two files are left to be edited (git show, etc., again), and finally:
$ git commit --amend -C HEAD --all
[detached HEAD 241c605] We're now C99 inline safe -- apart from the Linux code in pfinet.
2 files changed, 5 insertions(+), 2 deletions(-)
$ git rebase --continue
Successfully rebased and updated refs/heads/master-fix_inline.
That's it. git log --reverse -p -C --cc savannah/master.. now looks as nice as can be.
Of course, this is only a small glimpse of what is possible with git rebase and friends -- see the manual for further explanations.
For a while I have been thinking about the lack of a roadmap for the Hurd; but now I realized that we lack something even more fundamental: a simple mission statement -- i.e. saying where we want to go, rather than how we want to get there. I think many of the problems we have are directly or indirectly related to that.
As we didn't have such a mission statement so far, the people currently involved have vastly different ideas about the mission, which of course makes it a bit hard to come up with a suitable one now. However, I managed to come up with something that I believe is generic enough so all contributors can subscribe to it:
The mission of the Hurd project is: to create a general-purpose kernel suitable for the GNU operating system, which is viable for everyday use, and gives users and programs as much control over their computing environment as possible.
"Suitable for GNU" in the first part implies a number of things. I explicitly mentioned "general-purpose", because this is an important feature that sets the Hurd apart from many other microkernel projects, but isn't immediately obvious.
I didn't mention that it must be entirely free software, as this should be obvious to anyone familiar with GNU.
Another thing I did not mention, because it's too controversial: how much UNIX do we need? I think that being suitable for GNU requires a pretty high degree of UNIX compatibility, and also that the default environment looks to the user more or less like UNIX. However, some people claimed in the past that GNU could do without UNIX -- the wording used here doesn't totally preclude such views.
The second part also leaves a lot of slack: I for my part still believe that a Mach-based Hurd can be viable for everyday use; but those who think that a microkernel change is required, should be happy with this wording as well.
The third part tries to express the major idea behind the Hurd design in the most compact and generic way possible.
Hurd GSoC 2008 code_swarm
I created a code_swarm visualization of the work done in the Hurd project during this year's Google Summer of Code.
I hope you enjoy it!
PS: Now also available on vimeo thanks to scolobb!
Niches for the Hurd
On the bug-hurd mailing list we did a search for niches where the Hurd is the biggest fish in the pond.
This search was segmented into four distinct phases, three of them major:
- Brainstorm
- Reality check: can already do vs. could be used for
- Turn ideas into applications
- Find a compromise -> About which niches should we talk in the wiki?
Brainstorm
"Which niches could there be for the Hurd?"
Basic Results
The result is a mix of target groups, nice features and options of the Hurd, reasons for running a Hurd and areas where the Hurd offers advantages:
Nice features and options the Hurd offers
- Give back power to users: arbitrary mounts, subhurds
- Nice features: dpkg -iO ftp://foo/bar/*.deb
- Easier access to low-level functions
- Advanced lightweight virtualization
- Operating system study purposes, as it's done with Minix
- The possibility to create more efficient and powerful desktop environments
- Having a complete GNU System
- All-in-one out-of-the-box distro running a webserver for crash-proof operation.
Target groups and strong environments
- Tinkerers who like its design.
- Multicore systems
The keyphrases in more detail or with additional ideas
Give back power to users: arbitrary mounts, subhurds
Simpler virtual computing environments - no need to set up Xen; everyone can just open up his/her computer for someone else by creating a new user account, and the other one can log in and easily adapt the system to his/her own needs. If most systems just differ by the translators set up on them, people could even transfer their whole environment from one computer to another without needing root access or any more root interaction than creating a new user account. "I want my tools" -> "no problem, just set up your translators".
Also, it would be possible to just open an account for things like joining the "World Community Grid", allowing for easier sharing of CPU time.
Easier access to low-level functions
"One important use is for very technical people, who don't always go with standard solutions, but rather use new approaches to best solve their problems, and will often find traditional kernels too limiting."
"Another interesting aspect is application development: With the easily customized/extended system functionality, and the ability to contain such customizations in subenvironments, I believe that Hurd offers a good platform for much more efficient development of complex applications. Application developers can just introduce the desired mechanisms on a very low level, instead of building around existing abstractions. The extensible filesystem in particular seems extremely helpful as a powerful, intuitive and transparent communication mechanism, which allows creating truly modular applications."
Advanced lightweight virtualization
"There is also the whole area I called "advanced lightweight virtualization" (see http://tri-ceps.blogspot.com/2007/10/advanced-lightweight-virtualization.html ), i.e. the ability to create various kinds of interesting subenvironments. Many use cases are covered by much bigger fish; but the flexibility we offer here could still be interesting: I think the middle grounds we cover between directly running applications, and full isolation through containers or VMs, are quite unique. This could simplify management of demanding applications for example, by partially isolating them from other applications and the main system, and thus reducing incompatibilities. Creating lightweight software appliances sounds like an interesting option."
The possibility to create more efficient and powerful desktop environments
"While I believe this can be applied to any kind of applications, I'm personally most interested in more efficient and powerful desktop environments -- these considerations are in fact what got me seriously interested in the Hurd.
Even more specifically, I've done most considerations (though by far not all) on modular web browsing environments. Those interested can read up some of my thoughts on this:
http://sourceforge.net/mailarchive/message.php?msg_name=20080909073154.GB821%40alien.local
(Just skip the text mode browsing stuff -- the relevant part is the long monologue at the end... I really should put these ideas into my blog.)"
Nice features
Another example of features which would be easily possible with the Hurd:
transparent ftp (already possible!):
- settrans -c ftp: /hurd/hostmux /hurd/ftpfs /
- ls ftp://ftp.gnu.org/
- # -> list the files on the FTP server.
media-player translator:
- settrans play /hurd/mediaplayer_play
- cp song1.ogg song2.ogg play
- # -> files get buffered and played.
or even:
- cp ftp://foo/bar/ogg play
That's KDE's fabled network transparency, on the filesystem / shell level (where it belongs, to be desktop agnostic).
Add temporary filesystems anywhere via settrans -a NODE /hurd/ext2fs
On-demand mounted filesystems via a passive translator which unmounts the filesystem when it isn’t used for some time.
make everything temporarily writeable without really changing it via unionfs. Store the changes on an external device.
Read tar archives and mbox files via ls foo.tar.gz,,tarfs and ls foo.mbox,,mboxfs, respectively → nsmux.
Use stuff like the new Akonadi (personal information) framework in KDE more efficiently from the shell.
Reality check
Check which of the ideas can already be done easily with the Hurd in its current state, which ones are a bit more complex but already possible, which ones need a bit of coding (could be accomplished in a few months judging from the current speed of development), which ones need a lot of work (or fundamental changes) and which ones aren't possible.
Already possible and easy
Sample translators:
- hello world.
- transparently bind FTP into the filesystem
- hostmux + ftpfs -> connect to FTP automatically by asking for a dir named after the hostname -> fully transparent FTP filesystem: "touch ftp: ; settrans ftp: /hurd/hostmux /hurd/ftpfs / "
- bind any filesystem at any place in the directory tree (you have access to) without needing to be root.
- elegantly mount iso images and similar as unprivileged user.
Other useful stuff:
- Install deb-packages from an ftp server via 'dpkg -iO ftp://foo/bar/package.deb'
- remount a filesystem readonly as regular user: fsysopts /foo -r
- give a process additional group and user permissions at runtime:
$ groups
root
$ ps -L # gives me the PID of my login bash -> bashPID
...
$ addauth -p bashPID -g mail
$ groups
root mail
Having a complete GNU System (but not yet on all hardware, and only about half the software Debian offers has been ported).
Already possible but complex or underdocumented
Easier access to low-level functions via translators.
Operating system study purposes as it's done with minix.
Tinkering for fun - need documentation about the fun things which can be done.
Need a few months of coding
A filesystem-based package manager.
subhurds for regular users
- A framework for confining individual applications is really just one possible use case of the hurdish subenvironments. Writing the tools necessary for that should be quite doable in a few months. It's probably not really much coding -- most of the work would be figuring out how it should be set up exactly.
- subusers
- "subdo":
# Example: Let a virus run free, but any effect vanishes
# once the subhurd closes.
$ subdo --no-lasting-changes ./virus
subhurds for quickly adapting the whole system without bothering others.
Define your personal environment via translators, so you can easily take it with you (translators written in scripting languages can make this easier - they could, for example, also be taken to each computer on a USB stick).
A more powerful alternative to FUSE filesystems: FUSE is limited to standard filesystem semantics, while Hurd translators can implement whatever they want. It is possible to change the behaviour in any aspect, including the way file name lookup works. Admittedly, the only specific use case I know of is the possibility to implement namespace-based translator selection with a set of normal translators, without any changes to the Hurd itself. It is also possible to extend the filesystem interfaces, adding new RPCs and options as needed. This allows using the filesystem for communication, yet implementing domain-specific interfaces where standard filesystems are too inefficient or cumbersome. A sound server would be one possible use case.
Namespace based translator selection (if you for example want to quickly check the contents of an iso image, just look at them via 'ls image.iso,,iso9660fs').
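The "file,,translator" naming convention that namespace-based translator selection builds on can be illustrated with plain shell string handling - this only demonstrates how the name splits into its two parts, no Hurd system needed:

```shell
# Split a "file,,translator" name into its parts (a sketch of the
# convention nsmux parses; plain POSIX shell, no Hurd required):
name='image.iso,,iso9660fs'
file="${name%%,,*}"        # part before ",," -> the real file
translator="${name##*,,}"  # part after ",,"  -> translator to apply
echo "$file"
echo "$translator"
```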
Need a lot of coding or fundamental changes
Effective resource management (for example via Viengoos, on which Neal Walfield is working). The idea is that we could make a virtue out of necessity: once we have a proper resource management framework, we should be able not only to catch up with traditional systems in this regard, but in fact surpass them.
The possibility to create more efficient and powerful desktop environments.
Currently, to offer CPU time to some project (like the World Community Grid), it is necessary to install a program from them, and the project can then do only what that program allows - which leads to reinventing a processing environment instead of just using the existing OS. With the Hurd, people could simply create a user for the project, give that user specific permissions (like "you're always lowest priority"), add the public ssh keys of the project they want to donate CPU cycles to, and the project could turn the computer into the environment it needs for the specific computation, without compromising the main system in any way (needs better resource management).
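A rough sketch of what such a donation account could look like with today's tools (the account name and the key are made up; the Hurd-specific part - real resource management - is exactly what is still missing):

```shell
# Hypothetical setup for a CPU-donation account (names are illustrative).
# The account itself would be created with something like:
#   useradd --create-home grid
# Then force everything the project runs through the lowest priority by
# prefixing its SSH key entry with a wrapper command:
echo 'command="nice -n 19 $SSH_ORIGINAL_COMMAND" ssh-rsa AAAA... grid@project' \
    > authorized_keys.example
cat authorized_keys.example
```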
A shared MMORPG game world consisting simply of files for levels and person descriptions with access rights. All synchronizing is done on the translator level. Programs only have to display the given files and quickly update the state of their own files, so the programs stay very easy. The translator could notify the program when something changes.
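The core of that idea is nothing more than files plus access rights; a toy version of such a world layout could look like this (all names invented, the synchronizing translator is what the Hurd would add):

```shell
# Toy sketch of a game world as plain files with Unix permissions.
mkdir -p world/levels world/players
echo 'position: 10 20' > world/players/arne
chmod 644 world/players/arne   # anyone may read, only the owner may write
ls world/players
```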
Multicore systems (Mach needs to be fixed up for SMP).
Running parts of the Hurd on different computers, maybe even with shared servers on dedicated hardware (cloud computing, once the servers can be made to migrate between computers). Maybe this should be placed under "need a lot of coding".
Unfeasible ideas
Applications
A minor phase, which will surely be interleaved with the others: making the ideas tangible, to turn them into ways people can use the Hurd.
"Hey, look, this is the Hurd. You can use it like this to do that which you can't do as well/easily/elegantly in any other way."
Applications for private use
Applications for companies
How an application should be presented so people can easily test and digest it
We need stuff which gets people to say "hey that's cool!"
And it must be readily available. If I have to search for arcane command line parameters before I can use it, it's too hard.
From what I see, each cool application must be about as simple as this:
$ qemu hurd-is-cool.img
$ login root
$ settrans cool /hurd/cool
$ ls cool
One main focus in this example: no command line parameters except the ones we really need. No "-a" if the example is also cool without it. No "--console" if it works otherwise.
Especially no "qemu -cdrom livecd -hda hurd.img ..." - that one is great for people who already know qemu or want to learn it, but the goal here isn't to teach people better usage of qemu, but to show them that the Hurd is cool, and only that.
All that interesting advanced stuff just gets newcomers confused.
The translator concept in itself is enough news to faze a mind - anything else can easily be too much.
If the application isn't as simple as the example above, then the best step would be to see if we can make it that simple - if that involves writing trivial scripts, then so be it. They are trivial only to those who already understand the underlying concepts.
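One such trivial script could hide the qemu invocation itself (the image name is taken from the example above; the wrapper is just a sketch):

```shell
# Write a trivial wrapper script so newcomers only type one command
# instead of remembering qemu options (image name from the example above):
cat > run-hurd <<'EOF'
#!/bin/sh
exec qemu -hda hurd-is-cool.img "$@"
EOF
chmod +x run-hurd
cat run-hurd
```

Newcomers then type ./run-hurd and never see the qemu options at all.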
And now, enough rambling.
The Hurd is cool, and the complex-to-use applications are cool, too. But they are hard to present in a way newcomers easily understand.
Compromise
For each niche:
- What do we have to do to conquer the niche?
- How many additional programmers can the Hurd get in this niche?
- How does choosing this niche limit the flexibility of further development (for example due to the goals of the people who join up)?
- Can we easily move on to conquering the next niche once we've won this one?
- What should the Hurd accomplish on the long term (long term goals)? Which possible niches help that?
Each participant:
- Give your personal priorities to the niches:
- Must -> all of these, from all developers, must be included; remember that at most 3 to 4 ideas can be conveyed in any text.
- Should -> The number of shoulds can be used for ranking and similar.
("must", because in a community people can do what they perceive as important, and telling someone to stop what he's doing is not an option (in my opinion))
Result: we talk about the niches we can already fill.
Things to do
todo-item -> niches for which it is useful.
This might be useful for the next GSoC.
Easy
- Port Debian packages to the Hurd -> currently mainly tinkerers, but also any other niche. In the long run this is necessary for every user. An easy start for devs.
- Document easier access to low-level functions via translators, one function at a time. -> tinkerers.
- Get nsmux ready for regular users by setting it up on the LiveCDs by default. -> show tinkerers what it can do.
Complex
- A filesystem-based package manager: unionmounting packages. With filterfs from nsmux, any user should be able to selectively disable any package without affecting the systems of others. Simple active translators can add packages. -> clean design and more freedom for tinkerers to set up test environments: "Does this also work with XY disabled?"
- Enable subhurds for regular users via a subdo command: A framework for confining individual applications. -> tinkerers for testing their work.
- Define your personal environment via translators, so you can easily take it with you ⇒ system on a USB stick. Would work great with a filesystem based package manager. -> ?
Huge
- Get Hurd/GNU Mach ready for efficient multicore usage. -> multicore
- Running parts of the Hurd on different computers, maybe even with shared servers on dedicated hardware (cloud computing, once the servers can be made to migrate between computers). -> multicore on steroids
Getting X to work on the GNU/Hurd
This is an attempt to get X to work in my qemu GNU/Hurd.
It is a first try; my next one will follow the ?guide from this wiki.
First off: I used the following guides:
What I did
I worked as root.
First I installed xorg, x-window-system-core, rxvt and twm:
apt install xserver-xorg x-window-system-core rxvt twm
Then I set LD_LIBRARY_PATH and DISPLAY:
export LD_LIBRARY_PATH=/usr/X11R6/lib
export DISPLAY=localhost:0.0
After that I set the mouse and keyboard translators:
settrans /dev/kbd /hurd/kbd /dev/kbd
settrans -c /dev/mouse /hurd/mouse --protocol=ps/2
Then I started X:
startx
It didn't work yet - but watch the blog for updates - I'll post once I get it working.
In the past few months the Hurd got quite a lot of commits.
I want to write a bit about the changes they brought, and what they mean to the Hurd.
If some of my comments seem too 'simple' to you, just ignore them.
First we got many bug fixes from Samuel Thibault, mainly in libpthread (multithreading) and in ext2fs and libdiskfs (both filesystem interaction).
Then there is hurd-l4 (the port of the Hurd to the L4 kernel), which seems to be getting quite a lot of love from Neal H. Walfield (neal) at the moment. "Quite a lot" is saying a bit too little: hurd-l4 looks steamingly active in the commits.
And there is the PyHurd project. It attempts to create a full binding to the GNU/Hurd API, so people should someday be able to, for example, create translators in Python.
There's been more - a lot more, in fact - but much of it is above my coding horizon, and this entry has to end someplace (it's late - too late).
Best wishes, Arne
Today (OK, this night) I created some codeswarm movies to visualize the code history of the Hurd.
What's particularly interesting to me in gnumach is the tschwinge - sthibaul effect in March 2008, where development suddenly seems to speed up enormously.
It clearly shows how much impact just two developers can have - you can have that kind of an impact, too!
The code movies are created from the history of the CVS branches gnumach, hurd-l4 and hurd.
The movies:
In gnumach, red is "kern", while in "hurd" red is stuff in "release".
".doc." is dark blue, and anything named ".linux." is shown in blue-green in both. Hurd-L4 is annotated: it shows libc, gcc, Hurd and L4 kernel commits in different colors.
The hurd wiki movie shows all web commits as "web-hurd@gnu.org", and you can clearly see that most changes are done via the version control system. There's a way to split up the web commits, but since there aren't many, I leave that for another day - see the article on the ikiwiki page.
Best wishes, Arne
Yesterday I spent a few hours trying to get my German keyboard to let me use my umlauts (and to let me type without having to hunt down the right keys), but without much luck.
I got xkb installed after following this FAQ answer:
and this info:
(you can find the second under /etc/default/hurd-console).
But I didn't get it to work.
What I did in short:
First I got the needed apt-sources:
Then I installed the xkb console driver:
apt install console-driver-xkb
And set it in the file /etc/default/hurd-console
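For reference, the change amounts to enabling the xkb keyboard driver in that file. The variable names below are from memory and may differ on your system - treat them as an assumption and check the comments in your copy of the file:

```shell
# /etc/default/hurd-console (sketch -- the variable names here are an
# assumption; check the comments in the file on your own system)
ENABLE='true'
KBD='xkb'                # use the xkb keyboard driver instead of pc_kbd
KBD_ARGS='--keymap de'   # a German keymap, for my keyboard
```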
Sadly it didn't work, but maybe this post will give you the needed head start for success (I'd be glad to see a guide from you!).
Some additional info: