[[!meta copyright="Copyright © 2000, 2008, 2013, 2014 Free Software Foundation,
Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]
The `pfinet` server is a hacked Linux internet implementation with a glue layer
translating between the Hurd [[RPC]]s and the middle layer of the Linux
implementation.
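Applications never call into `pfinet` directly: glibc's sockets functions look
up the translator attached to `/servers/socket/N` (N being the protocol
family, 2 for `PF_INET`) and talk to it through the Hurd socket RPCs. As a
minimal sketch (not taken from any existing program), this is the call path
that fails with `EIEIO`, which glibc prints as "Computer bought the farm", in
the 2014-01-15 log below:

    /* Minimal sketch: create an IPv4 socket.  On the Hurd, glibc resolves
       this to a socket_create RPC on whatever translator is attached to
       /servers/socket/2 (2 == PF_INET), normally pfinet.  */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int
    main (void)
    {
      int fd = socket (AF_INET, SOCK_STREAM, 0);
      if (fd < 0)
        {
          /* EIEIO is rendered by glibc as "Computer bought the farm".  */
          perror ("IPv4 socket creation failed");
          return 1;
        }
      close (fd);
      return 0;
    }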
# Bugs
## Those Listed on [[open_issues]]
## [[IPv6]]
## IRC, freenode, #hurd, 2013-04-03
<braunr> youpi: there are indeed historical bugs with small packets and
tcp_nodelay in linux 2.0/2.2 tcp/ip
<youpi> oh
<braunr> http://jl-icase.home.comcast.net/~jl-icase/LinuxTCP2.html
## MAC Addresses
[[!tag open_issue_hurd]]
### IRC, freenode, #hurd, 2013-09-21
<jproulx> what command will show me the MAC address of an interface?
<youpi> ah, too bad inetutils-ifconfig doesn't seem to be showing it
<youpi> I don't think we already have a tool for that
<youpi> it would be a matter of patching inetutils-ifconfig
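No tool exposes the MAC address yet. As a sketch of what a patched
inetutils-ifconfig might do, *assuming* `pfinet` answers the Linux-style
`SIOCGIFHWADDR` interface ioctl (if it does not, the call fails cleanly; the
interface name is a placeholder):

    /* Hypothetical sketch: read an interface's hardware (MAC) address
       through SIOCGIFHWADDR, the way a patched inetutils-ifconfig
       might.  Whether pfinet implements this ioctl is an assumption.  */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <unistd.h>

    int
    main (int argc, char *argv[])
    {
      struct ifreq ifr;
      int fd = socket (AF_INET, SOCK_DGRAM, 0);
      if (fd < 0)
        { perror ("socket"); return 1; }
      memset (&ifr, 0, sizeof ifr);
      strncpy (ifr.ifr_name, argc > 1 ? argv[1] : "eth0", IFNAMSIZ - 1);
      if (ioctl (fd, SIOCGIFHWADDR, &ifr) < 0)
        { perror ("SIOCGIFHWADDR"); return 1; }
      unsigned char *mac = (unsigned char *) ifr.ifr_hwaddr.sa_data;
      printf ("%02x:%02x:%02x:%02x:%02x:%02x\n",
              mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
      close (fd);
      return 0;
    }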
## Routing Tables
[[!tag open_issue_hurd]]
### IRC, freenode, #hurd, 2013-09-21
<jproulx> Hmmm, OK I can work around that, what about routing tables, can I
see them? can I add routes besides the pfinet -g default route?
<youpi> I don't think there is a tool for that yet
<youpi> it's not plugged inside pfinet anyway
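For illustration only: a route-manipulating tool would presumably go through
the `SIOCADDRT` ioctl, as sketched below with made-up addresses; since, per
the log above, the hook is not plugged into `pfinet`, the call is expected to
fail.

    /* Hypothetical sketch: add a network route via SIOCADDRT.  All
       addresses are placeholders; on pfinet as discussed above this is
       expected to fail because the hook is not wired up.  */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/route.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    static void
    set_addr (struct sockaddr *sa, in_addr_t addr)
    {
      struct sockaddr_in *sin = (struct sockaddr_in *) sa;
      sin->sin_family = AF_INET;
      sin->sin_addr.s_addr = addr;
    }

    int
    main (void)
    {
      struct rtentry rt;
      int fd = socket (AF_INET, SOCK_DGRAM, 0);
      if (fd < 0)
        { perror ("socket"); return 1; }
      memset (&rt, 0, sizeof rt);
      set_addr (&rt.rt_dst, inet_addr ("10.0.0.0"));
      set_addr (&rt.rt_genmask, inet_addr ("255.0.0.0"));
      set_addr (&rt.rt_gateway, inet_addr ("192.168.1.1"));
      rt.rt_flags = RTF_UP | RTF_GATEWAY;
      if (ioctl (fd, SIOCADDRT, &rt) < 0)
        perror ("SIOCADDRT (likely unimplemented in pfinet)");
      close (fd);
      return 0;
    }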
## IRC, freenode, #hurd, 2014-01-15
<braunr> 2014-01-14 23:06:38 IPv4 socket creation failed: Computer bought
the farm
<braunr> :O
<youpi> hum :)
<youpi> perhaps related with your change for "lo" performance?
<braunr> unlikely
<youpi> I don't see what would have changed in pfinet otherwise
<braunr> mig generated code if i'm right
<braunr> lib*fs
<braunr> libfshelp
<braunr> looks plenty enough
<braunr> teythoon's output has been quite high, it's not so surprising to
spot such integration issues
### IRC, freenode, #hurd, 2014-01-16
<braunr> teythoon: so, did you see we have bugs on the latest hurd packages
:)
<braunr> for some reason, exim4 starts nicely on darnassus, but not on
another test vm
<braunr> and there is a "deallocate invalid name" error at boot time
<braunr> it's also present with your packages
<teythoon> yes
<braunr> not being able to start exim4 and other servers on some machines,
apparently randomly, is a bit frightening to me
<braunr> as the message says, "most probably a bug"
<teythoon> yes
<braunr> so we have to get rid of it as soon as possible so we can get to
the more interesting stuff
<teythoon> but there is no way to attribute this message to a process
<braunr> well, for those at boot time, there is
<teythoon> ?
<braunr> if i disable exim, i don't get it :p
<teythoon> oh ?
<braunr> but again, it doesn't occur on all machines
<braunr> and that part is the one i don't like at all
<teythoon> still, is it in exim, pfinet, pflocal, ... ?
<teythoon> no way to answer that
<braunr> ah right sorry
<braunr> it's probably pfinet, since exim says computer bought the farm on
a socket
<braunr> pflocal had the same pid
<teythoon> ok
<braunr> and after an upgrade, i don't reproduce that
<braunr> good, in a way
<braunr> there still is the one, after auth
<teythoon> yes
<teythoon> i'm seeing that too
<braunr> (as in "exec init proc auth")
<braunr> shouldn't be too hard to fix
<braunr> i'll settle on this one after i'm done with libps
<gnu_srs> (15:21:34) braunr: it's probably pfinet, since exim says computer
bought the farm on a socket:
<gnu_srs> remember my having problems with removing a socket file, maybe
related, probably not pfinet then?
<braunr> gnu_srs: unlikely
<braunr> that pfinet bug may have been completely transient
<braunr> fixed by upgrading packages
<gnu_srs> braunr: k!
<braunr> and exim4 keeps crashing on my hurd instance at home
<braunr> (pfinet actually)
<braunr> uh, ok, a stupid typo ..
<braunr> teythoon: --natmask in the /servers/socket/2 node, but correct
options in the 26 one .... :)
### IRC, freenode, #hurd, 2014-01-17
<teythoon> braunr: *phew*
# Reimplementation, [[!GNU_Savannah_task 5469]]
## [[community/gsoc/project_ideas/tcp_ip_stack]]
## IRC, freenode, #hurd, 2013-04-03
[[!tag open_issue_hurd]]
<youpi> I was thinking about just using liblwip this afternoon, btw
<braunr> what is it ?
<braunr> hm, why not
<braunr> i would still prefer using code from netbsd
<braunr> especially now with the rump kernel project making it even easier
[[open_issues/user-space_device_drivers]], *External Projects*, *The Anykernel
and Rump Kernels*.
<youpi> well, whatever is easy to maintain up to date actually
<braunr> netbsd's focus on general portability normally makes it easy to
maintain
<braunr> the author of the rump project was able to make netbsd code run in
browsers :)
<braunr> and he actually showed clients using the networking stack on
windows, remotely (not in the same process)
<braunr> so that's very close to what we want
<youpi> indeed
<youpi> though liblwip is exactly the same portability focus :)
<braunr> apparently, for embedded systems
<youpi> but bsd's code is probably better
<youpi> yes
<braunr> i think so, more general purpose, larger user base
<youpi> I used it for the stubdomains in Xen
<youpi> (it = lwip)
<braunr> ok
Cloudius Systems' OSv has apparently isolated and re-used a BSD networking
stack, <http://www.osv.io/>, <https://github.com/cloudius-systems/osv>.
## IRC, freenode, #hurd, 2014-02-06
<akshay1994> Hello Everyone! Just set up my Hurd system. I need some help
now, in selecting a project on which i can work, and delving further into
this.
<braunr> akshay1994: what are you interested in ?
<akshay1994> I was going through the project ideas. Found TCP/IP Stack, and
CD Audio grabbing interesting.
<braunr> cd audio grabbing ?
<braunr> hm why not
<braunr> akshay1994: you have to understand that, when it comes to drivers,
we prefer reusing existing implementations in contained servers rather
than rewriting them ourselves
<braunr> the networking stack project would be very interesting, yes
<akshay1994> Yes. I was indeed reading about the network stack.
<akshay1994> So we need an easily modularise-able userspace stack, which we
can run as a server for now.
<akshay1994> And split into different protocol layers later.
<braunr> hum no
<braunr> we probably want to stick to the model we're currently using with
pfinet
<braunr> for network drivers, yes
<braunr> i strongly suspect we want the whole IPv4/IPv6 networking stack in
a single server
<braunr> and writing glue code so that it works on the hurd
<braunr> then, you may want to add hooks for firewall/qos/etc...
<braunr> (although i think qos should be embedded too)
<braunr> sjbalaji: i also suggest reusing the netbsd networking stack,
since netbsd is well known for its clean portable code, and it has a
rather large user base (compared to us or other less known projects) and
is well maintained
<braunr> the rump project might make porting even easier
[[open_issues/user-space_device_drivers]], *External Projects*, *The Anykernel
and Rump Kernels*.
<akshay1994> okay! I was reading the project idea, where they mention that
a true hurdish stack will use a set of translator processes, each
implementing a different protocol layer
<braunr> a true hurdish stack would
<braunr> i strongly doubt we'll ever have the man power to write one
<braunr> i don't really see the point either to be honest :/
<akshay1994> haha!
<braunr> but others have better vision than me for these things so i don't
know
<akshay1994> So, what are the problems we are facing with the current
pfinet implementation?
<braunr> it's old
<braunr> meaning it doesn't implement some features that may be important,
and has a few bugs due to lack of maintenance
<braunr> maintenance here being updating the code base with a newer
version, and we don't particularly want to continue grabbing code from
linux 2.2 :)
<akshay1994> I see. I was just skimming through google, about userspace
network stacks, but I think I might need to first understand how the
current one works and interacts with the system, before proceeding
further!
<braunr> yes
<braunr> the very idea of a userspace stack itself has few implications
<braunr> it basically means it doesn't run in system mode, and instead of
directly calling functions, it uses RPCs to communicate with other parts
of the system
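To make the RPC point concrete, here is a small sketch (hypothetical, but
using only documented libc/Hurd calls): it obtains a Mach send right to
whatever server sits on the `PF_INET` node; every socket operation then
becomes a message to that port rather than a function call into kernel code.

    /* Sketch: instead of trapping into kernel code, a Hurd process gets
       a Mach send right to the translator on the PF_INET node and sends
       it RPCs.  file_name_lookup is the standard way to get that right.  */
    #include <hurd.h>
    #include <mach.h>
    #include <stdio.h>

    int
    main (void)
    {
      mach_port_t server = file_name_lookup ("/servers/socket/2", 0, 0);
      if (server == MACH_PORT_NULL)
        {
          perror ("file_name_lookup");
          return 1;
        }
      printf ("send right to the PF_INET server: %u\n", (unsigned) server);
      mach_port_deallocate (mach_task_self (), server);
      return 0;
    }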
<akshay1994> braunr: I looked at the netBSD net-stack, and also how hurd
(and mach) work. I'm starting with the hacking guide. Seems a little
difficult :p
<akshay1994> But i feel, I'll get over it. Any tips?
<braunr> akshay1994: it's not straightforward
<akshay1994> I know. Browsing through pfinet gave me an idea, how complex a
thing I'm trying to deal with in first try :p
### IRC, freenode, #hurd, 2014-02-09
<antrik> braunr: the point of a hurdish network stack is the same as a
hurdish block layer, and in fact anything hurdish: you can do things like
tunnelling in a natural manner, rather than needing special loopback code
and complex control interfaces
<braunr> antrik: i don't see how something like the current pfinet would
prevent that
# IRC, freenode, #hurd, 2013-10-31
<braunr> looks like there is a deadlock in pfinet
<braunr> maybe triggered or caused by my thread destruction patch
[[open_issues/libpthread/t/fix_have_kernel_resources]].