From 9667351422dec0ca40a784a08dec7ce128482aba Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Wed, 10 Jul 2013 23:39:29 +0200
Subject: IRC.

---
 open_issues/profiling.mdwn | 105 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 105 insertions(+)

(limited to 'open_issues/profiling.mdwn')

diff --git a/open_issues/profiling.mdwn b/open_issues/profiling.mdwn
index 26e6c97c..545edcf6 100644
--- a/open_issues/profiling.mdwn
+++ b/open_issues/profiling.mdwn
@@ -9,10 +9,14 @@
 Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the
 license is included in the section entitled
 [[GNU Free Documentation License|/fdl]]."]]"""]]
 
+[[!meta title="Profiling, Tracing"]]
+
 *Profiling* ([[!wikipedia Profiling_(computer_programming) desc="Wikipedia
 article"]]) is a tool for tracing where CPU time is spent.  This is usually
 done for [[performance analysis|performance]] reasons.
 
+  * [[hurd/debugging/rpctrace]]
+
   * [[gprof]]
 
     Should be working, but some issues have been reported, regarding GCC spec
@@ -33,3 +37,104 @@
 done for [[performance analysis|performance]] reasons.
 
   * [[SystemTap]]
 
   * ... or some other Linux thing.
+
+
+# IRC, freenode, #hurd, 2013-06-17
+
+    is it possible that we develop an rpc msg analysis tool? one that gives
+      a clear view of the system at different levels?
+    the hurd is a dynamic system; how can we just read a log line by line?
+    congzhang: well, you can use rpctrace and then analyze the logs,
+      but rpctrace is quite intrusive and will slow things down (like strace
+      or similar)
+    congzhang: I don't know whether a low-overhead solution could be made
+      or not
+    that's the problem
+    when the real system runs, the msgs cross different servers, so the
+      debugging must not intrude on the processes themselves
+    we observe the system and analyse the os
+    when rms chose a microkernel, it was expected to accelerate progress,
+      but it didn't
+    the microkernel makes debugging a little hard
+    well, it's not limited to microkernels; debugging/tracing is
+      intrusive and slows things down, it's a universal law of compsci
+    no, it makes debugging easier
+    I don't think so
+    you can gdb the various services (like ext2fs or pfinet) more
+      easily
+    and rpctrace isn't any worse than strace
+    how easy is it when debugging lpc?
+    lpc ?
+    because it crosses contexts
+    classic function call
+    when looking for the source of a bug, I don't care about performance;
+      I want to know whether it's right or wrong by design, whether it works
+      as I expect; I optimize it later
+    I have an idea, but I don't know whether it's useful or not
+    rpctrace is a lot less intrusive than ptrace-based tools
+    congzhang: debugging is not made hard by the design choice, but by
+      implementation details
+    as a simple counter-example, usb development on l3 has often been cited
+      as being a lot easier than on a monolithic kernel
+    collect the trace information first, and then lay out the msgs as a
+      graph; when something goes wrong, I focus on the troublesome rpc and
+      find out what happened around it
+    "by graph" ?
+    yes
+    braunr: a directed graph or something similar
+    and not caring about performance when debugging is actually stupid
+    i've seen it on many occasions, people not being able to use
+      debugging tools because they were far too inefficient and slow
+    why a graph ?
+    what you want is the complete trace, taking into account crossings of
+      address space boundaries
+    yes
+    well, it's linear
+    switching servers
+    viewed from each independent process, it's linear
+    it's linear from the cpu's view too
+    yes, I need the complete trace, and dynamic control at the microkernel
+      level
+    so, if a server crashes, I then know what the others were doing, from
+      the graph
+    there needn't be only one graph; if they are not connected together,
+      sort them by time
+    when the hurd is completely ok, some tools may help too
+    i don't get what you want on that graph
+    sorry, I need a context
+    like a uml sequence diagram, I need what happens, one thing after
+      another
+    from the server's view and from the function's view
+    that's still linear
+    so please stop using the word graph
+    you want a trace
+    a simple call trace
+    yes, and a tool
+    with some work gdb could do it
+    you mean with some help from the microkernel infrastructure?
+    if needed
+    braunr: will that be easy?
+    not too hard
+    i've had this idea for a long time actually
+    another reason i insist on migrating threads (or rather, binding
+      server and client threads)
+    braunr: that's great
+    the current problem we have when using gdb is that we don't know
+      which server thread is handling the request of which client
+    we can guess it
+    but it's not always obvious
+    I read the talk, and know some of your ideas
+    make things happen like in a classic kernel, just from functions,
+      sure :)
+    that's it
+    I think you and others do a lot of work to improve mach and the
+      hurd, but we lack design documents and diagrams; one diagram is worth
+      a thousand words
+    diagrams are made after the prototypes that prove they're doable
+    i'm not a researcher
+    and we have little time
+    the prototype is the true spec
+    that's why i want to collect the trace info and display it, so you can
+      know what happened and how it happened; maybe it's just suitable for
+      newbies, I hope more young hackers will like it
+    once it's done, everything else is just sugar candy around it
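The idea discussed in the log above — capture an interleaved RPC log once, then recover each process's own linear call trace from it (the "per-process view" versus the "cpu's view") — can be sketched as follows. This is a minimal illustrative sketch, not an existing tool: the `taskN(pidM)->rpc_name (args) = result` line shape is a simplified approximation of rpctrace's output, and `build_traces` is a hypothetical helper name.

```python
import re
from collections import defaultdict

# Assumption: a simplified approximation of an rpctrace request line,
# e.g. 'task12(pid100)->dir_lookup ("foo") = 0'.  Real rpctrace output
# is richer; the pattern would need adjusting for real logs.
REQUEST = re.compile(r"task\d+\(pid(?P<pid>\d+)\)->(?P<rpc>\w+)")

def build_traces(log_lines):
    """Split an interleaved RPC log into per-process linear traces.

    Returns (global_trace, per_pid):
      - global_trace keeps the original interleaved order (the "cpu's view"),
      - per_pid maps each pid to its own linear call trace (each process's
        independent view, which the discussion above notes is linear).
    """
    global_trace = []
    per_pid = defaultdict(list)
    for line in log_lines:
        match = REQUEST.search(line)
        if match is None:
            continue  # replies and continuation lines are not modeled here
        pid, rpc = int(match.group("pid")), match.group("rpc")
        global_trace.append((pid, rpc))
        per_pid[pid].append(rpc)
    return global_trace, dict(per_pid)

if __name__ == "__main__":
    sample = [
        'task12(pid100)->dir_lookup ("foo") = 0',
        "task34(pid200)->io_write (...) = 0",
        "task12(pid100)->io_read (...) = 0",
    ]
    interleaved, per_process = build_traces(sample)
    print(interleaved)       # the complete trace, in arrival order
    print(per_process[100])  # one process's own linear call trace
```

Grouping by pid gives each server or client its own linear trace, while the interleaved list preserves the order in which requests crossed address space boundaries; something like a UML sequence diagram could then be rendered from either view.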