[[!meta copyright="Copyright © 2010, 2011, 2013 Free Software Foundation, Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]]

[[!meta title="Profiling, Tracing"]]

*Profiling* ([[!wikipedia Profiling_(computer_programming) desc="Wikipedia article"]]) is a technique for measuring where a program's CPU time is spent. This is usually done for [[performance analysis|performance]] reasons.

  * [[hurd/debugging/rpctrace]]

  * [[gprof]]

    Should be working, but some issues have been reported regarding GCC spec
    files. These should be easy to fix (if not already done).

  * [[glibc]]'s sotruss

  * [[ltrace]]

  * [[latrace]]

  * [[community/gsoc/project_ideas/dtrace]]

    Have a look at this, and integrate it into the main trees.

  * [[LTTng]]

  * [[SystemTap]]

  * ... or some other Linux thing.


# IRC, freenode, #hurd, 2013-06-17

    is that possible we develop rpc msg analyse tool?
    make it clear view system at different level?
    hurd was dynamic system, how can we just read log line by line
    congzhang: well, you can use rpctrace and then analyze the logs, but
      rpctrace is quite intrusive and will slow down things (like strace or
      similar)
    congzhang: I don't know if a low-overhead solution could be made or not
    that's the problem when real system run, the msg cross different server,
      and then the debug action should not intrusive the process itself
    we observe the system and analyse os
    when rms choose microkernel, it's expect to accelerate the progress, but
      not microkernel make debug a litter hard
    well, it's not limited to microkernels, debugging/tracing is intrusive
      and slow things down, it's an universal law of compsci
    no, it makes debugging easier
    I don't think so
    you can gdb the various services (like ext2fs or pfinet) more easily
    and rpctrace isn't any worse than strace
    how easy when debug lpc
    lpc ?
    because cross context
    classic function call
    when find the bug source, I don't care performance, I wan't to know it's
      right or wrong by design, If it work as I expect I optimize it latter
    I have an idea, but don't know weather it's usefull or not
    rpctrace is a lot less instrusive than ptrace based tools
    congzhang: debugging is not made hard by the design choice, but by
      implementation details
    as a simple counter example, someone often cited usb development on l3
      being made a lot easier than on a monolithic kernel
    Collect the trace information first, and then layout the msg by graph,
      when something wrong, I focus the trouble rpc, and found what happen
      around
    "by graph" ?
    yes
    braunr: directed graph or something similar
    and not caring about performance when debugging is actually stupid
    i've seen it on many occasions, people not being able to use debugging
      tools because they were far too inefficient and slow
    why a graph ?
    what you want is the complete trace, taking into account cross address
      space boundaries
    yes
    well it's linear
    switching server by independent process view
    it's linear
    it's linear on cpu's view too
    yes, I need complete trace, and dynamic control at microkernel level os,
      if server crash, and then I know what's other doing, from the graph
    graph needn't to be one, if the are not connect together, time sort them
    when hurd was complete ok, some tools may be help too
    i don't get what you want on that graph
    sorry, I need a context like uml sequence diagram, I need what happen one
      by one from server's view and from the function's view
    that's still linear
    so please stop using the word graph
    you want a trace
    a simple call trace
    yes, and a tool
    with some work gdb could do it
    you mean under some microkernel infrastructure help ?
    if needed
    braunr: will that be easy?
    not too hard
    i've had this idea for a long time actually
    another reason i insist on migrating threads (or rather, binding server
      and client threads)
    braunr: that's great
    the current problem we have when using gdb is that we don't know which
      server thread is handling the request of which client
    we can guess it
    but it's not always obvious
    I read the talk, know some of your idea
    make things happen like classic kernel, just from function ,sure:)
    that's it
    I think you and other do a lot of work to improve the mach and hurd, buT
      we lack the design document and the diagram, one diagram was great than
      one thousand words
    diagrams are made after the prototypes that prove they're doable
    i'm not a researcher and we have little time
    the prototype is the true spec
    that's why i wan't cllector the trace info and show, you can know what
      happen and how happen, maybe just suitable for newbie, hope more young
      hack like it
    once it's done, everything else is just sugar candy around it
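The discussion above converges on the idea of collecting rpctrace output and presenting it as a per-task call trace (rather than a "graph"). As a rough illustration only, here is a minimal Python sketch of such post-processing. The sample log lines and their format are hypothetical: real rpctrace output differs in detail and varies between versions, so the regular expression would need adjusting against actual logs.

```python
import re
from collections import defaultdict

# Hypothetical rpctrace-style log lines, inlined for a self-contained
# example.  The exact format of real rpctrace output differs; this is
# only a sketch of the "collect, then lay out per task" idea.
SAMPLE_LOG = """\
task145(pid1290)->vm_allocate (16384) = 0
task145(pid1290)->io_write ("hello") = 0 5
task212(pid1301)->dir_lookup ("etc") = 0
"""

# One assumed line shape: TASK->RPC (ARGS) = RESULT ...
LINE_RE = re.compile(
    r'^(?P<task>\w+\(pid\d+\))->(?P<rpc>\w+)\s*\((?P<args>.*?)\)\s*=\s*(?P<ret>\S+)'
)

def parse_trace(text):
    """Group RPC calls by originating task, preserving call order."""
    timeline = defaultdict(list)
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            timeline[m.group('task')].append((m.group('rpc'), m.group('ret')))
    return dict(timeline)

if __name__ == '__main__':
    # Print a linear, per-task call trace -- the "uml sequence diagram"
    # style view asked for in the discussion, in its simplest text form.
    for task, calls in parse_trace(SAMPLE_LOG).items():
        print(task, '->', ', '.join(rpc for rpc, _ in calls))
```

Such a tool would only be sugar around the hard part identified in the log: reliably knowing which server thread handles which client request, which the thread-binding work mentioned above is meant to solve.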