author Pierre Thierry <nowhere.man@levallois.eu.org> 2007-01-10 00:19:00 +0000
committer Pierre Thierry <nowhere.man@levallois.eu.org> 2007-01-10 00:19:00 +0000
commit e1bd9475232b3b4dcf17809cc7a9ed67afc6924f (patch)
tree 80329237450e30dad386f8f6eac099b46e1b5777 /Hurd
parent 0ae71bda3754e99cf964d07fcce1aec5aa38ad52 (diff)
Diffstat (limited to 'Hurd')
-rw-r--r-- Hurd/Part1OwnershipAndContracts.mdwn | 163
1 file changed, 163 insertions(+), 0 deletions(-)
diff --git a/Hurd/Part1OwnershipAndContracts.mdwn b/Hurd/Part1OwnershipAndContracts.mdwn
new file mode 100644
index 00000000..a5260bc6
--- /dev/null
+++ b/Hurd/Part1OwnershipAndContracts.mdwn
@@ -0,0 +1,163 @@
+# <a name="Part_1_Ownership_And_Contracts"> Part 1: Ownership And Contracts </a>
+
+This is the first in a series of notes that will serve to formulate my position from the ground up. The way will not be straightforward. I cannot give you one particular, technical argument that addresses all my concerns. Instead, the evaluation involves a step of personal judgement (but only one). In this note, I will explain why I believe that this is necessarily the case, what this step is, and what my stance on it is.
+
+This mail took me 5 hours to write, which means 1.5 lines per minute. It contains only a tiny part of my argument. I hope that this removes any doubts about my sincerity in addressing all issues, but also makes apparent the technical constraints on doing so "immediately", as has been requested of me several times. I have to ask for patience. Like everybody else, I am doing this in my spare time.
+
+Let me jump right in at the technical level: I claim that every relationship between two processes falls into one of four categories. These four coarse categories provide a differentiation that is sufficient for my argument:
+
+## <a name="Process_Relationship_Categories"> Process Relationship Categories </a>
+
+0. It is not possible for the processes to communicate via direct IPC.
+
+In all other categories, it is possible for the processes to communicate via direct IPC, because one of the processes, let me call it process A, has a capability to the other process, let me call it process B.
+
+1. The collective authority of process B, immediately[1] after the time it was instantiated, is necessarily a strict subset of the collective authority held by process A at that time.
+
+[1] See my challenge email for a definition of the window of time that gives meaning to the word "immediately" in the case where process B is instantiated indirectly or directly because of an action in process A. If process B is instantiated independently of process A, just assume that the collective authority held by process A is the empty set.
+
+2. The collective authority of process B, immediately after the time it was instantiated, minus the collective authority of process A (if it existed), is necessarily non-empty. Some of the capabilities in this non-empty set provide the ability to write out.
+
+3. The collective authority of process B, immediately after the time it was instantiated, minus the collective authority of process A (if it existed), is necessarily non-empty. None of the capabilities in this non-empty set provide the ability to write out.
+
+This categorization does not say anything about encapsulation. However, it is to be understood from the description that in categories 0, 2, and 3, process B is encapsulated. If it were not, the collective authority held by A would include the authority of B by transitivity. In category 1, it is to be understood that process B, in principle, cannot be successfully encapsulated (to see this, note that process A could pre-arrange its authority so that no capability it has provides the possibility for encapsulation).
+
+These categories provide a complete characterization of two important system structures: the EROS/Coyotos model, which relies on categories 0, 2, and 3, while making category 1 possible; and my recursive system structure model, which relies on categories 0, 1, and 2, but rejects 3.
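+
+To make these set relations concrete, here is a minimal sketch of the categorization as code. It is an illustration only: it models collective authority as a plain set of capabilities and assumes a hypothetical `writes_out` predicate and an `ipc_possible` flag, none of which is part of the argument itself.
+
+    def categorize(ipc_possible, auth_a, auth_b, writes_out):
+        """Classify the relationship between processes A and B into
+        categories 0-3.  auth_a and auth_b are the collective
+        authority (sets of capabilities) of A and B immediately after
+        B's instantiation; writes_out is a predicate on capabilities.
+        If B is instantiated independently of A, pass an empty set
+        for auth_a, as footnote [1] suggests."""
+        if not ipc_possible:
+            return 0                   # category 0: no direct IPC possible
+        surplus = auth_b - auth_a      # authority B holds beyond A's
+        if not surplus:
+            return 1                   # category 1: B's authority within A's
+        if any(writes_out(c) for c in surplus):
+            return 2                   # category 2: surplus can write out
+        return 3                       # category 3: surplus, none writes out
+
+Note that the sketch treats the boundary case where B's authority equals A's as category 1, although, strictly, category 1 requires a strict subset.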
+
+## <a name="Agenda"> Agenda </a>
+
+This provides the basis for a goal-based analysis. The agenda can be:
+
+1) It has to be demonstrated that the goals of the Hurd can be met by relying on the process relationships described by categories 0, 1, and 2. This of course must include an analysis of the goals of the Hurd.
+
+2) It is useful to reason about the rejection of category 3. What do we lose by omitting it? What goals cannot be achieved by the Hurd if it rejects category 3?
+
+This sets the background. I will start with the second item on the list, and then work my way up. It would not be unreasonable to do it the other way around: I could state the goals of the Hurd, then demonstrate that we can achieve them by using the model I described earlier, and then look at the interaction with category 3 relationships. This would be the more direct way. However, a discussion of the goals of the Hurd can be more easily followed if the background is set. So, let me finish this note with some general arguments about the properties of category 3, and what the factors can be that determine how you think about it.
+
+## <a name="Encapsulation_and_Confinement"> Encapsulation and Confinement </a>
+
+What is the exact nature of the relationship between process A and process B, where communication can (and does) occur, but process B is both encapsulated and confined? To discuss this, we have to define what we mean by the nature of process relationships. Two concepts come to mind: ownership and contracts. What do these words mean?
+
+In the course of the discussion, I will make use of citations from Hegel's Philosophy of Right. I am not relying on his argumentation; it is just a convenient source for some definitions from which I want to work.
+
+## <a name="Ownership"> Ownership </a>
+
+Ownership is not a complicated concept. You can look it up in encyclopedias or dictionaries, or you can study philosophy. Hegel defines ownership this way (Paragraph 61):
+
+"Since the substance of the thing which is my property is, if we take the thing by itself, its externality, i.e. its non-substantiality --- in contrast with me it is not an end in itself (see � 42) and since in my use or employment of it this externality is realised, it follows that my full use or employment of a thing is the thing in its entirety, so that if I have the full use of the thing I am its owner. Over and above the entirety of its use, there is nothing left of the thing which could be the property of another."
+
+A shorter definition is that ownership is the exclusive right of a person to possess, use, and dispose of a thing. Note that the right must be exclusive. It must also be complete. Also note that ownership refers to human beings, not things. Things cannot own other things. Paragraph 42 in Hegel's work defines:
+
+"What is immediately different from free mind is that which, both for mind and in itself, is the external pure and simple, a thing, something not free, not personal, without rights."
+
+## <a name="Contracts"> Contracts </a>
+
+Hegel describes the transition from ownership to contracts in paragraph 71 this way:
+
+"One aspect of property is that it is an existent as an external thing, and in this respect property exists for other external things and is connected with their necessity and contingency. But it is also an existent as an embodiment of the will, and from this point of view the 'other' for which it exists can only be the will of another person. This relation of will to will is the true and proper ground in which freedom is existent. --- The sphere of contract is made up of this mediation whereby I hold property not merely by means of a thing and my subjective will, but by means of another person's will as well and so hold it in virtue of my participation in a common will."
+
+A contract is thus an agreement among agents to hold property by means of a common will.
+
+## <a name="Mediating_Actors"> Mediating Actors </a>
+
+In the case of confinement and encapsulation, there are not just two agents engaging in a contract, there are (at least) three. There must be three, because encapsulation and confinement mean that neither the party that is encapsulated nor the party that is confined comes to hold the other party's property. So, there must be a third agent which does hold both parties' property, and which implements the common will of the participants.
+
+To find this agent, we just have to look for somebody who comes to hold both parties' property. In computer systems without "trusted computing" components, this is the owner of the machine (and/or the system administrator). In computer systems with "trusted computing" components, the mediating agents are the people or companies designing and building the "trusted computing" hardware.
+
+In either case, the mediating agent uses tools to implement the common will. In either case, the mediating agent has ownership over the property that is part of the contract which is not exclusive, but still quite complete (possession, use, and disposal). In either case, implementation of the common will depends on the good behaviour of the mediating agent.
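+
+As a minimal sketch of this three-party structure (again an illustration only; the `Mediator` class and its methods are hypothetical), note that the mediator alone comes to possess what both parties hand over, so nothing but its own good behaviour binds it to the contract:
+
+    class Mediator:
+        """A third agent that holds both parties' property and applies
+        the agreed-upon common will to it."""
+
+        def __init__(self):
+            self.held = {}                 # party name -> deposited property
+
+        def deposit(self, party, property_):
+            # Each party gives up possession to the mediator; neither
+            # party ever holds the other's property directly.
+            self.held[party] = property_
+
+        def execute(self, common_will):
+            # Only the mediator can act on everything deposited.  Nothing
+            # here forces it to follow the contract: the participants
+            # must rely on its good behaviour.
+            return common_will(self.held)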
+
+## <a name="Contract_Requires_Consent"> Contract Requires Consent </a>
+
+If the mediating agent is supposed to implement the common will of the participants in a contract, it needs to know what the common will is. If a participant wants to engage in a contract, they need to know what the contract means before they can make a proper judgement about participation.
+
+In the process of entering a contract, you are giving up, at least temporarily, possession of a thing you own. This is why entering a contract requires careful consideration and explicit consent.
+
+## <a name="Contracts_Are_Not_Private"> Contracts Are Not Private </a>
+
+I cannot make the transition here from the rights of individuals to the structure and legitimation of civil societies. This is the subject matter of state philosophy. However, it suffices to say that the universal rights of individuals find (often, though not always) expression in the laws of the civil society, and that it is civil society which is entrusted with resolving conflicts between the perceived particular rights of individuals.
+
+Because civil societies exist, and we live in them, and contracts are fundamental to the functioning of a society, every society has extensive and complex laws about how contracts are made, what they can entail, and what their limitations are. The German Civil Code contains 2385 articles on 433 pages, and this is only one of the many laws that have something to say about the matter. There are other laws specific to contractual labor, anti-trust, publicly traded companies, publications, and so on.
+
+## <a name="A_Matter_Of_Judgement"> A Matter Of Judgement </a>
+
+It is now appropriate to look again at the proposed system structures in their extreme forms (there are shades of gray, but they have not been seriously discussed so far).
+
+In my model, the computer remains the property of its owner. This property right is strongly protected. The system will not, by default, allow operations that let the owner unintentionally enter into a contract between two parties. Any such contract requires explicit consent. It also requires, every time a contract is made, explicitly choosing the mediator and the scope of the contract. In other words, the owner must be explicit about his particular will that should become part of the common will of the contract.
+
+In the EROS/Coyotos model employing "trusted computing", only part of the computer is the property of the owner. Another part of the computer is owned by the manufacturer of the "trusted computing" component. The system will, by design, perpetually give away possession of parts of the computer to other agents by engaging constantly in contracts with them. The nature of these contracts is built firmly into the system structure: the mediator is always the agent that designed and implemented the "trusted computing" component. The default "common will" is to alienate all rights to the property, except the right to destroy it.
+
+These seem to me the only serious proposals. I recognize that my model makes it harder for people to enter into contracts when they want to. In my opinion, this is justified: negotiating and implementing a contract is a fundamental process that cannot be overly simplified. In fact, in any serious business, developing the contracts between collaborating agents is a very serious and essential part of the process. Business owners are acutely aware of the risks involved in entering into a contract, and spend significant personnel and financial resources to limit their risks.
+
+There may be, in principle, a system that makes it convenient for users to engage in standard contracts selectively, explicitly, and safely. For this, however, the mechanisms involved must allow for a broad range of expression that reflects the structure of the existing society, and the user must be able to decide whether the contract actually reflects the common will of the involved agents. This is far beyond what we can technically achieve, at least today, maybe forever.
+
+## <a name="On_The_Non_Technicality_Of_The_C"> On The Non-Technicality Of The Choice </a>
+
+Currently, we only know about the two possible extreme positions described above. A description of the properties of my model, and of how they can be achieved, is still outstanding. However, my claim is that the choice between these two options cannot be made on technical grounds. Each system is self-consistent and provides an adequate solution to the task that it tries to solve.
+
+The choice therefore comes down to a personal preference, which may be based either on personal needs or on speculation about the future.
+
+However, let me raise some cautions that illustrate why I have made my choice the way I did. These cautions do not constitute an exhaustive list of my arguments. It is not necessary for me to give an exhaustive list. In the end, what system one would prefer to use is a personal decision that everybody has to make on their own grounds.
+
+## <a name="On_The_Effect_Of_Perpetual_Alien"> On The Effect Of Perpetual Alienation </a>
+
+Hegel remarks on the effect of perpetual alienation (my terminology) in paragraph 67:
+
+"Single products of my particular physical and mental skill and of my power to act I can alienate to someone else and I can give him the use of my abilities for a restricted period, because, on the strength of this restriction, my abilities acquire an external relation to the totality and universality of my being. By alienating the whole of my time, as crystallised in my work, and everything I produced, I would be making into another's property the substance of my being, my universal activity and actuality, my personality."
+
+He then adds a comparison to the condition of a slave:
+
+"The distinction here explained is that between a slave and a modern domestic servant or day-labourer. The Athenian slave perhaps had an easier occupation and more intellectual work than is usually the case with our servants, but he was still a slave, because he bad alienated to his master the whole range of his activity."
+
+It is undisputed (I hope) that computers occupy more and more of our personal life. By doing so, they start to embody significant parts of our personality. We, as domain experts, are miles ahead of the general public in this regard, and it is our obligation to foresee such developments. By losing control over our computers, we risk losing the ability to act universally. This finds correspondence in the risk of losing general-purpose computers to [[TiVo]]-ized, locked-down embedded systems.
+
+## <a name="Passive_Defense_Is_Not_Sufficien"> Passive Defense Is Not Sufficient </a>
+
+The passive defense against this risk is not sufficient. You may hold the opinion that the "trusted computing" component is optional: the machine owner can switch it off and ignore it. This is true, but it is true in the same way that people are free not to click on email attachments if they do not want to risk getting a virus. Security threats (and the risk of losing the substance of one's being is probably the biggest security threat of them all) require active defense at all levels of the computer system.
+
+There have already been proposals for US laws that would require all computers to support "trusted computing", and to enforce its use when connecting to the internet. There are other methods of coercion as well. One method is to introduce a less harmful variant of control, and then change the conditions after it is widely established. Another method is the exploitation of a monopoly, or conspiracies among large companies to ensure that there is no feasible alternative. Yet another method is to spread false information about how the technique will be used. All of these techniques and more have already been used, so these are not speculations; they are facts.
+
+Once you accept the loss of the substance of one's being as a security threat (I am not saying you need to accept that, but if you do, you will be able to follow my argument), all the same techniques and considerations apply to this security threat as to any other. And it is universally recognized (I hope) that passive defense is not sufficient in the context of active security threats.
+
+## <a name="Radical_Paradigm_Shifts"> Radical Paradigm Shifts </a>
+
+The "trusted computing" model embodies radical paradigm shifts in how some people think about ownership and contracts. Richard Stallman remarks (<http://www.gnu.org/philosophy/can-you-trust.html>):
+
+"A previous statement by the palladium developers stated the basic premise that whoever developed or collected information should have total control of how you use it. This would represent a revolutionary overturn of past ideas of ethics and of the legal system, and create an unprecedented system of control. The specific problems of these systems are no accident; they result from the basic goal. It is the goal we must reject."
+
+The idea that the agent who developed or collected information should be the sole arbiter of how the information is used by other agents is in direct conflict with several social contracts: fair use, temporal boundaries on copyright protection, the obligation to preserve information (for example, for audits, or as evidence of a crime), and more.
+
+In short, the mediating agent (the implementors of the "trusted computing" component) is overreaching, in direct conflict with established laws. At the same time, for most people, organizations, businesses, and in fact quite a number of governments as well, the mediating agent will be unaccountable: not only will it be represented by large companies that have assets at their disposal comparable to those of some of the smaller nations on the globe, but also, because of the way the technology is implemented, it will be able to convincingly deny its own involvement (even though, nominally, it is the only party that could have been involved in the matter at all).
+
+## <a name="On_The_Imbalance_Of_Defaults"> On The Imbalance Of Defaults </a>
+
+In the encapsulated and confined example, the confined party risks, by default, nothing, while the encapsulated party risks, by default, all the resources it is giving up temporarily, for the whole duration of the contract, without any guarantee of a result.
+
+This is an imbalance of defaults from which a balanced, negotiated contract is difficult to achieve. I see no reason why it should be intrinsically easier or harder to achieve a balanced, negotiated contract in either system: they start from two extremes, and the right solution is somewhere in the middle. However, my system does not contain a comparable mechanism that is imbalanced by default. Instead, every agent starts in the same situation. Practically, I think that a balanced contract is more likely to result from equal starting conditions than from unequal starting positions.
+
+## <a name="Conservative_Choices"> Conservative Choices </a>
+
+In the above sense, my model is really ultra-conservative. The only assumption is that it is the owner of the computer who should be in control over it. This is, in fact, a logical tautology. I do not make any further assumptions about what should be imposed.
+
+## <a name="The_Choice_Of_A_GNU_Generation"> </a> The Choice Of A GNU Generation
+
+If you read carefully the text by RMS at <http://www.gnu.org/philosophy/can-you-trust.html>, you will find that, although the text focuses on DRM, it really anticipates a much broader class of problems. The free software movement depends on the free software philosophy; it is its heart and soul. Even if you do not subscribe to the free software philosophy, you should be able to agree with the following statement:
+
+Every person on earth should be able to write useful computer programs and share them with their friends without fees.
+
+For this, several things are required. We must have access to hardware that obeys our commands; if it doesn't, or even if it merely makes doing so very hard, we cannot write programs for it. We must have access to information about how to write useful programs; for this, we must learn, and one way to learn is to observe how the programs that our friends wrote work. Also, to write programs that are useful in the real world, we must be able to reverse-engineer proprietary programs. And we must be able to publish our own, original work unencumbered by legal problems like patents.
+
+All of these things must be easy, or otherwise our ability to do our work is in danger. In the context of this discussion, my model supports these operations easily. The "trusted computing" model puts them at such high risk that it threatens the very survival of free software.
+
+This is something that is very important to understand. It is highly unlikely that the GNU project would accept a technology that threatens its own survival. So, if you want to propose a use case for this technology for a GNU project, you have to demonstrate more than just that there are people who want to do this. You would have to demonstrate that the benefits compensate for the risk. Because the risk is very serious and very great, the compensating benefit would have to be equally great. Because this is a GNU mailing list, spawned off a GNU project, with the intent of writing an operating system for the GNU project, I think it is appropriate to point this out.
+
+This does not mean that I am not, personally, interested in hearing your ideas. Furthermore, and this is also important to understand, I no longer believe that there is a conflict between the free software philosophy and the goal of writing a secure and useful operating system. The possibility that there might be such a conflict has been a great concern of mine over the last half year. However, once I had resolved two important use cases (suid programs and cut & paste), I was able to see which parts of the security infrastructure were actually important to me, and which parts I think are a separable concern. From there, it was not difficult to generalize to the above analysis of ownership and contracts.
+
+## <a name="Outlook"> Outlook </a>
+
+This, then, is my motivation for closely examining (1) how my model can be technically described, (2) what its properties are, and (3) what its justifying design principles are. This sets the agenda for upcoming mails, so let me insert a breaking point here.
+
+Thanks, Marcus
+
+----
+
+Note: this document has an addendum: [[Part1OwnershipAndContractsAddendum]].