This is an addendum to my first note. It adds one more concern about the "trusted computing" model, an important clarification of the nature of my objection, and a retraction of my earlier claim that a new design principle is needed.

## <a name="Monoculture_Of_Service"> Monoculture Of Service </a>

In the "trusted computing" model, it is suggested that all contracts, by default, use the same mediating agent. This introduces a single point of failure into the system architecture. It also concentrates social and political power into the hands of the mediating agent, which can (and will) be abused.

In my model, all contracts have to be established explicitly; there is no default mechanism. This will naturally lead people to choose a variety of contracts. For example, many contracts do not require a mediating agent at all, because the common will can be implemented by either of the involved parties. Often, a suitable, local mediating agent is already available.

## <a name="Quantitative_Differences_Cause_Q"> Quantitative Differences Cause Qualitative Differences </a>

My main objection is thus that the pervasive use of the confined+encapsulated design pattern in the system architecture leads to a new qualitative difference between the two systems. Every single contract in isolation may appear innocent; their sum creates emergent behaviour that I consider a threat.

It is an open question to me whether the individual contracts are indeed innocent. In every civil society, there are some contracts that are invalid even if you sign them. Some rights are well recognized as inalienable. Whether or not such an inalienable right is touched by the confined+encapsulated design pattern is a difficult question that requires a much more careful analysis than I have attempted so far.

However, even if every such individual contract is innocent, my objection still stands, because it is grounded not in the nature of the individual contract, but in the cumulative effect of its pervasive use in the system architecture.

In fact, it is not hard to see that an individual contract of the confined+encapsulated sort can be implemented straightforwardly in my model with only one requirement beyond what I have already planned for: the user would need to be able to create a space bank that provides encapsulation. (Some further features are required to make it possible to create such services automatically, without human inspection, but that is a minor point.) The presence of this feature, however, is a local property of the system architecture, not a global one.
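
To make the locality of this requirement concrete, here is a minimal sketch in C. Every name in it is hypothetical (no such interface exists in the Hurd or in any other system); it merely illustrates that encapsulation can be a per-bank property, chosen at creation time by explicit agreement, rather than a global default of the system architecture.

```c
/* Hypothetical sketch only: none of these names exist in any real
   system.  It models the single local feature discussed above: a
   space bank whose encapsulation is chosen per child bank, not
   imposed system-wide.  */

#include <stdbool.h>
#include <stdio.h>

struct space_bank
{
  struct space_bank *parent;    /* the bank the storage is drawn from */
  bool encapsulated;            /* parent gave up inspection and revocation */
};

/* Create a child bank.  Passing ENCAPSULATED as true is the one extra
   feature the text argues for: the parent loses the right to inspect
   this bank's contents, but only for this bank, by explicit choice.  */
static struct space_bank
make_child_bank (struct space_bank *parent, bool encapsulated)
{
  struct space_bank child = { .parent = parent, .encapsulated = encapsulated };
  return child;
}

int
main (void)
{
  struct space_bank user_bank = { .parent = NULL, .encapsulated = false };

  /* An ordinary child bank: the user retains full inspection rights.  */
  struct space_bank scratch = make_child_bank (&user_bank, false);

  /* A bank created for one explicit confined+encapsulated contract.  */
  struct space_bank contract = make_child_bank (&user_bank, true);

  printf ("scratch encapsulated: %d\n", scratch.encapsulated);
  printf ("contract encapsulated: %d\n", contract.encapsulated);
  return 0;
}
```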

It is thus difficult for me to understand why it has been argued that not using this form of contract in the system architecture constitutes a de-facto ban on the ability to engage in such a contract. Quite the opposite: I think that engaging in such a contract is entirely possible with only local, non-intrusive changes to the operating system that support some very specific functions. Maybe (I have not analyzed this) it does not make sense to engage in only one such contract with limited scope; maybe the very nature of the contract requires its pervasive use in the system architecture. If this is true (again, I do not know whether it is), it could be a first indication that this particular form of contract is not as innocent as it appears.

## <a name="A_New_Design_Principle_"> A New Design Principle? </a>

I have suggested before that I had formed a new design principle that provides a yardstick for the use of the confined+encapsulated design pattern. However, after writing my note, I no longer think that a new design principle is necessary.

The only decision that has to be made is whether the risk of losing the "substance of one's being", as it applies to the realm of one's computers, is a security threat or not. The rest follows quite logically from the standard principles of secure operating system design.

The substance of my argument can thus be summarized in four simple words:

**Ownership must be secured**.

Thanks, Marcus