
We describe some general tools and concrete security policies, and how to implement those policies.

{arcane}See (p3,escrow) for some other policies.

Keeping Secrets

What does it mean to keep a secret in a computer? We need to define this idea more precisely in order to make arguments and even proofs about keeping secrets. See (p3,ref-mon) too.

The fence analogy

It seems that keeping secrets involves fences. Arguments about keeping secrets seem to involve assertions that what happens on one side of a fence cannot affect what happens on the other side.

What do these fences divide? They divide sets of pages and nodes.

Who designs these fences? What are the fence tender's tools? Fence tenders are obviously "above" the rules they enforce.

A fence divides some world into two parts. Does one fence tender need to worry about more than two parts?

Does the fence tender live on one side of the fence or the other?

There seem always to be an inside and an outside to a given fence. Normally the inside is set up within a larger region as a new fence is being constructed. Some fences are to keep secrets in and others are to keep them out {(p3,bang)}. I think that there are other kinds too.

What does it mean for a secret to stay on one side of the fence? I think that it is necessary to consider the state of one side of the fence. By state we mean the contents of all of the pages and nodes on that side {existence of process too}. To make this more precise probably requires a more formal description of the Gnosis kernel.

In (p3,bang) we consider the possibility of determinate calculation. Where this is possible we might merely say that the state of side one of a fence must be determined uniquely from the previous state of that side {the secret being on the other side}. The points made in (p3,noise) preclude this approach for at least several years.

Another attempt is to say that the probabilities of states on one side of a fence are determined by the previous state.

I am afraid that this merely obscures and hides the problem. Arguments about such probabilities would require far more knowledge of the machine than even the designers have.

It may be possible to frame arguments about these probabilities even when we don't know how to calculate them!

To illustrate the difficulty it would probably be necessary to make arguments about the effect of the state of the cache at program switch time on the speed of the new program.

{arcane}General Security Tools - Information Diodes

This was largely written before factories were invented. Except for the idea of one-way signals, there seem to be no ideas in the body of this text that are not directly available with factories. See (formative,diode) for the body of the text.

{arcane}Reference Monitors

This section describes some ideas for implementing a reference monitor under Gnosis.

A reference monitor is a technique for implementing a security policy. Logically it sits between the subjects of the security policy (in the Gnosis system these would be domains) and the objects of the security policy (in Gnosis, domains and segments). On every access the reference monitor checks the access for validity and prevents the invalid ones.

In most cases a reference monitor will want to allow some references to be read only, some to be write only, and some to be read write. In terms of information transfer this means that some references allow transfer of information from outside the domain into it, some allow transfer of information from inside the domain out of it, and some allow transfer in both directions.
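The three kinds of reference above can be sketched as a pair of flow directions. A minimal sketch follows, assuming the policy is a table keyed by (subject, object); the names `Mode`, `POLICY`, and `check_access` are illustrative and not part of Gnosis.

```python
from enum import Flag, auto

class Mode(Flag):
    READ = auto()   # information may flow from the object into the subject
    WRITE = auto()  # information may flow from the subject out to the object

# Hypothetical policy table: which modes each subject holds on each object.
POLICY = {
    ("payroll_domain", "pay_segment"): Mode.READ | Mode.WRITE,
    ("report_domain", "pay_segment"): Mode.READ,
}

def check_access(subject: str, obj: str, requested: Mode) -> bool:
    """Permit the access only if every requested mode was granted."""
    granted = POLICY.get((subject, obj), Mode(0))
    return (requested & granted) == requested
```

Read-write access is simply the union of the two one-way flows, which is why the monitor can reason about every grant in terms of information transfer.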

Basic applicable Gnosis facilities.

Segment keys with the read-only and no-call bits set provide a way of creating read only access to data pages. The only way of transferring information in the other direction would be with the use of wall banging techniques. See {(p3,bang)} for some solutions.

Domains and their start keys, as provided by the Gnosis kernel, inherently transfer information in both directions. This means that a reference monitor that wishes to grant read only access to a domain via a start key will either have to trust the domain not to retain information, or filter all calls thru a trusted front end that minimizes the information leakage.

One way of creating a write only domain is to use a trusted front end. This front end would take the parameters provided by the calling domain and use them to fork off a call to the untrusted domain that is to be write only. It would have to ensure that any key parameters passed were read only (e.g. r/o segments); some useful cases might be solved by not allowing key parameters at all. It would immediately return to the caller (passing no information), which would prevent the calling domain from getting information based on the time the call takes.
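The front end's two obligations can be sketched as follows, modeling the untrusted domain as an ordinary callable; `make_write_only_front_end` is an illustrative name, not a Gnosis facility.

```python
import threading

def make_write_only_front_end(untrusted_domain):
    """Wrap a callable so that callers can write to it but learn nothing back."""
    def front_end(*args):
        # Refuse anything but immutable value parameters; in Gnosis terms,
        # key parameters would have to be made read-only first.
        for a in args:
            if not isinstance(a, (int, float, str, bytes, tuple, frozenset)):
                raise TypeError("only read-only parameters may be passed")
        # Fork the call so the front end returns immediately, denying
        # the caller a timing channel into the write-only domain.
        threading.Thread(target=untrusted_domain, args=args).start()
        return None  # no information flows back to the caller
    return front_end
```

Note that rejecting mutable parameters happens before the fork: a rejected call tells the caller only about its own arguments, not about the state of the target.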

A use of these facilities.

Assume it is desired to implement a policy where information is defined to be in a fixed number of compartments. For example, a simple personnel/payroll system might have the compartments: Names and addresses, Pay information, Performance reports, and EEOC data. The policy is that:

(1) A user may access information from only those compartments which he is authorized to access.

(2) Users are prevented from writing information from one compartment to another.

Some reports from the system may require access to information from more than one compartment. When this occurs a derived compartment is created. Information cannot flow from this new compartment to any of the compartments from which it was derived.
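The derived-compartment rule can be sketched by modeling a compartment as the set of base compartments it contains; a derived compartment is the union of its sources, and flow is allowed only into a compartment that contains everything the source contains. The function names here are illustrative only.

```python
def derive(*compartments: frozenset) -> frozenset:
    """A derived compartment holds information from all of its sources."""
    return frozenset().union(*compartments)

def may_flow(src: frozenset, dst: frozenset) -> bool:
    """Information may flow only where nothing would leak out of a compartment."""
    return src <= dst

# Example compartments from the personnel/payroll system above.
pay = frozenset({"pay"})
eeoc = frozenset({"eeoc"})
pay_eeoc = derive(pay, eeoc)  # a derived compartment for a combined report
```

Under this rule, information flows from `pay` into `pay_eeoc` but never back, which is exactly the one-way property the policy requires.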

To implement this policy a reference monitor is created with the following facilities:

Information on users and their authorizations.

Information on the available resources and which compartments they are in.

When a user logs on, or when a user requests a new compartment {which might be for every command in a traditional timesharing system}, the reference monitor creates a new "regime" for the user's programs. The capabilities created inside this regime would be in a standard format similar to the womb {(p3,womb)}. It would provide capabilities to request access to all the resources that the user was authorized to access. Note: this is not access to the resource, but just to request access to the resource. For each of these regimes that the reference monitor creates, it maintains a record of what compartments have placed information into the regime.

When the user's program, running in the new regime, requests access to a new resource the request is checked by the reference monitor. This check has the following outcomes.

The context level of the requested object exactly matches the context level of the regime. - Read/Write access can be granted.

The context level of the object is a subset of the context level of the regime. - Read/only access can be granted.

The access is valid for this user {he held the request capability}, but the context level of the object is outside the context level of the regime. - The context level of the regime is upgraded by making all read/write links out of the regime read only. Then the context level of the object is checked against the new context level of the regime:

If they are the same, access is granted read/write.

If they are different, access is granted read/only.
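The three outcomes above can be sketched as one check, with context levels modeled as sets of compartments. The `Regime` class and its upgrade step are an illustrative model of the reference monitor's bookkeeping, not a Gnosis interface.

```python
class Regime:
    """A user's regime: its context level and whether it still has write paths."""
    def __init__(self, level: frozenset):
        self.level = level
        self.writable_links = True  # read/write links out of the regime

    def request_access(self, obj_level: frozenset) -> str:
        if obj_level == self.level:
            return "read/write"     # exact match
        if obj_level < self.level:
            return "read-only"      # object's level is a subset of the regime's
        # Authorized but outside the regime's level: upgrade the regime by
        # severing its write paths, then widen its level and recheck.
        self.writable_links = False
        self.level = self.level | obj_level
        if obj_level == self.level:
            return "read/write"     # levels now the same
        return "read-only"          # levels still different
```

The upgrade is one-way: once the regime has absorbed information from a wider level, its old write paths are gone for good, so nothing can flow back down.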

We describe here some oversimplified security policies. The idea is to work up to some real security policies.

Simply Ordered Secrets

We wish to divide the world into three levels: confidential, secret and top secret. We call these levels 1, 2 and 3. We wish to ensure that the state of one level at one time does not depend on the states of higher levels at previous times. This is a restatement of some familiar military security policy.

This view of these levels as finite state machines may be unfamiliar but it recognizes that the information we are guarding can be represented in parts of the system {PSW's, capabilities, etc.} that were not designed to hold data.

This statement covers wall banging {(p3,bang)} as well as those means of communication that were designed to allow processes to communicate.

While we wish that all state transitions could be subject to these rules, section (p3,noise) describes some transitions that we have not yet managed to eliminate.

Perhaps this view is not applicable to techniques where a computing object moves from one security level to a higher level as it is affected by information from higher levels.

To establish this policy we design an initial trusted program {(p1,bb)} that sub-divides and partitions initial keys into three piles, one for each level.

This operation is done so that signals from domains built from components from one pile cannot affect domains built from other piles.

Then the initial program forms from each of these piles a virtual initial domain that behaves as an initial domain. Programs designed to run in initial domains can now start in each of these virtual initial domains to provide three worlds that cannot communicate with each other.

But we have not solved the original problem yet; we have not provided for the inter world communications that are to be allowed. To do this the real initial program creates from a fourth pile one or more instances of one of the following objects:

A one way segment. A read-only segment key is placed in pile i and a read-write segment key is placed in pile j where j < i.

If there is a segment keeper for this segment it must be trusted not to be influenced by the reader so as, in turn, to influence the writer.

A string queue without feed-back is a FIFO queue of strings such that the sender is never blocked.

If the size of the queue exceeds available space, strings are discarded. The sender is not informed of this but the receiver is.
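The string queue without feedback can be sketched as follows, assuming a fixed capacity; the class and method names are illustrative.

```python
from collections import deque

class OneWayQueue:
    """A FIFO of strings in which the sender never blocks and never learns anything."""
    def __init__(self, capacity: int):
        self._q = deque()
        self._capacity = capacity
        self._dropped = 0  # visible to the receiver only

    def send(self, s: str) -> None:
        """Never blocks and returns nothing: no feedback reaches the sender."""
        if len(self._q) >= self._capacity:
            self._dropped += 1  # discard silently, from the sender's point of view
        else:
            self._q.append(s)

    def receive(self):
        """Return (next string or None, count of strings dropped so far)."""
        item = self._q.popleft() if self._q else None
        return item, self._dropped
```

The essential property is that `send` has the same observable behavior whether or not the string was kept; only `receive` reveals the drops, so no signal flows from the higher level back to the lower one.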

A bolder approach is to provide a channel thru which segment keys may be passed and transformed so as to provide the recipient no way to influence the segment.

This may be done by creating a node, placing the provided key in the node, and creating a segment key to the node with the read-only and no-call bits on.

If the resulting segment key is given to the top secret world, that world can have no influence on the segment even if the segment has a keeper in the secret world.

If the system has sense keys {(p1,sense)} these keys can be passed into the top secret world.

If we wish newly created objects in one level to be sensible at higher levels our security program must stay in the picture.

To recapitulate the above, we have a program IP that is responsible for enforcing the separation.

This program forms three piles of keys for the security levels and another pile for security administration. From the latter pile it builds sense key portals leading from lower levels to higher levels. These portals have key queues for sense keys. The portals perform the sense transformation {(p1,sense-trans)} on each key as it is put in the queue.

There may be several such portals between a pair of levels, perhaps one for each proprietary data type.

For instance the FSX segment keeper (which makes virtual copies) might have its own portal. That way it could deliver top secret virtual copies of secret segments.

It would presumably be commonplace for an individual with directories at two levels to have a portal between those directories.

In general, however, I don't see how such portals are distributed after creation.

I suspect that several of these questions will be answered when we have the external specs of a portal.

Lattices of Secrets

Sometimes one must divide information into compartments in order to limit the number of people with access to given information. Having done this, it shortly becomes necessary to allow combinations of these compartments to allow planning that depends on the information in several such compartments.

This requires, in general, ways to run programs with simultaneous access to information in several compartments. If such programs were merely given the combined capabilities of the respective programs of the several compartments, those programs might move information from one compartment to another.

If programs with such combined access can be made to run without influence on the constituent compartments, the problem is solved.

Programs that must influence data from the compartments must inherently be trusted. We do not treat the administration of such programs here.

We call this new program environment another compartment. We say that this compartment is above the other two. This relationship between compartments obeys the axioms for lattices invented by mathematicians if we consider two compartments equivalent when they are above the same compartments.
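The lattice claim above can be sketched by modeling a compartment as the set of base compartments it is above; under the subset order, any two compartments have a least upper bound (join) and a greatest lower bound (meet), which is what the lattice axioms require. These function names are illustrative.

```python
def above(a: frozenset, b: frozenset) -> bool:
    """Compartment a is above b when it contains everything b contains."""
    return a >= b

def join(a: frozenset, b: frozenset) -> frozenset:
    """The lowest compartment above both: the combined compartment of the text."""
    return a | b

def meet(a: frozenset, b: frozenset) -> frozenset:
    """The highest compartment below both."""
    return a & b
```

Two compartments above exactly the same compartments are the same set, which is the equivalence the text invokes.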

The information diodes described in (p3,diode) can serve our purpose. It now becomes clear, however, that those worlds must be dynamically created.

What keys must a program have that allows it to dynamically create these new lattice members? How do the programs in the new compartment refer to the required information in the lower compartments?

Since the programs in the lower compartments don't know to what use their information is being put or when, they must serve higher compartments by providing factories {(p2,factory)} and signals {(formative,signal)}.

It would seem that the compartment producer would carry the factories into the new higher compartments. Information generated subsequent to the creation of the factory can be transmitted via the yield of the factory. I don't know whether it is desirable to introduce new factories after the creation of a new compartment. {It's safe; is it easy and useful?}

{arcane}Comments on Current Data Base Security Functions

Many current data base systems allow a data base administrator to limit certain data base accessors to specific fields. These limits are even designed to limit the access of programs.

Three objections pertain to this method of security:

They limit only the programs that access the data via the data base system. They do not limit programs that use their unintended but real authority to access the data directly.

The policing function is integrated with the already complex data base function so as not to allow independent audit or modification.

Only the classes of security policy anticipated by the data base system builder are available. How many data base systems can support a policy of showing median ages of employees whose salary is less than $35,000 within a specified department?

It should be clear to the reader that systems such as Gnosis can solve all three of these problems.