The following is adapted from a message by Bill Frantz, originally posted on the SPKI list and then reposted by Jonathan Shapiro on the EROS list. This web page adaptation and the title are by Mark Miller, and it is posted with Bill's permission.

I [Bill] was asked in private mail what we mean by security. What we intuitively call security is really made up of three things: keeping objects secret, protecting objects from modification, and preventing the misuse of objects. We can only control the first two of these algorithmically. We can keep an object secret by withholding the authority to access it, or by confining the computations which use it; ACLs are one way of withholding access. We can protect objects from modification by withholding the authority to modify them; ACLs are also a way of withholding modification authority. But we can't prevent something which has legitimate access to an object from misusing that access. A simple example is a program which has the authority to erase files from a directory. There is no way we can prevent it from erasing a specific file in that directory; we must trust it not to erase that file whenever we run it.

We have already been considering the actors in a system informally. More formally, the actors are either people or programs. While people cannot directly act on the computational objects in a system, there is usually a program, e.g. the Unix shell, which will help people do anything they have the authority to do. The logon process needs to identify people so that it can give their shell the proper authority. Once logon has authenticated the user, we can consider the only actors to be programs.

A critical issue is: what are the loyalties of the actors? People's loyalties are hard to determine, and most computer fraud is due to people failures rather than to technical security breaches. IMHO the best technical solution to people problems is tamper-resistant audit trails. A program's loyalty is easier to determine: it is most closely aligned with the people who wrote it, not with the person running it. (However, bugs make this alignment only approximate.) The fact that programs are loyal to their authors is what allows hostile viruses and Trojan horses to exist. The fact of bugs allows the never-ending list of system penetrations based on overrunning some storage buffer.

ACL systems like NT's fail because they make the implicit assumption that a program's loyalty is to the person running it, so they consider the security problem solved when they can control people's access to objects. In these systems, when a user executes a program, a program instance is created which inherits all the authority of the user. If the instance is loyal to the user, there is no problem. But when its loyalty to its author causes it to abuse the authority of its user, there is a security problem.

We have several tools to reduce security exposures in our systems. We can use the principle of least privilege to sharply limit the authority programs have when they run. For example, we can run the erasing program described above with only the authority to erase specific files. By giving a program access to only the computational objects it needs to do its job, we prevent it from accessing or modifying other objects in the system. We can also control where programs can transfer the data they can access by controlling their communication channels; this technique is called confinement. (See Butler Lampson's paper, "A Note on the Confinement Problem", Communications of the ACM, V 16, N 10, October 1973, for a discussion of its limitations.)
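To make the least-privilege example above concrete, here is a minimal sketch in Python. It is not from Bill's post, and all the names in it are illustrative. Instead of running the eraser with authority over a whole directory, the caller narrows its own file authority to a fixed set of paths and hands the program only that. An in-process closure like this only models the idea; real enforcement needs operating-system or language-level support.

```python
import os

def make_eraser(allowed_paths):
    """Narrow broad file authority down to 'erase exactly these files'."""
    allowed = frozenset(os.path.abspath(p) for p in allowed_paths)

    def erase(path):
        # The program holding this function can erase the listed files
        # and nothing else; there is no wider authority to abuse.
        if os.path.abspath(path) not in allowed:
            raise PermissionError("no authority to erase %r" % path)
        os.remove(path)

    return erase

# Hypothetical usage: the untrusted cleanup program receives only this
# narrowed authority, not the user's full file authority.
#   erase_logs = make_eraser(["/tmp/job.log", "/tmp/job.err"])
#   erase_logs("/tmp/job.log")    # permitted
#   erase_logs("/etc/passwd")     # raises PermissionError
```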
Systems which allow programs to write files in essentially any directory can't confine those programs. It should be noted that in a free society, people cannot be confined as a way of keeping them from communicating the secrets they know; it is not clear you can confine people even in the most repressive of societies, short of killing them. To keep secrets, you must trust the people who know them.

The last tool we have is trust. In general, though, we want to follow the dictum "Trust, but verify." For people we use techniques such as non-disclosure agreements, background checks, and audit trails. For programs we perform security audits. Since auditing every program we run is infeasible, we want to limit the number of programs we need to audit.

Here, in summary, are the techniques we have, arranged by actor and by the property we are concerned with:

|          | Secrecy                                 | Integrity                        | Misuse                                    |
|----------|-----------------------------------------|----------------------------------|-------------------------------------------|
| People   | Trust (NDAs, background checks)         | Trust, audit trails              | Trust, audit trails                       |
| Programs | Withhold access authority; confinement  | Withhold modification authority  | Least privilege; trust (security audits)  |
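Both rows of the table lean on audit trails, and the post doesn't say how to make a trail tamper-resistant. One common construction, assumed here purely for illustration, is a hash chain: each entry commits to the one before it, so altering or deleting an earlier record breaks verification of everything after it.

```python
import hashlib

GENESIS = "0" * 64  # placeholder digest for the start of the chain

def append(log, record):
    """Add a record whose digest commits to the previous entry."""
    prev = log[-1][0] if log else GENESIS
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    log.append((digest, record))

def verify(log):
    """Recompute the chain; any tampered record breaks the check."""
    prev = GENESIS
    for digest, record in log:
        if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
append(log, "alice ran Foo with erase authority over /tmp/job.log")
append(log, "Foo erased /tmp/job.log")
assert verify(log)
log[0] = (log[0][0], "nothing happened")  # tampering with history...
assert not verify(log)                    # ...is detected
```

A tamperer who rewrites the entire chain can still pass verification, so in practice the latest digest must be anchored somewhere the tamperer can't reach, such as a write-once device or another machine.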
An additional set of problems occurs when programs have authority which their users do not have. The common technique for protecting the integrity of databases produces exactly this situation: users have the authority to invoke certain database transactions, which ensure the integrity rules are followed, and the transactions in turn have the authority to read and write the database. The kludges systems implement to support this requirement are responsible for what Norm Hardy calls the Confused Deputy problem. The program instance has the authority, but it doesn't know whether a given piece of authority came from the user or is associated with the program itself. As a result, the user can persuade the program to misuse its authority.

A solution to the problem of programs running with all the user's authority is to create a new security domain for the program instance, and to give it only the authority it needs to do its job. Consider how we would do this in an ACL system. Assume Alice is executing program Foo. We create a new security domain, let's call it Alice'sFooInstance, and add Alice'sFooInstance to the ACLs of all the resources the program needs during its execution. This solution lets us enforce the principle of least privilege, but it still leaves us exposed to the Confused Deputy problem.

To also solve the Confused Deputy problem, we need to create three security domains. We create Alice'sInstance and add it to the ACLs of the objects Alice is authorizing, after checking that Alice is on those ACLs. We create Foo'sInstance and add it to the ACLs of the objects Foo is authorizing, after checking that Foo is on those ACLs. We then create Alice'sFooInstance and put it on the ACLs for Alice'sInstance and Foo'sInstance. Since Alice'sFooInstance must go indirectly through either Alice'sInstance or Foo'sInstance to access objects, it can ensure that it is invoking the correct authority for each access.

These solutions produce many security domains and have significant performance problems, even on one computer. When we contemplate them in a distributed, mutually suspicious system like the Internet, the performance problems get worse, and we introduce the distributed-authority issues inherent in changing ACLs.

Capabilities allow us to solve both of these problems. One view of capabilities is as distributed ACLs: instead of keeping the ACL next to the object it is protecting, we distribute the entries and authenticate them by digital signature. A capability could be implemented as an object pointer signed by the authority which grants access to that object, but since the Internet is inherently insecure, that implementation would fall to the first packet sniffer. In SPKI, we protect our capabilities (called certs) by including information which can authenticate the holder, and to allow the holder to create new, restricted security domains, we allow delegation.

With capabilities, we create a new security domain and give it the capabilities Alice is authorizing by getting them from Alice, and the capabilities Foo is authorizing by getting them from Foo. Since each capability combines naming with authorization, we avoid the Confused Deputy problem; since each capability includes the authority, we don't have to update ACLs. In a distributed SPKI system, Alice would generate delegation certs for whatever machine/security domain was going to run Foo, using that machine's public key. If Foo were to run on a "random" machine in the network, the provider of the Foo program would also generate delegation certs granting the necessary authority to that machine. That machine would then have the authority to do its job.
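A toy model may help show the contrast. This sketch and its names are mine, loosely following Norm Hardy's compiler example rather than Bill's text. The deputy must write the user's output with the user's authority and its own billing record with its author's authority. In the ACL version it receives a name plus one all-purpose write function, so it cannot tell which authority a request should use; in the capability version, designation and authority arrive together.

```python
class File:
    """Stands in for a file capability: holding the object is the authority."""
    def __init__(self, name):
        self.name, self.data = name, ""
    def write(self, text):
        self.data = text

BILLING_NAME = "/var/billing/charges"   # writable by Foo, not by Alice

def acl_deputy(output_name, write_as_foo):
    # write_as_foo succeeds on anything Foo may write. If Alice passes
    # BILLING_NAME as her "output", the deputy is confused: it exercises
    # Foo's authority on Alice's behalf and clobbers its own record.
    write_as_foo(output_name, "compiled output")
    write_as_foo(BILLING_NAME, "bill Alice for 1 compilation")

def cap_deputy(output_file, billing_file):
    # Alice can only pass file objects she already holds, so she cannot
    # designate the billing record; the deputy never has to guess whose
    # authority each write uses.
    output_file.write("compiled output")                # Alice's authority
    billing_file.write("bill Alice for 1 compilation")  # Foo's authority
```

In the capability version the persuasion attack cannot even be expressed: to aim the deputy at the billing record, Alice would have to hold a capability to it.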
Capability systems can also provide confinement, but the details are beyond the scope of this rant. See the KeyKOS Page for information on a capability system which did provide confinement. While ACL systems like NT may be "the state of the art", we can do significantly better.
Unless stated otherwise, all text on this page which is either unattributed or by Mark S. Miller is hereby placed in the public domain.