Mark Miller reports the following text from “Programming Languages for Distributed Applications” by Haridi, Van Roy, Brand, and Schulte:
Capabilities do not solve all problems in security [53]. They have inherent weaknesses. First, the authorization to do something is given very early, namely when the capability is given and not when the operation is attempted. Second, a capability can be forwarded to anyone and it will continue working. Therefore, a capability-based mechanism needs to be extended, for example with access control based on the identity of the capability's current possessor.
This seems to be a fairly clearly stated objection to a capability security pattern of which I am fond. A similar, more extensive objection appears in the section “Complete Mediation” of the paper “Extensible Security Architectures for Java” by Wallach, Balfanz, Dean, and Felten.

Classically, computer systems with any pretense to security achieved that security through fixed code, in the form of a collection of modules that collectively protect themselves and impose access restrictions on all other code in the system. Usually this collection runs in privileged mode to the exclusion of all other software.

Capability systems are generally composed of a body of code, called the security kernel here, that imposes capability discipline on the rest of the code in the system. This code may employ specialized capability hardware where it is present. I presume that this is what the quoted papers refer to as the “system”. Keykos has extended capability discipline downward to the code that provides several core OS functions, such as how pages are grouped to form files. The factory logic provides a form of security unavailable in classic operating systems; it is implemented “outside the kernel”, is itself fully limited to the capabilities it wields, and holds no restricted capabilities.
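The factory idea described above can be sketched roughly as follows. This is a hedged illustration with invented names, not the Keykos implementation: a factory bundles a program with the outward capabilities (“holes”) its builder gave it, a requestor inspects the declared holes before trusting confinement, and a confined instance holds only those holes plus the capabilities the requestor itself supplies.

```python
# Hedged sketch of the factory pattern (all names hypothetical).
# A requestor can verify confinement by checking declared_holes()
# before asking the factory for an instance.

class Factory:
    def __init__(self, program, holes):
        self.program = program          # untrusted code to be instantiated
        self.holes = frozenset(holes)   # the only builder-supplied capabilities

    def declared_holes(self):
        # The requestor examines these to judge where data could leak.
        return self.holes

    def create(self, requestor_caps):
        # The instance holds only the declared holes plus what the
        # requestor itself hands over -- nothing the builder slipped in.
        return Instance(self.program, self.holes | set(requestor_caps))

class Instance:
    def __init__(self, program, caps):
        self.program = program
        self.caps = frozenset(caps)

# A factory with no holes: the builder retains no channel out of the instance.
f = Factory(program="untrusted code", holes=[])
inst = f.create(requestor_caps={"read_my_data"})
```

Because `declared_holes()` is empty here, the requestor can grant `inst` access to sensitive data while knowing the program's builder has no way to receive it.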

If you define “system” to include the factory logic, then you can indeed confine untrusted programs and grant them access to data that you would not entrust to their creators.

I extol the benefits of user-mode security code here.

From the Complete Mediation section I quote:
This is the fundamental flaw in an unmodified capability system; two programs which can communicate object references can share their capabilities without system mediation. This means that any code which is granted a capability must be trusted to care for it properly.
Well said, but let's name the players: two programs X and Y are feared to be indiscriminate about sending stuff to each other. Without loss of generality, let's say that X holds a capability S to a secret and we cannot rely on X not to send S to Y. The quote says that X and Y can share capabilities without system mediation, but I have never seen a system in which this is true. For X to cause Y to hold S requires X to send S to Y in a message, and that requires X to hold some sort of capability m to Y. The system mediates that message as it interprets m. The solution is to ensure that X does not hold a capability such as m. Capability theory and practice is all about ensuring that code that cannot be relied upon lacks capabilities that can do damage. You are safe because I lack capabilities to your vital components. I agree that the issue is one of confinement. The factory design capitalizes on this insight. It has been implemented and in use for over a decade, and it is so efficient that it is used even when its security properties are not required.
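The X, Y, S, m argument above can be made concrete in an object-capability style, where a reference is a capability and a program can invoke only what it has been handed. This is an illustrative sketch with invented class names, not code from any of the systems discussed:

```python
# Object-capability sketch (hypothetical names): references are capabilities,
# so X can leak S to Y only if X was also handed a reference (m) to Y.

class Secret:
    def reveal(self):
        return "classified"

class Receiver:          # plays the role of Y
    def __init__(self):
        self.received = None

    def receive(self, cap):
        self.received = cap   # Y hoards any capability sent to it

class Holder:            # plays the role of X
    def __init__(self, secret):
        self.secret = secret  # X holds capability S to the secret
        # Note: X is deliberately given NO reference to any Receiver;
        # there is no self.m, so X has no channel to Y.

    def leak(self, channel):
        # Forwarding S requires holding a capability to the destination.
        channel.receive(self.secret)

s = Secret()
x = Holder(s)
# x holds S, but nothing in x refers to a Receiver, so x cannot
# perform y.receive(s): the message itself requires the capability m.
```

The point of the sketch is the omission: confinement is achieved not by watching X's messages but by never granting X the capability m in the first place, so the mediated message from X to Y cannot even be addressed.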

One more isolated reference to a “modified capability system” appears in the Accountability section of the paper, where the authors suggest a facility to help remember where capabilities have gotten to. I agree that this is a very good idea and even necessary for some application areas. I have advocated a similar scheme, but Keykos has not implemented anything very much like it. Our proposal for a system to meet the Orange Book requirements included a detailed design for such a function, but this did not modify the capability underpinnings; it deployed them to create the new function. That proposal was directed toward the Orange Book requirements, which would make it cumbersome for my use. Nonetheless, it imposed mandatory security without change to the underlying simple capability model.

Earl Boebert has written an interesting recent note to the Linux Journal.