Wikipedia has a definition of ‘attack surface’ which seems close to what I want to talk about, but I will define it slightly differently.
The attack surface of a digital system is the code that directly handles messages received from untrusted sources. I call such data untrusted data and such code surface code. There may be untrusted sources deep within the system, as when the system hosts arbitrary code from untrusted sources. The attack surface is thus not just the ‘world-facing code’.
Perhaps the most famous class of surface vulnerabilities is the buffer overrun, where a public protocol specifies a documented upper limit for some field in a message but relies on another convention to determine the field’s actual length. An example is the Unix convention that a string is terminated by a zero byte. Conforming messages will always terminate the string within the documented maximum length. ‘Maliciously crafted messages’ will include a longer string, and a convenient Unix library routine, strcpy, will end up writing over data that the surface code had allocated to another use.
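A minimal C sketch of this failure mode, with hypothetical names (the protocol, the record layout, and the 16-byte limit are all illustrative assumptions, not taken from any real system):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical protocol: the documented upper limit on the name
 * field is 15 characters plus the terminating zero byte. */
#define NAME_MAX 16

struct record {
    char name[NAME_MAX];
    long balance;    /* memory the surface code allocated to another use */
};

/* Vulnerable surface code: trusts the sender to honor the documented
 * limit. A maliciously crafted message with a longer string makes
 * strcpy write past name[] into whatever follows it. */
void parse_unsafe(struct record *r, const char *msg) {
    strcpy(r->name, msg);
}

/* Defensive surface code: measure the actual length of the untrusted
 * data and check it against the documented limit before copying. */
int parse_safe(struct record *r, const char *msg, size_t msg_len) {
    size_t n = strnlen(msg, msg_len);
    if (n >= NAME_MAX)
        return -1;    /* reject the non-conforming message */
    memcpy(r->name, msg, n);
    r->name[n] = '\0';
    return 0;
}
```

The point is that the documented limit lives only in the protocol prose; strcpy consults the zero byte instead, so only the defensive version enforces the limit against data the TCB’s clients would never have sent.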
The attack surface of a system is even more critical than the TCB. Code that serves correctly as part of the TCB may be incompetent to serve in the attack surface, where it might receive messages that its TCB clients would never send.
Attack surface is relative to some particular function, which may itself be buried deep within the system. In that regard it is like the TCB. A system may have multiple functions with varying degrees of assurance of correct operation. Their TCBs will necessarily overlap, but their attack surfaces may be disjoint.
Defense in depth may improve security when the surface code fails at its job. When surface code succumbs to malicious data, I assume that the authority of the surface code is made available to arbitrary code. The main contribution of capability theory is to greatly limit this authority. There are two interesting cases and a spectrum between them: