In economic terms, we want to assign liability such that the optimal damage-mitigation strategy is chosen. The victim of a breach will mitigate their damages where no damages for breach are recoverable in respect of losses that the optimal strategy would have avoided. The rule that creates the best incentives for both parties is the doctrine of avoidable consequences (marginal-cost liability).
Mitigation of damages concerns both the post-breach behaviour of the victim and the actions taken by each party to minimise the impact of a breach. In software parlance, this imposes costs on the user of the software, who must adequately secure their systems. This is again a trade-off. Before the breach (through software failures and vulnerabilities that can lead to a violation of a system's security), the user has an obligation to install and maintain the system in a secure state.
The user is likely to have the software products of several vendors installed on a single system. As a consequence, the interactions of the software selected and installed by the user span multiple sources, and no single software vendor can account for all possible combinations and interactions.
As such, any pre-breach obligations of the vendor and user of software need to account for the vendor's capability to secure not only its own products but also the interactions with the other products installed on a system.
There are several options that can be deployed in order to minimise the effects of a breach due to a software problem prior to the discovery of a vulnerability. These include:
- The software vendor can implement protective controls (such as firewalls)
- The user can install protective controls
- The vendor can provide accounting and tracking functions
In addition to the previous steps, the following may be done to minimise the effects of a software vulnerability:
- The vendor can employ more people to test software for vulnerabilities
- The software vendor can add additional controls
Where more time is expended on the provision of software security by the vendor (hiring more testers, spending more time writing code, etc.), the cost of the software needs to reflect this additional effort; that is, the cost to the consumer increases. This cost can be spread more thinly in the case of a widely deployed operating system (such as Microsoft Windows), where the incremental costs can be distributed across more users. Smaller vendors (such as small tailored vendors for the hotel accounting market) do not have this luxury, and the additional controls could result in a substantial increase in the cost of the program.
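The amortisation argument above can be sketched numerically. This is a minimal illustration with hypothetical figures (the function name, vendor sizes, and the $2,000,000 security spend are all assumptions, not data from the text):

```python
# Illustrative sketch (hypothetical numbers): how a fixed security
# investment amortises across user bases of different sizes.

def per_user_cost(fixed_security_cost: float, user_base: int) -> float:
    """Incremental price increase needed to recover a fixed security cost."""
    return fixed_security_cost / user_base

# A widely deployed OS vendor vs a small niche vendor (e.g. hotel
# accounting software), both spending the same amount on extra testing.
large = per_user_cost(2_000_000, 100_000_000)  # about $0.02 per user
small = per_user_cost(2_000_000, 2_000)        # about $1,000 per user
print(f"large vendor: ${large:.2f}/user, small vendor: ${small:.2f}/user")
```

The same absolute expenditure is a negligible price increase for the mass-market vendor but a prohibitive one for the niche vendor, which is the asymmetry the paragraph describes.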
This is not to say that no liability does or should apply to the software vendor. The vendor in particular faces a reputational cost (discussed later) if they fail to maintain a satisfactory level of controls, do not respond to security vulnerabilities quickly enough, or suffer too many problems.
The accumulation of a large number of software vulnerabilities by a vendor has both a reputational cost to the vendor and a direct cost to the user (time to install patches, and the associated downtime and lost productivity). As a consequence, the accumulation of software vulnerabilities and the associated difficulty of patching or otherwise mitigating them is a cost to the user that can be investigated prior to a purchase (and is hence a cost imputed to new vendors even if they later experience exceptionally low rates of patching and vulnerabilities). As users are rational in their purchasing decisions, they will incorporate the costs of patching their systems into the purchase price.
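The rational-purchase claim can be made concrete as an effective-price calculation. A brief sketch with assumed figures (the function and all numbers are hypothetical):

```python
# Illustrative sketch: a rational buyer folds expected patching costs
# into the effective price when comparing vendors. Numbers are assumed.

def effective_price(list_price: float, patches_per_year: int,
                    cost_per_patch: float, years: int) -> float:
    """List price plus the expected cost of applying patches over the
    ownership period (downtime, labour, lost productivity)."""
    return list_price + patches_per_year * cost_per_patch * years

vendor_a = effective_price(200.0, 12, 40.0, 3)  # cheap but patch-heavy: 1640.0
vendor_b = effective_price(350.0, 3, 40.0, 3)   # dearer but fewer patches: 710.0
print(vendor_a, vendor_b)
```

On these assumed figures the nominally cheaper product is the more expensive purchase, which is exactly the calculation a rational buyer is said to perform.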
The probability of a vulnerability occurring in a software product will never reach zero. Gödel, Turing, and Dijkstra demonstrated that it is not possible to prove that a software product is bug free. As a consequence, the testing process by the vendor can be modelled as a hazard model. Under such a model, it is optimal for the vendor to maximise their returns by balancing the costs of software testing against their reputation.
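The footnoted observation that vulnerability discovery fits a Poisson distribution gives one way to illustrate the non-zero probability claim. A minimal sketch, assuming a constant discovery rate (the rate value is hypothetical):

```python
# Sketch of vulnerability discovery as a Poisson process: if bugs are
# found at a constant average rate, the probability of at least one
# vulnerability over any positive interval is never zero.
import math

def p_at_least_one(rate_per_year: float, years: float) -> float:
    """P(N >= 1) for a Poisson process: 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate_per_year * years)

# Even a modest assumed rate of 3 discoveries/year makes at least one
# vulnerability in a year close to certain.
print(p_at_least_one(3.0, 1.0))  # about 0.95
```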
The cost of finding vulnerabilities can also be expressed as an optimal function through the provision of a market for vulnerabilities. In this way, the software vendor optimises their testing through a market process. This will result in the vendor extending their own testing to the point where they cannot efficiently discover more bugs. Those bugs that are sold on the market are priced, and the vendor has to either purchase these from the vulnerability researcher (who specialises in uncovering bugs) or increase their own testing. The vendor will continue to increase the amount of testing they conduct until the cost of their testing exceeds the cost of purchasing the vulnerability.
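The stopping rule described above can be sketched as a simple marginal-cost comparison. This is an illustration only; the rising marginal-cost schedule and the market price are assumptions:

```python
# Hedged sketch: the vendor tests in-house while the marginal cost of
# finding the next bug is below the market price of buying the
# vulnerability from a researcher. All cost figures are illustrative.

def bugs_found_in_house(marginal_costs: list[float], market_price: float) -> int:
    """Count bugs the vendor finds itself before buying becomes cheaper."""
    n = 0
    for cost in marginal_costs:  # assumed to rise as easy bugs are exhausted
        if cost >= market_price:
            break                # cheaper to buy from the market from here on
        n += 1
    return n

# Marginal cost of the 1st..6th bug, against a $5,000 market price.
costs = [500.0, 1_200.0, 2_500.0, 4_800.0, 7_000.0, 12_000.0]
print(bugs_found_in_house(costs, 5_000.0))  # 4: the fifth bug is cheaper to buy
```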
This market also acts as an efficient transaction process for the assignment of negligence costs. The user still has to maintain the optimal level of controls that are under their influence (installation, patching frequency, etc.), whilst the vendor is persuaded to pay the optimal level of costs for testing and mitigation.
The vendor should not be liable for avoidable consequences. Where the user has failed to patch, to install and configure controls, or to otherwise mitigate the damages that they may suffer, the vendor has no responsibility. This cost of mitigation is a part of the total cost of ownership of the software.
In creating risk-based contracts, we allow the market to determine the optimal price for risks in software. This allows software hazards to be both modelled and also expensed, and the consumer can then make an informed decision based on a trade-off between features and security.
 For further information on this topic see behavioural economics and rational behaviour.
 It may be demonstrated that sub-optimal behaviour does exist where users limit maintenance (patching) in certain conditions.
 This can be demonstrated to fit a Poisson distribution.
 It has been demonstrated that reputation has value to a vendor. This has real world accounting applications in the notion of “good will” in business and capital transactions.
 Under the legal doctrine of avoidable consequences, the breaching party is not liable for damage that the victim could reasonably have mitigated.