Quantify security under intelligent agents
Cyber attacks are different from natural disasters
In our physical world, security challenges usually result from natural disasters rather than terrorists. In contrast, cyberspace is entirely artificial and virtual, and thus constantly faces man-made threats. The essential difference between natural and human threats is that the latter can actively take actions to affect the outcome. Therefore, although we can predict floods, hurricanes, and even earthquakes with increasing accuracy and timeliness, it remains challenging to predict, detect, and deter cyber attacks, because intelligent attackers adapt their actions to the detection and defense methods.
Game theory is necessary to characterize the interaction among intelligent agents
To predict natural disasters, we need to understand the physical laws and ask how these laws lead to the observed outcomes. To predict cyber attacks, however, we need to understand the laws of intelligent agents, i.e., the preferences, knowledge, capabilities, and rationality of the adversaries who launch the attacks. To this end, we attempt to discern why an attack happens to these components at this time in this way. Answering this more complicated question requires characterizing the interaction among multiple intelligent agents, which is precisely where game theory fits.
Security by design vs. Security by defense
Equipped with this game-theoretic viewpoint, we see that, unlike natural disasters, cyber attacks are not bound to happen. Adversaries launch attacks to gain rewards embedded in various outcomes, such as data theft, the disruption of system operation, and even the spread of fear and distrust. An attack does not happen, or at least is not sustainable, if its expected cost outweighs its expected gain in the long run.
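This cost-benefit logic can be made concrete with a minimal sketch. The function and all numbers below are hypothetical, chosen only to illustrate how defense shifts the attacker's expected utility from positive to negative.

```python
def expected_attack_utility(gain, success_prob, attack_cost, penalty, detect_prob):
    """Expected utility of launching an attack (illustrative model).

    gain:         attacker's reward if the attack succeeds
    success_prob: probability the attack succeeds
    attack_cost:  upfront cost of mounting the attack
    penalty:      loss if caught (e.g., exposure or retaliation)
    detect_prob:  probability of being detected and penalized
    """
    return success_prob * gain - attack_cost - detect_prob * penalty

# Against an undefended system, the attack has positive expected utility.
u_undefended = expected_attack_utility(gain=100, success_prob=0.8,
                                       attack_cost=10, penalty=50, detect_prob=0.1)
# A well-designed defense lowers success_prob and raises attack_cost and
# detect_prob until the expected utility turns negative, deterring a
# rational attacker.
u_defended = expected_attack_utility(gain=100, success_prob=0.3,
                                     attack_cost=40, penalty=50, detect_prob=0.6)
print(u_undefended)  # positive: the attack is worthwhile
print(u_defended)    # negative: the attack is deterred
```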
The ancient philosophy of Sun Tzu's The Art of War also applies to modern cyber warfare. Adopting this economic viewpoint, the defender can exploit the following three advantages to achieve security by design, which complements traditional defense methodologies such as firewalls and intrusion detection systems.
[Designing attacker's cost] The defender has the advantage of proactively designing the system structure to make it costly for the attacker to succeed. Examples include a cyber DMZ to reduce the attack surface, layered defense and defense-in-depth to delay penetration, and moving target defense (MTD) to increase the attacker's cost of identifying valuable assets.
[Designing attacker's information] The defender can gain an information advantage by introducing defensive deception techniques such as honeypots and honeyfiles.
[Designing attacker's epistemology] The defender may also have the advantage of exploiting the human weaknesses of the adversaries and hacking back. For example, attackers may experience cognitive biases such as the framing effect, which make them more conservative in launching attacks to avoid being detected.
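The information advantage in the second item above can be sketched in the same economic terms. The model and numbers below are hypothetical: if a fraction h of the assets an attacker discovers are actually honeypots, striking a discovered asset is profitable only below a break-even honeypot fraction.

```python
def strike_value(h, real_gain, honeypot_loss):
    """Expected payoff of attacking a discovered asset under deception.

    h:             fraction of discovered "assets" that are honeypots
    real_gain:     attacker's payoff if the asset is real
    honeypot_loss: attacker's cost of hitting a honeypot
                   (exposed tools, burned tactics, detection)
    """
    return (1 - h) * real_gain - h * honeypot_loss

# Without honeypots, striking is attractive; past the break-even
# fraction h* = real_gain / (real_gain + honeypot_loss), it is not.
print(strike_value(0.0, real_gain=20, honeypot_loss=80))  # 20.0
print(strike_value(0.5, real_gain=20, honeypot_loss=80))  # -30.0
```

With these illustrative numbers the break-even fraction is h* = 20/(20+80) = 0.2, so even a modest honeypot deployment can devalue the attacker's reconnaissance.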
Achieve security rather than deter all attacks
Like the adversaries, the network defender should evaluate the utility of potential security policies under different circumstances. The defender's ultimate goal in achieving security should likewise be utility maximization rather than deterring all potential attacks. Besides the increasing difficulty of achieving absolute security as cyberspace grows more complicated and cyber attacks become more advanced, deterring attacks is just one way to achieve a higher utility; it is neither the only way nor the final goal.
Moreover, pursuing absolute security locally and temporarily can result in unexpected insecurity for the entire system in the long run, as shown in the following three examples.
[Security vs. resilience] When we isolate a computer network from external networks, the air gap blocks not only attacks but also real-time updates of the virus database and vulnerability patches. Once an attack gets around the air-gap defense during longstanding operation, it can remain in the isolated system without being detected.
[Security vs. usability] When the company's security team sets up complicated password rules and requires frequent password changes, employees end up writing down their passwords and putting them next to their computers, which makes the entire company network vulnerable to insider threats and social engineering.
[Security vs. cognitive capability] When the IDS reports all alerts, human operators become overloaded and cannot identify the real threats in time (IDoS attacks). It is better to show only high-risk threats to fit the operators' limited cognitive capacity.
Bearing this achievable security in mind, the outcome of the cyber security game is no longer binary, i.e., win or lose, which would be zero-sum in game-theoretic terminology. Instead, the outcome can be characterized by a Nash equilibrium of a general-sum game. Moreover, as we shift the goal from attack deterrence to utility maximization, we can introduce new elements such as cyber insurance to transfer risk rather than mitigate it.
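A minimal sketch shows what such a general-sum equilibrium looks like. All payoffs below are hypothetical: the defender chooses monitor or idle, the attacker chooses attack or wait, and each cell holds a (defender, attacker) utility pair. This inspection-style game has no pure-strategy equilibrium, so both players must randomize, which is one game-theoretic rationale for randomized defenses such as MTD.

```python
from fractions import Fraction as F

# Hypothetical inspection-style security game.
# Rows: defender in {monitor, idle}; columns: attacker in {attack, wait}.
# payoffs[(d, a)] = (defender utility, attacker utility)
payoffs = {
    ("monitor", "attack"): (-1, -5),  # attack caught: attacker penalized
    ("monitor", "wait"):   (-2,  0),  # monitoring cost paid for nothing
    ("idle",    "attack"): (-9,  4),  # attack succeeds
    ("idle",    "wait"):   ( 0,  0),  # quiet status quo
}

def mixed_equilibrium(payoffs):
    """Mixed Nash equilibrium of this 2x2 game via the indifference
    condition: each player randomizes so that the opponent is
    indifferent between their two pure actions."""
    ud = lambda d, a: F(payoffs[(d, a)][0])  # defender utility
    ua = lambda d, a: F(payoffs[(d, a)][1])  # attacker utility
    # q = P(attack) that makes the defender indifferent between monitor/idle
    a_coef = ud("monitor", "attack") - ud("idle", "attack")
    b_coef = ud("monitor", "wait") - ud("idle", "wait")
    q = b_coef / (b_coef - a_coef)
    # p = P(monitor) that makes the attacker indifferent between attack/wait
    c_coef = ua("monitor", "attack") - ua("monitor", "wait")
    d_coef = ua("idle", "attack") - ua("idle", "wait")
    p = d_coef / (d_coef - c_coef)
    return p, q

p, q = mixed_equilibrium(payoffs)
print(p, q)  # 4/9 1/5: monitor 4/9 of the time; attack 1/5 of the time
```

The equilibrium is neither a win nor a loss for either side: attacks still occur with positive probability, but rarely enough that both players' strategies are mutual best responses, which is exactly the general-sum outcome described above.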
Why game theory for security is the right trend in the long run
Meanwhile, there is an urgent need for automated and proactive defense mechanisms, possibly relying on AI and machine learning techniques. The goal of these defense mechanisms is to deter attacks or, at the least, create difficulties for the adversaries.
Thus, in the cyber domain, the interaction between attackers and defenders is moving from human-versus-human to algorithm-versus-algorithm, which is ultimately strategic and game-theoretic.