Anatomy of threat landscape and security games

Components of Security Games

We dissect security games into the following potential components and elaborate on each one in the context of cybersecurity. We aim to provide a multi-dimensional explanation of how these components characterize the strategic interaction between agents.


Players

  • The attacker is the basic player in a security game. There are various types of attackers, such as script kiddies, cyber punks, insider threats, and cyber terrorists. Their different motivations, capacities, and attack goals can affect their actions and utilities.

  • The defender is also an indispensable player in a security game. It can refer to the security team or security department of a company, a third-party security company such as FireEye, a research institution, or a governmental department such as the Department of Homeland Security.

  • Legitimate users are sometimes overlooked in security games, yet they are also important participants in the cyber arena. As humans are the weakest link, it is important to increase users' security awareness and obedience to prevent social engineering and reduce unintentional insider threats.

  • The network designer refers to the agent who can design the network structure, the information available to other players, and possibly the utility structure of other players. There can be an overlap between a defender and a network designer, but the latter has the design power, while a defender may not have the capacity to change the network topology.

  • The cyber insurance agent is an important participant in security games for transferring risk. Game theory can help quantify cyber risk and design proper insurance policies.

Action and Policy

Each player in the security game can take actions to affect the system status, the consequences, and their utility. Note that one agent can have multiple identities and can thus take combined actions. For example, a defender who takes actions against attackers can also be an insuree if he purchases cyber insurance.

  • The attacker's actions include adversarial reconnaissance, initial access, privilege escalation, defense evasion, credential access, lateral movement, command and control, and exfiltration. We can further specify the detailed actions and techniques for each general action (see, e.g., MITRE ATT&CK).

  • The user's actions in the security game context are usually confined to the ones that lead to insider threats, such as whether the user follows the security rules. Actions related to normal work can be considered if the security measures negatively affect normal operation and we want to characterize the tradeoff between security and usability.

  • The defender's and network designer's actions toward the attacker (see, e.g., MITRE ATT&CK):

      • Prevention: data backup, sandboxing, encryption, access control, network segmentation

      • Detection: audit, SSL/TLS inspection, antivirus/antimalware, exploit protection

      • Response: disabling or removing features or programs, software patching, restricting file and directory permissions

      • Proactive defense: penetration testing, moving target defense, honeypots, deceptive signaling

  • The defender's and network designer's actions toward the user:

      • Reduce human-induced attacks: password policies, multi-factor authentication, behavior prevention on endpoints

      • Increase security awareness: security training

      • Increase obedience: provide penalties and rewards

  • The insurer's possible actions can be continuous, i.e., the premiums and levels of coverage. Alternatively, the insurer can design a finite set of tiers of premiums and coverage levels. Then, the insuree's action is to choose from these tiers for different situations.
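The tier-choice step above can be sketched as a small expected-cost comparison; the premiums, coverage levels, and expected loss below are illustrative assumptions, not data from any real policy.

```python
# Hypothetical insurance tiers: (annual premium, fraction of loss covered).
tiers = [(0.0, 0.0), (10.0, 0.5), (25.0, 0.9)]

expected_loss = 40.0  # insuree's estimated expected annual cyber loss (assumed)

def expected_cost(tier, expected_loss):
    """Premium paid plus the uncovered share of the expected loss."""
    premium, coverage = tier
    return premium + (1 - coverage) * expected_loss

# The insuree picks the tier that minimizes his total expected cost.
best_tier = min(tiers, key=lambda t: expected_cost(t, expected_loss))
```

With these numbers, the highest tier is optimal because the premium is smaller than the extra loss it absorbs; a lower expected loss would flip that choice.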

We do not limit policies to be pure, i.e., instead of deciding which action to take, the player can decide which action to take with what probability. Intentionally introducing randomness enlarges the policy space and captures a more general case of interaction. When a mixed security strategy is applied in cyberspace, the player can first roll a die according to the probabilities specified by the policy and implement the realized action.
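The die-rolling step above amounts to sampling from a categorical distribution over actions. A minimal sketch follows; the action names and probabilities are made-up placeholders.

```python
import random

# Hypothetical mixed defense policy: each action and its probability.
mixed_policy = {
    "patch": 0.5,     # apply a software patch
    "honeypot": 0.3,  # deploy a honeypot
    "audit": 0.2,     # run a security audit
}

def sample_action(policy):
    """Roll a die according to the policy and return the realized action."""
    actions = list(policy)
    weights = [policy[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

action = sample_action(mixed_policy)  # one realization of the mixed strategy
```

Repeated play reproduces the specified frequencies, which is what the opponent effectively faces.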

Uncertainty

  • External uncertainty resulting from nature and unconsidered factors:

As the model of cyberspace becomes increasingly complicated, we cannot and should not consider all contributing factors. Thus, the unconsidered factors will introduce randomness into

      • outcome of actions (represented by the f function in the picture)

      • observation of the current system state (represented by the gᵢ function in the picture)

      • players' utilities.

  • Internal uncertainty resulting from players:

Each player has different motivations, preferences, knowledge, capacities, and rationality. A common approach is to introduce a random variable θᵢ as the `type' of player i. The support and the prior distribution of the random variable are assumed to be common knowledge. Take insider threats as an example: player i is a user, yet its type can be malicious or legitimate. From statistics, the proportion of malicious users is public information. Thus, here the user's type is binary and its prior distribution is commonly known by the other players, such as the defender and the attacker.
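A minimal numerical sketch of the binary-type setting above: the defender holds the common prior over a user's type and updates it by Bayes' rule after observing one suspicious event. The prior and likelihoods are assumed for illustration.

```python
# Prior: assumed fraction of malicious users (common knowledge).
p_malicious = 0.05

# Assumed likelihoods of observing a suspicious event (e.g., an odd-hour login).
p_obs_given_malicious = 0.6
p_obs_given_legit = 0.1

# Bayes' rule: posterior probability that this user is the malicious type.
evidence = (p_obs_given_malicious * p_malicious
            + p_obs_given_legit * (1 - p_malicious))
posterior = p_obs_given_malicious * p_malicious / evidence
# posterior ≈ 0.24: one suspicious event raises suspicion but is not conclusive.
```

This is the belief the defender would carry into a Bayesian game against an uncertain-type user.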

Players' types can also represent their prior knowledge of the external uncertainty and can thus have a correlated prior distribution. Take the scenario of security as a service (SECaaS) as an example: the network designer subscribes to security services from N independent security companies to protect his target network. The system uncertainty can be whether an asset in the target network has been compromised, and each security company has its own type based on the outcome of its own intrusion detection system.

In general, players' types can affect the system transition (represented by the f function in the picture), the observations (represented by the gᵢ functions in the picture), and players' utilities.

Utility

A player's utility is in general a function of all players' actions and types, the current system state, and the external uncertainty. The utility can be multi-dimensional and include

  1. Reward of threat intelligence such as attack tools, TTPs, and attack goals.

  2. Loss of money, information, and reputation.

  3. Cost of attack/defense actions, human resources, and insurance premiums.

The challenges are threefold:

  1. How to quantify the above reward, loss, and cost?

      • Learn from offline collected data or from online interaction

      • Evaluation based on experience and domain experts

  2. How to generate the utility function of each player from the multi-dimensional measures?

      • Weighted sum

      • Stochastic order

  3. How to use utility in security games?

      • Expected utility theory

      • Cumulative prospect theory

      • Worst-case mitigation
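The weighted-sum aggregation in challenge 2 can be sketched in a few lines; the measures and weights below are illustrative assumptions rather than calibrated values.

```python
# Made-up multi-dimensional outcome of one defense action:
# (threat-intelligence reward, monetary loss, action cost).
measures = (5.0, 2.0, 1.0)
weights = (1.0, -1.0, -0.5)  # reward counts positively; loss and cost negatively

def scalar_utility(measures, weights):
    """Collapse multi-dimensional measures into one utility via a weighted sum."""
    return sum(w * m for w, m in zip(weights, measures))

u = scalar_utility(measures, weights)  # 5 - 2 - 0.5 = 2.5
```

The weights encode how the player trades intelligence gains against losses and costs; eliciting them is itself part of challenge 1.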

Information and Rationality

Due to the interaction of multiple players, information in game theory can be complicated. One concept that has appeared many times in our previous sections is common knowledge, which is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.

Therefore, information in a security game specifies what a player knows, what a player knows that they do not know, and what a player knows that they are uncertain of.

  • Information of other players' actions or policies:

If the defender knows that his action or policy will be known by the attacker after it is determined, and he knows that attackers will best respond to that action or policy, he can choose the optimal action accordingly. In this two-player game, the solution concept is the Stackelberg equilibrium, and the defender, as the leader, usually has a first-mover advantage over the attacker, who is the follower.

If all agents have to take actions or make policies without knowing the others', then the solution concept is the Nash equilibrium.
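The leader-follower logic above can be sketched by enumeration in a toy matrix game. The payoffs are made up, and this simplified version restricts the defender to pure commitments (general Stackelberg strategies allow mixed commitments).

```python
# Toy 2x2 security game with made-up payoffs.
# Rows: defender actions (0 = monitor, 1 = patch); columns: attacker actions.
defender_payoff = [[3, -1],
                   [0,  2]]
attacker_payoff = [[-2, 1],
                   [ 1, -3]]

def stackelberg_pure(def_u, atk_u):
    """Defender commits to a pure action, the attacker best-responds, and the
    defender picks the commitment that maximizes his resulting payoff."""
    best = None
    for i, row in enumerate(atk_u):
        j = max(range(len(row)), key=lambda c: row[c])  # attacker best response
        if best is None or def_u[i][j] > def_u[best[0]][best[1]]:
            best = (i, j)
    return best

commitment, response = stackelberg_pure(defender_payoff, attacker_payoff)
```

Here committing changes the outcome: anticipating the attacker's best response lets the defender avoid the action whose best response hurts him most.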

  • Information of statistics of internal uncertainty:

If the players know the prior distribution of other players' types, then we can use a Bayesian game to model the interaction and obtain a Bayesian Nash equilibrium.

  • Information of external uncertainty:

Players may obtain information or signals from the environment or other players to estimate the external uncertainty. If the signal is from other players, then deception can be introduced. Signaling games and information design games are usually used to model this scenario, with the solution concepts of perfect Bayesian Nash equilibrium and Bayesian correlated equilibrium, respectively.

Rationality specifies how players obtain their utilities, perceive the risk resulting from uncertainty, and react to signals and information. The benchmark security game models assume all players have perfect rationality, which has incurred much criticism. Bounded rationality models, such as level-k thinking, non-Bayesian updates, and cumulative prospect theory for human factors, are thus introduced.

Dynamics and Timing

Dynamic security games usually model the current system status, such as the user's privilege level and the location of the attacker in the attack graph, as a vector xᵏ. The superscript k represents the time index, which can be either discrete stages or continuous time. The state transition can be stochastic, as captured by the function f. The state can be fully observable, unobservable, or observed with uncertainty at different stages, as captured by the function gᵢ.
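A minimal simulation of the xᵏ, f, gᵢ notation above, where the state is the attacker's depth in the attack graph. The block rate and observation noise are assumed numbers for illustration.

```python
import random

rng = random.Random(0)  # seeded for reproducibility

def f(x, a_def, a_atk):
    """Stochastic state transition: the attacker advances one step in the
    attack graph unless the defender blocks it (assumed 70% block rate)."""
    if a_atk == "advance" and not (a_def == "block" and rng.random() < 0.7):
        return x + 1
    return x

def g(x):
    """Noisy observation: the defender sees the true state only 80% of the
    time, and otherwise underestimates it by one step."""
    return x if rng.random() < 0.8 else max(x - 1, 0)

x = 0  # attacker starts at the entry node
for k in range(5):  # five discrete stages
    x = f(x, a_def="block", a_atk="advance")
    obs = g(x)      # what the defender sees at stage k
```

The defender's policy would, in general, condition on the history of observations obs rather than on the true state x.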

Objective

Players formulate their objectives based on

  • Time differences

      • A player aims to maximize a one-shot reward. For example, an attacker aims to directly compromise a computer and deploy ransomware.

      • A player aims to maximize a long-term reward. For example, a defender aims to protect his target asset in the long run.

  • Space differences

      • Players only consider the local or partial reward of a complicated process. For example, a defender applies segmentation and only aims to protect its critical assets from attacks.

      • Players care about the global reward or the complete reward of the entire process. For example, APT attacks and defenses should account for all hacking stages.

  • Uncertainty differences

      • Players aim to optimize the expected loss. This applies if a player cares about the average performance under attacks.

      • Players aim to mitigate the worst case. This applies if a player wants to limit the worst-case loss.
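The two uncertainty criteria above can rank strategies differently, as this sketch with made-up loss numbers shows.

```python
# Made-up loss of each defense strategy under three equally likely attack scenarios.
losses = {
    "strategy_A": [1.0, 1.0, 9.0],  # usually cheap, occasionally disastrous
    "strategy_B": [4.0, 4.0, 4.0],  # uniformly moderate
}

# Expected-loss criterion: minimize the average loss across scenarios.
expected_best = min(losses, key=lambda s: sum(losses[s]) / len(losses[s]))

# Worst-case criterion: minimize the maximum loss across scenarios.
worst_case_best = min(losses, key=lambda s: max(losses[s]))

# The average-minded player picks strategy_A; the risk-averse player picks strategy_B.
```

Which criterion is appropriate depends on whether rare catastrophic losses (e.g., a successful APT) are tolerable on average or must be bounded outright.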

Constraints

While players maximize their utility, they also need to consider constraints on time, resources, and stealthiness.

  • Time constraint: For example, a vulnerability may only exist for a certain time, so the attacker has to exploit the time window.

  • Resource constraint: For example, the defender may have limited security experts or computational power to detect and mitigate the attack.

  • Stealthiness constraint: For example, APT attackers may try to make their malicious behavior resemble legitimate behavior to remain stealthy.

Types of security games and solution concepts

  • Static Game

    • Complete information

    • Incomplete information

  • Dynamic Game

    • Complete information

    • Incomplete information

Threat landscape

We visualize the threat landscape in the following three dimensions.

  • The x-axis shows the increasing sophistication of the attacks' Tactics, Techniques, and Procedures (TTPs).

  • The y-axis shows increasing stealthiness, or delay of detection.

  • The size of the bubble increases as the attack is more likely to exploit human vulnerability.

Threats tackled by security games

In the paper-wise anatomy, we locate papers on security games in the plots. The x-axis shows the sophistication of attackers and the y-axis shows the sophistication of defenders.