league.player
Player

class ding.league.player.Player(cfg: EasyDict, category: str, init_payoff: BattleSharedPayoff, checkpoint_path: str, player_id: str, total_agent_step: int, rating: PlayerRating)[source]
Overview:

Base player class. A player is the basic member of a league.

Interfaces:

__init__

Property:

race, payoff, checkpoint_path, player_id, total_agent_step

__init__(cfg: EasyDict, category: str, init_payoff: BattleSharedPayoff, checkpoint_path: str, player_id: str, total_agent_step: int, rating: PlayerRating) → None[source]
Overview:

Initialize base player metadata

Arguments:
  • cfg (EasyDict): Player config dict.

  • category (str): Player category, depending on the game, e.g. StarCraft has 3 races [‘terran’, ‘protoss’, ‘zerg’].

  • init_payoff (Union[BattleSharedPayoff, SoloSharedPayoff]): Payoff shared by all players.

  • checkpoint_path (str): The path to load player checkpoint.

  • player_id (str): Player id in string format.

  • total_agent_step (int): For an active player, this should be 0; for a historical player, it should be the parent player's _total_agent_step at the time of the snapshot.

  • rating (PlayerRating): Player rating information in the whole league.
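The metadata above can be illustrated with a minimal, self-contained sketch. This is not the real Player class (it omits cfg, the shared payoff, and rating), and the concrete values are made up for illustration:

```python
from dataclasses import dataclass


@dataclass
class BasePlayer:
    """Minimal stand-in mirroring the documented base-player metadata."""
    category: str          # game-dependent, e.g. a StarCraft race: 'terran', 'protoss', 'zerg'
    checkpoint_path: str   # path to load the player checkpoint from
    player_id: str
    total_agent_step: int  # 0 for an active player; parent's step count for a snapshot


p = BasePlayer(category='zerg',
               checkpoint_path='./ckpt/zerg.pth',
               player_id='zerg_main_0',
               total_agent_step=0)
print(p.player_id)  # → zerg_main_0
```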

HistoricalPlayer

class ding.league.player.HistoricalPlayer(*args, parent_id: str)[source]
Overview:

Historical player, which is snapshotted from an active player and is fixed with its checkpoint. Has a unique attribute parent_id.

Property:

race, payoff, checkpoint_path, player_id, total_agent_step, parent_id

__init__(*args, parent_id: str) → None[source]
Overview:

Initialize _parent_id additionally

Arguments:
  • parent_id (str): Id of this historical player's parent, which should be an active player.

ActivePlayer

class ding.league.player.ActivePlayer(*args, **kwargs)[source]
Overview:

An active player can be updated, or snapshotted into a historical player, during league training.

Interface:

__init__, is_trained_enough, snapshot, mutate, get_job

Property:

race, payoff, checkpoint_path, player_id, total_agent_step

__init__(*args, **kwargs) → None[source]
Overview:

Initialize player metadata, depending on the game

Note:
  • one_phase_step (int): An active player will be considered trained enough for snapshot after two phase steps.

  • last_enough_step (int): Player’s last step number that satisfies _is_trained_enough.

  • strong_win_rate (float): If the win rates between this player and all the opponents are greater than this value, this player can be regarded as strong enough against these opponents. If it has also already trained for one phase step, this player can be regarded as trained enough for snapshot.

  • branch_probs (namedtuple): A namedtuple of probabilities of selecting different opponent branch.
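A hedged sketch of how these fields might combine into a trained-enough check. The exact DI-engine rule may differ; the parameter names follow the note above and the win-rate list is a stand-in for the payoff query:

```python
def is_trained_enough(total_agent_step, last_enough_step, one_phase_step,
                      win_rates, strong_win_rate=0.7):
    """Trained enough = at least one phase of steps since the last
    qualifying step AND strong win rates against every opponent."""
    stepped_enough = total_agent_step - last_enough_step >= one_phase_step
    strong_enough = all(w > strong_win_rate for w in win_rates)
    return stepped_enough and strong_enough


# Enough steps and strong against every opponent
print(is_trained_enough(3 * 10**8, 10**8, 2 * 10**8, [0.8, 0.75]))  # → True
# Strong, but not enough steps since the last qualifying step
print(is_trained_enough(2 * 10**8, 10**8, 2 * 10**8, [0.8, 0.75]))  # → False
```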

increment_eval_difficulty() → bool[source]
Overview:

When evaluating, the active player will choose a specific builtin opponent difficulty. This method is used to increment the difficulty. It is usually called after the easier builtin bot has already been beaten by this player.

Returns:
  • increment_or_not (bool): True means the difficulty was incremented; False means the difficulty is already the hardest.

NaiveSpPlayer

class ding.league.player.NaiveSpPlayer(*args, **kwargs)[source]
get_job(eval_flag: bool = False) → dict
Overview:

Get a dict containing some info about the job to be launched, e.g. the selected opponent.

Arguments:
  • eval_flag (bool): Whether to select an opponent for evaluator task.

Returns:
  • ret (dict): The returned dict. Should contain key [‘opponent’].

increment_eval_difficulty() → bool
Overview:

When evaluating, the active player will choose a specific builtin opponent difficulty. This method is used to increment the difficulty. It is usually called after the easier builtin bot has already been beaten by this player.

Returns:
  • increment_or_not (bool): True means difficulty is incremented; False means difficulty is already the hardest.

is_trained_enough(select_fn: Callable | None = None) → bool
Overview:

Judge whether this player is trained enough for further operations (e.g. snapshot, mutate, …) according to past step count and overall win rates against opponents. If yes, set self._last_agent_step to self._total_agent_step and return True; otherwise return False.

Arguments:
  • select_fn (function): The function to select opponent players.

Returns:
  • flag (bool): Whether this player is trained enough

mutate(info: dict) → str | None
Overview:

Mutate the current player, called in league’s _mutate_player.

Arguments:
  • info (dict): related information for the mutation

Returns:
  • mutation_result (str): If the player performs the mutation operation, returns the corresponding model path; otherwise returns None.

snapshot(metric_env: LeagueMetricEnv) → HistoricalPlayer
Overview:

Generate a snapshot historical player from the current player, called in league’s _snapshot.

Argument:
  • metric_env (LeagueMetricEnv): Player rating environment; one league uses one such env.

Returns:
  • snapshot_player (HistoricalPlayer): new instantiated historical player

Note

This method only generates the historical player object; saving the checkpoint should be done by the league.
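The snapshot mechanics can be sketched with plain dataclasses. The id and checkpoint naming scheme below is an assumption, not DI-engine's actual convention, and as the note above says, no checkpoint file is written here:

```python
from dataclasses import dataclass


@dataclass
class Hist:
    """Stand-in for a historical (frozen) player."""
    player_id: str
    checkpoint_path: str
    total_agent_step: int
    parent_id: str


@dataclass
class Active:
    """Stand-in for an active player."""
    player_id: str
    checkpoint_path: str
    total_agent_step: int
    snapshot_count: int = 0


def snapshot(player: Active) -> Hist:
    """Create a frozen historical copy of an active player; only the
    object is created, saving the checkpoint is the league's job."""
    player.snapshot_count += 1
    hp_id = f"{player.player_id}_H{player.snapshot_count}"  # assumed naming scheme
    ckpt = f"{player.checkpoint_path}.snapshot_{player.snapshot_count}"
    return Hist(hp_id, ckpt, player.total_agent_step, parent_id=player.player_id)


a = Active('main_0', './ckpt/main_0.pth', 10**9)
h = snapshot(a)
print(h.player_id, h.parent_id)  # → main_0_H1 main_0
```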

create_player

Overview:

Given the key player_type, create a new player instance if the type is registered in player_mapping, otherwise raise a KeyError. In other words, a derived player must be registered first; then create_player can be called to get an instance object.

Arguments:
  • cfg (EasyDict): Player config; necessary keys: [import_names].

  • player_type (str): the type of player to be created

Returns:
  • player (Player): the created new player, should be an instance of one of player_mapping’s values
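The register-then-create pattern can be sketched with a plain dict registry. This is a simplified stand-in, not DI-engine's actual registry implementation, and the NaiveSpPlayer body here is a dummy:

```python
player_mapping = {}  # registry: player_type -> player class


def register_player(player_type):
    """Decorator registering a derived player class under its type key."""
    def deco(cls):
        player_mapping[player_type] = cls
        return cls
    return deco


def create_player(cfg, player_type, *args, **kwargs):
    """Look up player_type in the registry and instantiate it, or raise KeyError."""
    if player_type not in player_mapping:
        raise KeyError(f"not registered player type: {player_type}")
    return player_mapping[player_type](cfg, *args, **kwargs)


@register_player('naive_sp_player')
class NaiveSpPlayer:
    def __init__(self, cfg):
        self.cfg = cfg


p = create_player({'import_names': []}, 'naive_sp_player')
print(type(p).__name__)  # → NaiveSpPlayer
```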

MainPlayer

class ding.league.starcraft_player.MainPlayer(*args, **kwargs)[source]
Overview:

Main player in league training. Default branch probabilities: 0.5 pfsp, 0.35 sp, 0.15 verification. Snapshots every 2e9 steps by default. Default mutate prob = 0 (never mutates).
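Branch selection under these default probabilities can be sketched as weighted sampling. The branch names and the sampling helper are illustrative, not DI-engine's actual internals:

```python
import random

BRANCH_PROBS = {'pfsp': 0.5, 'sp': 0.35, 'verification': 0.15}  # MainPlayer defaults


def sample_branch(rng: random.Random) -> str:
    """Pick an opponent-selection branch according to the branch probabilities."""
    branches = list(BRANCH_PROBS)
    weights = [BRANCH_PROBS[b] for b in branches]
    return rng.choices(branches, weights=weights, k=1)[0]


rng = random.Random(0)
counts = {b: 0 for b in BRANCH_PROBS}
for _ in range(10000):
    counts[sample_branch(rng)] += 1
print(counts)  # roughly proportional to 0.5 / 0.35 / 0.15
```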

Interface:

__init__, is_trained_enough, snapshot, mutate, get_job

Property:

race, payoff, checkpoint_path, player_id, train_iteration

get_job(eval_flag: bool = False) → dict
Overview:

Get a dict containing some info about the job to be launched, e.g. the selected opponent.

Arguments:
  • eval_flag (bool): Whether to select an opponent for evaluator task.

Returns:
  • ret (dict): The returned dict. Should contain key [‘opponent’].

is_trained_enough() → bool[source]
Overview:

Judge whether this player is trained enough for further operations (e.g. snapshot, mutate, …) according to past step count and overall win rates against opponents. If yes, set self._last_agent_step to self._total_agent_step and return True; otherwise return False.

Returns:
  • flag (bool): Whether this player is trained enough

mutate(info: dict) → None[source]
Overview:

MainPlayer does not mutate

snapshot(metric_env: LeagueMetricEnv) → HistoricalPlayer
Overview:

Generate a snapshot historical player from the current player, called in league’s _snapshot.

Argument:
  • metric_env (LeagueMetricEnv): Player rating environment; one league uses one such env.

Returns:
  • snapshot_player (HistoricalPlayer): new instantiated historical player

Note

This method only generates the historical player object; saving the checkpoint should be done by the league.

MainExploiter

class ding.league.starcraft_player.MainExploiter(*args, **kwargs)[source]
Overview:

Main exploiter in league training. Identifies weaknesses of the main agents and consequently makes them more robust. Default branch probabilities: 1.0 main_players. Snapshots by default when it defeats all 3 main players in the league in more than 70% of games, or after a timeout of 4e9 steps. Default mutate prob = 1 (always mutates).
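The snapshot condition described above can be sketched as a simple predicate. The function name and parameters are illustrative; the 0.7 and 4e9 values come from the defaults above:

```python
def exploiter_trained_enough(win_rates_vs_mains, steps_since_snapshot,
                             strong_win_rate=0.7, timeout_steps=4 * 10**9):
    """Snapshot condition sketch: every main player beaten in more than
    70% of games, or the 4e9-step timeout reached."""
    beats_all_mains = bool(win_rates_vs_mains) and \
        all(w > strong_win_rate for w in win_rates_vs_mains)
    return beats_all_mains or steps_since_snapshot >= timeout_steps


print(exploiter_trained_enough([0.8, 0.75, 0.9], 10**8))  # → True (beats all mains)
print(exploiter_trained_enough([0.8, 0.5, 0.9], 10**8))   # → False (one main resists)
print(exploiter_trained_enough([0.8, 0.5, 0.9], 4 * 10**9))  # → True (timeout)
```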

Interface:

__init__, is_trained_enough, snapshot, mutate, get_job

Property:

race, payoff, checkpoint_path, player_id, train_iteration

get_job(eval_flag: bool = False) → dict
Overview:

Get a dict containing some info about the job to be launched, e.g. the selected opponent.

Arguments:
  • eval_flag (bool): Whether to select an opponent for evaluator task.

Returns:
  • ret (dict): The returned dict. Should contain key [‘opponent’].

is_trained_enough() → bool[source]
Overview:

Judge whether this player is trained enough for further operations (e.g. snapshot, mutate, …) according to past step count and overall win rates against opponents. If yes, set self._last_agent_step to self._total_agent_step and return True; otherwise return False.

Returns:
  • flag (bool): Whether this player is trained enough

mutate(info: dict) → str[source]
Overview:

The main exploiter always mutates (resets) to the supervised learning player.

Returns:
  • mutate_ckpt_path (str): mutation target checkpoint path

snapshot(metric_env: LeagueMetricEnv) → HistoricalPlayer
Overview:

Generate a snapshot historical player from the current player, called in league’s _snapshot.

Argument:
  • metric_env (LeagueMetricEnv): Player rating environment; one league uses one such env.

Returns:
  • snapshot_player (HistoricalPlayer): new instantiated historical player

Note

This method only generates the historical player object; saving the checkpoint should be done by the league.

LeagueExploiter

class ding.league.starcraft_player.LeagueExploiter(*args, **kwargs)[source]
Overview:

League exploiter in league training. Identifies global blind spots in the league (strategies that no player in the league can beat, but that are not necessarily robust themselves). Default branch probabilities: 1.0 pfsp. Snapshots by default when it defeats all players in the league in more than 70% of games, or after a timeout of 2e9 steps. Default mutate prob = 0.25.

Interface:

__init__, is_trained_enough, snapshot, mutate, get_job

Property:

race, payoff, checkpoint_path, player_id, train_iteration

get_job(eval_flag: bool = False) → dict
Overview:

Get a dict containing some info about the job to be launched, e.g. the selected opponent.

Arguments:
  • eval_flag (bool): Whether to select an opponent for evaluator task.

Returns:
  • ret (dict): The returned dict. Should contain key [‘opponent’].

is_trained_enough() → bool[source]
Overview:

Judge whether this player is trained enough for further operations (e.g. snapshot, mutate, …) according to past step count and overall win rates against opponents. If yes, set self._last_agent_step to self._total_agent_step and return True; otherwise return False.

Returns:
  • flag (bool): Whether this player is trained enough

mutate(info) → str | None[source]
Overview:

The league exploiter can mutate to the supervised learning player with probability 0.25.

Returns:
  • ckpt_path (Union[str, None]): With probability mutate_prob, returns the pretrained model's ckpt path; with the remaining 1 - mutate_prob probability, returns None, which means no mutation.
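The probabilistic mutation above can be sketched as a coin flip. The function name and checkpoint path are illustrative assumptions; only the 0.25 default comes from the docs:

```python
import random
from typing import Optional


def league_exploiter_mutate(pretrained_ckpt_path: str,
                            mutate_prob: float = 0.25,
                            rng=random) -> Optional[str]:
    """With probability mutate_prob return the pretrained (SL) checkpoint
    path, i.e. reset; otherwise return None, meaning no mutation."""
    if rng.random() < mutate_prob:
        return pretrained_ckpt_path
    return None


rng = random.Random(0)
results = [league_exploiter_mutate('./ckpt/sl_pretrained.pth', rng=rng)
           for _ in range(1000)]
mutated = sum(r is not None for r in results)
print(mutated)  # roughly 250 of the 1000 calls mutate
```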

snapshot(metric_env: LeagueMetricEnv) → HistoricalPlayer
Overview:

Generate a snapshot historical player from the current player, called in league’s _snapshot.

Argument:
  • metric_env (LeagueMetricEnv): Player rating environment; one league uses one such env.

Returns:
  • snapshot_player (HistoricalPlayer): new instantiated historical player

Note

This method only generates the historical player object; saving the checkpoint should be done by the league.